diff --git "a/python-qa-stackoverflow.jsonl" "b/python-qa-stackoverflow.jsonl" new file mode 100644--- /dev/null +++ "b/python-qa-stackoverflow.jsonl" @@ -0,0 +1,2988 @@ +{"question_id": 231767, "score": 13124, "creation_date": 1224800471, "tags": ["python", "iterator", "generator", "yield"], "instruction": "What does the \"yield\" keyword do in Python?\n\nWhat functionality does the yield keyword in Python provide? For example, I'm trying to understand this code1: def _get_child_candidates(self, distance, min_dist, max_dist): if self._leftchild and distance - max_dist < self._median: yield self._leftchild if self._rightchild and distance + max_dist >= self._median: yield self._rightchild And this is the caller: result, candidates = [], [self] while candidates: node = candidates.pop() distance = node._get_dist(obj) if distance <= max_dist and distance >= min_dist: result.extend(node._values) candidates.extend(node._get_child_candidates(distance, min_dist, max_dist)) return result What happens when the method _get_child_candidates is called? Is a list returned? A single element? Is it called again? When will subsequent calls stop? 1. This piece of code was written by Jochen Schulz (jrschulz), who made a great Python library for metric spaces. This is the link to the complete source: Module mspace.", "output": "To understand what yield does, you must understand what generators are. And before you can understand generators, you must understand iterables. Iterables When you create a list, you can read its items one by one. Reading its items one by one is called iteration: >>> mylist = [1, 2, 3] >>> for i in mylist: ... print(i) 1 2 3 mylist is an iterable. When you use a list comprehension, you create a list, and so an iterable: >>> mylist = [x*x for x in range(3)] >>> for i in mylist: ... print(i) 0 1 4 Everything you can use \"for... in...\" on is an iterable; lists, strings, files... These iterables are handy because you can read them as much as you wish, but you store all the values in memory and this is not always what you want when you have a lot of values. Generators Generators are iterators, a kind of iterable you can only iterate over once. Generators do not store all the values in memory, they generate the values on the fly: >>> mygenerator = (x*x for x in range(3)) >>> for i in mygenerator: ... print(i) 0 1 4 It is just the same except you used () instead of []. BUT, you cannot perform for i in mygenerator a second time since generators can only be used once: they calculate 0, then forget about it and calculate 1, and end after calculating 4, one by one. Yield yield is a keyword that is used like return, except the function will return a generator. >>> def create_generator(): ... mylist = range(3) ... for i in mylist: ... yield i*i ... >>> mygenerator = create_generator() # create a generator >>> print(mygenerator) # mygenerator is an object! >>> for i in mygenerator: ... print(i) 0 1 4 Here it's a useless example, but it's handy when you know your function will return a huge set of values that you will only need to read once. To master yield, you must understand that when you call the function, the code you have written in the function body does not run. The function only returns the generator object, this is a bit tricky. Then, your code will continue from where it left off each time for uses the generator. 
Now the hard part: The first time the for calls the generator object created from your function, it will run the code in your function from the beginning until it hits yield, then it'll return the first value of the loop. Then, each subsequent call will run another iteration of the loop you have written in the function and return the next value. This will continue until the generator is considered empty, which happens when the function runs without hitting yield. That can be because the loop has come to an end, or because you no longer satisfy an \"if/else\". Your code explained Generator: # Here you create the method of the node object that will return the generator def _get_child_candidates(self, distance, min_dist, max_dist): # Here is the code that will be called each time you use the generator object: # If there is still a child of the node object on its left # AND if the distance is ok, return the next child if self._leftchild and distance - max_dist < self._median: yield self._leftchild # If there is still a child of the node object on its right # AND if the distance is ok, return the next child if self._rightchild and distance + max_dist >= self._median: yield self._rightchild # If the function arrives here, the generator will be considered empty # There are no more than two values: the left and the right children Caller: # Create an empty list and a list with the current object reference result, candidates = list(), [self] # Loop on candidates (they contain only one element at the beginning) while candidates: # Get the last candidate and remove it from the list node = candidates.pop() # Get the distance between obj and the candidate distance = node._get_dist(obj) # If the distance is ok, then you can fill in the result if distance <= max_dist and distance >= min_dist: result.extend(node._values) # Add the children of the candidate to the candidate's list # so the loop will keep running until it has looked # at all the children of the children of the children, etc. of the candidate candidates.extend(node._get_child_candidates(distance, min_dist, max_dist)) return result This code contains several smart parts: The loop iterates on a list, but the list expands while the loop is being iterated. It's a concise way to go through all these nested data even if it's a bit dangerous since you can end up with an infinite loop. In this case, candidates.extend(node._get_child_candidates(distance, min_dist, max_dist)) exhausts all the values of the generator, but while keeps creating new generator objects which will produce different values from the previous ones since it's not applied on the same node. The extend() method is a list object method that expects an iterable and adds its values to the list. Usually, we pass a list to it: >>> a = [1, 2] >>> b = [3, 4] >>> a.extend(b) >>> print(a) [1, 2, 3, 4] But in your code, it gets a generator, which is good because: You don't need to read the values twice. You may have a lot of children and you don't want them all stored in memory. And it works because Python does not care if the argument of a method is a list or not. Python expects iterables so it will work with strings, lists, tuples, and generators! This is called duck typing and is one of the reasons why Python is so cool. But this is another story, for another question... You can stop here, or read a little bit to see an advanced use of a generator: Controlling a generator exhaustion >>> class Bank(): # Let's create a bank, building ATMs ... crisis = False ... def create_atm(self): ... 
while not self.crisis: ... yield \"$100\" >>> hsbc = Bank() # When everything's ok the ATM gives you as much as you want >>> corner_street_atm = hsbc.create_atm() >>> print(corner_street_atm.next()) $100 >>> print(corner_street_atm.next()) $100 >>> print([corner_street_atm.next() for cash in range(5)]) ['$100', '$100', '$100', '$100', '$100'] >>> hsbc.crisis = True # Crisis is coming, no more money! >>> print(corner_street_atm.next()) >>> wall_street_atm = hsbc.create_atm() # It's even true for new ATMs >>> print(wall_street_atm.next()) >>> hsbc.crisis = False # The trouble is, even post-crisis the ATM remains empty >>> print(corner_street_atm.next()) >>> brand_new_atm = hsbc.create_atm() # Build a new one to get back in business >>> for cash in brand_new_atm: ... print cash $100 $100 $100 $100 $100 $100 $100 $100 $100 ... Note: For Python 3, useprint(corner_street_atm.__next__()) or print(next(corner_street_atm)) It can be useful for various things like controlling access to a resource. Itertools, your best friend The itertools module contains special functions to manipulate iterables. Ever wish to duplicate a generator? Chain two generators? Group values in a nested list with a one-liner? Map / Zip without creating another list? Then just import itertools. An example? Let's see the possible orders of arrival for a four-horse race: >>> horses = [1, 2, 3, 4] >>> races = itertools.permutations(horses) >>> print(races) >>> print(list(itertools.permutations(horses))) [(1, 2, 3, 4), (1, 2, 4, 3), (1, 3, 2, 4), (1, 3, 4, 2), (1, 4, 2, 3), (1, 4, 3, 2), (2, 1, 3, 4), (2, 1, 4, 3), (2, 3, 1, 4), (2, 3, 4, 1), (2, 4, 1, 3), (2, 4, 3, 1), (3, 1, 2, 4), (3, 1, 4, 2), (3, 2, 1, 4), (3, 2, 4, 1), (3, 4, 1, 2), (3, 4, 2, 1), (4, 1, 2, 3), (4, 1, 3, 2), (4, 2, 1, 3), (4, 2, 3, 1), (4, 3, 1, 2), (4, 3, 2, 1)] Understanding the inner mechanisms of iteration Iteration is a process implying iterables (implementing the __iter__() method) and iterators (implementing the __next__() method). Iterables are any objects you can get an iterator from. Iterators are objects that let you iterate on iterables. There is more about it in this article about how for loops work."} +{"question_id": 419163, "score": 8430, "creation_date": 1231301460, "tags": ["python", "namespaces", "program-entry-point", "python-module", "idioms"], "instruction": "What does if __name__ == \"__main__\": do?\n\nWhat does this do, and why should one include the if statement? if __name__ == \"__main__\": print(\"Hello, World!\") If you are trying to close a question where someone should be using this idiom and isn't, consider closing as a duplicate of Why is Python running my module when I import it, and how do I stop it? instead. For questions where someone simply hasn't called any functions, or incorrectly expects a function named main to be used as an entry point automatically, use Why doesn't the main() function run when I start a Python script? Where does the script start running?.", "output": "Short Answer It's boilerplate code that protects users from accidentally invoking the script when they didn't intend to. Here are some common problems when the guard is omitted from a script: If you import the guardless script in another script (e.g. import my_script_without_a_name_eq_main_guard), then the latter script will trigger the former to run at import time and using the second script's command line arguments. This is almost always a mistake. 
If you have a custom class in the guardless script and save it to a pickle file, then unpickling it in another script will trigger an import of the guardless script, with the same problems outlined in the previous bullet. Long Answer To better understand why and how this matters, we need to take a step back to understand how Python initializes scripts and how this interacts with its module import mechanism. Whenever the Python interpreter reads a source file, it does two things: it sets a few special variables like __name__, and then it executes all of the code found in the file. Let's see how this works and how it relates to your question about the __name__ checks we always see in Python scripts. Code Sample Let's use a slightly different code sample to explore how imports and scripts work. Suppose the following is in a file called foo.py. # Suppose this is foo.py. print(\"before import\") import math print(\"before function_a\") def function_a(): print(\"Function A\") print(\"before function_b\") def function_b(): print(\"Function B {}\".format(math.sqrt(100))) print(\"before __name__ guard\") if __name__ == '__main__': function_a() function_b() print(\"after __name__ guard\") Special Variables When the Python interpreter reads a source file, it first defines a few special variables. In this case, we care about the __name__ variable. When Your Module Is the Main Program If you are running your module (the source file) as the main program, e.g. python foo.py the interpreter will assign the hard-coded string \"__main__\" to the __name__ variable, i.e. # It's as if the interpreter inserts this at the top # of your module when run as the main program. __name__ = \"__main__\" When Your Module Is Imported By Another On the other hand, suppose some other module is the main program and it imports your module. This means there's a statement like this in the main program, or in some other module the main program imports: # Suppose this is in some other main program. import foo The interpreter will search for your foo.py file (along with searching for a few other variants), and prior to executing that module, it will assign the name \"foo\" from the import statement to the __name__ variable, i.e. # It's as if the interpreter inserts this at the top # of your module when it's imported from another module. __name__ = \"foo\" Executing the Module's Code After the special variables are set up, the interpreter executes all the code in the module, one statement at a time. You may want to open another window on the side with the code sample so you can follow along with this explanation. Always It prints the string \"before import\" (without quotes). It loads the math module and assigns it to a variable called math. This is equivalent to replacing import math with the following (note that __import__ is a low-level function in Python that takes a string and triggers the actual import): # Find and load a module given its string name, \"math\", # then assign it to a local variable called math. math = __import__(\"math\") It prints the string \"before function_a\". It executes the def block, creating a function object, then assigning that function object to a variable called function_a. It prints the string \"before function_b\". It executes the second def block, creating another function object, then assigning it to a variable called function_b. It prints the string \"before __name__ guard\". 
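A quick way to see the special variable directly (a tiny illustrative sketch; the file name whatsmyname.py is made up): # whatsmyname.py print(\"my __name__ is\", __name__) Running python whatsmyname.py prints my __name__ is __main__, whereas python -c \"import whatsmyname\" prints my __name__ is whatsmyname.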
Only When Your Module Is the Main Program If your module is the main program, then it will see that __name__ was indeed set to \"__main__\" and it calls the two functions, printing the strings \"Function A\" and \"Function B 10.0\". Only When Your Module Is Imported by Another (instead) If your module is not the main program but was imported by another one, then __name__ will be \"foo\", not \"__main__\", and it'll skip the body of the if statement. Always It will print the string \"after __name__ guard\" in both situations. Summary In summary, here's what'd be printed in the two cases: # What gets printed if foo is the main program before import before function_a before function_b before __name__ guard Function A Function B 10.0 after __name__ guard # What gets printed if foo is imported as a regular module before import before function_a before function_b before __name__ guard after __name__ guard Why Does It Work This Way? You might naturally wonder why anybody would want this. Well, sometimes you want to write a .py file that can be both used by other programs and/or modules as a module, and can also be run as the main program itself. Examples: Your module is a library, but you want to have a script mode where it runs some unit tests or a demo. Your module is only used as a main program, but it has some unit tests, and the testing framework works by importing .py files like your script and running special test functions. You don't want it to try running the script just because it's importing the module. Your module is mostly used as a main program, but it also provides a programmer-friendly API for advanced users. Beyond those examples, it's elegant that running a script in Python is just setting up a few magic variables and importing the script. \"Running\" the script is a side effect of importing the script's module. Food for Thought Question: Can I have multiple __name__ checking blocks? Answer: it's strange to do so, but the language won't stop you. Suppose the following is in foo2.py. What happens if you say python foo2.py on the command-line? Why? # Suppose this is foo2.py. import os, sys; sys.path.insert(0, os.path.dirname(__file__)) # needed for some interpreters def function_a(): print(\"a1\") from foo2 import function_b print(\"a2\") function_b() print(\"a3\") def function_b(): print(\"b\") print(\"t1\") if __name__ == \"__main__\": print(\"m1\") function_a() print(\"m2\") print(\"t2\") Now, figure out what will happen in foo3.py (having removed the __name__ check): # Suppose this is foo3.py. import os, sys; sys.path.insert(0, os.path.dirname(__file__)) # needed for some interpreters def function_a(): print(\"a1\") from foo3 import function_b print(\"a2\") function_b() print(\"a3\") def function_b(): print(\"b\") print(\"t1\") print(\"m1\") function_a() print(\"m2\") print(\"t2\") What will this do when used as a script? When imported as a module? 
# Suppose this is in foo4.py __name__ = \"__main__\" def bar(): print(\"bar\") print(\"before __name__ guard\") if __name__ == \"__main__\": bar() print(\"after __name__ guard\")"} +{"question_id": 89228, "score": 6286, "creation_date": 1221701730, "tags": ["python", "shell", "terminal", "subprocess", "command"], "instruction": "How do I execute a program or call a system command?\n\nHow do I call an external command within Python as if I had typed it in a shell or command prompt?", "output": "Use subprocess.run: import subprocess subprocess.run([\"ls\", \"-l\"]) Another common way is os.system but you shouldn't use it because it is unsafe if any parts of the command come from outside your program or can contain spaces or other special characters, also subprocess.run is generally more flexible (you can get the stdout, stderr, the \"real\" status code, better error handling, etc.). Even the documentation for os.system recommends using subprocess instead. On Python 3.4 and earlier, use subprocess.call instead of .run: subprocess.call([\"ls\", \"-l\"])"} +{"question_id": 522563, "score": 5669, "creation_date": 1233960474, "tags": ["python", "loops", "list"], "instruction": "How can I access the index value in a 'for' loop?\n\nHow do I access the index while iterating over a sequence with a for loop? xs = [8, 23, 45] for x in xs: print(\"item #{} = {}\".format(index, x)) Desired output: item #1 = 8 item #2 = 23 item #3 = 45", "output": "Use the built-in function enumerate(): for idx, x in enumerate(xs): print(idx, x) It is non-Pythonic to manually index via for i in range(len(xs)): x = xs[i] or manually manage an additional state variable. Check out PEP 279 for more."} +{"question_id": 952914, "score": 5505, "creation_date": 1244147405, "tags": ["python", "list", "multidimensional-array", "flatten"], "instruction": "How do I make a flat list out of a list of lists?\n\nI have a list of lists like [ [1, 2, 3], [4, 5, 6], [7], [8, 9] ] How can I flatten it to get [1, 2, 3, 4, 5, 6, 7, 8, 9]? If your list of lists comes from a nested list comprehension, the problem can be solved more simply/directly by fixing the comprehension; please see How can I get a flat result from a list comprehension instead of a nested list?. The most popular solutions here generally only flatten one \"level\" of the nested list. See Flatten an irregular (arbitrarily nested) list of lists for solutions that completely flatten a deeply nested structure (recursively, in general).", "output": "A list of lists named xss can be flattened using a nested list comprehension: flat_list = [ x for xs in xss for x in xs ] The above is equivalent to: flat_list = [] for xs in xss: for x in xs: flat_list.append(x) Here is the corresponding function: def flatten(xss): return [x for xs in xss for x in xs] This is the fastest method. 
As evidence, using the timeit module in the standard library, we see: $ python -mtimeit -s'xss=[[1,2,3],[4,5,6],[7],[8,9]]*99' '[x for xs in xss for x in xs]' 10000 loops, best of 3: 143 usec per loop $ python -mtimeit -s'xss=[[1,2,3],[4,5,6],[7],[8,9]]*99' 'sum(xss, [])' 1000 loops, best of 3: 969 usec per loop $ python -mtimeit -s'xss=[[1,2,3],[4,5,6],[7],[8,9]]*99' 'reduce(lambda xs, ys: xs + ys, xss)' 1000 loops, best of 3: 1.1 msec per loop Explanation: the methods based on + (including the implied use in sum) are, of necessity, O(L**2) when there are L sublists -- as the intermediate result list keeps getting longer, at each step a new intermediate result list object gets allocated, and all the items in the previous intermediate result must be copied over (as well as a few new ones added at the end). So, for simplicity and without actual loss of generality, say you have L sublists of M items each: the first M items are copied back and forth L-1 times, the second M items L-2 times, and so on; total number of copies is M times the sum of x for x from 1 to L excluded, i.e., M * (L**2)/2. The list comprehension just generates one list, once, and copies each item over (from its original place of residence to the result list) also exactly once."} +{"question_id": 509211, "score": 4704, "creation_date": 1233700262, "tags": ["python", "slice", "sequence"], "instruction": "How slicing in Python works\n\nHow does Python's slice notation work? That is: when I write code like a[x:y:z], a[:], a[::2] etc., how can I understand which elements end up in the slice? See Why are slice and range upper-bound exclusive? to learn why xs[0:2] == [xs[0], xs[1]], not [..., xs[2]]. See Make a new list containing every Nth item in the original list for xs[::N]. See How does assignment work with list slices? to learn what xs[0:2] = [\"a\", \"b\"] does.", "output": "The syntax is: a[start:stop] # items start through stop-1 a[start:] # items start through the rest of the array a[:stop] # items from the beginning through stop-1 a[:] # a copy of the whole array There is also the step value, which can be used with any of the above: a[start:stop:step] # start through not past stop, by step The key point to remember is that the :stop value represents the first value that is not in the selected slice. So, the difference between stop and start is the number of elements selected (if step is 1, the default). The other feature is that start or stop may be a negative number, which means it counts from the end of the array instead of the beginning. So: a[-1] # last item in the array a[-2:] # last two items in the array a[:-2] # everything except the last two items Similarly, step may be a negative number: a[::-1] # all items in the array, reversed a[1::-1] # the first two items, reversed a[:-3:-1] # the last two items, reversed a[-3::-1] # everything except the last two items, reversed Python is kind to the programmer if there are fewer items than you ask for. For example, if you ask for a[:-2] and a only contains one element, you get an empty list instead of an error. Sometimes you would prefer the error, so you have to be aware that this may happen. Relationship with the slice object A slice object can represent a slicing operation, i.e.: a[start:stop:step] is equivalent to: a[slice(start, stop, step)] Slice objects also behave slightly differently depending on the number of arguments, similar to range(), i.e. both slice(stop) and slice(start, stop[, step]) are supported. 
To skip specifying a given argument, one might use None, so that e.g. a[start:] is equivalent to a[slice(start, None)] or a[::-1] is equivalent to a[slice(None, None, -1)]. While the :-based notation is very helpful for simple slicing, the explicit use of slice() objects simplifies the programmatic generation of slicing."} +{"question_id": 16476924, "score": 4246, "creation_date": 1368169489, "tags": ["python", "pandas", "dataframe", "loops"], "instruction": "How can I iterate over rows in a Pandas DataFrame?\n\nI have a pandas dataframe, df: c1 c2 0 10 100 1 11 110 2 12 120 How do I iterate over the rows of this dataframe? For every row, I want to access its elements (values in cells) by the name of the columns. For example: for row in df.rows: print(row['c1'], row['c2']) I found a similar question, which suggests using either of these: for date, row in df.T.iteritems(): for row in df.iterrows(): But I do not understand what the row object is and how I can work with it.", "output": "DataFrame.iterrows is a generator which yields both the index and row (as a Series): import pandas as pd df = pd.DataFrame({'c1': [10, 11, 12], 'c2': [100, 110, 120]}) df = df.reset_index() # make sure indexes pair with number of rows for index, row in df.iterrows(): print(row['c1'], row['c2']) 10 100 11 110 12 120 Obligatory disclaimer from the documentation Iterating through pandas objects is generally slow. In many cases, iterating manually over the rows is not needed and can be avoided with one of the following approaches: Look for a vectorized solution: many operations can be performed using built-in methods or NumPy functions, (boolean) indexing, \u2026 When you have a function that cannot work on the full DataFrame/Series at once, it is better to use apply() instead of iterating over the values. See the docs on function application. If you need to do iterative manipulations on the values but performance is important, consider writing the inner loop with cython or numba. See the enhancing performance section for some examples of this approach. Other answers in this thread delve into greater depth on alternatives to iter* functions if you are interested to learn more."} +{"question_id": 423379, "score": 4047, "creation_date": 1231393502, "tags": ["python", "global-variables", "scope"], "instruction": "How can I use a global variable in a function?\n\nHow do I create or use a global variable inside a function? How do I use a global variable that was defined in one function inside other functions? Failing to use the global keyword where appropriate often causes UnboundLocalError. The precise rules for this are explained at UnboundLocalError on local variable when reassigned after first use. Generally, please close other questions as a duplicate of that question when an explanation is sought, and this question when someone simply needs to know the global keyword.", "output": "You can use a global variable within other functions by declaring it as global within each function that assigns a value to it: globvar = 0 def set_globvar_to_one(): global globvar # Needed to modify global copy of globvar globvar = 1 def print_globvar(): print(globvar) # No need for global declaration to read value of globvar set_globvar_to_one() print_globvar() # Prints 1 Since it's unclear whether globvar = 1 is creating a local variable or changing a global variable, Python defaults to creating a local variable, and makes you explicitly choose the other behavior with the global keyword. 
See other answers if you want to share a global variable across modules."} +{"question_id": 415511, "score": 3992, "creation_date": 1231217663, "tags": ["python", "datetime", "time"], "instruction": "How do I get the current time in Python?\n\nHow do I get the current time in Python?", "output": "Use datetime: >>> import datetime >>> now = datetime.datetime.now() >>> now datetime.datetime(2009, 1, 6, 15, 8, 24, 78915) >>> str(now) '2009-01-06 15:08:24.078915' For just the clock time without the date: >>> now.time() datetime.time(15, 8, 24, 78915) >>> str(now.time()) '15:08:24.078915' To save typing, you can import the datetime object from the datetime module: from datetime import datetime Then remove the prefix datetime. from all of the above."} +{"question_id": 606191, "score": 3885, "creation_date": 1236082981, "tags": ["python", "string", "python-3.x"], "instruction": "Convert bytes to a string in Python 3\n\nI captured the standard output of an external program into a bytes object: >>> from subprocess import * >>> stdout = Popen(['ls', '-l'], stdout=PIPE).communicate()[0] >>> stdout b'total 0\\n-rw-rw-r-- 1 thomas thomas 0 Mar 3 07:03 file1\\n-rw-rw-r-- 1 thomas thomas 0 Mar 3 07:03 file2\\n' I want to convert that to a normal Python string, so that I can print it like this: >>> print(stdout) -rw-rw-r-- 1 thomas thomas 0 Mar 3 07:03 file1 -rw-rw-r-- 1 thomas thomas 0 Mar 3 07:03 file2 How do I convert the bytes object to a str with Python 3? See Best way to convert string to bytes in Python 3? for the other way around.", "output": "Decode the bytes object to produce a string: >>> b\"abcde\".decode(\"utf-8\") 'abcde' The above example assumes that the bytes object is in UTF-8, because it is a common encoding. However, you should use the encoding your data is actually in!"} +{"question_id": 1436703, "score": 3831, "creation_date": 1253161665, "tags": ["python", "magic-methods", "repr"], "instruction": "What is the difference between __str__ and __repr__?\n\nWhat is the difference between __str__ and __repr__ in Python?", "output": "Alex Martelli summarized well but, surprisingly, was too succinct. First, let me reiterate the main points in Alex\u2019s post: The default implementation is useless (it\u2019s hard to think of one which wouldn\u2019t be, but yeah) __repr__ goal is to be unambiguous __str__ goal is to be readable Container\u2019s __str__ uses contained objects\u2019 __repr__ Default implementation is useless This is mostly a surprise because Python\u2019s defaults tend to be fairly useful. However, in this case, having a default for __repr__ which would act like: return \"%s(%r)\" % (self.__class__, self.__dict__) Or in new f-string formatting: return f\"{self.__class__!s}({self.__dict__!r})\" would have been too dangerous (for example, too easy to get into infinite recursion if objects reference each other). So Python cops out. Note that there is one default which is true: if __repr__ is defined, and __str__ is not, the object will behave as though __str__=__repr__. This means, in simple terms: almost every object you implement should have a functional __repr__ that\u2019s usable for understanding the object. Implementing __str__ is optional: do that if you need a \u201cpretty print\u201d functionality (for example, used by a report generator). The goal of __repr__ is to be unambiguous Let me come right out and say it \u2014 I do not believe in debuggers. I don\u2019t really know how to use any debugger, and have never used one seriously. 
Furthermore, I believe that the big fault in debuggers is their basic nature \u2014 most failures I debug happened a long long time ago, in a galaxy far far away. This means that I do believe, with religious fervor, in logging. Logging is the lifeblood of any decent fire-and-forget server system. Python makes it easy to log: with maybe some project specific wrappers, all you need is a log(INFO, \"I am in the weird function and a is\", a, \"and b is\", b, \"but I got a null C \u2014 using default\", default_c) But you have to do the last step \u2014 make sure every object you implement has a useful repr, so code like that can just work. This is why the \u201ceval\u201d thing comes up: if you have enough information so eval(repr(c))==c, that means you know everything there is to know about c. If that\u2019s easy enough, at least in a fuzzy way, do it. If not, make sure you have enough information about c anyway. I usually use an eval-like format: \"MyClass(this=%r,that=%r)\" % (self.this,self.that). It does not mean that you can actually construct MyClass, or that those are the right constructor arguments \u2014 but it is a useful form to express \u201cthis is everything you need to know about this instance\u201d. Note: I used %r above, not %s. You always want to use repr() [or %r formatting character, equivalently] inside __repr__ implementation, or you\u2019re defeating the goal of repr. You want to be able to differentiate MyClass(3) and MyClass(\"3\"). The goal of __str__ is to be readable Specifically, it is not intended to be unambiguous \u2014 notice that str(3)==str(\"3\"). Likewise, if you implement an IP abstraction, having the str of it look like 192.168.1.1 is just fine. When implementing a date/time abstraction, the str can be \"2010/4/12 15:35:22\", etc. The goal is to represent it in a way that a user, not a programmer, would want to read it. Chop off useless digits, pretend to be some other class \u2014 as long is it supports readability, it is an improvement. Container\u2019s __str__ uses contained objects\u2019 __repr__ This seems surprising, doesn\u2019t it? It is a little, but how readable would it be if it used their __str__? [moshe is, 3, hello world, this is a list, oh I don't know, containing just 4 elements] Not very. Specifically, the strings in a container would find it way too easy to disturb its string representation. In the face of ambiguity, remember, Python resists the temptation to guess. If you want the above behavior when you\u2019re printing a list, just print(\"[\" + \", \".join(lst) + \"]\") (you can probably also figure out what to do about dictionaries). Summary Implement __repr__ for any class you implement. This should be second nature. Implement __str__ if you think it would be useful to have a string version which errs on the side of readability."} +{"question_id": 17071871, "score": 3632, "creation_date": 1371058925, "tags": ["python", "pandas", "dataframe", "indexing", "filter"], "instruction": "How do I select rows from a DataFrame based on column values?\n\nHow can I select rows from a DataFrame based on values in some column in Pandas? 
In SQL, I would use: SELECT * FROM table WHERE column_name = some_value", "output": "To select rows whose column value equals a scalar, some_value, use ==: df.loc[df['column_name'] == some_value] To select rows whose column value is in an iterable, some_values, use isin: df.loc[df['column_name'].isin(some_values)] Combine multiple conditions with &: df.loc[(df['column_name'] >= A) & (df['column_name'] <= B)] Note the parentheses. Due to Python's operator precedence rules, & binds more tightly than <= and >=. Thus, the parentheses in the last example are necessary. Without the parentheses df['column_name'] >= A & df['column_name'] <= B is parsed as df['column_name'] >= (A & df['column_name']) <= B which results in a Truth value of a Series is ambiguous error. To select rows whose column value does not equal some_value, use !=: df.loc[df['column_name'] != some_value] The isin returns a boolean Series, so to select rows whose value is not in some_values, negate the boolean Series using ~: df = df.loc[~df['column_name'].isin(some_values)] # .loc is not in-place replacement For example, import pandas as pd import numpy as np df = pd.DataFrame({'A': 'foo bar foo bar foo bar foo foo'.split(), 'B': 'one one two three two two one three'.split(), 'C': np.arange(8), 'D': np.arange(8) * 2}) print(df) # A B C D # 0 foo one 0 0 # 1 bar one 1 2 # 2 foo two 2 4 # 3 bar three 3 6 # 4 foo two 4 8 # 5 bar two 5 10 # 6 foo one 6 12 # 7 foo three 7 14 print(df.loc[df['A'] == 'foo']) yields A B C D 0 foo one 0 0 2 foo two 2 4 4 foo two 4 8 6 foo one 6 12 7 foo three 7 14 If you have multiple values you want to include, put them in a list (or more generally, any iterable) and use isin: print(df.loc[df['B'].isin(['one','three'])]) yields A B C D 0 foo one 0 0 1 bar one 1 2 3 bar three 3 6 6 foo one 6 12 7 foo three 7 14 Note, however, that if you wish to do this many times, it is more efficient to make an index first, and then use df.loc: df = df.set_index(['B']) print(df.loc['one']) yields A C D B one foo 0 0 one bar 1 2 one foo 6 12 or, to include multiple values from the index use df.index.isin: df.loc[df.index.isin(['one','two'])] yields A C D B one foo 0 0 one bar 1 2 two foo 2 4 two foo 4 8 two bar 5 10 one foo 6 12"} +{"question_id": 3437059, "score": 3585, "creation_date": 1281322370, "tags": ["python", "string", "substring", "contains"], "instruction": "Does Python have a string 'contains' substring method?\n\nI'm looking for a string.contains or string.indexof method in Python. I want to do: if not somestring.contains(\"blah\"): continue", "output": "Use the in operator: if \"blah\" not in somestring: continue Note: This is case-sensitive."} +{"question_id": 6996603, "score": 3543, "creation_date": 1312895142, "tags": ["python", "file-io", "directory", "delete-file"], "instruction": "How can I delete a file or folder in Python?\n\nHow can I delete a file or folder in Python?", "output": "Use one of these methods: pathlib.Path.unlink() removes a file or symbolic link. pathlib.Path.rmdir() removes an empty directory. shutil.rmtree() deletes a directory and all its contents. On Python 3.3 and below, you can use these methods instead of the pathlib ones: os.remove() removes a file. os.unlink() removes a symbolic link. 
os.rmdir() removes an empty directory."} +{"question_id": 1132941, "score": 3509, "creation_date": 1247680837, "tags": ["python", "language-design", "default-parameters", "least-astonishment"], "instruction": "\"Least Astonishment\" and the Mutable Default Argument\n\ndef foo(a=[]): a.append(5) return a Python novices expect this function called with no parameter to always return a list with only one element: [5]. The result is different and astonishing: >>> foo() [5] >>> foo() [5, 5] >>> foo() [5, 5, 5] >>> foo() [5, 5, 5, 5] >>> foo() The behavior has an underlying explanation, but it is unexpected if you don't understand internals. What is the reason for binding the default argument at function definition, and not at function execution? I doubt the experienced behavior has a practical use (who really used static variables in C, without breeding bugs?) Edit: Baczek made an interesting example. Together with your comments and Utaal's in particular, I elaborated: def a(): print(\"a executed\") return [] def b(x=a()): x.append(5) print(x) a executed >>> b() [5] >>> b() [5, 5] It seems that the design decision was relative to where to put the scope of parameters: inside the function, or \"together\" with it? Doing the binding inside the function would mean that x is effectively bound to the specified default when the function is called, not defined, something that would present a deep flaw: the def line would be \"hybrid\" in the sense that part of the binding (of the function object) would happen at definition, and part (assignment of default parameters) at function invocation time. The actual behavior is more consistent: everything of that line gets evaluated when that line is executed, meaning at function definition.", "output": "Actually, this is not a design flaw, and it is not because of internals or performance. It comes simply from the fact that functions in Python are first-class objects, and not only a piece of code. As soon as you think of it this way, then it completely makes sense: a function is an object being evaluated on its definition; default parameters are kind of \"member data\" and therefore their state may change from one call to the other - exactly as in any other object. In any case, the Effbot (Fredrik Lundh) has a very nice explanation of the reasons for this behavior in Default Parameter Values in Python. I found it very clear, and I really suggest reading it for a better knowledge of how function objects work."} +{"question_id": 36901, "score": 3498, "creation_date": 1220195075, "tags": ["python", "syntax", "parameter-passing", "variadic-functions", "argument-unpacking"], "instruction": "What does ** (double star/asterisk) and * (star/asterisk) do for parameters?\n\nWhat do *args and **kwargs mean in these function definitions? def foo(x, y, *args): pass def bar(x, y, **kwargs): pass See What do ** (double star/asterisk) and * (star/asterisk) mean in a function call? for the complementary question about arguments.", "output": "The *args and **kwargs are common idioms to allow an arbitrary number of arguments to functions, as described in the section more on defining functions in the Python tutorial. 
The *args will give you all positional arguments as a tuple: def foo(*args): for a in args: print(a) foo(1) # 1 foo(1, 2, 3) # 1 # 2 # 3 The **kwargs will give you all keyword arguments as a dictionary: def bar(**kwargs): for a in kwargs: print(a, kwargs[a]) bar(name='one', age=27) # name one # age 27 Both idioms can be mixed with normal arguments to allow a set of fixed and some variable arguments: def foo(kind, *args, bar=None, **kwargs): print(kind, args, bar, kwargs) foo(123, 'a', 'b', apple='red') # 123 ('a', 'b') None {'apple': 'red'} It is also possible to use this the other way around: def foo(a, b, c): print(a, b, c) obj = {'b':10, 'c':'lee'} foo(100, **obj) # 100 10 lee Another usage of the *l idiom is to unpack argument lists when calling a function. def foo(bar, lee): print(bar, lee) baz = [1, 2] foo(*baz) # 1 2 In Python 3 it is possible to use *l on the left side of an assignment (Extended Iterable Unpacking), though it gives a list instead of a tuple in this context: first, *rest = [1, 2, 3, 4] # first = 1 # rest = [2, 3, 4] Also Python 3 adds a new semantic (refer PEP 3102): def func(arg1, arg2, arg3, *, kwarg1, kwarg2): pass Such function accepts only 3 positional arguments, and everything after * can only be passed as keyword arguments. Note: A Python dict, semantically used for keyword argument passing, is arbitrarily ordered. However, in Python 3.6+, keyword arguments are guaranteed to remember insertion order. \"The order of elements in **kwargs now corresponds to the order in which keyword arguments were passed to the function.\" - What\u2019s New In Python 3.6. In fact, all dicts in CPython 3.6 will remember insertion order as an implementation detail, and this becomes standard in Python 3.7."} +{"question_id": 2612802, "score": 3368, "creation_date": 1270889386, "tags": ["python", "list", "clone", "mutable"], "instruction": "How do I clone a list so that it doesn't change unexpectedly after assignment?\n\nWhile using new_list = my_list, any modifications to new_list changes my_list every time. Why is this, and how can I clone or copy the list to prevent it? For example: >>> my_list = [1, 2, 3] >>> new_list = my_list >>> new_list.append(4) >>> my_list [1, 2, 3, 4]", "output": "new_list = my_list doesn't actually create a second list. The assignment just copies the reference to the list, not the actual list, so both new_list and my_list refer to the same list after the assignment. To actually copy the list, you have several options: You can use the built-in list.copy() method (available since Python 3.3): new_list = old_list.copy() You can slice it: new_list = old_list[:] Alex Martelli's opinion (at least back in 2007) about this is, that it is a weird syntax and it does not make sense to use it ever. ;) (In his opinion, the next one is more readable). You can use the built-in list() constructor: new_list = list(old_list) You can use generic copy.copy(): import copy new_list = copy.copy(old_list) This is a little slower than list() because it has to find out the datatype of old_list first. If you need to copy the elements of the list as well, use generic copy.deepcopy(): import copy new_list = copy.deepcopy(old_list) Obviously the slowest and most memory-needing method, but sometimes unavoidable. This operates recursively; it will handle any number of levels of nested lists (or other containers). 
Example: import copy class Foo(object): def __init__(self, val): self.val = val def __repr__(self): return f'Foo({self.val!r})' foo = Foo(1) a = ['foo', foo] b = a.copy() c = a[:] d = list(a) e = copy.copy(a) f = copy.deepcopy(a) # edit orignal list and instance a.append('baz') foo.val = 5 print(f'original: {a}\\nlist.copy(): {b}\\nslice: {c}\\nlist(): {d}\\ncopy: {e}\\ndeepcopy: {f}') Result: original: ['foo', Foo(5), 'baz'] list.copy(): ['foo', Foo(5)] slice: ['foo', Foo(5)] list(): ['foo', Foo(5)] copy: ['foo', Foo(5)] deepcopy: ['foo', Foo(1)]"} +{"question_id": 1720421, "score": 3238, "creation_date": 1258009449, "tags": ["python", "list", "concatenation"], "instruction": "How do I concatenate two lists in Python?\n\nHow do I concatenate two lists in Python? Example: listone = [1, 2, 3] listtwo = [4, 5, 6] Expected outcome: >>> joinedlist [1, 2, 3, 4, 5, 6]", "output": "Use the + operator to combine the lists: listone = [1, 2, 3] listtwo = [4, 5, 6] joinedlist = listone + listtwo Output: >>> joinedlist [1, 2, 3, 4, 5, 6] NOTE: This will create a new list with a shallow copy of the items in the first list, followed by a shallow copy of the items in the second list. Use copy.deepcopy() to get deep copies of lists."} +{"question_id": 252703, "score": 3109, "creation_date": 1225432536, "tags": ["python", "list", "data-structures", "append", "extend"], "instruction": "What is the difference between Python's list methods append and extend?\n\nWhat's the difference between the list methods append() and extend()?", "output": ".append() appends a single object at the end of the list: >>> x = [1, 2, 3] >>> x.append([4, 5]) >>> print(x) [1, 2, 3, [4, 5]] .extend() appends multiple objects that are taken from inside the specified iterable: >>> x = [1, 2, 3] >>> x.extend([4, 5]) >>> print(x) [1, 2, 3, 4, 5]"} +{"question_id": 11277432, "score": 3013, "creation_date": 1341088079, "tags": ["python", "dictionary", "unset"], "instruction": "How can I remove a key from a Python dictionary?\n\nI want to remove a key from a dictionary if it is present. I currently use this code: if key in my_dict: del my_dict[key] Without the if statement, the code will raise KeyError if the key is not present. How can I handle this more simply? See Delete an element from a dictionary for more general approaches to the problem of removing a key from a dict (including ones which produce a modified copy).", "output": "To delete a key regardless of whether it is in the dictionary, use the two-argument form of dict.pop(): my_dict.pop('key', None) This will return my_dict[key] if key exists in the dictionary, and None otherwise. If the second parameter is not specified (i.e. my_dict.pop('key')) and key does not exist, a KeyError is raised. To delete a key that is guaranteed to exist, you can also use del my_dict['key'] This will raise a KeyError if the key is not in the dictionary."} +{"question_id": 2720014, "score": 2887, "creation_date": 1272360205, "tags": ["python", "pip", "pypi"], "instruction": "How to upgrade all Python packages with pip\n\nIs it possible to upgrade all Python packages at one time with pip? Note: that there is a feature request for this on the official issue tracker.", "output": "There isn't a built-in flag yet. Starting with pip version 22.3, the --outdated and --format=freeze have become mutually exclusive. 
Use Python, to parse the JSON output: pip --disable-pip-version-check list --outdated --format=json | python -c \"import json, sys; print('\\n'.join([x['name'] for x in json.load(sys.stdin)]))\" | xargs -n1 pip install -U If you are using pip<22.3 you can use: pip list --outdated --format=freeze | grep -v '^\\-e' | cut -d = -f 1 | xargs -n1 pip install -U For older versions of pip: pip freeze --local | grep -v '^\\-e' | cut -d = -f 1 | xargs -n1 pip install -U The grep is to skip editable (\"-e\") package definitions, as suggested by @jawache. (Yes, you could replace grep+cut with sed or awk or perl or...). The -n1 flag for xargs prevents stopping everything if updating one package fails (thanks @andsens). Note: there are infinite potential variations for this. I'm trying to keep this answer short and simple, but please do suggest variations in the comments!"} +{"question_id": 72899, "score": 2856, "creation_date": 1221575267, "tags": ["python", "list", "sorting", "dictionary", "data-structures"], "instruction": "How can I sort a list of dictionaries by a value of the dictionary in Python?\n\nHow do I sort a list of dictionaries by a specific key's value? Given: [{'name': 'Homer', 'age': 39}, {'name': 'Bart', 'age': 10}] When sorted by name, it should become: [{'name': 'Bart', 'age': 10}, {'name': 'Homer', 'age': 39}]", "output": "The sorted() function takes a key= parameter newlist = sorted(list_to_be_sorted, key=lambda d: d['name']) Alternatively, you can use operator.itemgetter instead of defining the function yourself from operator import itemgetter newlist = sorted(list_to_be_sorted, key=itemgetter('name')) For completeness, add reverse=True to sort in descending order newlist = sorted(list_to_be_sorted, key=itemgetter('name'), reverse=True)"} +{"question_id": 379906, "score": 2778, "creation_date": 1229651546, "tags": ["python", "parsing", "floating-point", "type-conversion", "integer"], "instruction": "How do I parse a string to a float or int?\n\nHow can I convert an str to a float? \"545.2222\" -> 545.2222 Or an str to a int? \"31\" -> 31 For the reverse, see Convert integer to string in Python and Converting a float to a string without rounding it. Please instead use How can I read inputs as numbers? to close duplicate questions where OP received a string from user input and immediately wants to convert it, or was hoping for input (in 3.x) to convert the type automatically.", "output": ">>> a = \"545.2222\" >>> float(a) 545.22220000000004 >>> int(float(a)) 545"} +{"question_id": 1602934, "score": 2675, "creation_date": 1256151909, "tags": ["python", "dictionary"], "instruction": "Check if a given key already exists in a dictionary\n\nI wanted to test if a key exists in a dictionary before updating the value for the key. I wrote the following code: if 'key1' in dict.keys(): print \"blah\" else: print \"boo\" I think this is not the best way to accomplish this task. 
Is there a better way to test for a key in the dictionary?", "output": "in tests for the existence of a key in a dict: d = {\"key1\": 10, \"key2\": 23} if \"key1\" in d: print(\"this will execute\") if \"nonexistent key\" in d: print(\"this will not\") Use dict.get() to provide a default value when the key does not exist: d = {} for i in range(100): key = i % 10 d[key] = d.get(key, 0) + 1 To provide a default value for every key, either use dict.setdefault() on each assignment: d = {} for i in range(100): d[i % 10] = d.setdefault(i % 10, 0) + 1 ...or better, use defaultdict from the collections module: from collections import defaultdict d = defaultdict(int) for i in range(100): d[i % 10] += 1"} +{"question_id": 610883, "score": 2534, "creation_date": 1236177959, "tags": ["python", "class", "object", "attributes", "attributeerror"], "instruction": "How can I check if an object has an attribute?\n\nHow do I check if an object has some attribute? For example: >>> a = SomeClass() >>> a.property Traceback (most recent call last): File \"\", line 1, in AttributeError: SomeClass instance has no attribute 'property' How do I tell if a has the attribute property before using it?", "output": "Try hasattr(): if hasattr(a, 'property'): a.property See zweiterlinde's answer, which offers good advice about asking forgiveness! It is a very Pythonic approach! The general practice in Python is that, if the property is likely to be there most of the time, simply call it and either let the exception propagate, or trap it with a try/except block. This will likely be faster than hasattr. If the property is likely to not be there most of the time, or you're not sure, using hasattr will probably be faster than repeatedly falling into an exception block."} +{"question_id": 12943819, "score": 2053, "creation_date": 1350509923, "tags": ["python", "json", "formatting", "pretty-print"], "instruction": "How to prettyprint a JSON file?\n\nHow do I pretty-print a JSON file in Python?", "output": "Use the indent= parameter of json.dump() or json.dumps() to specify how many spaces to indent by: >>> import json >>> your_json = '[\"foo\", {\"bar\": [\"baz\", null, 1.0, 2]}]' >>> parsed = json.loads(your_json) >>> print(json.dumps(parsed, indent=4)) [ \"foo\", { \"bar\": [ \"baz\", null, 1.0, 2 ] } ] To parse a file, use json.load(): with open('filename.txt', 'r') as handle: parsed = json.load(handle)"} +{"question_id": 354038, "score": 2022, "creation_date": 1228853022, "tags": ["python", "casting", "floating-point", "type-conversion", "integer"], "instruction": "How do I check if a string represents a number (float or int)?\n\nHow do I check if a string represents a numeric value in Python? def is_number(s): try: float(s) return True except ValueError: return False The above works, but it seems clunky. Editor's note: If what you are testing comes from user input, it is still a string even if it represents an int or a float. For converting the input, see How can I read inputs as numbers? For ensuring that the input represents an int or float (or other requirements) before proceeding, see Asking the user for input until they give a valid response", "output": "Which, not only is ugly and slow I'd dispute both. A regex or other string parsing method would be uglier and slower. I'm not sure that anything much could be faster than the above. It calls the function and returns. Try/except doesn't introduce much overhead because the most common exception is caught without an extensive search of stack frames. 
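If you want to measure that overhead yourself, here is a small timeit sketch (purely illustrative; it asserts nothing about the numbers you will get): import timeit def is_number(s): try: float(s) return True except ValueError: return False print(timeit.timeit(lambda: is_number('545.2222'), number=100000)) # no exception raised print(timeit.timeit(lambda: is_number('not a number'), number=100000)) # ValueError raised and caught Comparing the two timings on your own interpreter shows how much the raise-and-catch path actually costs relative to the straight-through path.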
The issue across programming languages is that any numeric conversion function has two kinds of results: A number, if the number is valid; A status code (e.g., via errno) or exception to show that no valid number could be parsed. C (as an example) hacks around this a number of ways. Python lays it out clearly and explicitly. I think your code for doing this is just fine. The only thing that could be cleaner is moving the return True into an else block, to be clear that it's not part of the code under test \u2013 not that there's much ambiguity. def is_number(s): try: float(s) except ValueError: # Failed return False else: # Succeeded return True"} +{"question_id": 7225900, "score": 1990, "creation_date": 1314589984, "tags": ["python", "pip", "virtualenv", "requirements.txt"], "instruction": "How can I install packages using pip according to the requirements.txt file from a local directory?\n\nHere is the problem: I have a requirements.txt file that looks like: BeautifulSoup==3.2.0 Django==1.3 Fabric==1.2.0 Jinja2==2.5.5 PyYAML==3.09 Pygments==1.4 SQLAlchemy==0.7.1 South==0.7.3 amqplib==0.6.1 anyjson==0.3 ... I have a local archive directory containing all the packages + others. I have created a new virtualenv with bin/virtualenv testing Upon activating it, I tried to install the packages according to requirements.txt from the local archive directory. source bin/activate pip install -r /path/to/requirements.txt -f file:///path/to/archive/ I got some output that seems to indicate that the installation is fine: Downloading/unpacking Fabric==1.2.0 (from -r ../testing/requirements.txt (line 3)) Running setup.py egg_info for package Fabric warning: no previously-included files matching '*' found under directory 'docs/_build' warning: no files found matching 'fabfile.py' Downloading/unpacking South==0.7.3 (from -r ../testing/requirements.txt (line 8)) Running setup.py egg_info for package South .... But a later check revealed that none of the packages are installed properly. I cannot import the packages, and none are found in the site-packages directory of my virtualenv. So what went wrong?", "output": "This works for me: pip install -r requirements.txt --no-index --find-links file:///tmp/packages --no-index - Ignore package index (only look at --find-links URLs instead). -f, --find-links - If is a URL or a path to an HTML file, then parse for links to archives. If is a local path or a file:// URL that's a directory, then look for archives in the directory listing."} +{"question_id": 14132789, "score": 1899, "creation_date": 1357185040, "tags": ["python", "python-import", "relative-path", "python-packaging", "relative-import"], "instruction": "Relative imports for the billionth time\n\nI've been here: PEP 328 \u2013 Imports: Multi-Line and Absolute/Relative Modules, Packages Python packages: relative imports Python relative import example code does not work Relative imports in Python 2.5 Relative imports in Python Python: Disabling relative import and plenty of URLs that I did not copy, some on SO, some on other sites, back when I thought I'd have the solution quickly. The forever-recurring question is this: how do I solve this \"Attempted relative import in non-package\" message? ImportError: attempted relative import with no known parent package I built an exact replica of the package on pep-0328: package/ __init__.py subpackage1/ __init__.py moduleX.py moduleY.py subpackage2/ __init__.py moduleZ.py moduleA.py The imports were done from the console. 
I did make functions named spam and eggs in their appropriate modules. Naturally, it didn't work. The answer is apparently in the 4th URL I listed, but it's all alumni to me. There was this response on one of the URLs I visited: Relative imports use a module's name attribute to determine that module's position in the package hierarchy. If the module's name does not contain any package information (e.g. it is set to 'main') then relative imports are resolved as if the module were a top level module, regardless of where the module is actually located on the file system. The above response looks promising, but it's all hieroglyphs to me. How do I make Python not return to me \"Attempted relative import in non-package\"? It has an answer that involves -m, supposedly. Why does Python give that error message? What does by \"non-package\" mean? Why and how do you define a 'package'?", "output": "Script vs. Module Here's an explanation. The short version is that there is a big difference between directly running a Python file, and importing that file from somewhere else. Just knowing what directory a file is in does not determine what package Python thinks it is in. That depends, additionally, on how you load the file into Python (by running or by importing). There are two ways to load a Python file: as the top-level script, or as a module. A file is loaded as the top-level script if you execute it directly, for instance by typing python myfile.py on the command line. It is loaded as a module when an import statement is encountered inside some other file. There can only be one top-level script at a time; the top-level script is the Python file you ran to start things off. Naming When a file is loaded, it is given a name (which is stored in its __name__ attribute). If it was loaded as the top-level script, its name is __main__. If it was loaded as a module, its name is the filename, preceded by the names of any packages/subpackages of which it is a part, separated by dots. So for instance in your example: package/ __init__.py subpackage1/ __init__.py moduleX.py moduleA.py if you imported moduleX (note: imported, not directly executed), its name would be package.subpackage1.moduleX. If you imported moduleA, its name would be package.moduleA. However, if you directly run moduleX from the command line, its name will instead be __main__, and if you directly run moduleA from the command line, its name will be __main__. When a module is run as the top-level script, it loses its normal name and its name is instead __main__. Accessing a module NOT through its containing package There is an additional wrinkle: the module's name depends on whether it was imported \"directly\" from the directory it is in or imported via a package. This only makes a difference if you run Python in a directory, and try to import a file in that same directory (or a subdirectory of it). For instance, if you start the Python interpreter in the directory package/subpackage1 and then do import moduleX, the name of moduleX will just be moduleX, and not package.subpackage1.moduleX. This is because Python adds the current directory to its search path when the interpreter is entered interactively; if it finds the to-be-imported module in the current directory, it will not know that that directory is part of a package, and the package information will not become part of the module's name. A special case is if you run the interpreter interactively (e.g., just type python and start entering Python code on the fly). 
In this case, the name of that interactive session is __main__. Now here is the crucial thing for your error message: if a module's name has no dots, it is not considered to be part of a package. It doesn't matter where the file actually is on disk. All that matters is what its name is, and its name depends on how you loaded it. Now look at the quote you included in your question: Relative imports use a module's name attribute to determine that module's position in the package hierarchy. If the module's name does not contain any package information (e.g. it is set to 'main') then relative imports are resolved as if the module were a top-level module, regardless of where the module is actually located on the file system. Relative imports... Relative imports use the module's name to determine where it is in a package. When you use a relative import like from .. import foo, the dots indicate to step up some number of levels in the package hierarchy. For instance, if your current module's name is package.subpackage1.moduleX, then ..moduleA would mean package.moduleA. For a from .. import to work, the module's name must have at least as many dots as there are in the import statement. ... are only relative in a package However, if your module's name is __main__, it is not considered to be in a package. Its name has no dots, and therefore you cannot use from .. import statements inside it. If you try to do so, you will get the \"relative-import in non-package\" error. Scripts can't import relative What you probably did is you tried to run moduleX or the like from the command line. When you did this, its name was set to __main__, which means that relative imports within it will fail, because its name does not reveal that it is in a package. Note that this will also happen if you run Python from the same directory where a module is, and then try to import that module, because, as described above, Python will find the module in the current directory \"too early\" without realizing it is part of a package. Also remember that when you run the interactive interpreter, the \"name\" of that interactive session is always __main__. Thus you cannot do relative imports directly from an interactive session. Relative imports are only for use within module files. Two solutions: If you really do want to run moduleX directly, but you still want it to be considered part of a package, you can do python -m package.subpackage1.moduleX. The -m tells Python to load it as a module, not as the top-level script. Or perhaps you don't actually want to run moduleX, you just want to run some other script, say myfile.py, that uses functions inside moduleX. If that is the case, put myfile.py somewhere else \u2013 not inside the package directory \u2013 and run it. If inside myfile.py you do things like from package.moduleA import spam, it will work fine. Notes For either of these solutions, the package directory (package in your example) must be accessible from the Python module search path (sys.path). If it is not, you will not be able to use anything in the package reliably at all. Since Python 2.6, the module's \"name\" for package-resolution purposes is determined not just by its __name__ attributes but also by the __package__ attribute. That's why I'm avoiding using the explicit symbol __name__ to refer to the module's \"name\". Since Python 2.6 a module's \"name\" is effectively __package__ + '.' 
+ __name__, or just __name__ if __package__ is None.)"} +{"question_id": 16981921, "score": 1885, "creation_date": 1370600810, "tags": ["python", "python-3.x", "python-import"], "instruction": "Relative imports in Python 3\n\nI want to import a function from another file in the same directory. Usually, one of the following works: from .mymodule import myfunction from mymodule import myfunction ...but the other one gives me one of these errors: ImportError: attempted relative import with no known parent package ModuleNotFoundError: No module named 'mymodule' SystemError: Parent module '' not loaded, cannot perform relative import Why is this?", "output": "unfortunately, this module needs to be inside the package, and it also needs to be runnable as a script, sometimes. Any idea how I could achieve that? It's quite common to have a layout like this... main.py mypackage/ __init__.py mymodule.py myothermodule.py ...with a mymodule.py like this... #!/usr/bin/env python3 # Exported function def as_int(a): return int(a) # Test function for module def _test(): assert as_int('1') == 1 if __name__ == '__main__': _test() ...a myothermodule.py like this... #!/usr/bin/env python3 from .mymodule import as_int # Exported function def add(a, b): return as_int(a) + as_int(b) # Test function for module def _test(): assert add('1', '1') == 2 if __name__ == '__main__': _test() ...and a main.py like this... #!/usr/bin/env python3 from mypackage.myothermodule import add def main(): print(add('1', '1')) if __name__ == '__main__': main() ...which works fine when you run main.py or mypackage/mymodule.py, but fails with mypackage/myothermodule.py, due to the relative import... from .mymodule import as_int The way you're supposed to run it is... python3 -m mypackage.myothermodule ...but it's somewhat verbose, and doesn't mix well with a shebang line like #!/usr/bin/env python3. The simplest fix for this case, assuming the name mymodule is globally unique, would be to avoid using relative imports, and just use... from mymodule import as_int ...although, if it's not unique, or your package structure is more complex, you'll need to include the directory containing your package directory in PYTHONPATH, and do it like this... from mypackage.mymodule import as_int ...or if you want it to work \"out of the box\", you can frob the PYTHONPATH in code first with this... import sys import os SCRIPT_DIR = os.path.dirname(os.path.realpath(__file__)) sys.path.append(os.path.dirname(SCRIPT_DIR)) from mypackage.mymodule import as_int It's kind of a pain, but there's a clue as to why in an email written by a certain Guido van Rossum... I'm -1 on this and on any other proposed twiddlings of the __main__ machinery. The only use case seems to be running scripts that happen to be living inside a module's directory, which I've always seen as an antipattern. To make me change my mind you'd have to convince me that it isn't. Whether running scripts inside a package is an antipattern or not is subjective, but personally I find it really useful in a package I have which contains some custom wxPython widgets, so I can run the script for any of the source files to display a wx.Frame containing only that widget for testing purposes."} +{"question_id": 67631, "score": 1872, "creation_date": 1221517855, "tags": ["python", "python-import", "python-module"], "instruction": "How can I import a module dynamically given the full path?\n\nHow do I load a Python module given its full path? 
Note that the file can be anywhere in the filesystem where the user has access rights. See also: How to import a module given its name as string?", "output": "Let's have MyClass in module.name module defined at /path/to/file.py. Below is how we import MyClass from this module For Python 3.5+ use (docs): import importlib.util import sys spec = importlib.util.spec_from_file_location(\"module.name\", \"/path/to/file.py\") foo = importlib.util.module_from_spec(spec) sys.modules[\"module.name\"] = foo spec.loader.exec_module(foo) foo.MyClass() For Python 3.3 and 3.4 use: from importlib.machinery import SourceFileLoader foo = SourceFileLoader(\"module.name\", \"/path/to/file.py\").load_module() foo.MyClass() (Although this has been deprecated in Python 3.4.) For Python 2 use: import imp foo = imp.load_source('module.name', '/path/to/file.py') foo.MyClass() There are equivalent convenience functions for compiled Python files and DLLs. See also http://bugs.python.org/issue21436."} +{"question_id": 5574702, "score": 1817, "creation_date": 1302137950, "tags": ["python", "printing", "stderr"], "instruction": "How do I print to stderr in Python?\n\nThere are several ways to write to stderr: print >> sys.stderr, \"spam\" # Python 2 only. sys.stderr.write(\"spam\\n\") os.write(2, b\"spam\\n\") from __future__ import print_function print(\"spam\", file=sys.stderr) What are the differences between these methods? Which method should be preferred?", "output": "I found this to be the only one short, flexible, portable and readable: import sys def eprint(*args, **kwargs): print(*args, file=sys.stderr, **kwargs) The optional function eprint saves some repetition. It can be used in the same way as the standard print function: >>> print(\"Test\") Test >>> eprint(\"Test\") Test >>> eprint(\"foo\", \"bar\", \"baz\", sep=\"---\") foo---bar---baz"} +{"question_id": 6760685, "score": 1809, "creation_date": 1311158877, "tags": ["python", "singleton", "decorator", "base-class", "metaclass"], "instruction": "What is the best way of implementing a singleton in Python?\n\nI have multiple classes which would become singletons (my use case is for a logger, but this is not important). I do not wish to clutter several classes with added gumph when I can simply inherit or decorate. Best methods: Method 1: A decorator def singleton(class_): instances = {} def getinstance(*args, **kwargs): if class_ not in instances: instances[class_] = class_(*args, **kwargs) return instances[class_] return getinstance @singleton class MyClass(BaseClass): pass Pros Decorators are additive in a way that is often more intuitive than multiple inheritance. Cons While objects created using MyClass() would be true singleton objects, MyClass itself is a function, not a class, so you cannot call class methods from it. Also for x = MyClass(); y = MyClass(); t = type(n)(); then x == y but x != t && y != t Method 2: A base class class Singleton(object): _instance = None def __new__(class_, *args, **kwargs): if not isinstance(class_._instance, class_): class_._instance = object.__new__(class_, *args, **kwargs) return class_._instance class MyClass(Singleton, BaseClass): pass Pros It's a true class Cons Multiple inheritance - eugh! __new__ could be overwritten during inheritance from a second base class? One has to think more than is necessary. 
Method 3: A metaclass class Singleton(type): _instances = {} def __call__(cls, *args, **kwargs): if cls not in cls._instances: cls._instances[cls] = super(Singleton, cls).__call__(*args, **kwargs) return cls._instances[cls] #Python2 class MyClass(BaseClass): __metaclass__ = Singleton #Python3 class MyClass(BaseClass, metaclass=Singleton): pass Pros It's a true class Auto-magically covers inheritance Uses __metaclass__ for its proper purpose (and made me aware of it) Cons Are there any? Method 4: decorator returning a class with the same name def singleton(class_): class class_w(class_): _instance = None def __new__(class_, *args, **kwargs): if class_w._instance is None: class_w._instance = super(class_w, class_).__new__(class_, *args, **kwargs) class_w._instance._sealed = False return class_w._instance def __init__(self, *args, **kwargs): if self._sealed: return super(class_w, self).__init__(*args, **kwargs) self._sealed = True class_w.__name__ = class_.__name__ return class_w @singleton class MyClass(BaseClass): pass Pros It's a true class Auto-magically covers inheritance Cons Is there not an overhead for creating each new class? Here we are creating two classes for each class we wish to make a singleton. While this is fine in my case, I worry that this might not scale. Of course there is a matter of debate as to whether it aught to be too easy to scale this pattern... What is the point of the _sealed attribute Can't call methods of the same name on base classes using super() because they will recurse. This means you can't customize __new__ and can't subclass a class that needs you to call up to __init__. Method 5: a module a module file singleton.py Pros Simple is better than complex Cons Not lazily instantiated This question is not for the discussion of whether or not the singleton design pattern is desirable, is an anti-pattern, or for any religious wars, but to discuss how this pattern is best implemented in Python in such a way that is most Pythonic. In this instance I define 'most Pythonic' to mean that it follows the 'principle of least astonishment'.", "output": "You just need a decorator, different depending on the python version. Notice how foo gets printed only once. Python 3.2+ Implementation: from functools import lru_cache @lru_cache(maxsize=None) class CustomClass(object): def __init__(self, arg): print(f\"CustomClass initialised with {arg}\") self.arg = arg Usage c1 = CustomClass(\"foo\") c2 = CustomClass(\"foo\") c3 = CustomClass(\"bar\") print(c1 == c2) print(c1 == c3) Output >>> CustomClass initialised with foo >>> CustomClass initialised with bar >>> True >>> False Python 3.9+ Implementation: from functools import cache @cache class CustomClass(object): ..."} +{"question_id": 678236, "score": 1789, "creation_date": 1237912863, "tags": ["python", "string", "path"], "instruction": "How do I get the filename without the extension from a path in Python?\n\nHow do I get the filename without the extension from a path in Python? 
\"/path/to/some/file.txt\" \u2192 \"file\"", "output": "Python 3.4+ Use pathlib.Path.stem >>> from pathlib import Path >>> Path(\"/path/to/file.txt\").stem 'file' >>> Path(\"/path/to/file.tar.gz\").stem 'file.tar' Python < 3.4 Use os.path.splitext in combination with os.path.basename: >>> os.path.splitext(os.path.basename(\"/path/to/file.txt\"))[0] 'file' >>> os.path.splitext(os.path.basename(\"/path/to/file.tar.gz\"))[0] 'file.tar'"} +{"question_id": 582336, "score": 1758, "creation_date": 1235491286, "tags": ["python", "performance", "optimization", "time-complexity", "profiling"], "instruction": "How do I profile a Python script?\n\nProject Euler and other coding contests often have a maximum time to run or people boast of how fast their particular solution runs. With Python, sometimes the approaches are somewhat kludgey - i.e., adding timing code to __main__. What is a good way to profile how long a Python program takes to run?", "output": "Python includes a profiler called cProfile. It not only gives the total running time, but also times each function separately, and tells you how many times each function was called, making it easy to determine where you should make optimizations. You can call it from within your code, or from the interpreter, like this: import cProfile cProfile.run('foo()') Even more usefully, you can invoke cProfile when running a script: python -m cProfile myscript.py Or when running a module: python -m cProfile -m mymodule To make it even easier, I made a little batch file called 'profile.bat': python -m cProfile %1 So all I have to do is run: profile euler048.py And I get this: 1007 function calls in 0.061 CPU seconds Ordered by: standard name ncalls tottime percall cumtime percall filename:lineno(function) 1 0.000 0.000 0.061 0.061 :1() 1000 0.051 0.000 0.051 0.000 euler048.py:2() 1 0.005 0.005 0.061 0.061 euler048.py:2() 1 0.000 0.000 0.061 0.061 {execfile} 1 0.002 0.002 0.053 0.053 {map} 1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler objects} 1 0.000 0.000 0.000 0.000 {range} 1 0.003 0.003 0.003 0.003 {sum} For more information, check out this tutorial from PyCon 2013 titled Python Profiling Also via YouTube."} +{"question_id": 4700614, "score": 1723, "creation_date": 1295107803, "tags": ["python", "matplotlib", "seaborn", "legend"], "instruction": "How to put the legend outside the plot\n\nI have a series of 20 plots (not subplots) to be made in a single figure. I want the legend to be outside of the box. At the same time, I do not want to change the axes, as the size of the figure gets reduced. I want to keep the legend box outside the plot area (I want the legend to be outside at the right side of the plot area). Is there a way to reduce the font size of the text inside the legend box, so that the size of the legend box will be small?", "output": "You can make the legend text smaller by specifying set_size of FontProperties. Resources: Legend guide matplotlib.legend matplotlib.pyplot.legend matplotlib.font_manager set_size(self, size) Valid font size are xx-small, x-small, small, medium, large, x-large, xx-large, larger, smaller, and None. 
Real Python: Python Plotting With Matplotlib (Guide) import matplotlib.pyplot as plt from matplotlib.font_manager import FontProperties fontP = FontProperties() fontP.set_size('xx-small') p1, = plt.plot([1, 2, 3], label='Line 1') p2, = plt.plot([3, 2, 1], label='Line 2') plt.legend(handles=[p1, p2], title='title', bbox_to_anchor=(1.05, 1), loc='upper left', prop=fontP) fontsize='xx-small' also works, without importing FontProperties. plt.legend(handles=[p1, p2], title='title', bbox_to_anchor=(1.05, 1), loc='upper left', fontsize='xx-small')"} +{"question_id": 60208, "score": 1716, "creation_date": 1221266190, "tags": ["python", "switch-statement"], "instruction": "Replacements for switch statement in Python?\n\nI want to write a function in Python that returns different fixed values based on the value of an input index. In other languages I would use a switch or case statement, but Python does not appear to have a switch statement. What are the recommended Python solutions in this scenario?", "output": "Python 3.10 (2021) introduced the match-case statement, which provides a first-class implementation of a \"switch\" for Python. For example: def f(x): match x: case 'a': return 1 case 'b': return 2 case _: return 0 # 0 is the default case if x is not found The match-case statement is considerably more powerful than this simple example. Documentation: match statements (under the \"More Control Flow Tools\" page) The match statement (under \"Compound statements\" page) PEP 634 \u2013 Structural Pattern Matching: Specification PEP 636 \u2013 Structural Pattern Matching: Tutorial If you need to support Python \u2264 3.9, use a dictionary instead: def f(x): return { 'a': 1, 'b': 2, }.get(x, 0) # default case"} +{"question_id": 1471994, "score": 1656, "creation_date": 1253801812, "tags": ["python", "pypi", "setup.py", "python-packaging"], "instruction": "What is setup.py?\n\nWhat is setup.py and how can it be configured or used?", "output": "setup.py is a Python file, the presence of which is an indication that the module/package you are about to install has likely been packaged and distributed with Distutils, which is the standard for distributing Python Modules. This allows you to easily install Python packages. Often it's enough to write: $ pip install . pip will use setup.py to install your module. Avoid calling setup.py directly."} +{"question_id": 1747817, "score": 1612, "creation_date": 1258452473, "tags": ["python", "dictionary", "list-comprehension", "dictionary-comprehension"], "instruction": "Create a dictionary with comprehension\n\nCan I use list comprehension syntax to create a dictionary? For example, by iterating over pairs of keys and values: d = {... for k, v in zip(keys, values)}", "output": "Use a dict comprehension (Python 2.7 and later): {key: value for key, value in zip(keys, values)} Alternatively, use the dict constructor: pairs = [('a', 1), ('b', 2)] dict(pairs) # \u2192 {'a': 1, 'b': 2} dict((k, v + 10) for k, v in pairs) # \u2192 {'a': 11, 'b': 12} Given separate lists of keys and values, use the dict constructor with zip: keys = ['a', 'b'] values = [1, 2] dict(zip(keys, values)) # \u2192 {'a': 1, 'b': 2}"} +{"question_id": 961632, "score": 1595, "creation_date": 1244370158, "tags": ["python", "string", "integer"], "instruction": "Convert integer to string in Python\n\nHow do I convert an integer to a string? 42 \u27f6 \"42\" For the reverse, see How do I parse a string to a float or int?. 
Floats can be handled similarly, but handling the decimal points can be tricky because floating-point values are not precise. See Converting a float to a string without rounding it for more specific advice.", "output": ">>> str(42) '42' >>> int('42') 42 Links to the documentation: int() str() str(x) converts any object x to a string by calling x.__str__(), or repr(x) if x doesn't have a __str__() method."} +{"question_id": 8858008, "score": 1533, "creation_date": 1326493078, "tags": ["python", "file", "file-handling", "python-os"], "instruction": "How do I move a file in Python?\n\nHow can I do the equivalent of mv in Python? mv \"path/to/current/file.foo\" \"path/to/new/destination/for/file.foo\"", "output": "os.rename(), os.replace(), or shutil.move() All employ the same syntax: import os import shutil os.rename(\"path/to/current/file.foo\", \"path/to/new/destination/for/file.foo\") os.replace(\"path/to/current/file.foo\", \"path/to/new/destination/for/file.foo\") shutil.move(\"path/to/current/file.foo\", \"path/to/new/destination/for/file.foo\") The filename (\"file.foo\") must be included in both the source and destination arguments. If it differs between the two, the file will be renamed as well as moved. The directory within which the new file is being created must already exist. On Windows, a file with that name must not exist or an exception will be raised, but os.replace() will silently replace a file even in that occurrence. shutil.move simply calls os.rename in most cases. However, if the destination is on a different disk than the source, it will instead copy and then delete the source file."} +{"question_id": 9942594, "score": 1518, "creation_date": 1333109201, "tags": ["python", "unicode", "beautifulsoup", "python-2.x", "python-unicode"], "instruction": "UnicodeEncodeError: 'ascii' codec can't encode character u'\\xa0' in position 20: ordinal not in range(128)\n\nI'm having problems dealing with unicode characters from text fetched from different web pages (on different sites). I am using BeautifulSoup. The problem is that the error is not always reproducible; it sometimes works with some pages, and sometimes, it barfs by throwing a UnicodeEncodeError. I have tried just about everything I can think of, and yet I have not found anything that works consistently without throwing some kind of Unicode-related error. One of the sections of code that is causing problems is shown below: agent_telno = agent.find('div', 'agent_contact_number') agent_telno = '' if agent_telno is None else agent_telno.contents[0] p.agent_info = str(agent_contact + ' ' + agent_telno).strip() Here is a stack trace produced on SOME strings when the snippet above is run: Traceback (most recent call last): File \"foobar.py\", line 792, in p.agent_info = str(agent_contact + ' ' + agent_telno).strip() UnicodeEncodeError: 'ascii' codec can't encode character u'\\xa0' in position 20: ordinal not in range(128) I suspect that this is because some pages (or more specifically, pages from some of the sites) may be encoded, whilst others may be unencoded. All the sites are based in the UK and provide data meant for UK consumption - so there are no issues relating to internalization or dealing with text written in anything other than English. Does anyone have any ideas as to how to solve this so that I can CONSISTENTLY fix this problem?", "output": "Read the Python Unicode HOWTO. This error is the very first example. Do not use str() to convert from unicode to encoded text / bytes. 
Instead, use .encode() to encode the string: p.agent_info = u' '.join((agent_contact, agent_telno)).encode('utf-8').strip() or work entirely in unicode."} +{"question_id": 3768895, "score": 1510, "creation_date": 1285156339, "tags": ["python", "json", "serialization"], "instruction": "How to make a class JSON serializable\n\nHow to make a Python class serializable? class FileItem: def __init__(self, fname): self.fname = fname Attempt to serialize to JSON: >>> import json >>> x = FileItem('/foo/bar') >>> json.dumps(x) TypeError: Object of type 'FileItem' is not JSON serializable", "output": "Do you have an idea about the expected output? For example, will this do? >>> f = FileItem(\"/foo/bar\") >>> magic(f) '{\"fname\": \"/foo/bar\"}' In that case you can merely call json.dumps(f.__dict__). If you want more customized output then you will have to subclass JSONEncoder and implement your own custom serialization. For a trivial example, see below. >>> from json import JSONEncoder >>> class MyEncoder(JSONEncoder): def default(self, o): return o.__dict__ >>> MyEncoder().encode(f) '{\"fname\": \"/foo/bar\"}' Then you pass this class into the json.dumps() method as cls kwarg: json.dumps(cls=MyEncoder) If you also want to decode then you'll have to supply a custom object_hook to the JSONDecoder class. For example: >>> def from_json(json_object): if 'fname' in json_object: return FileItem(json_object['fname']) >>> f = JSONDecoder(object_hook = from_json).decode('{\"fname\": \"/foo/bar\"}') >>> f <__main__.FileItem object at 0x9337fac> >>>"} +{"question_id": 31684375, "score": 1508, "creation_date": 1438108143, "tags": ["python", "dependencies", "python-import", "requirements.txt"], "instruction": "Automatically create file 'requirements.txt'\n\nSometimes I download the Python source code from GitHub and don't know how to install all the dependencies. If there isn't any requirements.txt file I have to create it by hand. Given the Python source code directory, is it possible to create requirements.txt automatically from the import section?", "output": "Use Pipenv or other tools is recommended for improving your development flow. pip3 freeze > requirements.txt # Python3 pip freeze > requirements.txt # Python2 If you do not use a virtual environment, pigar will be a good choice for you."} +{"question_id": 1952464, "score": 1469, "creation_date": 1261570435, "tags": ["python", "iterable"], "instruction": "Python: how to determine if an object is iterable?\n\nIs there a method like isiterable? The only solution I have found so far is to call: hasattr(myObj, '__iter__') but I am not sure how foolproof this is.", "output": "Checking for __iter__ works on sequence types, but it would fail on e.g. strings in Python 2. I would like to know the right answer too, until then, here is one possibility (which would work on strings, too): try: some_object_iterator = iter(some_object) except TypeError as te: print(some_object, 'is not iterable') The iter built-in checks for the __iter__ method or in the case of strings the __getitem__ method. Another general pythonic approach is to assume an iterable, then fail gracefully if it does not work on the given object. 
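For convenience, the iter()-based test above can be wrapped in a small reusable helper. This is only an illustrative sketch — the helper name is mine, not part of the original answer:

```python
def is_iterable(obj):
    """Return True if iter() accepts the object, i.e. the object is iterable."""
    try:
        iter(obj)
    except TypeError:
        return False
    return True


print(is_iterable([1, 2, 3]))  # True
print(is_iterable('abc'))      # True  -- strings are iterable too
print(is_iterable(42))         # False
```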
The Python glossary: Pythonic programming style that determines an object's type by inspection of its method or attribute signature rather than by explicit relationship to some type object (\"If it looks like a duck and quacks like a duck, it must be a duck.\") By emphasizing interfaces rather than specific types, well-designed code improves its flexibility by allowing polymorphic substitution. Duck-typing avoids tests using type() or isinstance(). Instead, it typically employs the EAFP (Easier to Ask Forgiveness than Permission) style of programming. ... try: _ = (e for e in my_object) except TypeError: print(my_object, 'is not iterable') The collections module provides some abstract base classes, which allow to ask classes or instances if they provide particular functionality, for example: from collections.abc import Iterable if isinstance(e, Iterable): # e is iterable However, this does not check for classes that are iterable through __getitem__."} +{"question_id": 17330160, "score": 1427, "creation_date": 1372279635, "tags": ["python", "properties", "decorator", "python-decorators", "python-internals"], "instruction": "How does the @property decorator work in Python?\n\nI would like to understand how the built-in function property works. What confuses me is that property can also be used as a decorator, but it only takes arguments when used as a built-in function and not when used as a decorator. This example is from the documentation: class C: def __init__(self): self._x = None def getx(self): return self._x def setx(self, value): self._x = value def delx(self): del self._x x = property(getx, setx, delx, \"I'm the 'x' property.\") property's arguments are getx, setx, delx and a doc string. In the code below property is used as a decorator. The object of it is the x function, but in the code above there is no place for an object function in the arguments. class C: def __init__(self): self._x = None @property def x(self): \"\"\"I'm the 'x' property.\"\"\" return self._x @x.setter def x(self, value): self._x = value @x.deleter def x(self): del self._x How are the x.setter and x.deleter decorators created in this case?", "output": "The property() function returns a special descriptor object: >>> property() It is this object that has extra methods: >>> property().getter >>> property().setter >>> property().deleter These act as decorators too. They return a new property object: >>> property().getter(None) that is a copy of the old object, but with one of the functions replaced. Remember, that the @decorator syntax is just syntactic sugar; the syntax: @property def foo(self): return self._foo really means the same thing as def foo(self): return self._foo foo = property(foo) so foo the function is replaced by property(foo), which we saw above is a special object. Then when you use @foo.setter(), what you are doing is call that property().setter method I showed you above, which returns a new copy of the property, but this time with the setter function replaced with the decorated method. The following sequence also creates a full-on property, by using those decorator methods. First we create some functions: >>> def getter(self): print('Get!') ... >>> def setter(self, value): print('Set to {!r}!'.format(value)) ... >>> def deleter(self): print('Delete!') ... 
Then, we create a property object with only a getter: >>> prop = property(getter) >>> prop.fget is getter True >>> prop.fset is None True >>> prop.fdel is None True Next we use the .setter() method to add a setter: >>> prop = prop.setter(setter) >>> prop.fget is getter True >>> prop.fset is setter True >>> prop.fdel is None True Last we add a deleter with the .deleter() method: >>> prop = prop.deleter(deleter) >>> prop.fget is getter True >>> prop.fset is setter True >>> prop.fdel is deleter True Last but not least, the property object acts as a descriptor object, so it has .__get__(), .__set__() and .__delete__() methods to hook into instance attribute getting, setting and deleting: >>> class Foo: pass ... >>> prop.__get__(Foo(), Foo) Get! >>> prop.__set__(Foo(), 'bar') Set to 'bar'! >>> prop.__delete__(Foo()) Delete! The Descriptor Howto includes a pure Python sample implementation of the property() type: class Property: \"Emulate PyProperty_Type() in Objects/descrobject.c\" def __init__(self, fget=None, fset=None, fdel=None, doc=None): self.fget = fget self.fset = fset self.fdel = fdel if doc is None and fget is not None: doc = fget.__doc__ self.__doc__ = doc def __get__(self, obj, objtype=None): if obj is None: return self if self.fget is None: raise AttributeError(\"unreadable attribute\") return self.fget(obj) def __set__(self, obj, value): if self.fset is None: raise AttributeError(\"can't set attribute\") self.fset(obj, value) def __delete__(self, obj): if self.fdel is None: raise AttributeError(\"can't delete attribute\") self.fdel(obj) def getter(self, fget): return type(self)(fget, self.fset, self.fdel, self.__doc__) def setter(self, fset): return type(self)(self.fget, fset, self.fdel, self.__doc__) def deleter(self, fdel): return type(self)(self.fget, self.fset, fdel, self.__doc__)"} +{"question_id": 8369219, "score": 1411, "creation_date": 1322930874, "tags": ["python", "string"], "instruction": "How can I read a text file into a string variable and strip newlines?\n\nI have a text file that looks like: ABC DEF How can I read the file into a single-line string without newlines, in this case creating a string 'ABCDEF'? For reading the file into a list of lines, but removing the trailing newline character from each line, see How to read a file without newlines?.", "output": "You could use: with open('data.txt', 'r') as file: data = file.read().replace('\\n', '') Or if the file content is guaranteed to be one line: with open('data.txt', 'r') as file: data = file.read().rstrip()"} +{"question_id": 11248073, "score": 1409, "creation_date": 1340897804, "tags": ["python", "pip", "virtualenv", "python-packaging"], "instruction": "How do I remove all packages installed by pip?\n\nHow do I uninstall all packages installed by pip from my currently activated virtual environment?", "output": "I've found this snippet as an alternative solution. It's a more graceful removal of libraries than remaking the virtualenv: pip freeze | xargs pip uninstall -y In case you have packages installed via VCS, you need to exclude those lines and remove the packages manually (elevated from the comments below): pip freeze --exclude-editable | xargs pip uninstall -y If you have packages installed directly from github/gitlab, those will have @. Like: django @ git+https://github.com/django.git@ You can add cut -d \"@\" -f1 to get just the package name that is required to uninstall it. 
pip freeze | cut -d \"@\" -f1 | xargs pip uninstall -y"} +{"question_id": 2802726, "score": 1405, "creation_date": 1273496284, "tags": ["python", "if-statement", "syntax", "conditional-operator"], "instruction": "Putting a simple if-then-else statement on one line\n\nHow do I write an if-then-else statement in Python so that it fits on one line? For example, I want a one line version of: if count == N: count = 0 else: count = N + 1 In Objective-C, I would write this as: count = count == N ? 0 : count + 1;", "output": "That's more specifically a ternary operator expression than an if-then, here's the python syntax value_when_true if condition else value_when_false Better Example: (thanks Mr. Burns) 'Yes' if fruit == 'Apple' else 'No' Now with assignment and contrast with if syntax fruit = 'Apple' isApple = True if fruit == 'Apple' else False vs fruit = 'Apple' isApple = False if fruit == 'Apple' : isApple = True"} +{"question_id": 33533148, "score": 1383, "creation_date": 1446675474, "tags": ["python", "pycharm", "python-typing"], "instruction": "How do I type hint a method with the type of the enclosing class?\n\nI have the following code in Python 3: class Position: def __init__(self, x: int, y: int): self.x = x self.y = y def __add__(self, other: Position) -> Position: return Position(self.x + other.x, self.y + other.y) But my editor (PyCharm) says that the reference Position can not be resolved (in the __add__ method). How should I specify that I expect the return type to be of type Position? I think this is actually a PyCharm issue. It actually uses the information in its warnings, and code completion. But correct me if I'm wrong, and need to use some other syntax.", "output": "I guess you got this exception: NameError: name 'Position' is not defined This is because in the original implementation of annotations, Position must be defined before you can use it in an annotation. Python 3.14+: It'll just work Python 3.14 has a new, lazily evaluated annotation implementation specified by PEP 749 and 649. Annotations will be compiled to special __annotate__ functions, executed when an object's __annotations__ dict is first accessed instead of at the point where the annotation itself occurs. Thus, annotating your function as def __add__(self, other: Position) -> Position: no longer requires Position to already exist: class Position: def __add__(self, other: Position) -> Position: ... Python 3.7+, deprecated: from __future__ import annotations from __future__ import annotations turns on an older solution to this problem, PEP 563, where all annotations are saved as strings instead of as __annotate__ functions or evaluated values. This was originally planned to become the default behavior, and almost became the default in 3.10 before being reverted. With the acceptance of PEP 749, this will be deprecated in Python 3.14, and it will be removed in a future Python version. Still, it works for now: from __future__ import annotations class Position: def __add__(self, other: Position) -> Position: ... Python 3+: Use a string This is the original workaround, specified in PEP 484. Write your annotations as string literals containing the text of whatever expression you originally wanted to use as an annotation: class Position: def __add__(self, other: 'Position') -> 'Position': ... from __future__ import annotations effectively automates doing this for all annotations in a file. 
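As a quick sanity check that a postponed (string) annotation really does resolve to the class once it exists, typing.get_type_hints() can evaluate it at runtime. A minimal sketch, assuming the Position class from the question:

```python
from __future__ import annotations  # annotations below are stored as strings

import typing


class Position:
    def __init__(self, x: int, y: int) -> None:
        self.x = x
        self.y = y

    def __add__(self, other: Position) -> Position:
        return Position(self.x + other.x, self.y + other.y)


# The raw annotation is kept as a plain string...
print(repr(Position.__add__.__annotations__['other']))  # 'Position'

# ...but get_type_hints() evaluates it against the module namespace,
# where Position is now defined, and returns the real class object.
hints = typing.get_type_hints(Position.__add__)
print(hints['other'] is Position)  # True
```

The same resolution happens if the annotation is written as the literal string 'Position' instead of relying on the __future__ import.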
typing.Self might sometimes be appropriate Introduced in Python 3.11, typing.Self refers to the type of the current instance, even if that type is a subclass of the class the annotation appears in. So if you have the following code: from typing import Self class Parent: def me(self) -> Self: return self class Child(Parent): pass x: Child = Child().me() then Child().me() is treated as returning Child, instead of Parent. This isn't always what you want. But when it is, it's pretty convenient. For Python versions < 3.11, if you have typing_extensions installed, you can use: from typing_extensions import Self Sources The relevant parts of PEP 484, PEP 563, and PEP 649, to spare you the trip: Forward references When a type hint contains names that have not been defined yet, that definition may be expressed as a string literal, to be resolved later. A situation where this occurs commonly is the definition of a container class, where the class being defined occurs in the signature of some of the methods. For example, the following code (the start of a simple binary tree implementation) does not work: class Tree: def __init__(self, left: Tree, right: Tree): self.left = left self.right = right To address this, we write: class Tree: def __init__(self, left: 'Tree', right: 'Tree'): self.left = left self.right = right The string literal should contain a valid Python expression (i.e., compile(lit, '', 'eval') should be a valid code object) and it should evaluate without errors once the module has been fully loaded. The local and global namespace in which it is evaluated should be the same namespaces in which default arguments to the same function would be evaluated. and PEP 563, deprecated: Implementation In Python 3.10, function and variable annotations will no longer be evaluated at definition time. Instead, a string form will be preserved in the respective __annotations__ dictionary. Static type checkers will see no difference in behavior, whereas tools using annotations at runtime will have to perform postponed evaluation. ... Enabling the future behavior in Python 3.7 The functionality described above can be enabled starting from Python 3.7 using the following special import: from __future__ import annotations and PEP 649: Overview This PEP adds a new dunder attribute to the objects that support annotations\u2013functions, classes, and modules. The new attribute is called __annotate__, and is a reference to a function which computes and returns that object\u2019s annotations dict. At compile time, if the definition of an object includes annotations, the Python compiler will write the expressions computing the annotations into its own function. When run, the function will return the annotations dict. The Python compiler then stores a reference to this function in __annotate__ on the object. Furthermore, __annotations__ is redefined to be a \u201cdata descriptor\u201d which calls this annotation function once and caches the result. Things that you may be tempted to do instead A. Define a dummy Position Before the class definition, place a dummy definition: class Position(object): pass class Position: def __init__(self, x: int, y: int): self.x = x self.y = y def __add__(self, other: Position) -> Position: return Position(self.x + other.x, self.y + other.y) This will get rid of the NameError and may even look OK: >>> Position.__add__.__annotations__ {'other': __main__.Position, 'return': __main__.Position} But is it? >>> for k, v in Position.__add__.__annotations__.items(): ... 
print(k, 'is Position:', v is Position) return is Position: False other is Position: False And mypy will report a pile of errors: main.py:4: error: Name \"Position\" already defined on line 1 [no-redef] main.py:11: error: Too many arguments for \"Position\" [call-arg] main.py:11: error: \"Position\" has no attribute \"x\" [attr-defined] main.py:11: error: \"Position\" has no attribute \"y\" [attr-defined] Found 4 errors in 1 file (checked 1 source file) B. Monkey-patch in order to add the annotations: You may want to try some Python metaprogramming magic and write a decorator to monkey-patch the class definition in order to add annotations: class Position: ... def __add__(self, other): return self.__class__(self.x + other.x, self.y + other.y) The decorator should be responsible for the equivalent of this: Position.__add__.__annotations__['return'] = Position Position.__add__.__annotations__['other'] = Position It'll work right at runtime: >>> for k, v in Position.__add__.__annotations__.items(): ... print(k, 'is Position:', v is Position) return is Position: True other is Position: True But static analyzers like mypy won't understand it, and static analysis is the biggest use case of type annotations."} +{"question_id": 2709821, "score": 1360, "creation_date": 1272226948, "tags": ["python", "class", "oop", "self"], "instruction": "What is the purpose of the `self` parameter? Why is it needed?\n\nConsider this example: class MyClass: def func(self, name): self.name = name I know that self refers to the specific instance of MyClass. But why must func explicitly include self as a parameter? Why do we need to use self in the method's code? Some other languages make this implicit, or use special syntax instead. For a language-agnostic consideration of the design decision, see What is the advantage of having this/self pointer mandatory explicit?. To close debugging questions where OP omitted a self parameter for a method and got a TypeError, use TypeError: method() takes 1 positional argument but 2 were given instead. If OP omitted self. in the body of the method and got a NameError, consider How can I call a function within a class?.", "output": "The reason you need to use self is because Python does not use special syntax to refer to instance attributes. Python decided to do methods in a way that makes the instance to which the method belongs be passed automatically but not received automatically, the first parameter of methods is the instance the method is called on. That makes methods entirely the same as functions and leaves the actual name to use up to you (although self is the convention, and people will generally frown at you when you use something else.) self is not special to the code, it's just another object. Python could have done something else to distinguish normal names from attributes -- special syntax like Ruby has, or requiring declarations like C++ and Java do, or perhaps something yet more different -- but it didn't. Python's all for making things explicit, making it obvious what's what, and although it doesn't do it entirely everywhere, it does do it for instance attributes. That's why assigning to an instance attribute needs to know what instance to assign to, and that's why it needs self."} +{"question_id": 715417, "score": 1358, "creation_date": 1238787860, "tags": ["python", "string", "boolean"], "instruction": "Converting from a string to boolean in Python\n\nHow do I convert a string into a boolean in Python? 
This attempt returns True: >>> bool(\"False\") True", "output": "Really, you just compare the string to whatever you expect to accept as representing true, so you can do this: s == 'True' Or to checks against a whole bunch of values: s.lower() in ['true', '1', 't', 'y', 'yes', 'yeah', 'yup', 'certainly', 'uh-huh'] Be cautious when using the following: >>> bool(\"foo\") True >>> bool(\"False\") # beware! True >>> bool(\"\") False Empty strings evaluate to False, but everything else evaluates to True. So this should not be used for any kind of parsing purposes."} +{"question_id": 534839, "score": 1342, "creation_date": 1234309881, "tags": ["python", "uuid", "guid", "uniqueidentifier"], "instruction": "How to create a GUID/UUID in Python\n\nHow do I create a GUID/UUID in Python that is platform independent? I hear there is a method using ActivePython on Windows but it's Windows only because it uses COM. Is there a method using plain Python?", "output": "The uuid module provides immutable UUID objects (the UUID class) and the functions uuid1(), uuid3(), uuid4(), uuid5() for generating version 1, 3, 4, and 5 UUIDs as specified in RFC 4122. If all you want is a unique ID, you should probably call uuid1() or uuid4(). Note that uuid1() may compromise privacy since it creates a UUID containing the computer\u2019s network address. uuid4() creates a random UUID. UUID versions 6, 7 and 8 - new Universally Unique Identifier (UUID) formats for use in modern applications and as database keys - (draft) rfc - are available from https://pypi.org/project/uuid6/ Docs: Python 2 Python 3 Examples (for both Python 2 and 3): >>> import uuid >>> # make a random UUID >>> uuid.uuid4() UUID('bd65600d-8669-4903-8a14-af88203add38') >>> # Convert a UUID to a string of hex digits in standard form >>> str(uuid.uuid4()) 'f50ec0b7-f960-400d-91f0-c42a6d44e3d0' >>> # Convert a UUID to a 32-character hexadecimal string >>> uuid.uuid4().hex '9fe2c4e93f654fdbb24c02b15259716c'"} +{"question_id": 1504717, "score": 1332, "creation_date": 1254411614, "tags": ["python", "string", "comparison", "identity", "equality"], "instruction": "Why does comparing strings using either '==' or 'is' sometimes produce a different result?\n\nTwo string variables are set to the same value. s1 == s2 always returns True, but s1 is s2 sometimes returns False. If I open my Python interpreter and do the same is comparison, it succeeds: >>> s1 = 'text' >>> s2 = 'text' >>> s1 is s2 True Why is this?", "output": "is is identity testing, and == is equality testing. What happens in your code would be emulated in the interpreter like this: >>> a = 'pub' >>> b = ''.join(['p', 'u', 'b']) >>> a == b True >>> a is b False So, no wonder they're not the same, right? In other words: a is b is the equivalent of id(a) == id(b)"} +{"question_id": 845058, "score": 1323, "creation_date": 1241950925, "tags": ["python", "text-files", "line-count"], "instruction": "How to get the line count of a large file cheaply in Python\n\nHow do I get a line count of a large file in the most memory- and time-efficient manner? def file_len(filename): with open(filename) as f: for i, _ in enumerate(f): pass return i + 1", "output": "You can't get any better than that. After all, any solution will have to read the entire file, figure out how many \\n you have, and return that result. Do you have a better way of doing that without reading the entire file? Not sure... 
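For reference, a memory-lean variant reads fixed-size binary chunks and counts newlines directly, along the lines of the buffered approach mentioned in the 2023 edit below. This is an illustrative sketch, not part of the original answer:

```python
def count_lines(path, chunk_size=1024 * 1024):
    """Count newline characters by reading the file in fixed-size binary chunks."""
    count = 0
    with open(path, 'rb') as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            count += chunk.count(b'\n')
    return count
```

Note that this counts b'\n' characters, so a final line without a trailing newline is not counted, whereas the enumerate() version in the question counts it.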
The best solution will always be I/O-bound, best you can do is make sure you don't use unnecessary memory, but it looks like you have that covered. [Edit May 2023] As commented in many other answers, in Python 3 there are better alternatives. The for loop is not the most efficient. For example, using mmap or buffers is more efficient."} +{"question_id": 739993, "score": 1322, "creation_date": 1239453258, "tags": ["python", "module", "pip"], "instruction": "How do I get a list of locally installed Python modules?\n\nHow do I get a list of Python modules installed on my computer?", "output": "Solution Do not use with pip > 10.0! My 50 cents for getting a pip freeze-like list from a Python script: import pip installed_packages = pip.get_installed_distributions() installed_packages_list = sorted([\"%s==%s\" % (i.key, i.version) for i in installed_packages]) print(installed_packages_list) As a (too long) one liner: sorted([\"%s==%s\" % (i.key, i.version) for i in pip.get_installed_distributions()]) Giving: ['behave==1.2.4', 'enum34==1.0', 'flask==0.10.1', 'itsdangerous==0.24', 'jinja2==2.7.2', 'jsonschema==2.3.0', 'markupsafe==0.23', 'nose==1.3.3', 'parse-type==0.3.4', 'parse==1.6.4', 'prettytable==0.7.2', 'requests==2.3.0', 'six==1.6.1', 'vioozer-metadata==0.1', 'vioozer-users-server==0.1', 'werkzeug==0.9.4'] Scope This solution applies to the system scope or to a virtual environment scope, and covers packages installed by setuptools, pip and (god forbid) easy_install. My use case I added the result of this call to my Flask server, so when I call it with http://example.com/exampleServer/environment I get the list of packages installed on the server's virtualenv. It makes debugging a whole lot easier. Caveats I have noticed a strange behaviour of this technique - when the Python interpreter is invoked in the same directory as a setup.py file, it does not list the package installed by setup.py. Steps to reproduce: Create a virtual environment $ cd /tmp $ virtualenv test_env New python executable in test_env/bin/python Installing setuptools, pip...done. $ source test_env/bin/activate (test_env) $ Clone a Git repository with setup.py (test_env) $ git clone https://github.com/behave/behave.git Cloning into 'behave'... remote: Reusing existing pack: 4350, done. remote: Total 4350 (delta 0), reused 0 (delta 0) Receiving objects: 100% (4350/4350), 1.85 MiB | 418.00 KiB/s, done. Resolving deltas: 100% (2388/2388), done. Checking connectivity... done. We have behave's setup.py in /tmp/behave: (test_env) $ ls /tmp/behave/setup.py /tmp/behave/setup.py Install the Python package from the Git repository (test_env) $ cd /tmp/behave && pip install . running install ... Installed /private/tmp/test_env/lib/python2.7/site-packages/enum34-1.0-py2.7.egg Finished processing dependencies for behave==1.2.5a1 If we run the aforementioned solution from /tmp >>> import pip >>> sorted([\"%s==%s\" % (i.key, i.version) for i in pip.get_installed_distributions()]) ['behave==1.2.5a1', 'enum34==1.0', 'parse-type==0.3.4', 'parse==1.6.4', 'six==1.6.1'] >>> import os >>> os.getcwd() '/private/tmp' If we run the aforementioned solution from /tmp/behave >>> import pip >>> sorted([\"%s==%s\" % (i.key, i.version) for i in pip.get_installed_distributions()]) ['enum34==1.0', 'parse-type==0.3.4', 'parse==1.6.4', 'six==1.6.1'] >>> import os >>> os.getcwd() '/private/tmp/behave' behave==1.2.5a1 is missing from the second example, because the working directory contains behave's setup.py file. 
I could not find any reference to this issue in the documentation. Perhaps I shall open a bug for it."} +{"question_id": 7604966, "score": 1319, "creation_date": 1317344466, "tags": ["python", "integer"], "instruction": "Maximum and Minimum values for ints\n\nHow do I represent minimum and maximum values for integers in Python? In Java, we have Integer.MIN_VALUE and Integer.MAX_VALUE. See also: What is the maximum float in Python?.", "output": "Python 3 In Python 3, this question doesn't apply. The plain int type is unbounded. However, you might actually be looking for information about the current interpreter's word size, which will be the same as the machine's word size in most cases. That information is still available in Python 3 as sys.maxsize, which is the maximum value representable by a signed word. Equivalently, it's the size of the largest possible list or in-memory sequence. Generally, the maximum value representable by an unsigned word will be sys.maxsize * 2 + 1, and the number of bits in a word will be math.log2(sys.maxsize * 2 + 2). See this answer for more information. Python 2 In Python 2, the maximum value for plain int values is available as sys.maxint: >>> sys.maxint # on my system, 2**63-1 9223372036854775807 You can calculate the minimum value with -sys.maxint - 1 as shown in the docs. Python seamlessly switches from plain to long integers once you exceed this value. So most of the time, you won't need to know it."} +{"question_id": 1823058, "score": 1309, "creation_date": 1259622703, "tags": ["python", "number-formatting"], "instruction": "How to print a number using commas as thousands separators\n\nHow do I print an integer with commas as thousands separators? 1234567 \u27f6 1,234,567 It does not need to be locale-specific to decide between periods and commas.", "output": "Locale-agnostic: use _ as the thousand separator f'{value:_}' # For Python \u22653.6 Note that this will NOT format in the user's current locale and will always use _ as the thousand separator, so for example: 1234567 \u27f6 1_234_567 English style: use , as the thousand separator '{:,}'.format(value) # For Python \u22652.7 f'{value:,}' # For Python \u22653.6 Locale-aware import locale locale.setlocale(locale.LC_ALL, '') # Use '' for auto, or force e.g. to 'en_US.UTF-8' '{:n}'.format(value) # For Python \u22652.7 f'{value:n}' # For Python \u22653.6 Reference Per Format Specification Mini-Language, The ',' option signals the use of a comma for a thousands separator. For a locale aware separator, use the 'n' integer presentation type instead. and: The '_' option signals the use of an underscore for a thousands separator for floating point presentation types and for integer presentation type 'd'. For integer presentation types 'b', 'o', 'x', and 'X', underscores will be inserted every 4 digits."} +{"question_id": 1663807, "score": 1305, "creation_date": 1257197184, "tags": ["python", "list", "for-loop", "iterator"], "instruction": "How do I iterate through two lists in parallel?\n\nI have two iterables, and I want to go over them in pairs: foo = [1, 2, 3] bar = [4, 5, 6] for (f, b) in iterate_together(foo, bar): print(\"f:\", f, \" | b:\", b) That should result in: f: 1 | b: 4 f: 2 | b: 5 f: 3 | b: 6 One way to do it is to iterate over the indices: for i in range(len(foo)): print(\"f:\", foo[i], \" | b:\", bar[i]) But that seems somewhat unpythonic to me. Is there a better way to do it? Related tasks: * How to merge lists into a list of tuples? 
- given the above foo and bar, create the list [(1, 4), (2, 5), (3, 6)]. * How can I make a dictionary (dict) from separate lists of keys and values? - create the dict {1: 4, 2: 5, 3: 6}. * Create a dictionary with comprehension - constructing dict using zip in a dict comprehension.", "output": "Python 3 for f, b in zip(foo, bar): print(f, b) zip stops when the shorter of foo or bar stops. In Python 3, zip returns an iterator of tuples, like itertools.izip in Python2. To get a list of tuples, use list(zip(foo, bar)). And to zip until both iterators are exhausted, you would use itertools.zip_longest. Python 2 In Python 2, zip returns a list of tuples. This is fine when foo and bar are not massive. If they are both massive then forming zip(foo,bar) is an unnecessarily massive temporary variable, and should be replaced by itertools.izip or itertools.izip_longest, which returns an iterator instead of a list. import itertools for f,b in itertools.izip(foo,bar): print(f,b) for f,b in itertools.izip_longest(foo,bar): print(f,b) izip stops when either foo or bar is exhausted. izip_longest stops when both foo and bar are exhausted. When the shorter iterator(s) are exhausted, izip_longest yields a tuple with None in the position corresponding to that iterator. You can also set a different fillvalue besides None if you wish. See here for the full story. Note also that zip and its zip-like brethen can accept an arbitrary number of iterables as arguments. For example, for num, cheese, color in zip([1,2,3], ['manchego', 'stilton', 'brie'], ['red', 'blue', 'green']): print('{} {} {}'.format(num, color, cheese)) prints 1 red manchego 2 blue stilton 3 green brie"} +{"question_id": 472000, "score": 1296, "creation_date": 1232689043, "tags": ["python", "oop", "python-internals", "slots"], "instruction": "Usage of __slots__?\n\nWhat is the purpose of __slots__ in Python \u2014 especially with respect to when I would want to use it, and when not?", "output": "TLDR The special attribute __slots__ allows you to explicitly state which instance attributes you expect your object instances to have, with the expected results: faster attribute access. space savings in memory. The space savings is from: Storing value references in slots instead of __dict__. Denying __dict__ and __weakref__ creation if parent classes deny them and you declare __slots__. This has the effect of denying the creation of non-slotted attributes on its instances, including within the class body (such as in methods like __init__). Quick Caveats Small caveat, you should only declare a particular slot one time in an inheritance tree. For example: class Base: __slots__ = 'foo', 'bar' class Right(Base): __slots__ = 'baz', class Wrong(Base): __slots__ = 'foo', 'bar', 'baz' # redundant foo and bar Python doesn't object when you get this wrong (it probably should), and problems might not otherwise manifest, but your objects will take up more space than they should. Python 3.8: >>> from sys import getsizeof >>> getsizeof(Right()), getsizeof(Wrong()) (56, 72) This is because Base's slot descriptor has a slot separate from Wrong's. This shouldn't usually come up, but it could: >>> w = Wrong() >>> w.foo = 'foo' >>> Base.foo.__get__(w) Traceback (most recent call last): File \"\", line 1, in AttributeError: foo >>> Wrong.foo.__get__(w) 'foo' The biggest caveat is for multiple inheritance - multiple \"parent classes with nonempty slots\" cannot be combined. 
To accommodate this restriction, follow best practices: create abstractions with empty __slots__ for every parent class (or for every parent class but one), then inherit from these abstractions instead of their concrete versions in your new concrete class. (The original parent classes should also inherit from their respective abstractions, of course.) See section on multiple inheritance below for an example. Requirements To have attributes named in __slots__ to actually be stored in slots instead of a __dict__, a class must inherit from object (automatic in Python 3, but must be explicit in Python 2). To prevent the creation of a __dict__, you must inherit from object and all classes in the inheritance must declare __slots__ and none of them can have a '__dict__' entry. There are a lot of details if you wish to keep reading. Why use __slots__ Faster attribute access The creator of Python, Guido van Rossum, states that he actually created __slots__ for faster attribute access. It's trivial to demonstrate measurably significant speedup: import timeit class Foo(object): __slots__ = 'foo', class Bar(object): pass slotted = Foo() not_slotted = Bar() def get_set_delete_fn(obj): def get_set_delete(): obj.foo = 'foo' obj.foo del obj.foo return get_set_delete and >>> min(timeit.repeat(get_set_delete_fn(slotted))) 0.2846834529991611 >>> min(timeit.repeat(get_set_delete_fn(not_slotted))) 0.3664822799983085 The slotted access is almost 30% faster in Python 3.5 on Ubuntu. >>> 0.3664822799983085 / 0.2846834529991611 1.2873325658284342 In Python 2 on Windows I have measured it about 15% faster. Memory Savings Another purpose of __slots__ is to reduce the space in memory that each object instance takes up. My own contribution to the documentation clearly states the reasons behind this: The space saved over using __dict__ can be significant. SQLAlchemy attributes a lot of memory savings to __slots__. To verify this, using the Anaconda distribution of Python 2.7 on Ubuntu Linux, with guppy.hpy (aka heapy) and sys.getsizeof, the size of a class instance without __slots__ declared, and nothing else, is 64 bytes. That does not include the __dict__. Thank you Python for lazy evaluation again, the __dict__ is apparently not called into existence until it is referenced, but classes without data are usually useless. When called into existence, the __dict__ attribute is a minimum of 280 bytes additionally. In contrast, a class instance with __slots__ declared to be () (no data) is only 16 bytes, and 56 total bytes with one item in slots, 64 with two. For 64 bit Python, I illustrate the memory consumption in bytes in Python 2.7 and 3.6, for __slots__ and __dict__ (no slots defined) for each point where the dict grows in 3.6 (except for 0, 1, and 2 attributes): Python 2.7 Python 3.6 attrs __slots__ __dict__* __slots__ __dict__* | *(no slots defined) none 16 56 + 272\u2020 16 56 + 112\u2020 | \u2020if __dict__ referenced one 48 56 + 272 48 56 + 112 two 56 56 + 272 56 56 + 112 six 88 56 + 1040 88 56 + 152 11 128 56 + 1040 128 56 + 240 22 216 56 + 3344 216 56 + 408 43 384 56 + 3344 384 56 + 752 So, in spite of smaller dicts in Python 3, we see how nicely __slots__ scales for instances to save us memory, and that is a major reason you would want to use __slots__. Just for completeness of my notes, note that there is a one-time cost per slot in the class's namespace of 64 bytes in Python 2, and 72 bytes in Python 3, because slots use data descriptors like properties, called \"members\". 
>>> Foo.foo >>> type(Foo.foo) >>> getsizeof(Foo.foo) 72 Demonstration To deny the creation of a __dict__, you must subclass object. Everything subclasses object in Python 3, but in Python 2 you had to be explicit: class Base(object): __slots__ = () now: >>> b = Base() >>> b.a = 'a' Traceback (most recent call last): File \"\", line 1, in b.a = 'a' AttributeError: 'Base' object has no attribute 'a' Or subclass another class that defines __slots__ class Child(Base): __slots__ = ('a',) and now: c = Child() c.a = 'a' but: >>> c.b = 'b' Traceback (most recent call last): File \"\", line 1, in c.b = 'b' AttributeError: 'Child' object has no attribute 'b' To allow __dict__ creation while subclassing slotted objects, just add '__dict__' to the __slots__ (note that slots are ordered, and you shouldn't repeat slots that are already in parent classes): class SlottedWithDict(Child): __slots__ = ('__dict__', 'b') swd = SlottedWithDict() swd.a = 'a' swd.b = 'b' swd.c = 'c' and >>> swd.__dict__ {'c': 'c'} Or you don't even need to declare __slots__ in your subclass, and you will still use slots from the parents, but not restrict the creation of a __dict__: class NoSlots(Child): pass ns = NoSlots() ns.a = 'a' ns.b = 'b' and: >>> ns.__dict__ {'b': 'b'} However, __slots__ may cause problems for multiple inheritance: class BaseA(object): __slots__ = ('a',) class BaseB(object): __slots__ = ('b',) Because creating a child class from parents with both non-empty slots fails: >>> class Child(BaseA, BaseB): __slots__ = () Traceback (most recent call last): File \"\", line 1, in class Child(BaseA, BaseB): __slots__ = () TypeError: Error when calling the metaclass bases multiple bases have instance lay-out conflict If you run into this problem, you could just remove __slots__ from the parents, or if you have control of the parents, give them empty slots, or refactor to abstractions: from abc import ABC class AbstractA(ABC): __slots__ = () class BaseA(AbstractA): __slots__ = ('a',) class AbstractB(ABC): __slots__ = () class BaseB(AbstractB): __slots__ = ('b',) class Child(AbstractA, AbstractB): __slots__ = ('a', 'b') c = Child() # no problem! Add '__dict__' to __slots__ to get dynamic assignment class Foo(object): __slots__ = 'bar', 'baz', '__dict__' and now: >>> foo = Foo() >>> foo.boink = 'boink' So with '__dict__' in slots we lose some of the size benefits with the upside of having dynamic assignment and still having slots for the names we do expect. When you inherit from an object that isn't slotted, you get the same sort of semantics when you use __slots__ - names that are in __slots__ point to slotted values, while any other values are put in the instance's __dict__. Avoiding __slots__ because you want to be able to add attributes on the fly is actually not a good reason - just add \"__dict__\" to your __slots__ if this is required. You can similarly add __weakref__ to __slots__ explicitly if you need that feature. 
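As a minimal sketch of both options together (the class name Flexible and its attribute names here are purely illustrative): import weakref class Flexible(object): __slots__ = ('expected', '__dict__', '__weakref__') f = Flexible() f.expected = 1 f.extra = 2 r = weakref.ref(f) Here expected is stored in a slot, extra goes into the instance __dict__, and the weak reference works because '__weakref__' is declared. 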
Set to empty tuple when subclassing a namedtuple The namedtuple builtin make immutable instances that are very lightweight (essentially, the size of tuples) but to get the benefits, you need to do it yourself if you subclass them: from collections import namedtuple class MyNT(namedtuple('MyNT', 'bar baz')): \"\"\"MyNT is an immutable and lightweight object\"\"\" __slots__ = () usage: >>> nt = MyNT('bar', 'baz') >>> nt.bar 'bar' >>> nt.baz 'baz' And trying to assign an unexpected attribute raises an AttributeError because we have prevented the creation of __dict__: >>> nt.quux = 'quux' Traceback (most recent call last): File \"\", line 1, in AttributeError: 'MyNT' object has no attribute 'quux' You can allow __dict__ creation by leaving off __slots__ = (), but you can't use non-empty __slots__ with subtypes of tuple. Biggest Caveat: Multiple inheritance Even when non-empty slots are the same for multiple parents, they cannot be used together: class Foo(object): __slots__ = 'foo', 'bar' class Bar(object): __slots__ = 'foo', 'bar' # alas, would work if empty, i.e. () >>> class Baz(Foo, Bar): pass Traceback (most recent call last): File \"\", line 1, in TypeError: Error when calling the metaclass bases multiple bases have instance lay-out conflict Using an empty __slots__ in the parent seems to provide the most flexibility, allowing the child to choose to prevent or allow (by adding '__dict__' to get dynamic assignment, see section above) the creation of a __dict__: class Foo(object): __slots__ = () class Bar(object): __slots__ = () class Baz(Foo, Bar): __slots__ = ('foo', 'bar') b = Baz() b.foo, b.bar = 'foo', 'bar' You don't have to have slots - so if you add them, and remove them later, it shouldn't cause any problems. Going out on a limb here: If you're composing mixins or using abstract base classes, which aren't intended to be instantiated, an empty __slots__ in those parents seems to be the best way to go in terms of flexibility for subclassers. To demonstrate, first, let's create a class with code we'd like to use under multiple inheritance class AbstractBase: __slots__ = () def __init__(self, a, b): self.a = a self.b = b def __repr__(self): return f'{type(self).__name__}({repr(self.a)}, {repr(self.b)})' We could use the above directly by inheriting and declaring the expected slots: class Foo(AbstractBase): __slots__ = 'a', 'b' But we don't care about that, that's trivial single inheritance, we need another class we might also inherit from, maybe with a noisy attribute: class AbstractBaseC: __slots__ = () @property def c(self): print('getting c!') return self._c @c.setter def c(self, arg): print('setting c!') self._c = arg Now if both bases had nonempty slots, we couldn't do the below. (In fact, if we wanted, we could have given AbstractBase nonempty slots a and b, and left them out of the below declaration - leaving them in would be wrong): class Concretion(AbstractBase, AbstractBaseC): __slots__ = 'a b _c'.split() And now we have functionality from both via multiple inheritance, and can still deny __dict__ and __weakref__ instantiation: >>> c = Concretion('a', 'b') >>> c.c = c setting c! >>> c.c getting c! Concretion('a', 'b') >>> c.d = 'd' Traceback (most recent call last): File \"\", line 1, in AttributeError: 'Concretion' object has no attribute 'd' Other cases to avoid slots Avoid them when you want to perform __class__ assignment with another class that doesn't have them (and you can't add them) unless the slot layouts are identical. 
(I am very interested in learning who is doing this and why.) Avoid them if you want to subclass variable length builtins like long, tuple, or str, and you want to add attributes to them. Avoid them if you insist on providing default values via class attributes for instance variables. You may be able to tease out further caveats from the rest of the __slots__ documentation. Critiques of other answers The current top answers cite outdated information and are quite hand-wavy and miss the mark in some important ways. Do not \"only use __slots__ when instantiating lots of objects\" I quote: \"You would want to use __slots__ if you are going to instantiate a lot (hundreds, thousands) of objects of the same class.\" Abstract Base Classes, for example, from the collections module, are not instantiated, yet __slots__ are declared for them. Why? If a user wishes to deny __dict__ or __weakref__ creation, those things must not be available in the parent classes. __slots__ contributes to reusability when creating interfaces or mixins. It is true that many Python users aren't writing for reusability, but when you are, having the option to deny unnecessary space usage is valuable. __slots__ doesn't break pickling When pickling a slotted object, you may find it complains with a misleading TypeError: >>> pickle.loads(pickle.dumps(f)) TypeError: a class that defines __slots__ without defining __getstate__ cannot be pickled This is actually incorrect. This message comes from the oldest protocol, which is the default. You can select the latest protocol with the -1 argument. In Python 2.7 this would be 2 (which was introduced in 2.3), and in 3.6 it is 4. >>> pickle.loads(pickle.dumps(f, -1)) <__main__.Foo object at 0x1129C770> in Python 2.7: >>> pickle.loads(pickle.dumps(f, 2)) <__main__.Foo object at 0x1129C770> in Python 3.6 >>> pickle.loads(pickle.dumps(f, 4)) <__main__.Foo object at 0x1129C770> So I would keep this in mind, as it is a solved problem. Critique of the (until Oct 2, 2016) accepted answer The first paragraph is half short explanation, half predictive. Here's the only part that actually answers the question The proper use of __slots__ is to save space in objects. Instead of having a dynamic dict that allows adding attributes to objects at anytime, there is a static structure which does not allow additions after creation. This saves the overhead of one dict for every object that uses slots The second half is wishful thinking, and off the mark: While this is sometimes a useful optimization, it would be completely unnecessary if the Python interpreter was dynamic enough so that it would only require the dict when there actually were additions to the object. Python actually does something similar to this, only creating the __dict__ when it is accessed, but creating lots of objects with no data is fairly ridiculous. The second paragraph oversimplifies and misses actual reasons to avoid __slots__. The below is not a real reason to avoid slots (for actual reasons, see the rest of my answer above.): They change the behavior of the objects that have slots in a way that can be abused by control freaks and static typing weenies. It then goes on to discuss other ways of accomplishing that perverse goal with Python, not discussing anything to do with __slots__. The third paragraph is more wishful thinking. Together it is mostly off-the-mark content that the answerer didn't even author and contributes to ammunition for critics of the site. 
Memory usage evidence Create some normal objects and slotted objects: >>> class Foo(object): pass >>> class Bar(object): __slots__ = () Instantiate a million of them: >>> foos = [Foo() for f in xrange(1000000)] >>> bars = [Bar() for b in xrange(1000000)] Inspect with guppy.hpy().heap(): >>> guppy.hpy().heap() Partition of a set of 2028259 objects. Total size = 99763360 bytes. Index Count % Size % Cumulative % Kind (class / dict of class) 0 1000000 49 64000000 64 64000000 64 __main__.Foo 1 169 0 16281480 16 80281480 80 list 2 1000000 49 16000000 16 96281480 97 __main__.Bar 3 12284 1 987472 1 97268952 97 str ... Access the regular objects and their __dict__ and inspect again: >>> for f in foos: ... f.__dict__ >>> guppy.hpy().heap() Partition of a set of 3028258 objects. Total size = 379763480 bytes. Index Count % Size % Cumulative % Kind (class / dict of class) 0 1000000 33 280000000 74 280000000 74 dict of __main__.Foo 1 1000000 33 64000000 17 344000000 91 __main__.Foo 2 169 0 16281480 4 360281480 95 list 3 1000000 33 16000000 4 376281480 99 __main__.Bar 4 12284 0 987472 0 377268952 99 str ... This is consistent with the history of Python, from Unifying types and classes in Python 2.2. If you subclass a built-in type, extra space is automatically added to the instances to accomodate __dict__ and __weakrefs__. (The __dict__ is not initialized until you use it though, so you shouldn't worry about the space occupied by an empty dictionary for each instance you create.) If you don't need this extra space, you can add the phrase \"__slots__ = []\" to your class."} +{"question_id": 403421, "score": 1290, "creation_date": 1230741692, "tags": ["python", "list", "sorting", "reverse"], "instruction": "How do I sort a list of objects based on an attribute of the objects?\n\nI have a list of Python objects that I want to sort by a specific attribute of each object: [Tag(name=\"toe\", count=10), Tag(name=\"leg\", count=2), ...] How do I sort the list by .count in descending order?", "output": "To sort the list in place: orig_list.sort(key=lambda x: x.count, reverse=True) To return a new list, use sorted: new_list = sorted(orig_list, key=lambda x: x.count, reverse=True) Explanation: key=lambda x: x.count sorts by count. reverse=True sorts in descending order. More on sorting by keys."} +{"question_id": 8270092, "score": 1283, "creation_date": 1322229081, "tags": ["python", "string", "trim", "removing-whitespace"], "instruction": "Remove all whitespace in a string\n\nI want to eliminate all the whitespace from a string, on both ends, and in between words. I have this Python code: def my_handle(self): sentence = ' hello apple ' sentence.strip() But that only eliminates the whitespace on both sides of the string. 
How do I remove all whitespace?", "output": "If you want to remove leading and ending whitespace, use str.strip(): >>> \" hello apple \".strip() 'hello apple' If you want to remove all space characters, use str.replace() (NB this only removes the \u201cnormal\u201d ASCII space character ' ' U+0020 but not any other whitespace): >>> \" hello apple \".replace(\" \", \"\") 'helloapple' If you want to remove all whitespace and then leave a single space character between words, use str.split() followed by str.join(): >>> \" \".join(\" hello apple \".split()) 'hello apple' If you want to remove all whitespace then change the above leading \" \" to \"\": >>> \"\".join(\" hello apple \".split()) 'helloapple'"} +{"question_id": 129507, "score": 1276, "creation_date": 1222286435, "tags": ["python", "unit-testing", "exception"], "instruction": "How do you test that a Python function throws an exception?\n\nHow does one write a unit test that fails only if a function doesn't throw an expected exception?", "output": "Use TestCase.assertRaises from the unittest module, for example: import unittest import mymod class MyTestCase(unittest.TestCase): def test1(self): self.assertRaises(SomeCoolException, mymod.myfunc)"} +{"question_id": 3462143, "score": 1270, "creation_date": 1281555490, "tags": ["python", "performance", "list", "set", "set-difference"], "instruction": "Get difference between two lists with Unique Entries\n\nI have two lists in Python: temp1 = ['One', 'Two', 'Three', 'Four'] temp2 = ['One', 'Two'] Assuming the elements in each list are unique, I want to create a third list with items from the first list which are not in the second list: temp3 = ['Three', 'Four'] Are there any fast ways without cycles and checking?", "output": "To get elements which are in temp1 but not in temp2 (assuming uniqueness of the elements in each list): In [5]: list(set(temp1) - set(temp2)) Out[5]: ['Four', 'Three'] Beware that it is asymmetric: In [5]: set([1, 2]) - set([2, 3]) Out[5]: set([1]) where you might expect/want it to equal set([1, 3]). If you do want set([1, 3]) as your answer, you can use set([1, 2]).symmetric_difference(set([2, 3]))."} +{"question_id": 12453580, "score": 1260, "creation_date": 1347859945, "tags": ["python", "string", "list", "concatenation"], "instruction": "How to concatenate (join) items in a list to a single string\n\nHow do I concatenate a list of strings into a single string? For example, given ['this', 'is', 'a', 'sentence'], how do I get \"this-is-a-sentence\"? For handling a few strings in separate variables, see How do I append one string to another in Python?. For the opposite process - creating a list from a string - see How do I split a string into a list of characters? or How do I split a string into a list of words? as appropriate.", "output": "Use str.join: >>> words = ['this', 'is', 'a', 'sentence'] >>> '-'.join(words) 'this-is-a-sentence' >>> ' '.join(words) 'this is a sentence'"} +{"question_id": 4028904, "score": 1257, "creation_date": 1288136599, "tags": ["python", "cross-platform", "home-directory"], "instruction": "What is a cross-platform way to get the home directory?\n\nI need to get the location of the home directory of the current logged-on user. Currently, I've been using the following on Linux: os.getenv(\"HOME\") However, this does not work on Windows. 
What is the correct cross-platform way to do this?", "output": "On Python 3.5+ you can use pathlib.Path.home(): from pathlib import Path home = Path.home() # example usage: with open(home / \".ssh\" / \"known_hosts\") as f: lines = f.readlines() to get a pathlib.PosixPath object. Use str() to convert to a string if necessary. On older Python versions, you can use os.path.expanduser. from os.path import expanduser home = expanduser(\"~\")"} +{"question_id": 1185524, "score": 1254, "creation_date": 1248641678, "tags": ["python", "string", "whitespace", "trim", "strip"], "instruction": "How do I trim whitespace?\n\nIs there a Python function that will trim whitespace (spaces and tabs) from a string? So that given input \" \\t example string\\t \" becomes \"example string\".", "output": "For whitespace on both sides, use str.strip: s = \" \\t a string example\\t \" s = s.strip() For whitespace on the right side, use str.rstrip: s = s.rstrip() For whitespace on the left side, use str.lstrip: s = s.lstrip() You can provide an argument to strip arbitrary characters to any of these functions, like this: s = s.strip(' \\t\\n\\r') This will strip any space, \\t, \\n, or \\r characters from both sides of the string. The examples above only remove strings from the left-hand and right-hand sides of strings. If you want to also remove characters from the middle of a string, try re.sub: import re print(re.sub(r'\\s+', '', s)) That should print out: astringexample"} +{"question_id": 2793324, "score": 1216, "creation_date": 1273304908, "tags": ["python", "list"], "instruction": "Is there a simple way to delete a list element by value?\n\nI want to remove a value from a list if it exists in the list (which it may not). a = [1, 2, 3, 4] b = a.index(6) del a[b] print(a) The above gives the error: ValueError: list.index(x): x not in list So I have to do this: a = [1, 2, 3, 4] try: b = a.index(6) del a[b] except: pass print(a) But is there not a simpler way to do this?", "output": "To remove the first occurrence of an element, use list.remove: >>> xs = ['a', 'b', 'c', 'd'] >>> xs.remove('b') >>> print(xs) ['a', 'c', 'd'] To remove all occurrences of an element, use a list comprehension: >>> xs = ['a', 'b', 'c', 'd', 'b', 'b', 'b', 'b'] >>> xs = [x for x in xs if x != 'b'] >>> print(xs) ['a', 'c', 'd']"} +{"question_id": 20638006, "score": 1215, "creation_date": 1387293891, "tags": ["python", "dictionary", "pandas", "dataframe"], "instruction": "Convert list of dictionaries to a pandas DataFrame\n\nHow can I convert a list of dictionaries into a DataFrame? I want to turn [{'points': 50, 'time': '5:00', 'year': 2010}, {'points': 25, 'time': '6:00', 'month': \"february\"}, {'points':90, 'time': '9:00', 'month': 'january'}, {'points_h1':20, 'month': 'june'}] into month points points_h1 time year 0 NaN 50 NaN 5:00 2010 1 february 25 NaN 6:00 NaN 2 january 90 NaN 9:00 NaN 3 june NaN 20 NaN NaN", "output": "If ds is a list of dicts: df = pd.DataFrame(ds) Note: this does not work with nested data."} +{"question_id": 16923281, "score": 1166, "creation_date": 1370364416, "tags": ["python", "pandas", "dataframe", "csv", "file-io"], "instruction": "Writing a pandas DataFrame to CSV file\n\nI have a dataframe in pandas which I would like to write to a CSV file. I am doing this using: df.to_csv('out.csv') And getting the following error: UnicodeEncodeError: 'ascii' codec can't encode character u'\\u03b1' in position 20: ordinal not in range(128) Is there any way to get around this easily (i.e. 
I have unicode characters in my data frame)? And is there a way to write to a tab delimited file instead of a CSV using e.g. a 'to-tab' method (that I don't think exists)?", "output": "To delimit by a tab you can use the sep argument of to_csv: df.to_csv(file_name, sep='\\t') To use a specific encoding (e.g. 'utf-8') use the encoding argument: df.to_csv(file_name, sep='\\t', encoding='utf-8') In many cases you will want to remove the index and add a header: df.to_csv(file_name, sep='\\t', encoding='utf-8', index=False, header=True)"} +{"question_id": 626759, "score": 1164, "creation_date": 1236613285, "tags": ["python", "list", "tuples"], "instruction": "What's the difference between lists and tuples?\n\nWhat are the differences between lists and tuples, and what are their respective advantages and disadvantages?", "output": "Apart from tuples being immutable there is also a semantic distinction that should guide their usage. Tuples are heterogeneous data structures (i.e., their entries have different meanings), while lists are homogeneous sequences. Tuples have structure, lists have order. Using this distinction makes code more explicit and understandable. One example would be pairs of page and line number to reference locations in a book, e.g.: my_location = (42, 11) # page number, line number You can then use this as a key in a dictionary to store notes on locations. A list on the other hand could be used to store multiple locations. Naturally one might want to add or remove locations from the list, so it makes sense that lists are mutable. On the other hand it doesn't make sense to add or remove items from an existing location - hence tuples are immutable. There might be situations where you want to change items within an existing location tuple, for example when iterating through the lines of a page. But tuple immutability forces you to create a new location tuple for each new value. This seems inconvenient on the face of it, but using immutable data like this is a cornerstone of value types and functional programming techniques, which can have substantial advantages. There are some interesting articles on this issue, e.g. \"Python Tuples are Not Just Constant Lists\" or \"Understanding tuples vs. lists in Python\". The official Python documentation also mentions this \"Tuples are immutable, and usually contain an heterogeneous sequence ...\". In a statically typed language like Haskell the values in a tuple generally have different types and the length of the tuple must be fixed. In a list the values all have the same type and the length is not fixed. So the difference is very obvious. Finally there is the namedtuple in Python, which makes sense because a tuple is already supposed to have structure. This underlines the idea that tuples are a light-weight alternative to classes and instances."} +{"question_id": 1773805, "score": 1161, "creation_date": 1258758976, "tags": ["python", "yaml"], "instruction": "How can I parse a YAML file in Python\n\nHow can I parse a YAML file in Python?", "output": "The easiest method without relying on C headers is PyYaml (documentation), which can be installed via pip install pyyaml: import yaml with open(\"example.yaml\") as stream: try: print(yaml.safe_load(stream)) except yaml.YAMLError as exc: print(exc) yaml.load() also exists, but yaml.safe_load() should always be preferred to avoid introducing the possibility for arbitrary code execution. So unless you explicitly need the arbitrary object serialization/deserialization use safe_load. 
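As a quick illustrative sketch (the inline document here is made up), yaml.safe_load also accepts a plain string: import yaml data = yaml.safe_load('{name: test, values: [1, 2, 3]}') print(data['name'], data['values']) which prints test [1, 2, 3]. 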
The PyYaml project supports versions up through the YAML 1.1 specification. If YAML 1.2 specification support is needed, see ruamel.yaml as noted in this answer. Also, you could also use a drop in replacement for pyyaml, that keeps your yaml file ordered the same way you had it, called oyaml. View snyk of oyaml here"} +{"question_id": 9233027, "score": 1152, "creation_date": 1328899437, "tags": ["python", "python-3.x", "unicode", "file-io", "decode"], "instruction": "UnicodeDecodeError: 'charmap' codec can't decode byte X in position Y: character maps to \n\nI'm trying to get a Python 3 program to do some manipulations with a text file filled with information. However, when trying to read the file I get the following error: Traceback (most recent call last): File \"SCRIPT LOCATION\", line NUMBER, in text = file.read() File \"C:\\Python31\\lib\\encodings\\cp1252.py\", line 23, in decode return codecs.charmap_decode(input,self.errors,decoding_table)[0] UnicodeDecodeError: 'charmap' codec can't decode byte 0x90 in position 2907500: character maps to `` Editor's note: After reading this Q&A, see How to determine the encoding of text if you need help figuring out the encoding of the file you are trying to open.", "output": "The file in question is not using the CP1252 encoding. It's using another encoding. Which one you have to figure out yourself. Common ones are Latin-1 and UTF-8. Since 0x90 doesn't actually mean anything in Latin-1, UTF-8 (where 0x90 is a continuation byte) is more likely. You specify the encoding when you open the file: file = open(filename, encoding=\"utf-8\")"} +{"question_id": 20180543, "score": 1142, "creation_date": 1385324701, "tags": ["python"], "instruction": "How do I check the versions of Python modules?\n\nI installed the Python modules construct and statlib using setuptools: sudo apt-get install python-setuptools sudo easy_install statlib sudo easy_install construct How do I check their versions from the command line?", "output": "Use pip instead of easy_install. With pip, list all installed packages and their versions via: pip freeze On most Linux systems, you can pipe this to grep (or findstr on Windows) to find the row for the particular package you're interested in. Linux: pip freeze | grep lxml lxml==2.3 Windows: pip freeze | findstr lxml lxml==2.3 For an individual module, you can try the __version__ attribute. However, there are modules without it: python -c \"import requests; print(requests.__version__)\" 2.14.2 python -c \"import lxml; print(lxml.__version__)\" Traceback (most recent call last): File \"\", line 1, in AttributeError: 'module' object has no attribute '__version__' Lastly, as the commands in your question are prefixed with sudo, it appears you're installing to the global python environment. I strongly advise to take look into Python virtual environment managers, for example virtualenvwrapper."} +{"question_id": 9733638, "score": 1128, "creation_date": 1331884009, "tags": ["python", "json", "python-requests", "cherrypy"], "instruction": "How can I POST JSON data with Python's Requests library?\n\nI need to POST JSON content from a client to a server. I'm using Python 2.7.1 and simplejson. The client is using Requests. The server is CherryPy. I can GET hard-coded JSON content from the server (the code is not shown), but when I try to POST JSON content to the server, I get \"400 Bad Request\". 
Here is my client code: data = {'sender': 'Alice', 'receiver': 'Bob', 'message': 'We did it!'} data_json = simplejson.dumps(data) payload = {'json_payload': data_json} r = requests.post(\"http://localhost:8080\", data=payload) Here is the server code. class Root(object): def __init__(self, content): self.content = content print self.content # This works exposed = True def GET(self): cherrypy.response.headers['Content-Type'] = 'application/json' return simplejson.dumps(self.content) def POST(self): self.content = simplejson.loads(cherrypy.request.body.read()) How can I fix this problem?", "output": "Starting with Requests version 2.4.2, you can use the json= parameter (which takes a dictionary) instead of data= (which takes a string) in the call: >>> import requests >>> r = requests.post('http://httpbin.org/post', json={\"key\": \"value\"}) >>> r.status_code 200 >>> r.json() {'args': {}, 'data': '{\"key\": \"value\"}', 'files': {}, 'form': {}, 'headers': {'Accept': '*/*', 'Accept-Encoding': 'gzip, deflate', 'Connection': 'close', 'Content-Length': '16', 'Content-Type': 'application/json', 'Host': 'httpbin.org', 'User-Agent': 'python-requests/2.4.3 CPython/3.4.0', 'X-Request-Id': 'xx-xx-xx'}, 'json': {'key': 'value'}, 'origin': 'x.x.x.x', 'url': 'http://httpbin.org/post'}"} +{"question_id": 32490629, "score": 1072, "creation_date": 1441840825, "tags": ["python", "date", "datetime", "formatting", "python-datetime"], "instruction": "Getting today's date in YYYY-MM-DD in Python?\n\nIs there a nicer way than the following to return today's date in the YYYY-MM-DD format? str(datetime.datetime.today()).split()[0]", "output": "Use strftime: >>> from datetime import datetime >>> datetime.today().strftime('%Y-%m-%d') '2021-01-26' To also include a zero-padded Hour:Minute:Second at the end: >>> datetime.today().strftime('%Y-%m-%d %H:%M:%S') '2021-01-26 16:50:03' To get the UTC date and time: >>> datetime.utcnow().strftime('%Y-%m-%d %H:%M:%S') '2021-01-27 00:50:03'"} +{"question_id": 308999, "score": 1058, "creation_date": 1227279220, "tags": ["python", "decorator", "functools"], "instruction": "What does functools.wraps do?\n\nIn a comment on this answer to another question, someone said that they weren't sure what functools.wraps was doing. So, I'm asking this question so that there will be a record of it on StackOverflow for future reference: what does functools.wraps do, exactly?", "output": "When you use a decorator, you're replacing one function with another. In other words, if you have a decorator def logged(func): def with_logging(*args, **kwargs): print(func.__name__ + \" was called\") return func(*args, **kwargs) return with_logging then when you say @logged def f(x): \"\"\"does some math\"\"\" return x + x * x it's exactly the same as saying def f(x): \"\"\"does some math\"\"\" return x + x * x f = logged(f) and your function f is replaced with the function with_logging. Unfortunately, this means that if you then say print(f.__name__) it will print with_logging because that's the name of your new function. In fact, if you look at the docstring for f, it will be blank because with_logging has no docstring, and so the docstring you wrote won't be there anymore. Also, if you look at the pydoc result for that function, it won't be listed as taking one argument x; instead it'll be listed as taking *args and **kwargs because that's what with_logging takes. If using a decorator always meant losing this information about a function, it would be a serious problem. That's why we have functools.wraps. 
This takes a function used in a decorator and adds the functionality of copying over the function name, docstring, arguments list, etc. And since wraps is itself a decorator, the following code does the correct thing: from functools import wraps def logged(func): @wraps(func) def with_logging(*args, **kwargs): print(func.__name__ + \" was called\") return func(*args, **kwargs) return with_logging @logged def f(x): \"\"\"does some math\"\"\" return x + x * x print(f.__name__) # prints 'f' print(f.__doc__) # prints 'does some math'"} +{"question_id": 2186525, "score": 1051, "creation_date": 1265134790, "tags": ["python", "path", "filesystems", "glob", "fnmatch"], "instruction": "How to use glob() to find files recursively?\n\nI would like to list all files recursively in a directory. I currently have a directory structure like this: src/main.c src/dir/file1.c src/another-dir/file2.c src/another-dir/nested/files/file3.c I've tried to do the following: from glob import glob glob(os.path.join('src','*.c')) But this will only get be files directly in the src subfolder, e.g. I get main.c but I will not get file1.c, file2.c etc. from glob import glob glob(os.path.join('src','*.c')) glob(os.path.join('src','*','*.c')) glob(os.path.join('src','*','*','*.c')) glob(os.path.join('src','*','*','*','*.c')) But this is obviously limited and clunky, how can I do this properly?", "output": "There are a couple of ways: pathlib.Path().rglob() Use pathlib.Path().rglob() from the pathlib module, which was introduced in Python 3.5. from pathlib import Path for path in Path('src').rglob('*.c'): print(path.name) glob.glob() If you don't want to use pathlib, use glob.glob(): from glob import glob for filename in glob('src/**/*.c', recursive=True): print(filename) For cases where matching files beginning with a dot (.); like files in the current directory or hidden files on Unix based system, use the os.walk() solution below. os.walk() For older Python versions, use os.walk() to recursively walk a directory and fnmatch.filter() to match against a simple expression: import fnmatch import os matches = [] for root, dirnames, filenames in os.walk('src'): for filename in fnmatch.filter(filenames, '*.c'): matches.append(os.path.join(root, filename)) This version should also be faster depending on how many files you have, as the pathlib module has a bit of overhead over os.walk()."} +{"question_id": 1937622, "score": 1050, "creation_date": 1261355161, "tags": ["python", "datetime", "date"], "instruction": "Convert date to datetime in Python\n\nIs there a built-in method for converting a date to a datetime in Python, for example getting the datetime for the midnight of the given date? The opposite conversion is easy: datetime has a .date() method. Do I really have to manually call datetime(d.year, d.month, d.day)?", "output": "You can use datetime.combine(date, time); for the time, you create a datetime.time object initialized to midnight. from datetime import date from datetime import datetime dt = datetime.combine(date.today(), datetime.min.time())"} +{"question_id": 11707586, "score": 1046, "creation_date": 1343547891, "tags": ["python", "pandas", "printing", "column-width"], "instruction": "How do I expand the output display to see more columns of a Pandas DataFrame?\n\nIs there a way to widen the display of output in either interactive or script-execution mode? Specifically, I am using the describe() function on a Pandas DataFrame. 
When the DataFrame is five columns (labels) wide, I get the descriptive statistics that I want. However, if the DataFrame has any more columns, the statistics are suppressed and something like this is returned: >> Index: 8 entries, count to max >> Data columns: >> x1 8 non-null values >> x2 8 non-null values >> x3 8 non-null values >> x4 8 non-null values >> x5 8 non-null values >> x6 8 non-null values >> x7 8 non-null values The \"8\" value is given whether there are 6 or 7 columns. What does the \"8\" refer to? I have already tried dragging the IDLE window larger, as well as increasing the \"Configure IDLE\" width options, to no avail.", "output": "(For Pandas versions before 0.23.4, see at bottom.) Use pandas.set_option(optname, val), or equivalently pd.options. = val. Like: import pandas as pd pd.set_option('display.max_rows', 500) pd.set_option('display.max_columns', 500) pd.set_option('display.width', 1000) Pandas will try to autodetect the size of your terminal window if you set pd.options.display.width = 0. Here is the help for set_option: set_option(pat,value) - Sets the value of the specified option Available options: display.[chop_threshold, colheader_justify, column_space, date_dayfirst, date_yearfirst, encoding, expand_frame_repr, float_format, height, line_width, max_columns, max_colwidth, max_info_columns, max_info_rows, max_rows, max_seq_items, mpl_style, multi_sparse, notebook_repr_html, pprint_nest_depth, precision, width] mode.[sim_interactive, use_inf_as_null] Parameters ---------- pat - str/regexp which should match a single option. Note: partial matches are supported for convenience, but unless you use the full option name (e.g., *x.y.z.option_name*), your code may break in future versions if new options with similar names are introduced. value - new value of option. Returns ------- None Raises ------ KeyError if no such option exists display.chop_threshold: [default: None] [currently: None] : float or None if set to a float value, all float values smaller then the given threshold will be displayed as exactly 0 by repr and friends. display.colheader_justify: [default: right] [currently: right] : 'left'/'right' Controls the justification of column headers. used by DataFrameFormatter. display.column_space: [default: 12] [currently: 12]No description available. display.date_dayfirst: [default: False] [currently: False] : boolean When True, prints and parses dates with the day first, eg 20/01/2005 display.date_yearfirst: [default: False] [currently: False] : boolean When True, prints and parses dates with the year first, e.g., 2005/01/20 display.encoding: [default: UTF-8] [currently: UTF-8] : str/unicode Defaults to the detected encoding of the console. Specifies the encoding to be used for strings returned by to_string, these are generally strings meant to be displayed on the console. display.expand_frame_repr: [default: True] [currently: True] : boolean Whether to print out the full DataFrame repr for wide DataFrames across multiple lines, `max_columns` is still respected, but the output will wrap-around across multiple \"pages\" if it's width exceeds `display.width`. display.float_format: [default: None] [currently: None] : callable The callable should accept a floating point number and return a string with the desired format of the number. This is used in some places like SeriesFormatter. See core.format.EngFormatter for an example. display.height: [default: 60] [currently: 1000] : int Deprecated. (Deprecated, use `display.height` instead.) 
display.line_width: [default: 80] [currently: 1000] : int Deprecated. (Deprecated, use `display.width` instead.) display.max_columns: [default: 20] [currently: 500] : int max_rows and max_columns are used in __repr__() methods to decide if to_string() or info() is used to render an object to a string. In case python/IPython is running in a terminal this can be set to 0 and Pandas will correctly auto-detect the width the terminal and swap to a smaller format in case all columns would not fit vertically. The IPython notebook, IPython qtconsole, or IDLE do not run in a terminal and hence it is not possible to do correct auto-detection. 'None' value means unlimited. display.max_colwidth: [default: 50] [currently: 50] : int The maximum width in characters of a column in the repr of a Pandas data structure. When the column overflows, a \"...\" placeholder is embedded in the output. display.max_info_columns: [default: 100] [currently: 100] : int max_info_columns is used in DataFrame.info method to decide if per column information will be printed. display.max_info_rows: [default: 1690785] [currently: 1690785] : int or None max_info_rows is the maximum number of rows for which a frame will perform a null check on its columns when repr'ing To a console. The default is 1,000,000 rows. So, if a DataFrame has more 1,000,000 rows there will be no null check performed on the columns and thus the representation will take much less time to display in an interactive session. A value of None means always perform a null check when repr'ing. display.max_rows: [default: 60] [currently: 500] : int This sets the maximum number of rows Pandas should output when printing out various output. For example, this value determines whether the repr() for a dataframe prints out fully or just a summary repr. 'None' value means unlimited. display.max_seq_items: [default: None] [currently: None] : int or None when pretty-printing a long sequence, no more then `max_seq_items` will be printed. If items are ommitted, they will be denoted by the addition of \"...\" to the resulting string. If set to None, the number of items to be printed is unlimited. display.mpl_style: [default: None] [currently: None] : bool Setting this to 'default' will modify the rcParams used by matplotlib to give plots a more pleasing visual style by default. Setting this to None/False restores the values to their initial value. display.multi_sparse: [default: True] [currently: True] : boolean \"sparsify\" MultiIndex display (don't display repeated elements in outer levels within groups) display.notebook_repr_html: [default: True] [currently: True] : boolean When True, IPython notebook will use html representation for Pandas objects (if it is available). display.pprint_nest_depth: [default: 3] [currently: 3] : int Controls the number of nested levels to process when pretty-printing display.precision: [default: 7] [currently: 7] : int Floating point output precision (number of significant digits). This is only a suggestion display.width: [default: 80] [currently: 1000] : int Width of the display in characters. In case python/IPython is running in a terminal this can be set to None and Pandas will correctly auto-detect the width. Note that the IPython notebook, IPython qtconsole, or IDLE do not run in a terminal and hence it is not possible to correctly detect the width. 
mode.sim_interactive: [default: False] [currently: False] : boolean Whether to simulate interactive mode for purposes of testing mode.use_inf_as_null: [default: False] [currently: False] : boolean True means treat None, NaN, INF, -INF as null (old way), False means None and NaN are null, but INF, -INF are not null (new way). Call def: pd.set_option(self, *args, **kwds) Older version information Much of this has been deprecated. As @bmu mentioned, Pandas auto detects (by default) the size of the display area, a summary view will be used when an object repr does not fit on the display. You mentioned resizing the IDLE window, to no effect. If you do print df.describe().to_string() does it fit on the IDLE window? The terminal size is determined by pandas.util.terminal.get_terminal_size() (deprecated and removed), this returns a tuple containing the (width, height) of the display. Does the output match the size of your IDLE window? There might be an issue (there was one before when running a terminal in Emacs). Note that it is possible to bypass the autodetect, pandas.set_printoptions(max_rows=200, max_columns=10) will never switch to summary view if number of rows, columns does not exceed the given limits. The max_colwidth option helps in seeing untruncated form of each column."} +{"question_id": 3964681, "score": 1041, "creation_date": 1287450553, "tags": ["python", "file-io"], "instruction": "Find all files in a directory with extension .txt in Python\n\nHow can I find all the files in a directory having the extension .txt in python?", "output": "You can use glob: import glob, os os.chdir(\"/mydir\") for file in glob.glob(\"*.txt\"): print(file) or simply os.listdir: import os for file in os.listdir(\"/mydir\"): if file.endswith(\".txt\"): print(os.path.join(\"/mydir\", file)) or if you want to traverse directory, use os.walk: import os for root, dirs, files in os.walk(\"/mydir\"): for file in files: if file.endswith(\".txt\"): print(os.path.join(root, file))"} +{"question_id": 1896918, "score": 1036, "creation_date": 1260720623, "tags": ["python", "unit-testing"], "instruction": "Running unittest with typical test directory structure\n\nThe very common directory structure for even a simple Python module seems to be to separate the unit tests into their own test directory: new_project/ antigravity/ antigravity.py test/ test_antigravity.py setup.py etc. My question is simply What's the usual way of actually running the tests? I suspect this is obvious to everyone except me, but you can't just run python test_antigravity.py from the test directory as its import antigravity will fail as the module is not on the path. I know I could modify PYTHONPATH and other search path related tricks, but I can't believe that's the simplest way - it's fine if you're the developer but not realistic to expect your users to use if they just want to check the tests are passing. The other alternative is just to copy the test file into the other directory, but it seems a bit dumb and misses the point of having them in a separate directory to start with. So, if you had just downloaded the source to my new project how would you run the unit tests? I'd prefer an answer that would let me say to my users: \"To run the unit tests do X.\"", "output": "The best solution in my opinion is to use the unittest command line interface which will add the directory to the sys.path so you don't have to (done in the TestLoader class). 
For example for a directory structure like this: new_project \u251c\u2500\u2500 antigravity.py \u2514\u2500\u2500 test_antigravity.py You can just run: $ cd new_project $ python -m unittest test_antigravity For a directory structure like yours: new_project \u251c\u2500\u2500 antigravity \u2502 \u251c\u2500\u2500 __init__.py # make it a package \u2502 \u2514\u2500\u2500 antigravity.py \u2514\u2500\u2500 test \u251c\u2500\u2500 __init__.py # also make test a package \u2514\u2500\u2500 test_antigravity.py And in the test modules inside the test package, you can import the antigravity package and its modules as usual: # import the package import antigravity # import the antigravity module from antigravity import antigravity # or an object inside the antigravity module from antigravity.antigravity import my_object Running a single test module: To run a single test module, in this case test_antigravity.py: $ cd new_project $ python -m unittest test.test_antigravity Just reference the test module the same way you import it. Running a single test case or test method: Also you can run a single TestCase or a single test method: $ python -m unittest test.test_antigravity.GravityTestCase $ python -m unittest test.test_antigravity.GravityTestCase.test_method Running all tests: You can also use test discovery which will discover and run all the tests for you, they must be modules or packages named test*.py (can be changed with the -p, --pattern flag): $ cd new_project $ python -m unittest discover $ # Also works without discover for Python 3 $ # as suggested by @Burrito in the comments $ python -m unittest This will run all the test*.py modules inside the test package. Here you can find the updated official documentation of discovery."} +{"question_id": 432842, "score": 1033, "creation_date": 1231677283, "tags": ["python", "logical-operators"], "instruction": "How do you get the logical xor of two variables in Python?\n\nHow do you get the logical xor of two variables in Python? For example, I have two variables that I expect to be strings. I want to test that only one of them contains a True value (is not None or an empty string): str1 = raw_input(\"Enter string one:\") str2 = raw_input(\"Enter string two:\") if logical_xor(str1, str2): print \"ok\" else: print \"bad\" The ^ operator is bitwise, and not defined on all objects: >>> 1 ^ 1 0 >>> 2 ^ 1 3 >>> \"abc\" ^ \"\" Traceback (most recent call last): File \"\", line 1, in TypeError: unsupported operand type(s) for ^: 'str' and 'str'", "output": "If you're already normalizing the inputs to booleans, then != is xor. bool(a) != bool(b)"} +{"question_id": 323972, "score": 1030, "creation_date": 1227797753, "tags": ["python", "multithreading", "python-multithreading", "kill", "terminate"], "instruction": "Is there any way to kill a Thread?\n\nIs it possible to terminate a running thread without setting/checking any flags/semaphores/etc.?", "output": "It is generally a bad pattern to kill a thread abruptly, in Python, and in any language. Think of the following cases: the thread is holding a critical resource that must be closed properly the thread has created several other threads that must be killed as well. The nice way of handling this, if you can afford it (if you are managing your own threads), is to have an exit_request flag that each thread checks on a regular interval to see if it is time for it to exit. For example: import threading class StoppableThread(threading.Thread): \"\"\"Thread class with a stop() method. 
The thread itself has to check regularly for the stopped() condition.\"\"\" def __init__(self, *args, **kwargs): super(StoppableThread, self).__init__(*args, **kwargs) self._stop_event = threading.Event() def stop(self): self._stop_event.set() def stopped(self): return self._stop_event.is_set() In this code, you should call stop() on the thread when you want it to exit, and wait for the thread to exit properly using join(). The thread should check the stop flag at regular intervals. There are cases, however, when you really need to kill a thread. An example is when you are wrapping an external library that is busy for long calls, and you want to interrupt it. The following code allows (with some restrictions) to raise an Exception in a Python thread: def _async_raise(tid, exctype): '''Raises an exception in the threads with id tid''' if not inspect.isclass(exctype): raise TypeError(\"Only types can be raised (not instances)\") res = ctypes.pythonapi.PyThreadState_SetAsyncExc(ctypes.c_long(tid), ctypes.py_object(exctype)) if res == 0: raise ValueError(\"invalid thread id\") elif res != 1: # \"if it returns a number greater than one, you're in trouble, # and you should call it again with exc=NULL to revert the effect\" ctypes.pythonapi.PyThreadState_SetAsyncExc(ctypes.c_long(tid), None) raise SystemError(\"PyThreadState_SetAsyncExc failed\") class ThreadWithExc(threading.Thread): '''A thread class that supports raising an exception in the thread from another thread. ''' def _get_my_tid(self): \"\"\"determines this (self's) thread id CAREFUL: this function is executed in the context of the caller thread, to get the identity of the thread represented by this instance. \"\"\" if not self.is_alive(): # Note: self.isAlive() on older version of Python raise threading.ThreadError(\"the thread is not active\") # do we have it cached? if hasattr(self, \"_thread_id\"): return self._thread_id # no, look for it in the _active dict for tid, tobj in threading._active.items(): if tobj is self: self._thread_id = tid return tid # TODO: in python 2.6, there's a simpler way to do: self.ident raise AssertionError(\"could not determine the thread's id\") def raise_exc(self, exctype): \"\"\"Raises the given exception type in the context of this thread. If the thread is busy in a system call (time.sleep(), socket.accept(), ...), the exception is simply ignored. If you are sure that your exception should terminate the thread, one way to ensure that it works is: t = ThreadWithExc( ... ) ... t.raise_exc( SomeException ) while t.isAlive(): time.sleep( 0.1 ) t.raise_exc( SomeException ) If the exception is to be caught by the thread, you need a way to check that your thread has caught it. CAREFUL: this function is executed in the context of the caller thread, to raise an exception in the context of the thread represented by this instance. \"\"\" _async_raise( self._get_my_tid(), exctype ) (Based on Killable Threads by Tomer Filiba. The quote about the return value of PyThreadState_SetAsyncExc appears to be from an old version of Python.) As noted in the documentation, this is not a magic bullet because if the thread is busy outside the Python interpreter, it will not catch the interruption. A good usage pattern of this code is to have the thread catch a specific exception and perform the cleanup. 
That way, you can interrupt a task and still have proper cleanup."} +{"question_id": 477486, "score": 1028, "creation_date": 1232878843, "tags": ["python", "floating-point", "range"], "instruction": "How do I use a decimal step value for range()?\n\nHow do I iterate between 0 and 1 by a step of 0.1? This says that the step argument cannot be zero: for i in range(0, 1, 0.1): print(i)", "output": "Rather than using a decimal step directly, it's much safer to express this in terms of how many points you want. Otherwise, floating-point rounding error is likely to give you a wrong result. Use the linspace function from the NumPy library (which isn't part of the standard library but is relatively easy to obtain). linspace takes a number of points to return, and also lets you specify whether or not to include the right endpoint: >>> np.linspace(0,1,11) array([ 0. , 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1. ]) >>> np.linspace(0,1,10,endpoint=False) array([ 0. , 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]) If you really want to use a floating-point step value, use numpy.arange: >>> import numpy as np >>> np.arange(0.0, 1.0, 0.1) array([ 0. , 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]) Floating-point rounding error will cause problems, though. Here's a simple case where rounding error causes arange to produce a length-4 array when it should only produce 3 numbers: >>> numpy.arange(1, 1.3, 0.1) array([1. , 1.1, 1.2, 1.3])"} +{"question_id": 209513, "score": 1016, "creation_date": 1224178083, "tags": ["python", "string", "hex"], "instruction": "Convert hex string to integer in Python\n\nHow do I convert a hex string to an integer? \"0xffff\" \u27f6 65535 \"ffff\" \u27f6 65535", "output": "Without the 0x prefix, you need to specify the base explicitly, otherwise there's no way to tell: x = int(\"deadbeef\", 16) With the 0x prefix, Python can distinguish hex and decimal automatically: >>> print(int(\"0xdeadbeef\", 0)) 3735928559 >>> print(int(\"10\", 0)) 10 (You must specify 0 as the base in order to invoke this prefix-guessing behavior; if you omit the second parameter, int() will assume base-10.)"} +{"question_id": 899103, "score": 1006, "creation_date": 1243014738, "tags": ["python", "file", "list", "file-io", "newline"], "instruction": "Writing a list to a file with Python, with newlines\n\nHow do I write a list to a file? 
writelines() doesn't insert newline characters, so I need to do: f.writelines([f\"{line}\\n\" for line in lines])", "output": "Use a loop: with open('your_file.txt', 'w') as f: for line in lines: f.write(f\"{line}\\n\") For Python <3.6: with open('your_file.txt', 'w') as f: for line in lines: f.write(\"%s\\n\" % line) For Python 2, one may also use: with open('your_file.txt', 'w') as f: for line in lines: print >> f, line If you're keen on a single function call, at least remove the square brackets [], so that the strings to be printed get made one at a time (a genexp rather than a listcomp) -- no reason to take up all the memory required to materialize the whole list of strings."} +{"question_id": 1854, "score": 1003, "creation_date": 1217906598, "tags": ["python", "operating-system", "cross-platform", "platform-agnostic"], "instruction": "How to identify which OS Python is running on\n\nWhat do I need to look at to see whether I'm on Windows or Unix, etc.?", "output": ">>> import os >>> os.name 'posix' >>> import platform >>> platform.system() 'Linux' >>> platform.release() '2.6.22-15-generic' The output of platform.system() is as follows: Linux: Linux Mac: Darwin Windows: Windows See: platform \u2014 Access to underlying platform\u2019s identifying data"} +{"question_id": 15221473, "score": 1001, "creation_date": 1362479358, "tags": ["python", "upgrade", "virtualenv", "pip", "package-managers"], "instruction": "How do I update/upgrade pip itself from inside my virtual environment?\n\nI'm able to update pip-managed packages, but how do I update pip itself? According to pip --version, I currently have pip 1.1 installed in my virtualenv and I want to update to the latest version. What's the command for that? Do I need to use distribute or is there a native pip or virtualenv command? I've already tried pip update and pip update pip with no success.", "output": "pip is just a PyPI package like any other; you could use it to upgrade itself the same way you would upgrade any package: pip install --upgrade pip On Windows the recommended command is: py -m pip install --upgrade pip"} +{"question_id": 1247486, "score": 996, "creation_date": 1249688611, "tags": ["python", "list-comprehension", "map-function"], "instruction": "List comprehension vs map\n\nIs there a reason to prefer using map() over list comprehension or vice versa? Is either of them generally more efficient or considered generally more Pythonic than the other?", "output": "map may be microscopically faster in some cases (when you're not making a lambda for the purpose, but using the same function in map and a list comprehension). List comprehensions may be faster in other cases and most (not all) Pythonistas consider them more direct and clearer. 
An example of the tiny speed advantage of map when using exactly the same function: $ python -m timeit -s'xs=range(10)' 'map(hex, xs)' 100000 loops, best of 3: 4.86 usec per loop $ python -m timeit -s'xs=range(10)' '[hex(x) for x in xs]' 100000 loops, best of 3: 5.58 usec per loop An example of how performance comparison gets completely reversed when map needs a lambda: $ python -m timeit -s'xs=range(10)' 'map(lambda x: x+2, xs)' 100000 loops, best of 3: 4.24 usec per loop $ python -m timeit -s'xs=range(10)' '[x+2 for x in xs]' 100000 loops, best of 3: 2.32 usec per loop"} +{"question_id": 4617034, "score": 985, "creation_date": 1294330603, "tags": ["python", "file-io"], "instruction": "How can I open multiple files using \"with open\" in Python?\n\nI want to change a couple of files at one time, iff I can write to all of them. I'm wondering if I somehow can combine the multiple open calls with the with statement: try: with open('a', 'w') as a and open('b', 'w') as b: do_something() except IOError as e: print 'Operation failed: %s' % e.strerror If that's not possible, what would an elegant solution to this problem look like?", "output": "As of Python 2.7 (or 3.1 respectively) you can write with open('a', 'w') as a, open('b', 'w') as b: do_something() (Historical note: In earlier versions of Python, you can sometimes use contextlib.nested() to nest context managers. This won't work as expected for opening multiples files, though -- see the linked documentation for details.) In the rare case that you want to open a variable number of files all at the same time, you can use contextlib.ExitStack, starting from Python version 3.3: with ExitStack() as stack: files = [stack.enter_context(open(fname)) for fname in filenames] # Do something with \"files\" Note that more commonly you want to process files sequentially rather than opening all of them at the same time, in particular if you have a variable number of files: for fname in filenames: with open(fname) as f: # Process f"} +{"question_id": 6392739, "score": 983, "creation_date": 1308352770, "tags": ["python", "syntax"], "instruction": "What does the \"at\" (@) symbol do in Python?\n\nWhat does the @ symbol do in Python?", "output": "An @ symbol at the beginning of a line is used for class and function decorators: PEP 318: Decorators Python Decorators - Python Wiki The most common Python decorators are: @property @classmethod @staticmethod An @ in the middle of a line is probably matrix multiplication: @ as a binary operator."} +{"question_id": 682504, "score": 979, "creation_date": 1238000421, "tags": ["python", "class", "constructor"], "instruction": "What is a clean \"pythonic\" way to implement multiple constructors?\n\nI can't find a definitive answer for this. As far as I know, you can't have multiple __init__ functions in a Python class. So how do I solve this problem? Suppose I have a class called Cheese with the number_of_holes property. How can I have two ways of creating cheese objects... One that takes a number of holes like this: parmesan = Cheese(num_holes=15). And one that takes no arguments and just randomizes the number_of_holes property: gouda = Cheese(). I can think of only one way to do this, but this seems clunky: class Cheese: def __init__(self, num_holes=0): if num_holes == 0: # Randomize number_of_holes else: number_of_holes = num_holes What do you say? Is there another way?", "output": "Actually None is much better for \"magic\" values: class Cheese: def __init__(self, num_holes=None): if num_holes is None: ... 
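A related pattern, shown here only as a hedged sketch (the random_holes helper is a stand-in for whatever randomization you actually want), is to keep a single __init__ and add a named alternate constructor with @classmethod:
import random
def random_holes():
    # stand-in for your own randomization scheme
    return random.randint(1, 100)
class Cheese:
    def __init__(self, num_holes=None):
        # fall back to a random count when no explicit value is given
        self.number_of_holes = random_holes() if num_holes is None else num_holes
    @classmethod
    def random(cls):
        # descriptive alternate constructor; relies on the None default above
        return cls()
parmesan = Cheese(num_holes=15)
gouda = Cheese.random()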
Now if you want complete freedom of adding more parameters: class Cheese: def __init__(self, *args, **kwargs): # args -- tuple of anonymous arguments # kwargs -- dictionary of named arguments self.num_holes = kwargs.get('num_holes', random_holes()) To better explain the concept of *args and **kwargs (you can actually change these names): def f(*args, **kwargs): print('args:', args, 'kwargs:', kwargs) >>> f('a') args: ('a',) kwargs: {} >>> f(ar='a') args: () kwargs: {'ar': 'a'} >>> f(1,2,param=3) args: (1, 2) kwargs: {'param': 3} http://docs.python.org/reference/expressions.html#calls"} +{"question_id": 9542738, "score": 972, "creation_date": 1330740214, "tags": ["python", "list", "find"], "instruction": "Find a value in a list\n\nI use the following to check if item is in my_list: if item in my_list: print(\"Desired item is in list\") Is \"if item in my_list:\" the most \"pythonic\" way of finding an item in a list? EDIT FOR REOPENING: the question has been considered duplicate, but I'm not entirely convinced: here this question is roughly \"what is the most Pythonic way to find an element in a list\". And the first answer to the question is really extensive in all Python ways to do this. Whereas on the linked duplicate question and its corresponding answer, the focus is roughly only limited to the 'in' key word in Python. I think it is really limiting, compared to the current question. And I think the answer to this current question, is more relevant and elaborated that the answer of the proposed duplicate question/answer.", "output": "As for your first question: \"if item is in my_list:\" is perfectly fine and should work if item equals one of the elements inside my_list. The item must exactly match an item in the list. For instance, \"abc\" and \"ABC\" do not match. Floating point values in particular may suffer from inaccuracy. For instance, 1 - 1/3 != 2/3. As for your second question: There's actually several possible ways if \"finding\" things in lists. Checking if something is inside This is the use case you describe: Checking whether something is inside a list or not. As you know, you can use the in operator for that: 3 in [1, 2, 3] # => True Filtering a collection That is, finding all elements in a sequence that meet a certain condition. You can use list comprehension or generator expressions for that: matches = [x for x in lst if fulfills_some_condition(x)] matches = (x for x in lst if x > 6) The latter will return a generator which you can imagine as a sort of lazy list that will only be built as soon as you iterate through it. By the way, the first one is exactly equivalent to matches = filter(fulfills_some_condition, lst) in Python 2. Here you can see higher-order functions at work. In Python 3, filter doesn't return a list, but a generator-like object. Finding the first occurrence If you only want the first thing that matches a condition (but you don't know what it is yet), it's fine to use a for loop (possibly using the else clause as well, which is not really well-known). You can also use next(x for x in lst if ...) which will return the first match or raise a StopIteration if none is found. Alternatively, you can use next((x for x in lst if ...), [default value]) Finding the location of an item For lists, there's also the index method that can sometimes be useful if you want to know where a certain element is in the list: [1,2,3].index(2) # => 1 [1,2,3].index(4) # => ValueError However, note that if you have duplicates, .index always returns the lowest index:...... 
[1,2,3,2].index(2) # => 1 If there are duplicates and you want all the indexes then you can use enumerate() instead: [i for i,x in enumerate([1,2,3,2]) if x==2] # => [1, 3]"} +{"question_id": 4690600, "score": 970, "creation_date": 1295004830, "tags": ["python", "exception", "logging", "except", "python-logging"], "instruction": "python exception message capturing\n\nimport ftplib import urllib2 import os import logging logger = logging.getLogger('ftpuploader') hdlr = logging.FileHandler('ftplog.log') formatter = logging.Formatter('%(asctime)s %(levelname)s %(message)s') hdlr.setFormatter(formatter) logger.addHandler(hdlr) logger.setLevel(logging.INFO) FTPADDR = \"some ftp address\" def upload_to_ftp(con, filepath): try: f = open(filepath,'rb') # file to send con.storbinary('STOR '+ filepath, f) # Send the file f.close() # Close file and FTP logger.info('File successfully uploaded to '+ FTPADDR) except, e: logger.error('Failed to upload to ftp: '+ str(e)) This doesn't seem to work, I get syntax error, what is the proper way of doing this for logging all kind of exceptions to a file", "output": "You have to define which type of exception you want to catch. So write except Exception as e: instead of except, e: for a general exception. Other possibility is to write your whole try/except code this way: try: with open(filepath,'rb') as f: con.storbinary('STOR '+ filepath, f) logger.info('File successfully uploaded to '+ FTPADDR) except Exception as e: # works on python 3.x logger.error('Failed to upload to ftp: %s', repr(e)) In older versions of Python 2.x, use except Exception, e instead of except Exception as e: try: with open(filepath,'rb') as f: con.storbinary('STOR '+ filepath, f) logger.info('File successfully uploaded to %s', FTPADDR) except Exception, e: # works on python 2.x logger.error('Failed to upload to ftp: %s', repr(e))"} +{"question_id": 10377998, "score": 966, "creation_date": 1335754681, "tags": ["python", "iterator", "directory"], "instruction": "How can I iterate over files in a given directory?\n\nI need to iterate through all .asm files inside a given directory and do some actions on them. 
How can this be done in a efficient way?", "output": "Python 3.6 version of the above answer, using os - assuming that you have the directory path as a str object in a variable called directory_in_str: import os directory = os.fsencode(directory_in_str) for file in os.listdir(directory): filename = os.fsdecode(file) if filename.endswith(\".asm\") or filename.endswith(\".py\"): # print(os.path.join(directory, filename)) continue else: continue Or recursively, using pathlib: from pathlib import Path pathlist = Path(directory_in_str).glob('**/*.asm') for path in pathlist: # because path is object not string path_in_str = str(path) # print(path_in_str) Use rglob to replace glob('**/*.asm') with rglob('*.asm') This is like calling Path.glob() with '**/' added in front of the given relative pattern: from pathlib import Path pathlist = Path(directory_in_str).rglob('*.asm') for path in pathlist: # because path is object not string path_in_str = str(path) # print(path_in_str) Original answer: import os for filename in os.listdir(\"/path/to/dir/\"): if filename.endswith(\".asm\") or filename.endswith(\".py\"): # print(os.path.join(directory, filename)) continue else: continue"} +{"question_id": 53645882, "score": 950, "creation_date": 1544078475, "tags": ["python", "pandas", "join", "merge", "concatenation"], "instruction": "Pandas Merging 101\n\nHow can I perform a (INNER| (LEFT|RIGHT|FULL) OUTER) JOIN with pandas? How do I add NaNs for missing rows after a merge? How do I get rid of NaNs after merging? Can I merge on the index? How do I merge multiple DataFrames? Cross join with pandas merge? join? concat? update? Who? What? Why?! ... and more. I've seen these recurring questions asking about various facets of the pandas merge functionality. Most of the information regarding merge and its various use cases today is fragmented across dozens of badly worded, unsearchable posts. The aim here is to collate some of the more important points for posterity. This Q&A is meant to be the next installment in a series of helpful user guides on common pandas idioms (see this post on pivoting, and this post on concatenation, which I will be touching on, later). Please note that this post is not meant to be a replacement for the documentation, so please read that as well! Some of the examples are taken from there. Table of Contents For ease of access. Merging basics - basic types of joins (read this first) Index-based joins Generalizing to multiple DataFrames Cross join", "output": "This post aims to give readers a primer on SQL-flavored merging with Pandas, how to use it, and when not to use it. In particular, here's what this post will go through: The basics - types of joins (LEFT, RIGHT, OUTER, INNER) merging with different column names merging with multiple columns avoiding duplicate merge key column in output What this post (and other posts by me on this thread) will not go through: Performance-related discussions and timings (for now). Mostly notable mentions of better alternatives, wherever appropriate. Handling suffixes, removing extra columns, renaming outputs, and other specific use cases. There are other (read: better) posts that deal with that, so figure it out! Note Most examples default to INNER JOIN operations while demonstrating various features, unless otherwise specified. Furthermore, all the DataFrames here can be copied and replicated so you can play with them. Also, see this post on how to read DataFrames from your clipboard. 
Lastly, all visual representation of JOIN operations have been hand-drawn using Google Drawings. Inspiration from here. Enough talk - just show me how to use merge! Setup & Basics np.random.seed(0) left = pd.DataFrame({'key': ['A', 'B', 'C', 'D'], 'value': np.random.randn(4)}) right = pd.DataFrame({'key': ['B', 'D', 'E', 'F'], 'value': np.random.randn(4)}) left key value 0 A 1.764052 1 B 0.400157 2 C 0.978738 3 D 2.240893 right key value 0 B 1.867558 1 D -0.977278 2 E 0.950088 3 F -0.151357 For the sake of simplicity, the key column has the same name (for now). An INNER JOIN is represented by Note This, along with the forthcoming figures all follow this convention: blue indicates rows that are present in the merge result red indicates rows that are excluded from the result (i.e., removed) green indicates missing values that are replaced with NaNs in the result To perform an INNER JOIN, call merge on the left DataFrame, specifying the right DataFrame and the join key (at the very least) as arguments. left.merge(right, on='key') # Or, if you want to be explicit # left.merge(right, on='key', how='inner') key value_x value_y 0 B 0.400157 1.867558 1 D 2.240893 -0.977278 This returns only rows from left and right which share a common key (in this example, \"B\" and \"D). A LEFT OUTER JOIN, or LEFT JOIN is represented by This can be performed by specifying how='left'. left.merge(right, on='key', how='left') key value_x value_y 0 A 1.764052 NaN 1 B 0.400157 1.867558 2 C 0.978738 NaN 3 D 2.240893 -0.977278 Carefully note the placement of NaNs here. If you specify how='left', then only keys from left are used, and missing data from right is replaced by NaN. And similarly, for a RIGHT OUTER JOIN, or RIGHT JOIN which is... ...specify how='right': left.merge(right, on='key', how='right') key value_x value_y 0 B 0.400157 1.867558 1 D 2.240893 -0.977278 2 E NaN 0.950088 3 F NaN -0.151357 Here, keys from right are used, and missing data from left is replaced by NaN. Finally, for the FULL OUTER JOIN, given by specify how='outer'. left.merge(right, on='key', how='outer') key value_x value_y 0 A 1.764052 NaN 1 B 0.400157 1.867558 2 C 0.978738 NaN 3 D 2.240893 -0.977278 4 E NaN 0.950088 5 F NaN -0.151357 This uses the keys from both frames, and NaNs are inserted for missing rows in both. The documentation summarizes these various merges nicely: Other JOINs - LEFT-Excluding, RIGHT-Excluding, and FULL-Excluding/ANTI JOINs If you need LEFT-Excluding JOINs and RIGHT-Excluding JOINs in two steps. 
For LEFT-Excluding JOIN, represented as Start by performing a LEFT OUTER JOIN and then filtering to rows coming from left only (excluding everything from the right), (left.merge(right, on='key', how='left', indicator=True) .query('_merge == \"left_only\"') .drop('_merge', axis=1)) key value_x value_y 0 A 1.764052 NaN 2 C 0.978738 NaN Where, left.merge(right, on='key', how='left', indicator=True) key value_x value_y _merge 0 A 1.764052 NaN left_only 1 B 0.400157 1.867558 both 2 C 0.978738 NaN left_only 3 D 2.240893 -0.977278 both And similarly, for a RIGHT-Excluding JOIN, (left.merge(right, on='key', how='right', indicator=True) .query('_merge == \"right_only\"') .drop('_merge', axis=1)) key value_x value_y 2 E NaN 0.950088 3 F NaN -0.151357 Lastly, if you are required to do a merge that only retains keys from the left or right, but not both (IOW, performing an ANTI-JOIN), You can do this in similar fashion\u2014 (left.merge(right, on='key', how='outer', indicator=True) .query('_merge != \"both\"') .drop('_merge', axis=1)) key value_x value_y 0 A 1.764052 NaN 2 C 0.978738 NaN 4 E NaN 0.950088 5 F NaN -0.151357 Different names for key columns If the key columns are named differently\u2014for example, left has keyLeft, and right has keyRight instead of key\u2014then you will have to specify left_on and right_on as arguments instead of on: left2 = left.rename({'key':'keyLeft'}, axis=1) right2 = right.rename({'key':'keyRight'}, axis=1) left2 keyLeft value 0 A 1.764052 1 B 0.400157 2 C 0.978738 3 D 2.240893 right2 keyRight value 0 B 1.867558 1 D -0.977278 2 E 0.950088 3 F -0.151357 left2.merge(right2, left_on='keyLeft', right_on='keyRight', how='inner') keyLeft value_x keyRight value_y 0 B 0.400157 B 1.867558 1 D 2.240893 D -0.977278 Avoiding duplicate key column in output When merging on keyLeft from left and keyRight from right, if you only want either of the keyLeft or keyRight (but not both) in the output, you can start by setting the index as a preliminary step. left3 = left2.set_index('keyLeft') left3.merge(right2, left_index=True, right_on='keyRight') value_x keyRight value_y 0 0.400157 B 1.867558 1 2.240893 D -0.977278 Contrast this with the output of the command just before (that is, the output of left2.merge(right2, left_on='keyLeft', right_on='keyRight', how='inner')), you'll notice keyLeft is missing. You can figure out what column to keep based on which frame's index is set as the key. This may matter when, say, performing some OUTER JOIN operation. 
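A blunter alternative, sketched here as an aside rather than as part of the recipe above, is to merge on both differently named keys and then drop the redundant column afterwards:
left2.merge(right2, left_on='keyLeft', right_on='keyRight', how='inner').drop(columns=['keyRight'])
This keeps keyLeft and discards keyRight from the output shown earlier.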
Merging only a single column from one of the DataFrames For example, consider right3 = right.assign(newcol=np.arange(len(right))) right3 key value newcol 0 B 1.867558 0 1 D -0.977278 1 2 E 0.950088 2 3 F -0.151357 3 If you are required to merge only \"newcol\" (without any of the other columns), you can usually just subset columns before merging: left.merge(right3[['key', 'newcol']], on='key') key value newcol 0 B 0.400157 0 1 D 2.240893 1 If you're doing a LEFT OUTER JOIN, a more performant solution would involve map: # left['newcol'] = left['key'].map(right3.set_index('key')['newcol'])) left.assign(newcol=left['key'].map(right3.set_index('key')['newcol'])) key value newcol 0 A 1.764052 NaN 1 B 0.400157 0.0 2 C 0.978738 NaN 3 D 2.240893 1.0 As mentioned, this is similar to, but faster than left.merge(right3[['key', 'newcol']], on='key', how='left') key value newcol 0 A 1.764052 NaN 1 B 0.400157 0.0 2 C 0.978738 NaN 3 D 2.240893 1.0 Merging on multiple columns To join on more than one column, specify a list for on (or left_on and right_on, as appropriate). left.merge(right, on=['key1', 'key2'] ...) Or, in the event the names are different, left.merge(right, left_on=['lkey1', 'lkey2'], right_on=['rkey1', 'rkey2']) Other useful merge* operations and functions Merging a DataFrame with Series on index: See this answer. Besides merge, DataFrame.update and DataFrame.combine_first are also used in certain cases to update one DataFrame with another. pd.merge_ordered is a useful function for ordered JOINs. pd.merge_asof (read: merge_asOf) is useful for approximate joins. This section only covers the very basics, and is designed to only whet your appetite. For more examples and cases, see the documentation on merge, join, and concat as well as the links to the function specifications. Continue Reading Jump to other topics in Pandas Merging 101 to continue learning: Merging basics - basic types of joins * Index-based joins Generalizing to multiple DataFrames Cross join *You are here."} +{"question_id": 1550226, "score": 949, "creation_date": 1255252016, "tags": ["python", "setup.py", "pypi"], "instruction": "Python setup.py uninstall\n\nI have installed a Python package with python setup.py install. How do I uninstall it?", "output": "Note: Avoid using python setup.py install. Instead, use pip install . You need to remove all files manually, and also undo any other stuff that installation did manually. If you don't know the list of all files, you can reinstall it with the --record option, and take a look at the list this produces. To record a list of installed files, you can use: python setup.py install --record files.txt Once you want to uninstall you can use xargs to do the removal: xargs rm -rf < files.txt Or if you're running Windows, use PowerShell: Get-Content files.txt | ForEach-Object {Remove-Item $_ -Recurse -Force} Then delete also the containing directory, e.g. /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/my_module-0.1.egg/ on macOS. It doesn't have any files, but Python will still import an empty module: >>> import my_module >>> my_module.__file__ None Once deleted, Python shows: >>> import my_module Traceback (most recent call last): File \"\", line 1, in ModuleNotFoundError: No module named 'my_module'"} +{"question_id": 3965104, "score": 946, "creation_date": 1287458418, "tags": ["python", "nonetype"], "instruction": "What is the best \"not None\" test in Python\n\nOut of these not None tests. 
if val != None: if not (val is None): if val is not None: Which one is preferable, and why?", "output": "if val is not None: # ... is the Pythonic idiom for testing that a variable is not set to None. This idiom has particular uses in the case of declaring keyword functions with default parameters. is tests identity in Python. Because there is one and only one instance of None present in a running Python script/program, is is the optimal test for this. As Johnsyweb points out, this is discussed in PEP 8 under \"Programming Recommendations\". As for why this is preferred to if not (val is None): # ... this is simply part of the Zen of Python: \"Readability counts.\" Good Python is often close to good pseudocode."} +{"question_id": 42950, "score": 937, "creation_date": 1220489684, "tags": ["python", "date"], "instruction": "Get the last day of the month\n\nCan Python's standard library (or dateutil if not) easily determine (i.e. one function call) the last day of a given month?", "output": "calendar.monthrange provides this information: calendar.monthrange(year, month) Returns weekday of first day of the month and number of days in month, for the specified year and month. >>> import calendar >>> calendar.monthrange(2008, 2) # leap years are handled correctly (calendar.FRIDAY, 29) >>> calendar.monthrange(2100, 2) # years divisible by 100 but not 400 aren't leap years (calendar.MONDAY, 28) # the module uses the Georgian calendar extended into the past and # future, so leap days in the distant past will differ from Julian: >>> calendar.monthrange(1100, 2) (calendar.THURSDAY, 28) # note also that pre-Python 3.12, the REPL renders the weekday # as a bare integer: >>> calendar.monthrange(2002, 1) (1, 31) so simply: calendar.monthrange(year, month)[1]"} +{"question_id": 3899980, "score": 933, "creation_date": 1286707407, "tags": ["python", "matplotlib", "font-size"], "instruction": "How to change the font size on a matplotlib plot\n\nHow does one change the font size for all elements (ticks, labels, title) on a matplotlib plot? I know how to change the tick label sizes, this is done with: import matplotlib matplotlib.rc('xtick', labelsize=20) matplotlib.rc('ytick', labelsize=20) But how does one change the rest?", "output": "From the matplotlib documentation, font = {'family' : 'normal', 'weight' : 'bold', 'size' : 22} matplotlib.rc('font', **font) This sets the font of all items to the font specified by the kwargs object, font. Alternatively, you could also use the rcParams update method as suggested in this answer: matplotlib.rcParams.update({'font.size': 22}) or import matplotlib.pyplot as plt plt.rcParams.update({'font.size': 22}) You can find a full list of available properties on the Customizing matplotlib page."} +{"question_id": 1207406, "score": 931, "creation_date": 1248968202, "tags": ["python", "iteration"], "instruction": "How to remove items from a list while iterating?\n\nI'm iterating over a list of tuples in Python, and am attempting to remove them if they meet certain criteria. for tup in somelist: if determine(tup): code_to_remove_tup What should I use in place of code_to_remove_tup? 
I can't figure out how to remove the item in this fashion.", "output": "You can use a list comprehension to create a new list containing only the elements you don't want to remove: somelist = [x for x in somelist if not determine(x)] Or, by assigning to the slice somelist[:], you can mutate the existing list to contain only the items you want: somelist[:] = [x for x in somelist if not determine(x)] This approach could be useful if there are other references to somelist that need to reflect the changes. Instead of a comprehension, you could also use itertools. In Python 2: from itertools import ifilterfalse somelist[:] = ifilterfalse(determine, somelist) Or in Python 3: from itertools import filterfalse somelist[:] = filterfalse(determine, somelist)"} +{"question_id": 10712002, "score": 930, "creation_date": 1337733504, "tags": ["python", "list"], "instruction": "Create an empty list with certain size in Python\n\nHow do I create an empty list that can hold 10 elements? After that, I want to assign values in that list. For example: xs = list() for i in range(0, 9): xs[i] = i However, that gives IndexError: list assignment index out of range. Why?", "output": "You cannot assign to a list like xs[i] = value, unless the list already is initialized with at least i+1 elements (because the first index is 0). Instead, use xs.append(value) to add elements to the end of the list. (Though you could use the assignment notation if you were using a dictionary instead of a list.) Creating an empty list: >>> xs = [None] * 10 >>> xs [None, None, None, None, None, None, None, None, None, None] Assigning a value to an existing element of the above list: >>> xs[1] = 5 >>> xs [None, 5, None, None, None, None, None, None, None, None] Keep in mind that something like xs[15] = 5 would still fail, as our list has only 10 elements. range(x) creates a list from [0, 1, 2, ... x-1] # 2.X only. Use list(range(10)) in 3.X. >>> xs = range(10) >>> xs [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] Using a function to create a list: >>> def display(): ... xs = [] ... for i in range(9): # This is just to tell you how to create a list. ... xs.append(i) ... return xs ... >>> print display() [0, 1, 2, 3, 4, 5, 6, 7, 8] List comprehension (Using the squares because for range you don't need to do all this, you can just return range(0,9) ): >>> def display(): ... return [x**2 for x in range(9)] ... >>> print display() [0, 1, 4, 9, 16, 25, 36, 49, 64]"} +{"question_id": 1894269, "score": 926, "creation_date": 1260641943, "tags": ["python", "string", "list", "text-parsing"], "instruction": "How to convert string representation of list to a list\n\nI was wondering what the simplest way is to convert a string representation of a list like the following to a list: x = '[ \"A\",\"B\",\"C\" , \" D\"]' Even in cases where the user puts spaces in between the commas, and spaces inside of the quotes, I need to handle that as well and convert it to: x = [\"A\", \"B\", \"C\", \"D\"] I know I can strip spaces with strip() and split() and check for non-letter characters. But the code was getting very kludgy. Is there a quick function that I'm not aware of?", "output": ">>> import ast >>> x = '[ \"A\",\"B\",\"C\" , \" D\"]' >>> x = ast.literal_eval(x) >>> x ['A', 'B', 'C', ' D'] >>> x = [n.strip() for n in x] >>> x ['A', 'B', 'C', 'D'] ast.literal_eval: Evaluate an expression node or a string containing only a Python literal or container display. 
The string or node provided may only consist of the following Python literal structures: strings, bytes, numbers, tuples, lists, dicts, sets, booleans, None and Ellipsis. This can be used for evaluating strings containing Python values without the need to parse the values oneself. It is not capable of evaluating arbitrarily complex expressions, for example involving operators or indexing."} +{"question_id": 2081640, "score": 926, "creation_date": 1263745377, "tags": ["python", "unicode", "python-2.x", "rawstring"], "instruction": "What exactly do \"u\" and \"r\" string prefixes do, and what are raw string literals?\n\nWhile asking this question, I realized I didn't know much about raw strings. For somebody claiming to be a Django trainer, this sucks. I know what an encoding is, and I know what u'' alone does since I get what is Unicode. But what does r'' do exactly? What kind of string does it result in? And above all, what the heck does ur'' do? Finally, is there any reliable way to go back from a Unicode string to a simple raw string? Ah, and by the way, if your system and your text editor charset are set to UTF-8, does u'' actually do anything?", "output": "There's not really any \"raw string\"; there are raw string literals, which are exactly the string literals marked by an r before the opening quote. A \"raw string literal\" is a slightly different syntax for a string literal, in which a backslash, \\, is taken as meaning \"just a backslash\" (except when it comes right before a quote that would otherwise terminate the literal) -- no \"escape sequences\" to represent newlines, tabs, backspaces, form-feeds, and so on. In normal string literals, each backslash must be doubled up to avoid being taken as the start of an escape sequence. This syntax variant exists mostly because the syntax of regular expression patterns is heavy with backslashes (but never at the end, so the \"except\" clause above doesn't matter) and it looks a bit better when you avoid doubling up each of them -- that's all. It also gained some popularity to express native Windows file paths (with backslashes instead of regular slashes like on other platforms), but that's very rarely needed (since normal slashes mostly work fine on Windows too) and imperfect (due to the \"except\" clause above). r'...' is a byte string (in Python 2.*), ur'...' is a Unicode string (again, in Python 2.*), and any of the other three kinds of quoting also produces exactly the same types of strings (so for example r'...', r'''...''', r\"...\", r\"\"\"...\"\"\" are all byte strings, and so on). Not sure what you mean by \"going back\" - there is no intrinsically back and forward directions, because there's no raw string type, it's just an alternative syntax to express perfectly normal string objects, byte or unicode as they may be. And yes, in Python 2.*, u'...' is of course always distinct from just '...' -- the former is a unicode string, the latter is a byte string. What encoding the literal might be expressed in is a completely orthogonal issue. E.g., consider (Python 2.6): >>> sys.getsizeof('ciao') 28 >>> sys.getsizeof(u'ciao') 34 The Unicode object of course takes more memory space (very small difference for a very short string, obviously ;-)."} +{"question_id": 1987694, "score": 925, "creation_date": 1262310706, "tags": ["python", "arrays", "numpy", "output-formatting"], "instruction": "How do I print the full NumPy array, without truncation?\n\nWhen I print a numpy array, I get a truncated representation, but I want the full array. 
>>> numpy.arange(10000) array([ 0, 1, 2, ..., 9997, 9998, 9999]) >>> numpy.arange(10000).reshape(250,40) array([[ 0, 1, 2, ..., 37, 38, 39], [ 40, 41, 42, ..., 77, 78, 79], [ 80, 81, 82, ..., 117, 118, 119], ..., [9880, 9881, 9882, ..., 9917, 9918, 9919], [9920, 9921, 9922, ..., 9957, 9958, 9959], [9960, 9961, 9962, ..., 9997, 9998, 9999]])", "output": "Use numpy.set_printoptions: import sys import numpy numpy.set_printoptions(threshold=sys.maxsize)"} +{"question_id": 13784192, "score": 922, "creation_date": 1355021438, "tags": ["python", "pandas", "dataframe"], "instruction": "Creating an empty Pandas DataFrame, and then filling it\n\nI'm starting from the pandas DataFrame documentation here: Introduction to data structures I'd like to iteratively fill the DataFrame with values in a time series kind of calculation. I'd like to initialize the DataFrame with columns A, B, and timestamp rows, all 0 or all NaN. I'd then add initial values and go over this data calculating the new row from the row before, say row[A][t] = row[A][t-1]+1 or so. I'm currently using the code as below, but I feel it's kind of ugly and there must be a way to do this with a DataFrame directly or just a better way in general. import pandas as pd import datetime as dt import scipy as s base = dt.datetime.today().date() dates = [ base - dt.timedelta(days=x) for x in range(9, -1, -1) ] valdict = {} symbols = ['A','B', 'C'] for symb in symbols: valdict[symb] = pd.Series( s.zeros(len(dates)), dates ) for thedate in dates: if thedate > dates[0]: for symb in valdict: valdict[symb][thedate] = 1 + valdict[symb][thedate - dt.timedelta(days=1)]", "output": "Here's a couple of suggestions: Use date_range for the index: import datetime import pandas as pd import numpy as np todays_date = datetime.datetime.now().date() index = pd.date_range(todays_date-datetime.timedelta(10), periods=10, freq='D') columns = ['A','B', 'C'] Note: we could create an empty DataFrame (with NaNs) simply by writing: df_ = pd.DataFrame(index=index, columns=columns) df_ = df_.fillna(0) # With 0s rather than NaNs To do these type of calculations for the data, use a NumPy array: data = np.array([np.arange(10)]*3).T Hence we can create the DataFrame: In [10]: df = pd.DataFrame(data, index=index, columns=columns) In [11]: df Out[11]: A B C 2012-11-29 0 0 0 2012-11-30 1 1 1 2012-12-01 2 2 2 2012-12-02 3 3 3 2012-12-03 4 4 4 2012-12-04 5 5 5 2012-12-05 6 6 6 2012-12-06 7 7 7 2012-12-07 8 8 8 2012-12-08 9 9 9"} +{"question_id": 36921951, "score": 904, "creation_date": 1461865590, "tags": ["python", "pandas", "filter", "conditional-statements", "valueerror"], "instruction": "Truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all()\n\nI want to filter my dataframe with an or condition to keep rows with a particular column's values that are outside the range [-0.25, 0.25]. I tried: df = df[(df['col'] < -0.25) or (df['col'] > 0.25)] But I get the error: ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().", "output": "The or and and Python statements require truth-values. For pandas, these are considered ambiguous, so you should use \"bitwise\" | (or) or & (and) operations: df = df[(df['col'] < -0.25) | (df['col'] > 0.25)] These are overloaded for these kinds of data structures to yield the element-wise or or and. 
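For this particular range filter, two spellings (offered as an aside, not as part of the original answer) avoid combining comparisons altogether:
df = df[df['col'].abs() > 0.25]
# or, when the column has no NaNs, since Series.between() is inclusive of both endpoints by default:
df = df[~df['col'].between(-0.25, 0.25)]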
Just to add some more explanation to this statement: The exception is thrown when you want to get the bool of a pandas.Series: >>> import pandas as pd >>> x = pd.Series([1]) >>> bool(x) ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all(). You hit a place where the operator implicitly converted the operands to bool (you used or but it also happens for and, if and while): >>> x or x ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all(). >>> x and x ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all(). >>> if x: ... print('fun') ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all(). >>> while x: ... print('fun') ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all(). Besides these four statements, there are several Python functions that hide some bool calls (like any, all, filter, ...). These are normally not problematic with pandas.Series, but for completeness I wanted to mention these. In your case, the exception isn't really helpful, because it doesn't mention the right alternatives. For and and or, if you want element-wise comparisons, you can use: numpy.logical_or: >>> import numpy as np >>> np.logical_or(x, y) or simply the | operator: >>> x | y numpy.logical_and: >>> np.logical_and(x, y) or simply the & operator: >>> x & y If you're using the operators, then be sure to set your parentheses correctly because of operator precedence. There are several logical NumPy functions which should work on pandas.Series. The alternatives mentioned in the Exception are more suited if you encountered it when doing if or while. I'll shortly explain each of these: If you want to check if your Series is empty: >>> x = pd.Series([]) >>> x.empty True >>> x = pd.Series([1]) >>> x.empty False Python normally interprets the length of containers (like list, tuple, ...) as truth-value if it has no explicit Boolean interpretation. So if you want the Python-like check, you could do: if x.size or if not x.empty instead of if x. If your Series contains one and only one Boolean value: >>> x = pd.Series([100]) >>> (x > 50).bool() True >>> (x < 50).bool() False If you want to check the first and only item of your Series (like .bool(), but it works even for non-Boolean contents): >>> x = pd.Series([100]) >>> x.item() 100 If you want to check if all or any item is not-zero, not-empty or not-False: >>> x = pd.Series([0, 1, 2]) >>> x.all() # Because one element is zero False >>> x.any() # because one (or more) elements are non-zero True"} +{"question_id": 19960077, "score": 897, "creation_date": 1384362667, "tags": ["python", "pandas", "dataframe", "indexing", "filter"], "instruction": "How to filter Pandas dataframe using 'in' and 'not in' like in SQL\n\nHow can I achieve the equivalents of SQL's IN and NOT IN? I have a list with the required values. 
Here's the scenario: df = pd.DataFrame({'country': ['US', 'UK', 'Germany', 'China']}) countries_to_keep = ['UK', 'China'] # pseudo-code: df[df['country'] not in countries_to_keep] My current way of doing this is as follows: df = pd.DataFrame({'country': ['US', 'UK', 'Germany', 'China']}) df2 = pd.DataFrame({'country': ['UK', 'China'], 'matched': True}) # IN df.merge(df2, how='inner', on='country') # NOT IN not_in = df.merge(df2, how='left', on='country') not_in = not_in[pd.isnull(not_in['matched'])] But this seems like a horrible kludge. Can anyone improve on it?", "output": "You can use pd.Series.isin. For \"IN\" use: something.isin(somewhere) Or for \"NOT IN\": ~something.isin(somewhere) As a worked example: >>> df country 0 US 1 UK 2 Germany 3 China >>> countries_to_keep ['UK', 'China'] >>> df.country.isin(countries_to_keep) 0 False 1 True 2 False 3 True Name: country, dtype: bool >>> df[df.country.isin(countries_to_keep)] country 1 UK 3 China >>> df[~df.country.isin(countries_to_keep)] country 0 US 2 Germany"} +{"question_id": 772124, "score": 896, "creation_date": 1240312872, "tags": ["python", "ellipsis"], "instruction": "What does the Ellipsis object do?\n\nWhile idly surfing the namespace I noticed an odd looking object called Ellipsis, it does not seem to be or do anything special, but it's a globally available builtin. After a search I found that it is used in some obscure variant of the slicing syntax by Numpy and Scipy... but almost nothing else. Was this object added to the language specifically to support Numpy + Scipy? Does Ellipsis have any generic meaning or use at all? D:\\workspace\\numpy>python Python 2.4.4 (#71, Oct 18 2006, 08:34:43) [MSC v.1310 32 bit (Intel)] on win32 Type \"help\", \"copyright\", \"credits\" or \"license\" for more information. >>> Ellipsis Ellipsis", "output": "This came up in another question recently. I'll elaborate on my answer from there: Ellipsis is an object that can appear in slice notation. For example: myList[1:2, ..., 0] Its interpretation is purely up to whatever implements the __getitem__ function and sees Ellipsis objects there, but its main (and intended) use is in the numpy third-party library, which adds a multidimensional array type. Since there are more than one dimensions, slicing becomes more complex than just a start and stop index; it is useful to be able to slice in multiple dimensions as well. E.g., given a 4 \u00d7 4 array, the top left area would be defined by the slice [:2, :2]: >>> a array([[ 1, 2, 3, 4], [ 5, 6, 7, 8], [ 9, 10, 11, 12], [13, 14, 15, 16]]) >>> a[:2, :2] # top left array([[1, 2], [5, 6]]) Extending this further, Ellipsis is used here to indicate a placeholder for the rest of the array dimensions not specified. Think of it as indicating the full slice [:] for all the dimensions in the gap it is placed, so for a 3d array, a[..., 0] is the same as a[:, :, 0] and for 4d a[:, :, :, 0], similarly, a[0, ..., 0] is a[0, :, :, 0] (with however many colons in the middle make up the full number of dimensions in the array). Interestingly, in python3, the Ellipsis literal (...) is usable outside the slice syntax, so you can actually write: >>> ... Ellipsis EDIT: Ellipsis is also used in the standard library typing module: e.g. Callable[..., int] to indicate a callable that returns an int without specifying the signature, or Tuple[str, ...] 
to indicate a variable-length homogeneous tuple of strings."} +{"question_id": 104420, "score": 880, "creation_date": 1221849663, "tags": ["python", "algorithm", "permutation", "combinatorics"], "instruction": "How do I generate all permutations of a list?\n\nHow do I generate all the permutations of a list? For example: permutations([]) [] permutations([1]) [1] permutations([1, 2]) [1, 2] [2, 1] permutations([1, 2, 3]) [1, 2, 3] [1, 3, 2] [2, 1, 3] [2, 3, 1] [3, 1, 2] [3, 2, 1]", "output": "Use itertools.permutations from the standard library: import itertools list(itertools.permutations([1, 2, 3])) A demonstration of how itertools.permutations might be implemented: def permutations(elements): if len(elements) <= 1: yield elements return for perm in permutations(elements[1:]): for i in range(len(elements)): # nb elements[0:1] works in both string and list contexts yield perm[:i] + elements[0:1] + perm[i:] A couple of alternative approaches are listed in the documentation of itertools.permutations. Here's one: def permutations(iterable, r=None): # permutations('ABCD', 2) --> AB AC AD BA BC BD CA CB CD DA DB DC # permutations(range(3)) --> 012 021 102 120 201 210 pool = tuple(iterable) n = len(pool) r = n if r is None else r if r > n: return indices = range(n) cycles = range(n, n-r, -1) yield tuple(pool[i] for i in indices[:r]) while n: for i in reversed(range(r)): cycles[i] -= 1 if cycles[i] == 0: indices[i:] = indices[i+1:] + indices[i:i+1] cycles[i] = n - i else: j = cycles[i] indices[i], indices[-j] = indices[-j], indices[i] yield tuple(pool[i] for i in indices[:r]) break else: return And another, based on itertools.product: def permutations(iterable, r=None): pool = tuple(iterable) n = len(pool) r = n if r is None else r for indices in product(range(n), repeat=r): if len(set(indices)) == r: yield tuple(pool[i] for i in indices)"} +{"question_id": 47071256, "score": 876, "creation_date": 1509613434, "tags": ["python", "pip"], "instruction": "How to update/upgrade a package using pip?\n\nWhat is the way to update a package using pip? These commands do not work: pip update pip upgrade I know this is a simple question but it is needed as it is not so easy to find (pip documentation doesn't pop up and other questions from stack overflow are relevant but are not exactly about that)", "output": "This is the way pip install --upgrade or in short pip install -U Using sudo will ask to enter your root password to confirm the action, but although common, is considered unsafe. If you do not have a root password (if you are not the admin) you should probably work with virtualenv. You can also use the user flag to install it on this user only. pip install --upgrade --user"} +{"question_id": 17839973, "score": 868, "creation_date": 1374684024, "tags": ["python", "pandas", "dataframe", "constructor", "valueerror"], "instruction": "Constructing DataFrame from values in variables yields \"ValueError: If using all scalar values, you must pass an index\"\n\nI have two variables as follows. a = 2 b = 3 I want to construct a DataFrame from this: df2 = pd.DataFrame({'A':a, 'B':b}) This generates an error: ValueError: If using all scalar values, you must pass an index I tried this also: df2 = (pd.DataFrame({'a':a, 'b':b})).reset_index() This gives the same error message. How do I do what I want?", "output": "The error message says that if you're passing scalar values, you have to pass an index. So you can either not use scalar values for the columns -- e.g. 
use a list: >>> df = pd.DataFrame({'A': [a], 'B': [b]}) >>> df A B 0 2 3 or use scalar values and pass an index: >>> df = pd.DataFrame({'A': a, 'B': b}, index=[0, 3]) >>> df A B 0 2 3 3 2 3"} +{"question_id": 431684, "score": 863, "creation_date": 1231619296, "tags": ["python", "path", "cd"], "instruction": "How do I change the working directory in Python?\n\ncd is the shell command to change the working directory. What is the equivalent in Python?", "output": "You can change the working directory with: import os os.chdir(path) You should be careful that changing the directory may result in destructive changes your code applies in the new location. Potentially worse still, do not catch exceptions such as WindowsError and OSError after changing directory as that may mean destructive changes are applied in the old location! If you're on Python 3.11 or newer, then consider using this context manager to ensure you return to the original working directory when you're done: from contextlib import chdir with chdir(path): # do stuff here If you're on an older version of Python, Brian M. Hunt's answer shows how to roll your own context manager: his answer. Changing the current working directory in a subprocess does not change the current working directory in the parent process. This is true of the Python interpreter as well. You cannot use os.chdir() to change the CWD of the calling process."} +{"question_id": 20461165, "score": 860, "creation_date": 1386549256, "tags": ["python", "pandas", "dataframe", "join", "indexing"], "instruction": "How to convert index of a pandas dataframe into a column\n\nHow to convert an index of a dataframe into a column? For example: gi ptt_loc 0 384444683 593 1 384444684 594 2 384444686 596 to index1 gi ptt_loc 0 0 384444683 593 1 1 384444684 594 2 2 384444686 596", "output": "either: df['index1'] = df.index or .reset_index: df = df.reset_index() If you have a multi-index frame with 3 levels of index, like: >>> df val tick tag obs 2016-02-26 C 2 0.0139 2016-02-27 A 2 0.5577 2016-02-28 C 6 0.0303 and you want to convert the 1st (tick) and 3rd (obs) levels in the index into columns, you could do: >>> df.reset_index(level=['tick', 'obs']) tick obs val tag C 2016-02-26 2 0.0139 A 2016-02-27 2 0.5577 C 2016-02-28 6 0.0303"} +{"question_id": 16584552, "score": 840, "creation_date": 1368698802, "tags": ["python", "github", "pip", "requirements.txt"], "instruction": "How to state in requirements.txt a direct github source\n\nI've installed a library using the command pip install git+git://github.com/mozilla/elasticutils.git which installs it directly from a Github repository. This works fine and I want to have that dependency in my requirements.txt. I've looked at other tickets like this but that didn't solve my problem. If I put something like -f git+git://github.com/mozilla/elasticutils.git elasticutils==0.7.dev in the requirements.txt file, a pip install -r requirements.txt results in the following output: Downloading/unpacking elasticutils==0.7.dev (from -r requirements.txt (line 20)) Could not find a version that satisfies the requirement elasticutils==0.7.dev (from -r requirements.txt (line 20)) (from versions: ) No distributions matching the version for elasticutils==0.7.dev (from -r requirements.txt (line 20)) The documentation of the requirements file does not mention links using the git+git protocol specifier, so maybe this is just not supported. 
Does anybody have a solution for my problem?", "output": "Normally your requirements.txt file would look something like this: package-one==1.9.4 package-two==3.7.1 package-three==1.0.1 ... To specify a Github repo, you do not need the package-name== convention. The examples below update package-two using a GitHub repo. The text after @ denotes the specifics of the package. Specify commit hash (41b95ec in the context of updated requirements.txt): package-one==1.9.4 package-two @ git+https://github.com/owner/repo@41b95ec package-three==1.0.1 Specify branch name (main): package-two @ git+https://github.com/owner/repo@main Specify tag (0.1): package-two @ git+https://github.com/owner/repo@0.1 Specify release (3.7.1): package-two @ git+https://github.com/owner/repo@releases/tag/v3.7.1 Note that in certain versions of pip you will need to update the package version in the package's setup.py, or pip will assume the requirement is already satisfied and not install the new version. For instance, if you have 1.2.1 installed, and want to fork this package with your own version, you could use the above technique in your requirements.txt and then update setup.py to 1.2.1.1. See also the pip documentation on VCS support."} +{"question_id": 20646822, "score": 840, "creation_date": 1387323062, "tags": ["python", "flask", "static-files"], "instruction": "How to serve static files in Flask\n\nI've got an application that I threw together in Flask and for now it is just serving up a single static HTML page with some links to CSS and JS. And I can't find where in the documentation Flask describes returning static files. Yes, I could use render_template but I know the data is not templatized. I'd have thought send_file or url_for was the right thing, but I could not get those to work. In the meantime, I am opening the files, reading content, and rigging up a Response with appropriate mimetype: import os.path from flask import Flask, Response app = Flask(__name__) app.config.from_object(__name__) def root_dir(): # pragma: no cover return os.path.abspath(os.path.dirname(__file__)) def get_file(filename): # pragma: no cover try: src = os.path.join(root_dir(), filename) # Figure out how flask returns static files # Tried: # - render_template # - send_file # This should not be so non-obvious return open(src).read() except IOError as exc: return str(exc) @app.route('/', methods=['GET']) def metrics(): # pragma: no cover content = get_file('jenkins_analytics.html') return Response(content, mimetype=\"text/html\") @app.route('/', defaults={'path': ''}) @app.route('/') def get_resource(path): # pragma: no cover mimetypes = { \".css\": \"text/css\", \".html\": \"text/html\", \".js\": \"application/javascript\", } complete_path = os.path.join(root_dir(), path) ext = os.path.splitext(path)[1] mimetype = mimetypes.get(ext, \"text/html\") content = get_file(complete_path) return Response(content, mimetype=mimetype) if __name__ == '__main__': # pragma: no cover app.run(port=80) Someone want to give a code sample or url for this?", "output": "In production, configure the HTTP server (Nginx, Apache, etc.) in front of your application to serve requests to /static from the static folder. A dedicated web server is very good at serving static files efficiently, although you probably won't notice a difference compared to Flask at low volumes. Flask automatically creates a /static/ route that will serve any filename under the static folder next to the Python module that defines your Flask app. 
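For the single HTML page in the question, a minimal sketch (assuming jenkins_analytics.html and its assets are moved into the static folder; the filename comes from the question, not from this answer) is:
from flask import Flask
app = Flask(__name__)
@app.route('/')
def index():
    # hard-coded filename, not user-supplied, served from the static folder
    return app.send_static_file('jenkins_analytics.html')
Flask infers the mimetype from the filename, so the manual Response/mimetype bookkeeping in the question is unnecessary.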
Use url_for to link to static files: url_for('static', filename='js/analytics.js') You can also use send_from_directory to serve files from a directory in your own route. This takes a base directory and a path, and ensures that the path is contained in the directory, which makes it safe to accept user-provided paths. This can be useful in cases where you want to check something before serving the file, such as if the logged in user has permission. from flask import send_from_directory @app.route('/reports/') def send_report(path): # Using request args for path will expose you to directory traversal attacks return send_from_directory('reports', path) WARNING: Do not use send_file or send_static_file with a user-supplied path. This will expose you to directory traversal attacks. send_from_directory was designed to safely handle user-supplied paths under a known directory, and will raise an error if the path attempts to escape the directory. If you are generating a file in memory without writing it to the filesystem, you can pass a BytesIO object to send_file to serve it like a file. You'll need to pass other arguments to send_file in this case since it can't infer things like the file name or content type."} +{"question_id": 431628, "score": 839, "creation_date": 1231617103, "tags": ["python", "django", "search", "django-queryset", "django-q"], "instruction": "How to combine multiple QuerySets in Django?\n\nI'm trying to build the search for a Django site I am building, and in that search, I am searching across three different models. And to get pagination on the search result list, I would like to use a generic object_list view to display the results. But to do that, I have to merge three QuerySets into one. How can I do that? I've tried this: result_list = [] page_list = Page.objects.filter( Q(title__icontains=cleaned_search_term) | Q(body__icontains=cleaned_search_term)) article_list = Article.objects.filter( Q(title__icontains=cleaned_search_term) | Q(body__icontains=cleaned_search_term) | Q(tags__icontains=cleaned_search_term)) post_list = Post.objects.filter( Q(title__icontains=cleaned_search_term) | Q(body__icontains=cleaned_search_term) | Q(tags__icontains=cleaned_search_term)) for x in page_list: result_list.append(x) for x in article_list: result_list.append(x) for x in post_list: result_list.append(x) return object_list( request, queryset=result_list, template_object_name='result', paginate_by=10, extra_context={ 'search_term': search_term}, template_name=\"search/result_list.html\") But this doesn't work. I get an error when I try to use that list in the generic view. The list is missing the clone attribute. How can I merge the three lists, page_list, article_list and post_list?", "output": "Concatenating the querysets into a list is the simplest approach. If the database will be hit for all querysets anyway (e.g. because the result needs to be sorted), this won't add further cost. from itertools import chain result_list = list(chain(page_list, article_list, post_list)) Using itertools.chain is faster than looping each list and appending elements one by one, since itertools is implemented in C. It also consumes less memory than converting each queryset into a list before concatenating. Now it's possible to sort the resulting list e.g. by date (as requested in hasen j's comment to another answer). 
The sorted() function conveniently accepts a generator and returns a list: from operator import attrgetter result_list = sorted( chain(page_list, article_list, post_list), key=attrgetter('date_created') ) You can reverse the sort order: result_list = sorted( chain(page_list, article_list, post_list), key=attrgetter('date_created'), reverse=True, ) attrgetter is equivalet to the following lambda (this was the way it had to be done before Python 2.4): result_list = sorted( chain(page_list, article_list, post_list), key=lambda instance: instance.date_created, )"} +{"question_id": 5607551, "score": 838, "creation_date": 1302379632, "tags": ["python", "urllib", "urlencode"], "instruction": "How to urlencode a querystring in Python?\n\nI am trying to urlencode this string before I submit. queryString = 'eventName=' + evt.fields[\"eventName\"] + '&' + 'eventDescription=' + evt.fields[\"eventDescription\"];", "output": "Python 3 Use urllib.parse.urlencode: >>> import urllib.parse >>> f = { 'eventName' : 'myEvent', 'eventDescription' : 'cool event'} >>> urllib.parse.urlencode(f) eventName=myEvent&eventDescription=cool+event Note that this does not do url encoding in the commonly used sense (look at the output). For that use urllib.parse.quote_plus. Python 2 You need to pass your parameters into urllib.urlencode() as either a mapping (dict), or a sequence of 2-tuples, like: >>> import urllib >>> f = { 'eventName' : 'myEvent', 'eventDescription' : 'cool event'} >>> urllib.urlencode(f) 'eventName=myEvent&eventDescription=cool+event'"} +{"question_id": 15753701, "score": 838, "creation_date": 1364859298, "tags": ["python", "argparse"], "instruction": "How can I pass a list as a command-line argument with argparse?\n\nI am trying to pass a list as an argument to a command line program. Is there an argparse option to pass a list as option? parser.add_argument('-l', '--list', type=list, action='store', dest='list', help=' Set flag', required=True) Script is called like below python test.py -l \"265340 268738 270774 270817\"", "output": "SHORT ANSWER Use the nargs option or the 'append' setting of the action option (depending on how you want the user interface to behave). nargs parser.add_argument('-l','--list', nargs='+', help=' Set flag', required=True) # Use like: # python arg.py -l 1234 2345 3456 4567 nargs='+' takes 1 or more arguments, nargs='*' takes zero or more. append parser.add_argument('-l','--list', action='append', help=' Set flag', required=True) # Use like: # python arg.py -l 1234 -l 2345 -l 3456 -l 4567 With append you provide the option multiple times to build up the list. Don't use type=list!!! - There is probably no situation where you would want to use type=list with argparse. Ever. LONG ANSWER Let's take a look in more detail at some of the different ways one might try to do this, and the end result. import argparse parser = argparse.ArgumentParser() # By default it will fail with multiple arguments. parser.add_argument('--default') # Telling the type to be a list will also fail for multiple arguments, # but give incorrect results for a single argument. parser.add_argument('--list-type', type=list) # This will allow you to provide multiple arguments, but you will get # a list of lists which is not desired. parser.add_argument('--list-type-nargs', type=list, nargs='+') # This is the correct way to handle accepting multiple arguments. # '+' == 1 or more. # '*' == 0 or more. # '?' == 0 or 1. # An int is an explicit number of arguments to accept. 
parser.add_argument('--nargs', nargs='+') # To make the input integers parser.add_argument('--nargs-int-type', nargs='+', type=int) # An alternate way to accept multiple inputs, but you must # provide the flag once per input. Of course, you can use # type=int here if you want. parser.add_argument('--append-action', action='append') # To show the results of the given option to screen. for _, value in parser.parse_args()._get_kwargs(): if value is not None: print(value) Here is the output you can expect: $ python arg.py --default 1234 2345 3456 4567 ... arg.py: error: unrecognized arguments: 2345 3456 4567 $ python arg.py --list-type 1234 2345 3456 4567 ... arg.py: error: unrecognized arguments: 2345 3456 4567 $ # Quotes won't help here... $ python arg.py --list-type \"1234 2345 3456 4567\" ['1', '2', '3', '4', ' ', '2', '3', '4', '5', ' ', '3', '4', '5', '6', ' ', '4', '5', '6', '7'] $ python arg.py --list-type-nargs 1234 2345 3456 4567 [['1', '2', '3', '4'], ['2', '3', '4', '5'], ['3', '4', '5', '6'], ['4', '5', '6', '7']] $ python arg.py --nargs 1234 2345 3456 4567 ['1234', '2345', '3456', '4567'] $ python arg.py --nargs-int-type 1234 2345 3456 4567 [1234, 2345, 3456, 4567] $ # Negative numbers are handled perfectly fine out of the box. $ python arg.py --nargs-int-type -1234 2345 -3456 4567 [-1234, 2345, -3456, 4567] $ python arg.py --append-action 1234 --append-action 2345 --append-action 3456 --append-action 4567 ['1234', '2345', '3456', '4567'] Takeaways: Use nargs or action='append' nargs can be more straightforward from a user perspective, but it can be unintuitive if there are positional arguments because argparse can't tell what should be a positional argument and what belongs to the nargs; if you have positional arguments then action='append' may end up being a better choice. The above is only true if nargs is given '*', '+', or '?'. If you provide an integer number (such as 4) then there will be no problem mixing options with nargs and positional arguments because argparse will know exactly how many values to expect for the option. Don't use quotes on the command line1 Don't use type=list, as it will return a list of lists This happens because under the hood argparse uses the value of type to coerce each individual given argument to your chosen type, not the aggregate of all arguments. You can use type=int (or whatever) to get a list of ints (or whatever) 1: I don't mean in general... I mean using quotes to pass a list to argparse is not what you want.\"} +{\"question_id\": 311627, \"score\": 829, \"creation_date\": 1227379027, \"tags\": [\"python\", \"date\", \"datetime\", \"formatting\"], \"instruction\": \"How to print a date in a regular format?\n\nThis is my code: import datetime today = datetime.date.today() print(today) This prints: 2008-11-22 which is exactly what I want. But, I have a list I'm appending this to and then suddenly everything goes \"wonky\". Here is the code: import datetime mylist = [datetime.date.today()] print(mylist) This prints [datetime.date(2008, 11, 22)]. How can I get just a simple date like 2008-11-22?\", \"output\": \"The WHY: dates are objects In Python, dates are objects. Therefore, when you manipulate them, you manipulate objects, not strings or timestamps. Any object in Python has TWO string representations: The regular representation that is used by print can be obtained using the str() function. Most of the time it is the most common human-readable format and is used to ease display. So str(datetime.datetime(2008, 11, 22, 19, 53, 42)) gives you '2008-11-22 19:53:42'.
The alternative representation that is used to represent the object nature (as data). It can be obtained using the repr() function and is handy for knowing what kind of data you're manipulating while you are developing or debugging. repr(datetime.datetime(2008, 11, 22, 19, 53, 42)) gives you 'datetime.datetime(2008, 11, 22, 19, 53, 42)'. What happened is that when you printed the date using print, it used str() so you could see a nice date string. But when you printed mylist, you printed a list of objects and Python tried to represent the set of data, using repr(). The How: what do you want to do with that? Well, when you manipulate dates, keep using the date objects all along the way. They have thousands of useful methods and most of the Python API expects dates to be objects. When you want to display them, just use str(). In Python, the good practice is to explicitly cast everything. So just when it's time to print, get a string representation of your date using str(date). One last thing. When you tried to print the dates, you printed mylist. If you want to print a date, you must print the date objects, not their container (the list). E.g., if you want to print all the dates in a list: for date in mylist : print str(date) Note that in that specific case, you can even omit str() because print will use it for you. But it should not become a habit :-) Practical case, using your code import datetime mylist = [] today = datetime.date.today() mylist.append(today) print mylist[0] # print the date object, not the container ;-) 2008-11-22 # It's better to always use str() because : print \"This is a new day : \", mylist[0] # will work >>> This is a new day : 2008-11-22 print \"This is a new day : \" + mylist[0] # will crash >>> cannot concatenate 'str' and 'datetime.date' objects print \"This is a new day : \" + str(mylist[0]) >>> This is a new day : 2008-11-22 Advanced date formatting Dates have a default representation, but you may want to print them in a specific format. In that case, you can get a custom string representation using the strftime() method. strftime() expects a string pattern explaining how you want to format your date. E.g.: print today.strftime('We are the %d, %b %Y') >>> 'We are the 22, Nov 2008' Every letter after a \"%\" represents a format for something: %d is the day number (2 digits, prefixed with leading zeros if necessary) %m is the month number (2 digits, prefixed with leading zeros if necessary) %b is the month abbreviation (3 letters) %B is the month name in full (letters) %y is the year number abbreviated (last 2 digits) %Y is the year number in full (4 digits) etc. Have a look at the official documentation, or McCutchen's quick reference; you can't know them all. Since PEP 3101, every object can have its own format, used automatically by the format method of any string. In the case of datetime, the format is the same as the one used in strftime. So you can do the same as above like this: print \"We are the {:%d, %b %Y}\".format(today) >>> 'We are the 22, Nov 2008' The advantage of this form is that you can also convert other objects at the same time. With the introduction of Formatted string literals (since Python 3.6, 2016-12-23) this can be written as import datetime f\"{datetime.datetime.now():%Y-%m-%d}\" >>> '2017-06-15' Localization Dates can automatically adapt to the local language and culture if you use them the right way, but it's a bit complicated.
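A tiny sketch to give you the idea (this assumes a French locale such as 'fr_FR.UTF-8' is actually installed on your system, which is not guaranteed): import locale locale.setlocale(locale.LC_TIME, 'fr_FR.UTF-8') print(today.strftime('%A %d %B %Y')) # something like 'samedi 22 novembre 2008', because %A and %B follow the LC_TIME locale.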
Maybe for another question on SO(Stack Overflow) ;-)"} +{"question_id": 4998629, "score": 827, "creation_date": 1297726933, "tags": ["python", "string", "split", "delimiter"], "instruction": "Split string with multiple delimiters in Python\n\nI found some answers online, but I have no experience with regular expressions, which I believe is what is needed here. I have a string that needs to be split by either a ';' or ', ' That is, it has to be either a semicolon or a comma followed by a space. Individual commas without trailing spaces should be left untouched Example string: \"b-staged divinylsiloxane-bis-benzocyclobutene [124221-30-3], mesitylene [000108-67-8]; polymerized 1,2-dihydro-2,2,4- trimethyl quinoline [026780-96-1]\" should be split into a list containing the following: ('b-staged divinylsiloxane-bis-benzocyclobutene [124221-30-3]' , 'mesitylene [000108-67-8]', 'polymerized 1,2-dihydro-2,2,4- trimethyl quinoline [026780-96-1]')", "output": "Luckily, Python has this built-in :) import re # Regex pattern splits on substrings \"; \" and \", \" re.split('; |, ', string_to_split) Update: Following your comment: >>> string_to_split = 'Beautiful, is; better*than\\nugly' >>> import re >>> re.split('; |, |\\*|\\n', string_to_split) ['Beautiful', 'is', 'better', 'than', 'ugly']"} +{"question_id": 3420122, "score": 827, "creation_date": 1281053316, "tags": ["python", "dictionary"], "instruction": "Filter dict to contain only certain keys?\n\nI've got a dict that has a whole bunch of entries. I'm only interested in a select few of them. Is there an easy way to prune all the other ones out?", "output": "Constructing a new dict: dict_you_want = {key: old_dict[key] for key in your_keys} Uses dictionary comprehension. If you use a version which lacks them (ie Python 2.6 and earlier), make it dict((key, old_dict[key]) for ...). It's the same, though uglier. Note that this, unlike jnnnnn's version, has stable performance (depends only on number of your_keys) for old_dicts of any size. Both in terms of speed and memory. Since this is a generator expression, it processes one item at a time, and it doesn't looks through all items of old_dict. Removing everything in-place: unwanted = set(old_dict) - set(your_keys) for unwanted_key in unwanted: del your_dict[unwanted_key]"} +{"question_id": 18946662, "score": 824, "creation_date": 1379870653, "tags": ["python", "performance", "jit", "pypy", "cpython"], "instruction": "Why shouldn't I use PyPy over CPython if PyPy is 6.3 times faster?\n\nI've been hearing a lot about the PyPy project. They claim it is 6.3 times faster than the CPython interpreter on their site. Whenever we talk about dynamic languages like Python, speed is one of the top issues. To solve this, they say PyPy is 6.3 times faster. The second issue is parallelism, the infamous Global Interpreter Lock (GIL). For this, PyPy says it can give GIL-less Python. If PyPy can solve these great challenges, what are its weaknesses that are preventing wider adoption? That is to say, what's preventing someone like me, a typical Python developer, from switching to PyPy right now?", "output": "NOTE: PyPy is more mature and better supported now than it was in 2013, when this question was asked. Avoid drawing conclusions from out-of-date information. PyPy, as others have been quick to mention, has tenuous support for C extensions. It has support, but typically at slower-than-Python speeds and it's iffy at best. Hence a lot of modules simply require CPython. 
Check the list of supported packages, but look at the date that list was updated, because it's not kept in lockstep with actual support, so it's still possible that packages marked as unsupported on that list are actually supported. Python support typically lags a few versions behind, so if you absolutely need the latest features, you may need to wait a while before PyPy supports them. PyPy sometimes isn't actually faster for \"scripts\", which a lot of people use Python for. These are the short-running programs that do something simple and small. Because PyPy is a JIT compiler, its main advantages come from long run times and simple types (such as numbers). PyPy's pre-JIT speeds can be bad compared to CPython. Inertia. Moving to PyPy often requires retooling, which for some people and organizations is simply too much work. Those are the main reasons that affect me, I'd say."} +{"question_id": 7368789, "score": 823, "creation_date": 1315614488, "tags": ["python", "string", "list", "integer"], "instruction": "Convert all strings in a list to integers\n\nHow do I convert all strings in a list to integers? ['1', '2', '3'] \u27f6 [1, 2, 3]", "output": "Given: xs = ['1', '2', '3'] Use map then list to obtain a list of integers: list(map(int, xs)) In Python 2, list was unnecessary since map returned a list: map(int, xs)"} +{"question_id": 2921847, "score": 821, "creation_date": 1274969438, "tags": ["python", "syntax", "parameter-passing", "iterable-unpacking", "argument-unpacking"], "instruction": "What do ** (double star/asterisk) and * (star/asterisk) mean in a function call?\n\nIn code like zip(*x) or f(**k), what do the * and ** respectively mean? How does Python implement that behaviour, and what are the performance implications? See also: Expanding tuples into arguments. Please use that one to close questions where OP needs to use * on an argument and doesn't know it exists. Similarly, use Converting Python dict to kwargs? for the case of using **. See What does ** (double star/asterisk) and * (star/asterisk) do for parameters? for the complementary question about parameters. See What do ** (double star/asterisk) and * (star/asterisk) inside square brackets mean for class and function declarations in Python 3.12+? for the complementary question when used for type parameters.", "output": "A single star * unpacks a sequence or collection into positional arguments. Suppose we have def add(a, b): return a + b values = (1, 2) Using the * unpacking operator, we can write s = add(*values), which will be equivalent to writing s = add(1, 2). The double star ** does the same thing for a dictionary, providing values for named arguments: values = {'a': 1, 'b': 2} s = add(**values) # equivalent to add(a=1, b=2) Both operators can be used for the same function call. For example, given: def sum(a, b, c, d): return a + b + c + d values1 = (1, 2) values2 = {'c': 10, 'd': 15} then s = sum(*values1, **values2) is equivalent to s = sum(1, 2, c=10, d=15). See also the relevant section of the tutorial in the Python documentation. Similarly, * and ** can be used for parameters. Using * allows a function to accept any number of positional arguments, which will be collected into a single parameter: def add(*values): s = 0 for v in values: s = s + v return s Now when the function is called like s = add(1, 2, 3, 4, 5), values will be the tuple (1, 2, 3, 4, 5) (which, of course, produces the result 15).
Similarly, a parameter marked with ** will receive a dict: def get_a(**values): return values['a'] s = get_a(a=1, b=2) # returns 1 this allows for specifying a large number of optional parameters without having to declare them. Again, both can be combined: def add(*values, **options): s = 0 for i in values: s = s + i if \"neg\" in options: if options[\"neg\"]: s = -s return s s = add(1, 2, 3, 4, 5) # returns 15 s = add(1, 2, 3, 4, 5, neg=True) # returns -15 s = add(1, 2, 3, 4, 5, neg=False) # returns 15"} +{"question_id": 492387, "score": 813, "creation_date": 1233246886, "tags": ["python", "indentation", "python-2.x"], "instruction": "IndentationError: unindent does not match any outer indentation level, although the indentation looks correct\n\nWhen I compile the Python code below, I get IndentationError: unindent does not match any outer indentation level import sys def Factorial(n): # Return factorial result = 1 for i in range (1,n): result = result * i print \"factorial is \",result return result Why?", "output": "One possible cause for this error is that there might be spaces mixed with tabs for indentation. Try doing a search & replace to replace all tabs with a few spaces. Try this: import sys def Factorial(n): # return factorial result = 1 for i in range (1,n): result = result * i print \"factorial is \",result return result print Factorial(10)"} +{"question_id": 6130768, "score": 801, "creation_date": 1306356592, "tags": ["python", "dictionary", "key", "nonetype"], "instruction": "Return a default value if a dictionary key is not available\n\nI need a way to get a dictionary value if its key exists, or simply return None, if it does not. However, Python raises a KeyError exception if you search for a key that does not exist. I know that I can check for the key, but I am looking for something more explicit. Is there a way to just return None if the key does not exist? See also: Why dict.get(key) instead of dict[key]?", "output": "You can use dict.get() value = d.get(key) which will return None if key is not in d. You can also provide a different default value that will be returned instead of None (unless the dict actually contains None as the value for this key): value = d.get(key, \"empty\")"} +{"question_id": 5067604, "score": 794, "creation_date": 1298301096, "tags": ["python", "function", "introspection"], "instruction": "Determine function name from within that function\n\nIs there a way to determine a function's name from within the function? def foo(): print(\"my name is\", __myname__) # <== how do I calculate this at runtime? In the example above, the body of foo will somehow access the function name \"foo\" without hard-coding it. The output would be: >>> foo() my name is foo", "output": "If you don't want to play with the stack yourself, you should either use \"bar\" or bar.__name__ depending on context. Python doesn't have a feature to access the function or its name within the function itself. A magic __function__ had been proposed for Python 3.0 but rejected. See PEP 3130 \u2013 Access to Current Module/Class/Function. The given rejection notice is: This PEP is rejected. It is not clear how it should be implemented or what the precise semantics should be in edge cases, and there aren't enough important use cases given. 
response has been lukewarm at best."} +{"question_id": 1641219, "score": 789, "creation_date": 1256781258, "tags": ["python", "class", "private"], "instruction": "Does Python have \u201cprivate\u201d variables in classes?\n\nI'm coming from the Java world and reading Bruce Eckels' Python 3 Patterns, Recipes and Idioms. While reading about classes, it goes on to say that in Python there is no need to declare instance variables. You just use them in the constructor, and boom, they are there. So for example: class Simple: def __init__(self, s): print(\"inside the simple constructor\") self.s = s def show(self): print(self.s) def showMsg(self, msg): print(msg + ':', self.show()) If that\u2019s true, then any object of class Simple can just change the value of variable s outside of the class. For example: if __name__ == \"__main__\": x = Simple(\"constructor argument\") x.s = \"test15\" # this changes the value x.show() x.showMsg(\"A message\") In Java, we have been taught about public/private/protected variables. Those keywords make sense because at times you want variables in a class to which no one outside the class has access to. Why is that not required in Python?", "output": "It's cultural. In Python, you don't write to other classes' instance or class variables. In Java, nothing prevents you from doing the same if you really want to - after all, you can always edit the source of the class itself to achieve the same effect. Python drops that pretence of security and encourages programmers to be responsible. In practice, this works very nicely. If you want to emulate private variables for some reason, you can always use the __ prefix from PEP 8. Python mangles the names of variables like __foo so that they're not easily visible to code outside the namespace that contains them (although you can get around it if you're determined enough, just like you can get around Java's protections if you work at it). By the same convention, the _ prefix means _variable should be used internally in the class (or module) only, even if you're not technically prevented from accessing it from somewhere else. You don't play around with another class's variables that look like __foo or _bar."} +{"question_id": 20906474, "score": 781, "creation_date": 1388761246, "tags": ["python", "pandas", "csv", "dataframe", "concatenation"], "instruction": "Import multiple CSV files into pandas and concatenate into one DataFrame\n\nI would like to read several CSV files from a directory into pandas and concatenate them into one big DataFrame. I have not been able to figure it out though. Here is what I have so far: import glob import pandas as pd # Get data file names path = r'C:\\DRO\\DCL_rawdata_files' filenames = glob.glob(path + \"/*.csv\") dfs = [] for filename in filenames: dfs.append(pd.read_csv(filename)) # Concatenate all data into one DataFrame big_frame = pd.concat(dfs, ignore_index=True) I guess I need some help within the for loop?", "output": "See pandas: IO tools for all of the available .read_ methods. Try the following code if all of the CSV files have the same columns. I have added header=0, so that after reading the CSV file's first row, it can be assigned as the column names. 
import pandas as pd import glob import os path = r'C:\\DRO\\DCL_rawdata_files' # use your path all_files = glob.glob(os.path.join(path, \"*.csv\")) li = [] for filename in all_files: df = pd.read_csv(filename, index_col=None, header=0) li.append(df) frame = pd.concat(li, axis=0, ignore_index=True) Or, with attribution to a comment from Sid. all_files = glob.glob(os.path.join(path, \"*.csv\")) df = pd.concat((pd.read_csv(f) for f in all_files), ignore_index=True) It's often necessary to identify each sample of data, which can be accomplished by adding a new column to the dataframe. pathlib from the standard library will be used for this example. It treats paths as objects with methods, instead of strings to be sliced. Imports and Setup from pathlib import Path import pandas as pd import numpy as np path = r'C:\\DRO\\DCL_rawdata_files' # or unix / linux / mac path # Get the files from the path provided in the OP files = Path(path).glob('*.csv') # .rglob to get subdirectories Option 1: Add a new column with the file name dfs = list() for f in files: data = pd.read_csv(f) # .stem is an attribute of pathlib objects that gives the filename w/o the extension data['file'] = f.stem dfs.append(data) df = pd.concat(dfs, ignore_index=True) Option 2: Add a new column with a generic name using enumerate dfs = list() for i, f in enumerate(files): data = pd.read_csv(f) data['file'] = f'File {i}' dfs.append(data) df = pd.concat(dfs, ignore_index=True) Option 3: Create the dataframes with a list comprehension, and then use np.repeat to add a new column. [f'S{i}' for i in range(len(dfs))] creates a list of strings to name each dataframe. [len(df) for df in dfs] creates a list of lengths Attribution for this option goes to this plotting answer. # Read the files into dataframes dfs = [pd.read_csv(f) for f in files] # Combine the list of dataframes df = pd.concat(dfs, ignore_index=True) # Add a new column df['Source'] = np.repeat([f'S{i}' for i in range(len(dfs))], [len(df) for df in dfs]) Option 4: One-liners using .assign to create the new column, with attribution to a comment from C8H10N4O2 df = pd.concat((pd.read_csv(f).assign(filename=f.stem) for f in files), ignore_index=True) or df = pd.concat((pd.read_csv(f).assign(Source=f'S{i}') for i, f in enumerate(files)), ignore_index=True)"} +{"question_id": 8213522, "score": 772, "creation_date": 1321886283, "tags": ["python", "matplotlib", "plot"], "instruction": "When to use cla(), clf() or close() for clearing a plot\n\nMatplotlib offers these functions: cla() # Clear axis clf() # Clear figure close() # Close a figure window When should I use each function and what exactly does it do?", "output": "They all do different things, since matplotlib uses a hierarchical order in which a figure window contains a figure which may consist of many axes. Additionally, there are functions from the pyplot interface and there are methods on the Figure class. I will discuss both cases below. pyplot interface pyplot is a module that collects a couple of functions that allow matplotlib to be used in a functional manner. I here assume that pyplot has been imported as import matplotlib.pyplot as plt. In this case, there are three different commands that remove stuff: See matplotlib.pyplot Functions: plt.cla() clears an axis, i.e. the currently active axis in the current figure. It leaves the other axes untouched. plt.clf() clears the entire current figure with all its axes, but leaves the window opened, such that it may be reused for other plots.
plt.close() closes a window, which will be the current window, if not specified otherwise. Which functions suits you best depends thus on your use-case. The close() function furthermore allows one to specify which window should be closed. The argument can either be a number or name given to a window when it was created using figure(number_or_name) or it can be a figure instance fig obtained, i.e., usingfig = figure(). If no argument is given to close(), the currently active window will be closed. Furthermore, there is the syntax close('all'), which closes all figures. methods of the Figure class Additionally, the Figure class provides methods for clearing figures. I'll assume in the following that fig is an instance of a Figure: fig.clear() clears the entire figure, and is in the primary list of functions to get and modify the figure state. This call is equivalent to plt.clf() only if fig is the current figure. fig.clf() is a synonym for fig.clear() but is discouraged. Note that even del fig will not close the associated figure window. As far as I know the only way to close a figure window is using plt.close(fig) as described above."} +{"question_id": 4435169, "score": 771, "creation_date": 1292290889, "tags": ["python", "string", "append"], "instruction": "How do I append one string to another in Python?\n\nHow do I efficiently append one string to another? Are there any faster alternatives to: var1 = \"foo\" var2 = \"bar\" var3 = var1 + var2 For handling multiple strings in a list, see How to concatenate (join) items in a list to a single string. See How do I put a variable\u2019s value inside a string (interpolate it into the string)? if some inputs are not strings, but the result should still be a string.", "output": "If you only have one reference to a string and you concatenate another string to the end, CPython now special cases this and tries to extend the string in place. The end result is that the operation is amortized O(n). e.g. s = \"\" for i in range(n): s += str(i) used to be O(n^2), but now it is O(n). More information From the source (bytesobject.c): void PyBytes_ConcatAndDel(register PyObject **pv, register PyObject *w) { PyBytes_Concat(pv, w); Py_XDECREF(w); } /* The following function breaks the notion that strings are immutable: it changes the size of a string. We get away with this only if there is only one module referencing the object. You can also think of it as creating a new string object and destroying the old one, only more efficiently. In any case, don't use this if the string may already be known to some other part of the code... Note that if there's not enough memory to resize the string, the original string object at *pv is deallocated, *pv is set to NULL, an \"out of memory\" exception is set, and -1 is returned. Else (on success) 0 is returned, and the value in *pv may or may not be the same as on input. As always, an extra byte is allocated for a trailing \\0 byte (newsize does *not* include that), and a trailing \\0 byte is stored. 
*/ int _PyBytes_Resize(PyObject **pv, Py_ssize_t newsize) { register PyObject *v; register PyBytesObject *sv; v = *pv; if (!PyBytes_Check(v) || Py_REFCNT(v) != 1 || newsize < 0) { *pv = 0; Py_DECREF(v); PyErr_BadInternalCall(); return -1; } /* XXX UNREF/NEWREF interface should be more symmetrical */ _Py_DEC_REFTOTAL; _Py_ForgetReference(v); *pv = (PyObject *) PyObject_REALLOC((char *)v, PyBytesObject_SIZE + newsize); if (*pv == NULL) { PyObject_Del(v); PyErr_NoMemory(); return -1; } _Py_NewReference(*pv); sv = (PyBytesObject *) *pv; Py_SIZE(sv) = newsize; sv->ob_sval[newsize] = '\\0'; sv->ob_shash = -1; /* invalidate cached hash value */ return 0; } It's easy enough to verify empirically. $ python -m timeit -s\"s=''\" \"for i in xrange(10):s+='a'\" 1000000 loops, best of 3: 1.85 usec per loop $ python -m timeit -s\"s=''\" \"for i in xrange(100):s+='a'\" 10000 loops, best of 3: 16.8 usec per loop $ python -m timeit -s\"s=''\" \"for i in xrange(1000):s+='a'\" 10000 loops, best of 3: 158 usec per loop $ python -m timeit -s\"s=''\" \"for i in xrange(10000):s+='a'\" 1000 loops, best of 3: 1.71 msec per loop $ python -m timeit -s\"s=''\" \"for i in xrange(100000):s+='a'\" 10 loops, best of 3: 14.6 msec per loop $ python -m timeit -s\"s=''\" \"for i in xrange(1000000):s+='a'\" 10 loops, best of 3: 173 msec per loop It's important however to note that this optimisation isn't part of the Python spec. It's only in the cPython implementation as far as I know. The same empirical testing on pypy or jython for example might show the older O(n**2) performance. $ pypy -m timeit -s\"s=''\" \"for i in xrange(10):s+='a'\" 10000 loops, best of 3: 90.8 usec per loop $ pypy -m timeit -s\"s=''\" \"for i in xrange(100):s+='a'\" 1000 loops, best of 3: 896 usec per loop $ pypy -m timeit -s\"s=''\" \"for i in xrange(1000):s+='a'\" 100 loops, best of 3: 9.03 msec per loop $ pypy -m timeit -s\"s=''\" \"for i in xrange(10000):s+='a'\" 10 loops, best of 3: 89.5 msec per loop So far so good, but then, $ pypy -m timeit -s\"s=''\" \"for i in xrange(100000):s+='a'\" 10 loops, best of 3: 12.8 sec per loop ouch even worse than quadratic. So pypy is doing something that works well with short strings, but performs poorly for larger strings."} +{"question_id": 5929107, "score": 770, "creation_date": 1304876408, "tags": ["python", "function", "parameters", "arguments", "decorator"], "instruction": "Decorators with parameters?\n\nI have a problem with the transfer of the variable insurance_mode by the decorator. I would do it by the following decorator statement: @execute_complete_reservation(True) def test_booking_gta_object(self): self.test_select_gta_object() but unfortunately, this statement does not work. Perhaps maybe there is better way to solve this problem. def execute_complete_reservation(test_case,insurance_mode): def inner_function(self,*args,**kwargs): self.test_create_qsf_query() test_case(self,*args,**kwargs) self.test_select_room_option() if insurance_mode: self.test_accept_insurance_crosseling() else: self.test_decline_insurance_crosseling() self.test_configure_pax_details() self.test_configure_payer_details return inner_function", "output": "The syntax for decorators with arguments is a bit different - the decorator with arguments should return a function that will take a function and return another function. So it should really return a normal decorator. A bit confusing, right? 
What I mean is: def decorator_factory(argument): def decorator(function): def wrapper(*args, **kwargs): funny_stuff() something_with_argument(argument) result = function(*args, **kwargs) more_funny_stuff() return result return wrapper return decorator Here you can read more on the subject - it's also possible to implement this using callable objects and that is also explained there. Usage: @decorator_factory(\"Some argument\") def function_to_be_decorated(args): print(f\"Do something with '{args}'.\") decorator_factory(\"Some argument\") uses the given argument to create a standard, argumentless decorator. So the following block is functionally identical to the one above: created_decorator = decorator_factory(\"Some argument\") @created_decorator def function_to_be_decorated(args): print(f\"Do something with '{args}'.\")"} +{"question_id": 16511337, "score": 769, "creation_date": 1368387884, "tags": ["python", "exception", "python-requests", "request"], "instruction": "Correct way to try/except using Python requests module?\n\ntry: r = requests.get(url, params={'s': thing}) except requests.ConnectionError, e: print(e) Is this correct? Is there a better way to structure this? Will this cover all my bases?", "output": "Have a look at the Requests exception docs. In short: In the event of a network problem (e.g. DNS failure, refused connection, etc), Requests will raise a ConnectionError exception. In the event of the rare invalid HTTP response, Requests will raise an HTTPError exception. If a request times out, a Timeout exception is raised. If a request exceeds the configured number of maximum redirections, a TooManyRedirects exception is raised. All exceptions that Requests explicitly raises inherit from requests.exceptions.RequestException. To answer your question, what you show will not cover all of your bases. You'll only catch connection-related errors, not ones that time out. What to do when you catch the exception is really up to the design of your script/program. Is it acceptable to exit? Can you go on and try again? If the error is catastrophic and you can't go on, then yes, you may abort your program by raising SystemExit (a nice way to both print an error and call sys.exit). You can either catch the base-class exception, which will handle all cases: try: r = requests.get(url, params={'s': thing}) except requests.exceptions.RequestException as e: # This is the correct syntax raise SystemExit(e) Or you can catch them separately and do different things. try: r = requests.get(url, params={'s': thing}) except requests.exceptions.Timeout: # Maybe set up for a retry, or continue in a retry loop except requests.exceptions.TooManyRedirects: # Tell the user their URL was bad and try a different one except requests.exceptions.RequestException as e: # catastrophic error. bail. raise SystemExit(e) As Christian pointed out: If you want http errors (e.g. 401 Unauthorized) to raise exceptions, you can call Response.raise_for_status. That will raise an HTTPError, if the response was an http error. 
An example: try: r = requests.get('http://www.google.com/nothere') r.raise_for_status() except requests.exceptions.HTTPError as err: raise SystemExit(err) Will print: 404 Client Error: Not Found for url: http://www.google.com/nothere"} +{"question_id": 2491222, "score": 765, "creation_date": 1269251963, "tags": ["python", "file-rename"], "instruction": "How to rename a file using Python\n\nI want to change a.txt to b.kml.", "output": "Use os.rename: import os os.rename('a.txt', 'b.kml') Usage: os.rename('from.extension.whatever','to.another.extension')"} +{"question_id": 518021, "score": 762, "creation_date": 1233869207, "tags": ["python", "arrays", "methods"], "instruction": "Is arr.__len__() the preferred way to get the length of an array in Python?\n\nIn Python, is the following the only way to get the number of elements? arr.__len__() If so, why the strange syntax?", "output": "my_list = [1,2,3,4,5] len(my_list) # 5 The same works for tuples: my_tuple = (1,2,3,4,5) len(my_tuple) # 5 And strings, which are really just arrays of characters: my_string = 'hello world' len(my_string) # 11 It was intentionally done this way so that lists, tuples and other container types or iterables didn't all need to explicitly implement a public .length() method, instead you can just check the len() of anything that implements the 'magic' __len__() method. Sure, this may seem redundant, but length checking implementations can vary considerably, even within the same language. It's not uncommon to see one collection type use a .length() method while another type uses a .length property, while yet another uses .count(). Having a language-level keyword unifies the entry point for all these types. So even objects you may not consider to be lists of elements could still be length-checked. This includes strings, queues, trees, etc. The functional nature of len() also lends itself well to functional styles of programming. lengths = map(len, list_of_containers)"} +{"question_id": 1265665, "score": 759, "creation_date": 1250077573, "tags": ["python", "string", "integer"], "instruction": "How can I check if a string represents an int, without using try/except?\n\nIs there any way to tell whether a string represents an integer (e.g., '3', '-17' but not '3.14' or 'asfasfas') Without using a try/except mechanism? is_int('3.14') == False is_int('-7') == True", "output": "If you're really just annoyed at using try/excepts all over the place, please just write a helper function: def represents_int(s): try: int(s) except ValueError: return False else: return True >>> print(represents_int(\"+123\")) True >>> print(represents_int(\"10.0\")) False It's going to be WAY more code to exactly cover all the strings that Python considers integers. I say just be pythonic on this one."} +{"question_id": 19798153, "score": 757, "creation_date": 1383682814, "tags": ["python", "pandas", "dataframe", "vectorization"], "instruction": "Difference between map, applymap and apply methods in Pandas\n\nCan you tell me when to use these vectorization methods with basic examples? I see that map is a Series method whereas the rest are DataFrame methods. I got confused about apply and applymap methods though. Why do we have two methods for applying a function to a DataFrame? 
Again, simple examples which illustrate the usage would be great!", "output": "Comparing map, applymap and apply: Context Matters The major differences are: Definition map is defined on Series only applymap is defined on DataFrames only apply is defined on both Input argument map accepts dict, Series, or callable applymap and apply accept callable only Behavior map is elementwise for Series applymap is elementwise for DataFrames apply also works elementwise but is suited to more complex operations and aggregation. The behaviour and return value depends on the function. Use case (the most important difference) map is meant for mapping values from one domain to another, so is optimised for performance, e.g., df['A'].map({1:'a', 2:'b', 3:'c'}) applymap is good for elementwise transformations across multiple rows/columns, e.g., df[['A', 'B', 'C']].applymap(str.strip) apply is for applying any function that cannot be vectorised, e.g., df['sentences'].apply(nltk.sent_tokenize) Also see When should I (not) want to use pandas apply() in my code? for a writeup I made a while back on the most appropriate scenarios for using apply. (Note that there aren't many, but there are a few\u2014 apply is generally slow.) Summarising map applymap apply Defined on Series? Yes No Yes Defined on DataFrame? No Yes Yes Argument dict, Series, or callable1 callable2 callable Elementwise? Yes Yes Yes Aggregation? No No Yes Use Case Transformation/mapping3 Transformation More complex functions Returns Series DataFrame scalar, Series, or DataFrame4 Footnotes map when passed a dictionary/Series will map elements based on the keys in that dictionary/Series. Missing values will be recorded as NaN in the output. applymap in more recent versions has been optimised for some operations. You will find applymap slightly faster than apply in some cases. My suggestion is to test them both and use whatever works better. map is optimised for elementwise mappings and transformation. Operations that involve dictionaries or Series will enable pandas to use faster code paths for better performance. Series.apply returns a scalar for aggregating operations, Series otherwise. Similarly for DataFrame.apply. Note that apply also has fastpaths when called with certain NumPy functions such as mean, sum, etc."} +{"question_id": 9708902, "score": 756, "creation_date": 1331753621, "tags": ["python", "yield"], "instruction": "In practice, what are the main uses for the \"yield from\" syntax in Python 3.3?\n\nI'm having a hard time wrapping my brain around PEP 380. What are the situations where yield from is useful? What is the classic use case? Why is it compared to micro-threads? So far I have used generators, but never really used coroutines (introduced by PEP-342). Despite some similarities, generators and coroutines are basically two different concepts. Understanding coroutines (not only generators) is the key to understanding the new syntax. IMHO coroutines are the most obscure Python feature, most books make it look useless and uninteresting. Thanks for the great answers, but special thanks to agf and his comment linking to David Beazley presentations.", "output": "Let's get one thing out of the way first. The explanation that yield from g is equivalent to for v in g: yield v does not even begin to do justice to what yield from is all about. 
Because, let's face it, if all yield from does is expand the for loop, then it does not warrant adding yield from to the language and preclude a whole bunch of new features from being implemented in Python 2.x. What yield from does is it establishes a transparent, bidirectional connection between the caller and the sub-generator: The connection is \"transparent\" in the sense that it will propagate everything correctly, not just the elements being generated (e.g. exceptions are propagated). The connection is \"bidirectional\" in the sense that data can be both sent from and to a generator. (If we were talking about TCP, yield from g might mean \"now temporarily disconnect my client's socket and reconnect it to this other server socket\".) BTW, if you are not sure what sending data to a generator even means, you need to drop everything and read about coroutines first\u2014they're very useful (contrast them with subroutines), but unfortunately lesser-known in Python. Dave Beazley's Curious Course on Coroutines is an excellent start. Read slides 24-33 for a quick primer. Reading data from a generator using yield from def reader(): \"\"\"A generator that fakes a read from a file, socket, etc.\"\"\" for i in range(4): yield '<< %s' % i def reader_wrapper(g): # Manually iterate over data produced by reader for v in g: yield v wrap = reader_wrapper(reader()) for i in wrap: print(i) # Result << 0 << 1 << 2 << 3 Instead of manually iterating over reader(), we can just yield from it. def reader_wrapper(g): yield from g That works, and we eliminated one line of code. And probably the intent is a little bit clearer (or not). But nothing life changing. Sending data to a generator (coroutine) using yield from - Part 1 Now let's do something more interesting. Let's create a coroutine called writer that accepts data sent to it and writes to a socket, fd, etc. def writer(): \"\"\"A coroutine that writes data *sent* to it to fd, socket, etc.\"\"\" while True: w = (yield) print('>> ', w) Now the question is, how should the wrapper function handle sending data to the writer, so that any data that is sent to the wrapper is transparently sent to the writer()? def writer_wrapper(coro): # TBD pass w = writer() wrap = writer_wrapper(w) wrap.send(None) # \"prime\" the coroutine for i in range(4): wrap.send(i) # Expected result >> 0 >> 1 >> 2 >> 3 The wrapper needs to accept the data that is sent to it (obviously) and should also handle the StopIteration when the for loop is exhausted. Evidently just doing for x in coro: yield x won't do. Here is a version that works. def writer_wrapper(coro): coro.send(None) # prime the coro while True: try: x = (yield) # Capture the value that's sent coro.send(x) # and pass it to the writer except StopIteration: pass Or, we could do this. def writer_wrapper(coro): yield from coro That saves 6 lines of code, make it much much more readable and it just works. Magic! Sending data to a generator yield from - Part 2 - Exception handling Let's make it more complicated. What if our writer needs to handle exceptions? Let's say the writer handles a SpamException and it prints *** if it encounters one. class SpamException(Exception): pass def writer(): while True: try: w = (yield) except SpamException: print('***') else: print('>> ', w) What if we don't change writer_wrapper? Does it work? 
Let's try # writer_wrapper same as above w = writer() wrap = writer_wrapper(w) wrap.send(None) # \"prime\" the coroutine for i in [0, 1, 2, 'spam', 4]: if i == 'spam': wrap.throw(SpamException) else: wrap.send(i) # Expected Result >> 0 >> 1 >> 2 *** >> 4 # Actual Result >> 0 >> 1 >> 2 Traceback (most recent call last): ... redacted ... File ... in writer_wrapper x = (yield) __main__.SpamException Um, it's not working because x = (yield) just raises the exception and everything comes to a crashing halt. Let's make it work, but manually handling exceptions and sending them or throwing them into the sub-generator (writer) def writer_wrapper(coro): \"\"\"Works. Manually catches exceptions and throws them\"\"\" coro.send(None) # prime the coro while True: try: try: x = (yield) except Exception as e: # This catches the SpamException coro.throw(e) else: coro.send(x) except StopIteration: pass This works. # Result >> 0 >> 1 >> 2 *** >> 4 But so does this! def writer_wrapper(coro): yield from coro The yield from transparently handles sending the values or throwing values into the sub-generator. This still does not cover all the corner cases though. What happens if the outer generator is closed? What about the case when the sub-generator returns a value (yes, in Python 3.3+, generators can return values), how should the return value be propagated? That yield from transparently handles all the corner cases is really impressive. yield from just magically works and handles all those cases. I personally feel yield from is a poor keyword choice because it does not make the two-way nature apparent. There were other keywords proposed (like delegate but were rejected because adding a new keyword to the language is much more difficult than combining existing ones. In summary, it's best to think of yield from as a transparent two way channel between the caller and the sub-generator. References: PEP 380 - Syntax for delegating to a sub-generator (Ewing) [v3.3, 2009-02-13] PEP 342 - Coroutines via Enhanced Generators (GvR, Eby) [v2.5, 2005-05-10]"} +{"question_id": 2812520, "score": 755, "creation_date": 1273595538, "tags": ["python", "pip"], "instruction": "Dealing with multiple Python versions and PIP\n\nIs there a way to make pip play well with multiple versions of Python? For example, I want to use pip to explicitly install things to either my site 2.5 installation or my site 2.6 installation. For example, with easy_install, I use easy_install-2.{5,6}. And, yes \u2014 I know about virtualenv, and no \u2014 it's not a solution to this particular problem.", "output": "The current recommendation is to use python -m pip, where python is the version of Python you would like to use. This is the recommendation because it works across all versions of Python, and in all forms of virtualenv. For example: # The system default Python installation: python -m pip install fish # A virtualenv's Python installation: .env/bin/python -m pip install fish # A specific version of python: python-3.6 -m pip install fish Previous answer, left for posterity: Since version 0.8, Pip supports pip-{version}. You can use it the same as easy_install-{version}: pip-2.5 install myfoopackage pip-2.6 install otherpackage pip-2.7 install mybarpackage pip changed its schema to use pipVERSION instead of pip-VERSION in version 1.5. You should use the following if you have pip >= 1.5: pip2.6 install otherpackage pip2.7 install mybarpackage Check Versioned commands consistent with Python. 
#1053 for more details References: Need pip-x.y scripts #200 v0.8 changelog or News for pip, v0.8"} +{"question_id": 3323001, "score": 755, "creation_date": 1279926290, "tags": ["python", "recursion", "limit", "stack-overflow", "sys"], "instruction": "What is the maximum recursion depth, and how to increase it?\n\nI have this tail recursive function here: def recursive_function(n, sum): if n < 1: return sum else: return recursive_function(n-1, sum+n) c = 998 print(recursive_function(c, 0)) It works up to n=997, then it just breaks and spits out a RecursionError: maximum recursion depth exceeded in comparison. Is this just a stack overflow? Is there a way to get around it?", "output": "It is a guard against a stack overflow, yes. Python (or rather, the CPython implementation) doesn't optimize tail recursion, and unbridled recursion causes stack overflows. You can check the recursion limit with sys.getrecursionlimit: import sys print(sys.getrecursionlimit()) and change the recursion limit with sys.setrecursionlimit: sys.setrecursionlimit(1500) but doing so is dangerous -- the standard limit is a little conservative, but Python stackframes can be quite big. Python isn't a functional language and tail recursion is not a particularly efficient technique. Rewriting the algorithm iteratively, if possible, is generally a better idea."} +{"question_id": 3348460, "score": 753, "creation_date": 1280268882, "tags": ["python", "windows", "csv"], "instruction": "CSV file written with Python has blank lines between each row\n\nimport csv with open('thefile.csv', 'rb') as f: data = list(csv.reader(f)) import collections counter = collections.defaultdict(int) for row in data: counter[row[10]] += 1 with open('/pythonwork/thefile_subset11.csv', 'w') as outfile: writer = csv.writer(outfile) for row in data: if counter[row[10]] >= 504: writer.writerow(row) This code reads thefile.csv, makes changes, and writes results to thefile_subset1. However, when I open the resulting csv in Microsoft Excel, there is an extra blank line after each record! Is there a way to make it not put an extra blank line?", "output": "The csv.writer module directly controls line endings and writes \\r\\n into the file directly. In Python 3 the file must be opened in untranslated text mode with the parameters 'w', newline='' (empty string) or it will write \\r\\r\\n on Windows, where the default text mode will translate each \\n into \\r\\n. #!python3 with open('/pythonwork/thefile_subset11.csv', 'w', newline='') as outfile: writer = csv.writer(outfile) If using the Path module: from pathlib import Path import csv with Path('/pythonwork/thefile_subset11.csv').open('w', newline='') as outfile: writer = csv.writer(outfile) If using the StringIO module to build an in-memory result, the result string will contain the translated line terminator: from io import StringIO import csv s = StringIO() writer = csv.writer(s) writer.writerow([1,2,3]) print(repr(s.getvalue())) # '1,2,3\\r\\n' (Windows result) If writing that string to a file later, remember to use newline='': # built-in open() with open('/pythonwork/thefile_subset11.csv', 'w', newline='') as f: f.write(s.getvalue()) # Path's open() with Path('/pythonwork/thefile_subset11.csv').open('w', newline='') as f: f.write(s.getvalue()) # Path's write_text() added the newline parameter to Python 3.10. Path('/pythonwork/thefile_subset11.csv').write_text(s.getvalue(), newline='') In Python 2, use binary mode to open outfile with mode 'wb' instead of 'w' to prevent Windows newline translation. 
Python 2 also has problems with Unicode and requires other workarounds to write non-ASCII text. See the Python 2 link below and the UnicodeReader and UnicodeWriter examples at the end of the page if you have to deal with writing Unicode strings to CSVs on Python 2, or look into the 3rd party unicodecsv module: #!python2 with open('/pythonwork/thefile_subset11.csv', 'wb') as outfile: writer = csv.writer(outfile) Documentation Links https://docs.python.org/3/library/csv.html#csv.writer https://docs.python.org/2/library/csv.html#csv.writer"} +{"question_id": 166506, "score": 751, "creation_date": 1223035416, "tags": ["python", "network-programming", "ip-address"], "instruction": "Finding local IP addresses using Python's stdlib\n\nHow can I find local IP addresses (i.e. 192.168.x.x or 10.0.x.x) in Python platform independently and using only the standard library?", "output": "import socket socket.gethostbyname(socket.gethostname()) This won't work always (returns 127.0.0.1 on machines having the hostname in /etc/hosts as 127.0.0.1), a paliative would be what gimel shows, use socket.getfqdn() instead. Of course your machine needs a resolvable hostname. If you need to get all the ip addresses, you can instead use: socket.gethostbyname_ex(socket.gethostname()) See the documentation for more information about gethostbyname_ex."} +{"question_id": 5420789, "score": 742, "creation_date": 1300977235, "tags": ["python", "virtualenv", "psycopg2", "pip"], "instruction": "How to install psycopg2 with \"pip\" on Python?\n\nI'm using virtualenv and I need to install \"psycopg2\". I have done the following: pip install http://pypi.python.org/packages/source/p/psycopg2/psycopg2-2.4.tar.gz#md5=24f4368e2cfdc1a2b03282ddda814160 And I have the following messages: Downloading/unpacking http://pypi.python.org/packages/source/p/psycopg2/psycopg2 -2.4.tar.gz#md5=24f4368e2cfdc1a2b03282ddda814160 Downloading psycopg2-2.4.tar.gz (607Kb): 607Kb downloaded Running setup.py egg_info for package from http://pypi.python.org/packages/sou rce/p/psycopg2/psycopg2-2.4.tar.gz#md5=24f4368e2cfdc1a2b03282ddda814160 Error: pg_config executable not found. Please add the directory containing pg_config to the PATH or specify the full executable path with the option: python setup.py build_ext --pg-config /path/to/pg_config build ... or with the pg_config option in 'setup.cfg'. Complete output from command python setup.py egg_info: running egg_info creating pip-egg-info\\psycopg2.egg-info writing pip-egg-info\\psycopg2.egg-info\\PKG-INFO writing top-level names to pip-egg-info\\psycopg2.egg-info\\top_level.txt writing dependency_links to pip-egg-info\\psycopg2.egg-info\\dependency_links.txt writing manifest file 'pip-egg-info\\psycopg2.egg-info\\SOURCES.txt' warning: manifest_maker: standard file '-c' not found Error: pg_config executable not found. Please add the directory containing pg_config to the PATH or specify the full executable path with the option: python setup.py build_ext --pg-config /path/to/pg_config build ... or with the pg_config option in 'setup.cfg'. ---------------------------------------- Command python setup.py egg_info failed with error code 1 Storing complete log in C:\\Documents and Settings\\anlopes\\Application Data\\pip\\p ip.log My question, I only need to do this to get the psycopg2 working? python setup.py build_ext --pg-config /path/to/pg_config build ...", "output": "Note: Since a while back, there are binary wheels for Windows in PyPI, so this should no longer be an issue for Windows users. 
Below are solutions for Linux, Mac users, since lots of them find this post through web searches. Option 1 Install the psycopg2-binary PyPI package instead, it has Python wheels for Linux and Mac OS. pip install psycopg2-binary Option 2 Install the prerequsisites for building the psycopg2 package from source: Debian/Ubuntu Python version Command Note Default Python 3 sudo apt install libpq-dev python3-dev Python 3.x sudo apt install libpq-dev python3.x-dev substitute x in command Python 2 sudo apt install libpq-dev python-dev If that's not enough, you might additionally need to install sudo apt install build-essential or sudo apt install postgresql-server-dev-all as well before installing psycopg2 again. CentOS 6 See Banjer's answer macOS See nichochar's answer"} +{"question_id": 1810743, "score": 740, "creation_date": 1259358828, "tags": ["python", "working-directory"], "instruction": "How to set the current working directory?\n\nHow to set the current working directory in Python?", "output": "Try os.chdir import os os.chdir(path) Change the current working directory to path. Availability: Unix, Windows."} +{"question_id": 2474015, "score": 735, "creation_date": 1268954435, "tags": ["python", "list", "max", "min", "argmax"], "instruction": "Getting the index of the returned max or min item using max()/min() on a list\n\nI'm using Python's max and min functions on lists for a minimax algorithm, and I need the index of the value returned by max() or min(). In other words, I need to know which move produced the max (at a first player's turn) or min (second player) value. for i in range(9): new_board = current_board.new_board_with_move([i / 3, i % 3], player) if new_board: temp = min_max(new_board, depth + 1, not is_min_level) values.append(temp) if is_min_level: return min(values) else: return max(values) I need to be able to return the actual index of the min or max value, not just the value.", "output": "Find the minimum value with min() then find that value's index with .index(): values.index(min(values)) Or the maximum: values.index(max(values)) If your list contains repeats of the minimum or maximum value this will return the index of the first one."} +{"question_id": 9835762, "score": 734, "creation_date": 1332489599, "tags": ["python", "list", "duplicates"], "instruction": "How do I find the duplicates in a list and create another list with them?\n\nHow do I find the duplicates in a list of integers and create another list of the duplicates?", "output": "To remove duplicates use set(a). To print duplicates, something like: a = [1,2,3,2,1,5,6,5,5,5] import collections print([item for item, count in collections.Counter(a).items() if count > 1]) ## [1, 2, 5] Note that Counter is not particularly efficient (timings) and probably overkill here. set will perform better. This code computes a list of unique elements in the source order: seen = set() uniq = [] for x in a: if x not in seen: uniq.append(x) seen.add(x) or, more concisely: seen = set() uniq = [x for x in a if x not in seen and not seen.add(x)] I don't recommend the latter style, because it is not obvious what not seen.add(x) is doing (the set add() method always returns None, hence the need for not). 
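An aside: if all you need is the order-preserving list of unique elements and you are on Python 3.7+ (where plain dicts are guaranteed to keep insertion order), list(dict.fromkeys(a)) does the same job in one readable line.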
To compute the list of duplicated elements without libraries: seen = set() dupes = set() for x in a: if x in seen: dupes.add(x) else: seen.add(x) or, more concisely: seen = set() dupes = {x for x in a if x in seen or seen.add(x)} If list elements are not hashable, you cannot use sets/dicts and have to resort to a quadratic time solution (compare each with each). For example: a = [[1], [2], [3], [1], [5], [3]] no_dupes = [x for n, x in enumerate(a) if x not in a[:n]] print no_dupes # [[1], [2], [3], [5]] dupes = [x for n, x in enumerate(a) if x in a[:n]] print dupes # [[1], [3]]"} +{"question_id": 1112343, "score": 730, "creation_date": 1247266161, "tags": ["python", "signals"], "instruction": "How do I capture SIGINT in Python?\n\nI'm working on a python script that starts several processes and database connections. Every now and then I want to kill the script with a Ctrl+C signal, and I'd like to do some cleanup. In Perl I'd do this: $SIG{'INT'} = 'exit_gracefully'; sub exit_gracefully { print \"Caught ^C \\n\"; exit (0); } How do I do the analogue of this in Python?", "output": "Register your handler with signal.signal like this: #!/usr/bin/env python import signal import sys def signal_handler(sig, frame): print('You pressed Ctrl+C!') sys.exit(0) signal.signal(signal.SIGINT, signal_handler) print('Press Ctrl+C') signal.pause() Code adapted from here. More documentation on signal can be found here."} +{"question_id": 301134, "score": 727, "creation_date": 1227074997, "tags": ["python", "python-import"], "instruction": "How can I import a module dynamically given its name as string?\n\nI'm writing a Python application that takes a command as an argument, for example: $ python myapp.py command1 I want the application to be extensible, that is, to be able to add new modules that implement new commands without having to change the main application source. The tree looks something like: myapp/ __init__.py commands/ __init__.py command1.py command2.py foo.py bar.py So I want the application to find the available command modules at runtime and execute the appropriate one. Python defines an __import__() function, which takes a string for a module name: __import__(name, globals=None, locals=None, fromlist=(), level=0) The function imports the module name, potentially using the given globals and locals to determine how to interpret the name in a package context. The fromlist gives the names of objects or submodules that should be imported from the module given by name. Source: https://docs.python.org/3/library/functions.html#__import__ So currently I have something like: command = sys.argv[1] try: command_module = __import__(\"myapp.commands.%s\" % command, fromlist=[\"myapp.commands\"]) except ImportError: # Display error message command_module.run() This works just fine, I'm just wondering if there is possibly a more idiomatic way to accomplish what we are doing with this code. Note that I specifically don't want to get in to using eggs or extension points. This is not an open-source project and I don't expect there to be \"plugins\". The point is to simplify the main application code and remove the need to modify it each time a new command module is added. See also: How can I import a module dynamically given the full path?", "output": "With Python older than 2.7/3.1, that's pretty much how you do it. For newer versions, see importlib.import_module for Python 2 and Python 3. 
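For example, the dispatch from the question could be sketched with importlib like this (reusing the question's myapp.commands layout; error handling kept deliberately minimal):

import importlib
import sys

command = sys.argv[1]
try:
    # e.g. 'command1' resolves to the module myapp.commands.command1
    command_module = importlib.import_module('myapp.commands.' + command)
except ImportError:
    sys.exit('Unknown command: %s' % command)
command_module.run()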
Or using __import__ you can import a list of modules by doing this: >>> moduleNames = ['sys', 'os', 're', 'unittest'] >>> moduleNames ['sys', 'os', 're', 'unittest'] >>> modules = map(__import__, moduleNames) Ripped straight from Dive Into Python."} +{"question_id": 38727520, "score": 725, "creation_date": 1470160896, "tags": ["python", "python-typing"], "instruction": "How do I add default parameters to functions when using type hinting?\n\nIf I have a function like this: def foo(name, opts={}): pass And I want to add type hints to the parameters, how do I do it? The way I assumed gives me a syntax error: def foo(name: str, opts={}: dict) -> str: pass The following doesn't throw a syntax error but it doesn't seem like the intuitive way to handle this case: def foo(name: str, opts: dict={}) -> str: pass I can't find anything in the typing documentation or on a Google search. Edit: I didn't know how default arguments worked in Python, but for the sake of this question, I will keep the examples above. In general it's much better to do the following: def foo(name: str, opts: dict=None) -> str: if not opts: opts={} pass", "output": "Your second way is correct. def foo(opts: dict = {}): pass print(foo.__annotations__) This outputs {'opts': <class 'dict'>} Although it is not explicitly mentioned in PEP 484, type hints are a specific use of function annotations, as outlined in PEP 3107. The syntax section clearly demonstrates that keyword arguments can be annotated in this manner. I strongly advise against using mutable default arguments. More information here."} +{"question_id": 33945261, "score": 724, "creation_date": 1448563500, "tags": ["python", "python-typing"], "instruction": "How to specify multiple types using type-hints\n\nI have a function in Python that can either return a bool or a list. Is there a way to specify the types using type hints? For example, is this the correct way to do it? def foo(id) -> list or bool: ...", "output": "From the documentation - Union Type: A union object holds the value of the | (bitwise or) operation on multiple type objects. These types are intended primarily for type annotations. The union type expression enables cleaner type hinting syntax compared to typing.Union. This use of | was added in Python 3.10. Hence the proper way to represent more than one return data type is: def foo(client_id: str) -> list | bool: For earlier versions, use typing.Union: from typing import Union def foo(client_id: str) -> Union[list, bool]: But do note that typing is not enforced. Python continues to remain a dynamically-typed language. The annotation syntax has been developed to help during the development of the code prior to being released into production. As PEP 484 states, \"no type checking happens at runtime.\" >>> def foo(a: str) -> list: ... return \"Works\" ... >>> foo(1) 'Works' As you can see, I am passing an int value and returning a str. However the __annotations__ will be set to the respective values. >>> foo.__annotations__ {'return': <class 'list'>, 'a': <class 'str'>} Please go through PEP 483 for more about Type hints. Also see What are type hints in Python 3.5? Kindly note that this is available only for Python 3.5 and upwards. This is mentioned clearly in PEP 484."} +{"question_id": 1369526, "score": 723, "creation_date": 1251917871, "tags": ["python"], "instruction": "What is the Python keyword \"with\" used for?\n\nWhat is the Python keyword \"with\" used for? Example from: http://docs.python.org/tutorial/inputoutput.html >>> with open('/tmp/workfile', 'r') as f: ...
read_data = f.read() >>> f.closed True", "output": "In python the with keyword is used when working with unmanaged resources (like file streams). It is similar to the using statement in VB.NET and C#. It allows you to ensure that a resource is \"cleaned up\" when the code that uses it finishes running, even if exceptions are thrown. It provides 'syntactic sugar' for try/finally blocks. From Python Docs: The with statement clarifies code that previously would use try...finally blocks to ensure that clean-up code is executed. The with statement is a control-flow structure whose basic structure is: with expression [as variable]: with-block The expression is evaluated, and it should result in an object that supports the context management protocol (that is, has __enter__() and __exit__() methods). Update fixed VB callout per Scott Wisniewski's comment. I was indeed confusing with with using."} +{"question_id": 9510474, "score": 721, "creation_date": 1330574795, "tags": ["python", "pip"], "instruction": "pip uses incorrect cached package version, instead of the user-specified version\n\nI need to install psycopg2 v2.4.1 specifically. I accidentally did: pip install psycopg2 Instead of: pip install psycopg2==2.4.1 That installs 2.4.4 instead of the earlier version. Now even after I pip uninstall psycopg2 and attempt to reinstall with the correct version, it appears that pip is re-using the cache it downloaded the first time. How can I force pip to clear out its download cache and use the specific version I'm including in the command?", "output": "If using pip 6.0 or newer, try adding the --no-cache-dir option (source). If using pip older than pip 6.0, upgrade it with pip install -U pip."} +{"question_id": 19828822, "score": 720, "creation_date": 1383803135, "tags": ["python", "pandas", "dataframe", "membership", "any"], "instruction": "How do I check if a pandas DataFrame is empty?\n\nHow do I check if a pandas DataFrame is empty? I'd like to print some message in the terminal if the DataFrame is empty.", "output": "You can use the attribute df.empty to check whether it's empty or not: if df.empty: print('DataFrame is empty!') Source: Pandas Documentation"} +{"question_id": 434287, "score": 718, "creation_date": 1231728502, "tags": ["python", "list", "loops", "optimization", "chunks"], "instruction": "How to iterate over a list in chunks\n\nI have a Python script which takes as input a list of integers, which I need to work with four integers at a time. Unfortunately, I don't have control of the input, or I'd have it passed in as a list of four-element tuples. Currently, I'm iterating over it this way: for i in range(0, len(ints), 4): # dummy op for example code foo += ints[i] * ints[i + 1] + ints[i + 2] * ints[i + 3] It looks a lot like \"C-think\", though, which makes me suspect there's a more pythonic way of dealing with this situation. The list is discarded after iterating, so it needn't be preserved. Perhaps something like this would be better? while ints: foo += ints[0] * ints[1] + ints[2] * ints[3] ints[0:4] = [] Still doesn't quite \"feel\" right, though. :-/ Update: With the release of Python 3.12, I've changed the accepted answer. For anyone who has not (or cannot) make the jump to Python 3.12 yet, I encourage you to check out the previous accepted answer or any of the other excellent, backwards-compatible answers below. 
Related question: How do you split a list into evenly sized chunks in Python?", "output": "As of Python 3.12, the itertools module gains a batched function that specifically covers iterating over batches of an input iterable, where the final batch may be incomplete (each batch is a tuple). Per the example code given in the docs: >>> for batch in batched('ABCDEFG', 3): ... print(batch) ... ('A', 'B', 'C') ('D', 'E', 'F') ('G',) Performance notes: The implementation of batched, like all itertools functions to date, is at the C layer, so it's capable of optimizations Python level code cannot match, e.g. On each pull of a new batch, it proactively allocates a tuple of precisely the correct size (for all but the last batch), instead of building the tuple up element by element with amortized growth causing multiple reallocations (the way a solution calling tuple on an islice does) It only needs to look up the .__next__ function of the underlying iterator once per batch, not n times per batch (the way a zip_longest((iter(iterable),) * n)-based approach does) The check for the end case is a simple C level NULL check (trivial, and required to handle possible exceptions anyway) Handling the end case is a C goto followed by a direct realloc (no making a copy into a smaller tuple) down to the already known final size, since it's tracking how many elements it has successfully pulled (no complex \"create sentinel for use as fillvalue and do Python level if/else checks for each batch to see if it's empty, with the final batch requiring a search for where the fillvalue appeared last, to create the cut-down tuple\" required by zip_longest-based solutions). Between all these advantages, it should massively outperform any Python-level solution (even highly optimized ones that push most or all of the per-item work to the C layer), regardless of whether the input iterable is long or short, and regardless of whether the batch size and the size of the final (possibly incomplete) batch (zip_longest-based solutions using guaranteed unique fillvalues for safety are the best possible solution for almost all cases when itertools.batched is not available, but they can suffer in pathological cases of \"few large batches, with final batch mostly, not completely, filled\", especially pre-3.10 when bisect can't be used to optimize slicing off the fillvalues from O(n) linear search down to O(log n) binary search, but batched avoids that search entirely, so it won't experience pathological cases at all)."} +{"question_id": 773, "score": 718, "creation_date": 1217788029, "tags": ["python", "python-itertools"], "instruction": "How do I use itertools.groupby()?\n\nI haven't been able to find an understandable explanation of how to actually use Python's itertools.groupby() function. What I'm trying to do is this: Take a list - in this case, the children of an objectified lxml element Divide it into groups based on some criteria Then later iterate over each of these groups separately. I've reviewed the documentation, but I've had trouble trying to apply them beyond a simple list of numbers. So, how do I use of itertools.groupby()? Is there another technique I should be using? Pointers to good \"prerequisite\" reading would also be appreciated.", "output": "IMPORTANT NOTE: You may have to sort your data first. 
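Why the sorting matters (a small made-up example): groupby() only bundles consecutive items that share a key, so unsorted input produces a new group every time the key changes:

from itertools import groupby

data = ['a', 'a', 'b', 'a']
print([(key, list(group)) for key, group in groupby(data)])
# [('a', ['a', 'a']), ('b', ['b']), ('a', ['a'])]  -- 'a' appears as two separate groups
print([(key, list(group)) for key, group in groupby(sorted(data))])
# [('a', ['a', 'a', 'a']), ('b', ['b'])]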
The part I didn't get is that in the example construction groups = [] uniquekeys = [] for k, g in groupby(data, keyfunc): groups.append(list(g)) # Store group iterator as a list uniquekeys.append(k) k is the current grouping key, and g is an iterator that you can use to iterate over the group defined by that grouping key. In other words, the groupby iterator itself returns iterators. Here's an example of that, using clearer variable names: from itertools import groupby things = [(\"animal\", \"bear\"), (\"animal\", \"duck\"), (\"plant\", \"cactus\"), (\"vehicle\", \"speed boat\"), (\"vehicle\", \"school bus\")] for key, group in groupby(things, lambda x: x[0]): for thing in group: print(\"A %s is a %s.\" % (thing[1], key)) print(\"\") This will give you the output: A bear is a animal. A duck is a animal. A cactus is a plant. A speed boat is a vehicle. A school bus is a vehicle. In this example, things is a list of tuples where the first item in each tuple is the group the second item belongs to. The groupby() function takes two arguments: (1) the data to group and (2) the function to group it with. Here, lambda x: x[0] tells groupby() to use the first item in each tuple as the grouping key. In the above for statement, groupby returns three (key, group iterator) pairs - once for each unique key. You can use the returned iterator to iterate over each individual item in that group. Here's a slightly different example with the same data, using a list comprehension: for key, group in groupby(things, lambda x: x[0]): listOfThings = \" and \".join([thing[1] for thing in group]) print(key + \"s: \" + listOfThings + \".\") This will give you the output: animals: bear and duck. plants: cactus. vehicles: speed boat and school bus."} +{"question_id": 18039057, "score": 717, "creation_date": 1375581285, "tags": ["python", "csv", "pandas"], "instruction": "pandas.parser.CParserError: Error tokenizing data\n\nI'm trying to use pandas to manipulate a .csv file but I get this error: pandas.parser.CParserError: Error tokenizing data. C error: Expected 2 fields in line 3, saw 12 I have tried to read the pandas docs, but found nothing. My code is simple: path = 'GOOG Key Ratios.csv' #print(open(path).read()) data = pd.read_csv(path) How can I resolve this? Should I use the csv module or another language?", "output": "you could also try; data = pd.read_csv('file1.csv', on_bad_lines='skip') Do note that this will cause the offending lines to be skipped. If you don't expect many bad lines and want to (at least) know their amount and IDs, use on_bad_lines='warn'. For advanced handling of bads, you can pass a callable. Edit For Pandas < 1.3.0 try data = pd.read_csv(\"file1.csv\", error_bad_lines=False) as per pandas API reference."} +{"question_id": 4406389, "score": 710, "creation_date": 1291963551, "tags": ["python", "list", "list-comprehension", "if-statement"], "instruction": "if else in a list comprehension\n\nI have a list l: l = [22, 13, 45, 50, 98, 69, 43, 44, 1] For numbers above 45 inclusive, I would like to add 1; and for numbers less than it, 5. I tried [x+1 for x in l if x >= 45 else x+5] But it gives me a syntax error. 
How can I achieve an if \u2013 else like this in a list comprehension?", "output": ">>> l = [22, 13, 45, 50, 98, 69, 43, 44, 1] >>> [x+1 if x >= 45 else x+5 for x in l] [27, 18, 46, 51, 99, 70, 48, 49, 6] Do-something if <condition>, else do-something-else."} +{"question_id": 2051192, "score": 707, "creation_date": 1263319529, "tags": ["python", "package", "egg"], "instruction": "What is a Python egg?\n\nI'm trying to understand how Python packages work. Presumably eggs are some sort of packaging mechanism, but what would be a quick overview of what role they play and maybe some information on why they're useful and how to create them?", "output": "*Note: Egg packaging has been superseded by Wheel packaging.* Same concept as a .jar file in Java: it is a .zip file with some added metadata files, renamed with a .egg extension, used for distributing code as bundles. Specifically: The Internal Structure of Python Eggs A \"Python egg\" is a logical structure embodying the release of a specific version of a Python project, comprising its code, resources, and metadata. There are multiple formats that can be used to physically encode a Python egg, and others can be developed. However, a key principle of Python eggs is that they should be discoverable and importable. That is, it should be possible for a Python application to easily and efficiently find out what eggs are present on a system, and to ensure that the desired eggs' contents are importable. The .egg format is well-suited to distribution and the easy uninstallation or upgrades of code, since the project is essentially self-contained within a single directory or file, unmingled with any other projects' code or resources. It also makes it possible to have multiple versions of a project simultaneously installed, such that individual programs can select the versions they wish to use."} +{"question_id": 13081532, "score": 706, "creation_date": 1351231007, "tags": ["python", "json", "flask"], "instruction": "Return JSON response from Flask view\n\nI have a function that analyzes a CSV file with Pandas and produces a dict with summary information. I want to return the results as a response from a Flask view. How do I return a JSON response? @app.route(\"/summary\") def summary(): d = make_summary() # send it back as json", "output": "A view can directly return a Python dict or list and Flask will call jsonify automatically. @app.route(\"/summary\") def summary(): d = make_summary() return d For older Flask versions, or to return a different JSON-serializable object, import and use jsonify. from flask import jsonify @app.route(\"/summary\") def summary(): d = make_summary() return jsonify(d)"} +{"question_id": 4172448, "score": 706, "creation_date": 1289650624, "tags": ["python", "line-breaks"], "instruction": "Is it possible to break a long line to multiple lines in Python?\n\nJust like C, you can break a long line into multiple short lines. But in Python, if I do this, there will be an indent error... Is it possible?", "output": "From PEP 8 - Style Guide for Python Code: The preferred way of wrapping long lines is by using Python's implied line continuation inside parentheses, brackets and braces. If necessary, you can add an extra pair of parentheses around an expression, but sometimes using a backslash looks better. Make sure to indent the continued line appropriately.
Example of implicit line continuation: a = ( '1' + '2' + '3' - '4' ) b = some_function( param1=foo( \"a\", \"b\", \"c\" ), param2=bar(\"d\"), ) On the topic of line breaks around a binary operator, it goes on to say: For decades the recommended style was to break after binary operators. But this can hurt readability in two ways: the operators tend to get scattered across different columns on the screen, and each operator is moved away from its operand and onto the previous line. In Python code, it is permissible to break before or after a binary operator, as long as the convention is consistent locally. For new code Knuth's style (line breaks before the operator) is suggested. Example of explicit line continuation: a = '1' \\ + '2' \\ + '3' \\ - '4'"} +{"question_id": 2642613, "score": 697, "creation_date": 1271303503, "tags": ["python", "django", "foreign-keys", "many-to-many"], "instruction": "What is related_name used for?\n\nWhat is the related_name argument useful for on ManyToManyField and ForeignKey fields? For example, given the following code, what is the effect of related_name='maps'? class Map(db.Model): members = models.ManyToManyField(User, related_name='maps', verbose_name=_('members'))", "output": "The related_name attribute specifies the name of the reverse relation from the User model back to your model. If you don't specify a related_name, Django automatically creates one using the name of your model with the suffix _set, for instance User.map_set.all(). If you do specify, e.g. related_name=maps on the User model, User.map_set will still work, but the User.maps. syntax is obviously a bit cleaner and less clunky; so for example, if you had a user object current_user, you could use current_user.maps.all() to get all instances of your Map model that have a relation to current_user. The Django documentation has more details. To disable creating the backwards relationship entirely, set related_name to \"+\"."} +{"question_id": 107705, "score": 697, "creation_date": 1221902240, "tags": ["python", "stdout", "output-buffering"], "instruction": "Disable output buffering\n\nIs output buffering enabled by default in Python's interpreter for sys.stdout? If the answer is positive, what are all the ways to disable it? Suggestions so far: Use the -u command line switch Wrap sys.stdout in an object that flushes after every write Set PYTHONUNBUFFERED env var sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0) Is there any other way to set some global flag in sys/sys.stdout programmatically during execution? If you just want to flush after a specific write using print, see How can I flush the output of the print function?.", "output": "From Magnus Lycka answer on a mailing list: You can skip buffering for a whole python process using python -u or by setting the environment variable PYTHONUNBUFFERED. You could also replace sys.stdout with some other stream like wrapper which does a flush after every call. 
class Unbuffered(object): def __init__(self, stream): self.stream = stream def write(self, data): self.stream.write(data) self.stream.flush() def writelines(self, datas): self.stream.writelines(datas) self.stream.flush() def __getattr__(self, attr): return getattr(self.stream, attr) import sys sys.stdout = Unbuffered(sys.stdout) print 'Hello'"} +{"question_id": 13890935, "score": 693, "creation_date": 1355562897, "tags": ["python", "time", "timezone"], "instruction": "Does Python's time.time() return the local or UTC timestamp?\n\nDoes time.time() in the Python time module return the system's time or the time in UTC?", "output": "The time.time() function returns the number of seconds since the epoch, as a float. Note that \u201cthe epoch\u201d is defined as the start of January 1st, 1970 in UTC. So the epoch is defined in terms of UTC and establishes a global moment in time. No matter where on Earth you are, \u201cseconds past epoch\u201d (time.time()) returns the same value at the same moment. Here is some sample output I ran on my computer, converting it to a string as well. >>> import time >>> ts = time.time() >>> ts 1355563265.81 >>> import datetime >>> datetime.datetime.fromtimestamp(ts).strftime('%Y-%m-%d %H:%M:%S') '2012-12-15 01:21:05' >>> The ts variable is the time returned in seconds. I then converted it to a human-readable string using the datetime library."} +{"question_id": 7427101, "score": 693, "creation_date": 1316070706, "tags": ["python", "argparse"], "instruction": "Simple argparse example wanted: 1 argument, 3 results\n\nThe documentation for the argparse python module, while excellent I'm sure, is too much for my tiny beginner brain to grasp right now. All I want to do is \"If arg is A, do this, if B do that, if none of the above show help and quit\". I don't need to do math on the command line or meddle with formatting lines on the screen or change option characters.", "output": "My understanding of the question is two-fold. First, the simplest possible argparse example. Of course, to be dead-simple, it's got to be a toy example, i.e. all overhead with little power, but it might get you started. import argparse parser = argparse.ArgumentParser() parser.add_argument(\"a\") args = parser.parse_args() if args.a == 'magic.name': print('You nailed it!') But this positional argument is now required. If you leave it out when invoking this program, you'll get an error about missing arguments. This leads me to the second part of the question. You seem to want a single optional argument without a named label (the --option labels). My suggestion would be to modify the code above as follows: import argparse parser = argparse.ArgumentParser() parser.add_argument(\"a\", nargs='?') args = parser.parse_args() if args.a is None: print('I can tell that no argument was given and I can deal with that here.') elif args.a == 'magic.name': print('You nailed it!') else: print(args.a) There may well be a more elegant solution, but this works and is minimalist. Note: If you want a different default value instead of None, use the default parameter to .add_argument."} +{"question_id": 37835179, "score": 692, "creation_date": 1465992891, "tags": ["python", "mypy", "python-typing"], "instruction": "How can I specify the function type in my type hints?\n\nHow can I specify the type hint of a variable as a function type? 
There is no typing.Function, and I could not find anything in the relevant PEP, PEP 483.", "output": "As @jonrsharpe noted in a comment, this can be done with collections.abc.Callable: from collections.abc import Callable def my_function(func: Callable): Note: Callable on its own is equivalent to Callable[..., Any]. Such a Callable takes any number and type of arguments (...) and returns a value of any type (Any). If this is too unconstrained, one may also specify the types of the input argument list and return type. For example, given: def sum(a: int, b: int) -> int: return a+b The corresponding annotation is: Callable[[int, int], int] That is, the parameters are sub-scripted in the outer subscription with the return type as the second element in the outer subscription. In general: Callable[[ParamType1, ParamType2, ..., ParamTypeN], ReturnType]"} +{"question_id": 14770735, "score": 690, "creation_date": 1360320117, "tags": ["python", "matplotlib", "subplot", "figure"], "instruction": "How do I change the figure size with subplots?\n\nHow do I increase the figure size for this figure? This does nothing: f.figsize(15, 15) Example code from the link: import matplotlib.pyplot as plt import numpy as np # Simple data to display in various forms x = np.linspace(0, 2 * np.pi, 400) y = np.sin(x ** 2) plt.close('all') # Just a figure and one subplot f, ax = plt.subplots() ax.plot(x, y) ax.set_title('Simple plot') # Two subplots, the axes array is 1-d f, axarr = plt.subplots(2, sharex=True) axarr[0].plot(x, y) axarr[0].set_title('Sharing X axis') axarr[1].scatter(x, y) # Two subplots, unpack the axes array immediately f, (ax1, ax2) = plt.subplots(1, 2, sharey=True) ax1.plot(x, y) ax1.set_title('Sharing Y axis') ax2.scatter(x, y) # Three subplots sharing both x/y axes f, (ax1, ax2, ax3) = plt.subplots(3, sharex=True, sharey=True) ax1.plot(x, y) ax1.set_title('Sharing both axes') ax2.scatter(x, y) ax3.scatter(x, 2 * y ** 2 - 1, color='r') # Fine-tune figure; make subplots close to each other and hide x ticks for # all but bottom plot. f.subplots_adjust(hspace=0) plt.setp([a.get_xticklabels() for a in f.axes[:-1]], visible=False) # row and column sharing f, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, sharex='col', sharey='row') ax1.plot(x, y) ax1.set_title('Sharing x per column, y per row') ax2.scatter(x, y) ax3.scatter(x, 2 * y ** 2 - 1, color='r') ax4.plot(x, 2 * y ** 2 - 1, color='r') # Four axes, returned as a 2-d array f, axarr = plt.subplots(2, 2) axarr[0, 0].plot(x, y) axarr[0, 0].set_title('Axis [0,0]') axarr[0, 1].scatter(x, y) axarr[0, 1].set_title('Axis [0,1]') axarr[1, 0].plot(x, y ** 2) axarr[1, 0].set_title('Axis [1,0]') axarr[1, 1].scatter(x, y ** 2) axarr[1, 1].set_title('Axis [1,1]') # Fine-tune figure; hide x ticks for top plots and y ticks for right plots plt.setp([a.get_xticklabels() for a in axarr[0, :]], visible=False) plt.setp([a.get_yticklabels() for a in axarr[:, 1]], visible=False) # Four polar axes f, axarr = plt.subplots(2, 2, subplot_kw=dict(projection='polar')) axarr[0, 0].plot(x, y) axarr[0, 0].set_title('Axis [0,0]') axarr[0, 1].scatter(x, y) axarr[0, 1].set_title('Axis [0,1]') axarr[1, 0].plot(x, y ** 2) axarr[1, 0].set_title('Axis [1,0]') axarr[1, 1].scatter(x, y ** 2) axarr[1, 1].set_title('Axis [1,1]') # Fine-tune figure; make subplots farther from each other. f.subplots_adjust(hspace=0.3) plt.show()", "output": "Use .set_figwidth and .set_figheight on the matplotlib.figure.Figure object returned by plt.subplots(), or set both with f.set_size_inches(w, h). 
f.set_figheight(15) f.set_figwidth(15) Note: Unlike set_size_inches(), where the measurement unit is explicitly mentioned in the function's name, this is not the case for set_figwidth() and set_figheight(), which also use inches. This information is provided by the documentation of the function. Alternatively, when using .subplots() to create a new figure, specify figsize=: f, axs = plt.subplots(2, 2, figsize=(15, 15)) .subplots accepts **fig_kw, which are passed to pyplot.figure, and is where figsize can be found. Setting the figure's size may trigger the ValueError exception: Image size of 240000x180000 pixels is too large. It must be less than 2^16 in each direction This is a common problem for using the set_fig*() functions due to the assumptions that they work with pixels and not inches (obviously 240000*180000 inches is too much)."} +{"question_id": 4455076, "score": 687, "creation_date": 1292448461, "tags": ["python", "arrays", "numpy", "indexing"], "instruction": "How do I access the ith column of a NumPy multidimensional array?\n\nGiven: test = np.array([[1, 2], [3, 4], [5, 6]]) test[i] gives the ith row (e.g. [1, 2]). How do I access the ith column? (e.g. [1, 3, 5]). Also, would this be an expensive operation?", "output": "With: test = np.array([[1, 2], [3, 4], [5, 6]]) To access column 0: >>> test[:, 0] array([1, 3, 5]) To access row 0: >>> test[0, :] array([1, 2]) This is covered in Section 1.4 (Indexing) of the NumPy reference. This is quick, at least in my experience. It's certainly much quicker than accessing each element in a loop."} +{"question_id": 16573332, "score": 687, "creation_date": 1368645764, "tags": ["python", "json", "curl"], "instruction": "JSONDecodeError: Expecting value: line 1 column 1 (char 0)\n\nI am getting error Expecting value: line 1 column 1 (char 0) when trying to decode JSON. The URL I use for the API call works fine in the browser, but gives this error when done through a curl request. The following is the code I use for the curl request. The error happens at return simplejson.loads(response_json) response_json = self.web_fetch(url) response_json = response_json.decode('utf-8') return json.loads(response_json) def web_fetch(self, url): buffer = StringIO() curl = pycurl.Curl() curl.setopt(curl.URL, url) curl.setopt(curl.TIMEOUT, self.timeout) curl.setopt(curl.WRITEFUNCTION, buffer.write) curl.perform() curl.close() response = buffer.getvalue().strip() return response Traceback: File \"/Users/nab/Desktop/myenv2/lib/python2.7/site-packages/django/core/handlers/base.py\" in get_response 111. response = callback(request, *callback_args, **callback_kwargs) File \"/Users/nab/Desktop/pricestore/pricemodels/views.py\" in view_category 620. apicall=api.API().search_parts(category_id= str(categoryofpart.api_id), manufacturer = manufacturer, filter = filters, start=(catpage-1)*20, limit=20, sort_by='[[\"mpn\",\"asc\"]]') File \"/Users/nab/Desktop/pricestore/pricemodels/api.py\" in search_parts 176. return simplejson.loads(response_json) File \"/Users/nab/Desktop/myenv2/lib/python2.7/site-packages/simplejson/__init__.py\" in loads 455. return _default_decoder.decode(s) File \"/Users/nab/Desktop/myenv2/lib/python2.7/site-packages/simplejson/decoder.py\" in decode 374. obj, end = self.raw_decode(s) File \"/Users/nab/Desktop/myenv2/lib/python2.7/site-packages/simplejson/decoder.py\" in raw_decode 393. 
return self.scan_once(s, idx=_w(s, idx).end()) Exception Type: JSONDecodeError at /pricemodels/2/dir/ Exception Value: Expecting value: line 1 column 1 (char 0)", "output": "Your code produced an empty response body; you'd want to check for that or catch the exception raised. It is possible the server responded with a 204 No Content response, or a non-200-range status code was returned (404 Not Found, etc.). Check for this. Note: There is no need to decode a response from UTF8 to Unicode, the json.loads() method can handle UTF8-encoded data natively. pycurl has a very archaic API. Unless you have a specific requirement for using it, there are better choices. Either requests or httpx offer much friendlier APIs, including JSON support. If you can, replace your call with the following httpx code: import httpx response = httpx.get(url) response.raise_for_status() # raises exception when not a 2xx response if response.status_code != 204: return response.json() Of course, this won't protect you from a URL that doesn't comply with HTTP standards; when using arbitrary URLs where this is a possibility, check if the server intended to give you JSON by checking the Content-Type header, and for good measure catch the exception: if ( response.status_code != 204 and response.headers[\"content-type\"].strip().startswith(\"application/json\") ): try: return response.json() except ValueError: # decide how to handle a server that's misbehaving to this extent"} +{"question_id": 6893968, "score": 681, "creation_date": 1312168830, "tags": ["python", "multithreading", "function", "return-value", "python-multithreading"], "instruction": "How to get the return value from a thread?\n\nThe function foo below returns a string 'foo'. How can I get the value 'foo' which is returned from the thread's target? from threading import Thread def foo(bar): print('hello {}'.format(bar)) return 'foo' thread = Thread(target=foo, args=('world!',)) thread.start() return_value = thread.join() The \"one obvious way to do it\", shown above, doesn't work: thread.join() returned None.", "output": "In Python 3.2+, stdlib concurrent.futures module provides a higher level API to threading, including passing return values or exceptions from a worker thread back to the main thread. You can call the result() method on a Future instance, and it will wait until the thread is completed before returning the result value of the thread's function. 
import concurrent.futures def foo(bar): print('hello {}'.format(bar)) return 'foo' with concurrent.futures.ThreadPoolExecutor() as executor: future = executor.submit(foo, 'world!') return_value = future.result() print(return_value)"} +{"question_id": 13295735, "score": 680, "creation_date": 1352400639, "tags": ["python", "pandas", "dataframe", "nan", "fillna"], "instruction": "How to replace NaN values in a dataframe column\n\nI have a Pandas Dataframe as below: itm Date Amount 67 420 2012-09-30 00:00:00 65211 68 421 2012-09-09 00:00:00 29424 69 421 2012-09-16 00:00:00 29877 70 421 2012-09-23 00:00:00 30990 71 421 2012-09-30 00:00:00 61303 72 485 2012-09-09 00:00:00 71781 73 485 2012-09-16 00:00:00 NaN 74 485 2012-09-23 00:00:00 11072 75 485 2012-09-30 00:00:00 113702 76 489 2012-09-09 00:00:00 64731 77 489 2012-09-16 00:00:00 NaN When I try to apply a function to the Amount column, I get the following error: ValueError: cannot convert float NaN to integer I have tried applying a function using math.isnan, pandas' .replace method, .sparse data attribute from pandas 0.9, if NaN == NaN statement in a function; I have also looked at this Q/A; none of them works. How do I do it?", "output": "DataFrame.fillna() or Series.fillna() will do this for you. Example: In [7]: df Out[7]: 0 1 0 NaN NaN 1 -0.494375 0.570994 2 NaN NaN 3 1.876360 -0.229738 4 NaN NaN In [8]: df.fillna(0) Out[8]: 0 1 0 0.000000 0.000000 1 -0.494375 0.570994 2 0.000000 0.000000 3 1.876360 -0.229738 4 0.000000 0.000000 To fill the NaNs in only one column, select just that column. In [12]: df[1] = df[1].fillna(0) In [13]: df Out[13]: 0 1 0 NaN 0.000000 1 -0.494375 0.570994 2 NaN 0.000000 3 1.876360 -0.229738 4 NaN 0.000000 Or you can use the built in column-specific functionality: df = df.fillna({1: 0})"} +{"question_id": 3389574, "score": 677, "creation_date": 1280765420, "tags": ["python", "list", "contains"], "instruction": "Check if multiple strings exist in another string\n\nHow can I check if any of the strings in an array exists in another string? For example: a = ['a', 'b', 'c'] s = \"a123\" if a in s: print(\"some of the strings found in s\") else: print(\"no strings found in s\") How can I replace the if a in s: line to get the appropriate result?", "output": "You can use any: a_string = \"A string is more than its parts!\" matches = [\"more\", \"wholesome\", \"milk\"] if any(x in a_string for x in matches): Similarly to check if all the strings from the list are found, use all instead of any."} +{"question_id": 14827650, "score": 668, "creation_date": 1360654649, "tags": ["python", "matplotlib", "google-maps-markers", "scatter-plot"], "instruction": "pyplot scatter plot marker size\n\nIn the pyplot document for scatter plot: matplotlib.pyplot.scatter(x, y, s=20, c='b', marker='o', cmap=None, norm=None, vmin=None, vmax=None, alpha=None, linewidths=None, faceted=True, verts=None, hold=None, **kwargs) The marker size s: size in points^2. It is a scalar or an array of the same length as x and y. What kind of unit is points^2? What does it mean? Does s=100 mean 10 pixel x 10 pixel? 
Basically I'm trying to make scatter plots with different marker sizes, and I want to figure out what does the s number mean.", "output": "s is an area s is an area (measured in pt2), and is the square of a length (measured in pt): s = area = length**2 For example, these are all proportional to length**2: square_area = w**2 --> 1 * length**2 circle_area = \u03c0 * r**2 --> \u03c0/4 * length**2 To double the width and height of any marker, multiply s by a factor of 4: A = W * H --> (2W) * (2H) = 4A (Warning: s is proportional to the marker's shaded area, but is usually not equal to it.) Why is it like this? There is a reason, however, that the size of markers is defined in this way. Because of the scaling of area as the square of width, doubling the width actually appears to increase the size by more than a factor 2 (in fact it increases it by a factor of 4). To see this consider the following two examples and the output they produce. # doubling the width of markers x = [0,2,4,6,8,10] y = [0]*len(x) s = [20*4**n for n in range(len(x))] plt.scatter(x,y,s=s) plt.show() gives Notice how the size increases very quickly. If instead we have # doubling the area of markers x = [0,2,4,6,8,10] y = [0]*len(x) s = [20*2**n for n in range(len(x))] plt.scatter(x,y,s=s) plt.show() gives Now the apparent size of the markers increases roughly linearly in an intuitive fashion. As for the exact meaning of what a 'point' is, it is fairly arbitrary for plotting purposes, you can just scale all of your sizes by a constant until they look reasonable. Edit: (In response to comment from @Emma) It's probably confusing wording on my part. The question asked about doubling the width of a circle so in the first picture for each circle (as we move from left to right) it's width is double the previous one so for the area this is an exponential with base 4. Similarly the second example each circle has area double the last one which gives an exponential with base 2. However it is the second example (where we are scaling area) that doubling area appears to make the circle twice as big to the eye. Thus if we want a circle to appear a factor of n bigger we would increase the area by a factor n not the radius so the apparent size scales linearly with the area. Edit to visualize the comment by @TomaszGandor: This is what it looks like for different functions of the marker size: x = [0,2,4,6,8,10,12,14,16,18] s_exp = [20*2**n for n in range(len(x))] s_square = [20*n**2 for n in range(len(x))] s_linear = [20*n for n in range(len(x))] plt.scatter(x,[1]*len(x),s=s_exp, label='$s=2^n$', lw=1) plt.scatter(x,[0]*len(x),s=s_square, label='$s=n^2$') plt.scatter(x,[-1]*len(x),s=s_linear, label='$s=n$') plt.ylim(-1.5,1.5) plt.legend(loc='center left', bbox_to_anchor=(1.1, 0.5), labelspacing=3) plt.show()"} +{"question_id": 273946, "score": 664, "creation_date": 1226099284, "tags": ["python", "image", "python-imaging-library", "thumbnails"], "instruction": "How do I resize an image using PIL and maintain its aspect ratio?\n\nIs there an obvious way to do this that I'm missing? I'm just trying to make thumbnails.", "output": "Define a maximum size. Then, compute a resize ratio by taking min(maxwidth/width, maxheight/height). The proper size is oldsize*ratio. There is of course also a library method to do this: the method Image.thumbnail. Below is an (edited) example from the PIL documentation. 
import os, sys from PIL import Image size = 128, 128 for infile in sys.argv[1:]: outfile = os.path.splitext(infile)[0] + \".thumbnail\" if infile != outfile: try: im = Image.open(infile) im.thumbnail(size, Image.Resampling.LANCZOS) im.save(outfile, \"JPEG\") except IOError: print(\"cannot create thumbnail for '%s'\" % infile)"} +{"question_id": 2504411, "score": 658, "creation_date": 1269387328, "tags": ["python", "string"], "instruction": "Proper indentation for multiline strings?\n\nWhat is the proper indentation for Python multiline strings within a function? def method(): string = \"\"\"line one line two line three\"\"\" or def method(): string = \"\"\"line one line two line three\"\"\" or something else? It looks kind of weird to have the string hanging outside the function in the first example.", "output": "You probably want to line up with the \"\"\" def foo(): string = \"\"\"line one line two line three\"\"\" Since the newlines and spaces are included in the string itself, you will have to postprocess it. If you don't want to do that and you have a whole lot of text, you might want to store it separately in a text file. If a text file does not work well for your application and you don't want to postprocess, I'd probably go with def foo(): string = (\"this is an \" \"implicitly joined \" \"string\") If you want to postprocess a multiline string to trim out the parts you don't need, you should consider the textwrap module or the technique for postprocessing docstrings presented in PEP 257: def trim(docstring): import sys if not docstring: return '' # Convert tabs to spaces (following the normal Python rules) # and split into a list of lines: lines = docstring.expandtabs().splitlines() # Determine minimum indentation (first line doesn't count): indent = sys.maxint for line in lines[1:]: stripped = line.lstrip() if stripped: indent = min(indent, len(line) - len(stripped)) # Remove indentation (first line is special): trimmed = [lines[0].strip()] if indent < sys.maxint: for line in lines[1:]: trimmed.append(line[indent:].rstrip()) # Strip off trailing and leading blank lines: while trimmed and not trimmed[-1]: trimmed.pop() while trimmed and not trimmed[0]: trimmed.pop(0) # Return a single string: return '\\n'.join(trimmed)"} +{"question_id": 893333, "score": 657, "creation_date": 1242917487, "tags": ["python", "with-statement"], "instruction": "Multiple variables in a 'with' statement?\n\nIs it possible to declare more than one variable using a with statement in Python? Something like: from __future__ import with_statement with open(\"out.txt\",\"wt\"), open(\"in.txt\") as file_out, file_in: for line in file_in: file_out.write(line) ... or is cleaning up two resources at the same time the problem?", "output": "It is possible in Python 3 since v3.1, and in Python 2.7. The new with syntax supports multiple context managers: with A() as a, B() as b, C() as c: doSomething(a,b,c) Unlike contextlib.nested, this guarantees that a and b will have their __exit__()'s called even if C() or its __enter__() method raises an exception.
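Applied to the file-copying snippet from the question, that syntax gives:

with open('out.txt', 'wt') as file_out, open('in.txt') as file_in:
    for line in file_in:
        file_out.write(line)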
You can also use earlier variables in later definitions (h/t Ahmad below): with A() as a, B(a) as b, C(a, b) as c: doSomething(a, c) As of Python 3.9 (however only officially reported in 3.10), you can use parentheses: with ( A() as a, B(a) as b, C(a, b) as c, ): doSomething(a, c)"} +{"question_id": 16694907, "score": 655, "creation_date": 1369234057, "tags": ["python", "download", "stream", "python-requests"], "instruction": "Download a large file in Python with Requests\n\nRequests is a really nice library. I'd like to use it for downloading big files (greater than 1 GB). The problem is it's not possible to keep the whole file in memory; I need to read it in chunks. And this is a problem with the following code: import requests def DownloadFile(url) local_filename = url.split('/')[-1] r = requests.get(url) f = open(local_filename, 'wb') for chunk in r.iter_content(chunk_size=512 * 1024): if chunk: # filter out keep-alive new chunks f.write(chunk) f.close() return For some reason, it doesn't work this way; it still loads the response into memory before it is saved to a file.", "output": "The missing piece is to turn on streaming with stream=True\u2014that's what tells Requests not to read the whole content into memory before you have a chance to look at it. With the following streaming code, the Python memory usage is restricted regardless of the size of the downloaded file: def download_file(url): local_filename = url.split('/')[-1] # NOTE the stream=True parameter below with requests.get(url, stream=True) as r: r.raise_for_status() with open(local_filename, 'wb') as f: for chunk in r.iter_content(chunk_size=8192): # If you have chunk encoded response uncomment if # and set chunk_size parameter to None. #if chunk: f.write(chunk) return local_filename Note that the number of bytes returned using iter_content is not exactly the chunk_size; it's expected to be a random number that is often far bigger, and is expected to be different in every iteration. See body-content-workflow and Response.iter_content for further reference."} +{"question_id": 4564559, "score": 654, "creation_date": 1293728379, "tags": ["python", "python-2.7", "exception", "stack-trace"], "instruction": "Get exception description and stack trace which caused an exception, all as a string\n\nHow to convert a caught Exception (its description and stack trace) into a str for external use? try: method_that_can_raise_an_exception(params) except Exception as e: print(complete_exception_description(e))", "output": "See the traceback module, specifically the format_exc() function. Here. import traceback try: raise ValueError except ValueError: tb = traceback.format_exc() else: tb = \"No error\" finally: print(tb)"} +{"question_id": 8917885, "score": 650, "creation_date": 1326922993, "tags": ["python", "version", "windows-server"], "instruction": "Which version of Python do I have installed?\n\nI have to run a Python script on a Windows server. How can I know which version of Python I have, and does it even really matter? 
I was thinking of updating to the latest version of Python.", "output": "To check the version of one's Python's Software version, one should use the following code in command prompt: python -V Reference: http://docs.python.org/using/cmdline.html#generic-options"} +{"question_id": 26886653, "score": 649, "creation_date": 1415794092, "tags": ["python", "pandas", "dataframe", "conditional-statements", "switch-statement"], "instruction": "Create new column based on values from other columns / apply a function of multiple columns, row-wise in Pandas\n\nI want to apply my custom function (it uses an if-else ladder) to these six columns (ERI_Hispanic, ERI_AmerInd_AKNatv, ERI_Asian, ERI_Black_Afr.Amer, ERI_HI_PacIsl, ERI_White) in each row of my dataframe. I've tried different methods from other questions but still can't seem to find the right answer for my problem. The critical piece of this is that if the person is counted as Hispanic they can't be counted as anything else. Even if they have a \"1\" in another ethnicity column they still are counted as Hispanic not two or more races. Similarly, if the sum of all the ERI columns is greater than 1 they are counted as two or more races and can't be counted as a unique ethnicity(except for Hispanic). It's almost like doing a for loop through each row and if each record meets a criterion they are added to one list and eliminated from the original. From the dataframe below I need to calculate a new column based on the following spec in SQL: CRITERIA IF [ERI_Hispanic] = 1 THEN RETURN \u201cHispanic\u201d ELSE IF SUM([ERI_AmerInd_AKNatv] + [ERI_Asian] + [ERI_Black_Afr.Amer] + [ERI_HI_PacIsl] + [ERI_White]) > 1 THEN RETURN \u201cTwo or More\u201d ELSE IF [ERI_AmerInd_AKNatv] = 1 THEN RETURN \u201cA/I AK Native\u201d ELSE IF [ERI_Asian] = 1 THEN RETURN \u201cAsian\u201d ELSE IF [ERI_Black_Afr.Amer] = 1 THEN RETURN \u201cBlack/AA\u201d ELSE IF [ERI_HI_PacIsl] = 1 THEN RETURN \u201cHaw/Pac Isl.\u201d ELSE IF [ERI_White] = 1 THEN RETURN \u201cWhite\u201d Comment: If the ERI Flag for Hispanic is True (1), the employee is classified as \u201cHispanic\u201d Comment: If more than 1 non-Hispanic ERI Flag is true, return \u201cTwo or More\u201d DATAFRAME lname fname rno_cd eri_afr_amer eri_asian eri_hawaiian eri_hispanic eri_nat_amer eri_white rno_defined 0 MOST JEFF E 0 0 0 0 0 1 White 1 CRUISE TOM E 0 0 0 1 0 0 White 2 DEPP JOHNNY 0 0 0 0 0 1 Unknown 3 DICAP LEO 0 0 0 0 0 1 Unknown 4 BRANDO MARLON E 0 0 0 0 0 0 White 5 HANKS TOM 0 0 0 0 0 1 Unknown 6 DENIRO ROBERT E 0 1 0 0 0 1 White 7 PACINO AL E 0 0 0 0 0 1 White 8 WILLIAMS ROBIN E 0 0 1 0 0 0 White 9 EASTWOOD CLINT E 0 0 0 0 0 1 White", "output": "OK, two steps to this - first is to write a function that does the translation you want - I've put an example together based on your pseudo-code: def label_race(row): if row['eri_hispanic'] == 1: return 'Hispanic' if row['eri_afr_amer'] + row['eri_asian'] + row['eri_hawaiian'] + row['eri_nat_amer'] + row['eri_white'] > 1: return 'Two Or More' if row['eri_nat_amer'] == 1: return 'A/I AK Native' if row['eri_asian'] == 1: return 'Asian' if row['eri_afr_amer'] == 1: return 'Black/AA' if row['eri_hawaiian'] == 1: return 'Haw/Pac Isl.' if row['eri_white'] == 1: return 'White' return 'Other' You may want to go over this, but it seems to do the trick - notice that the parameter going into the function is considered to be a Series object labelled \"row\". Next, use the apply function in pandas to apply the function - e.g. 
df.apply(label_race, axis=1) Note the axis=1 specifier, that means that the application is done at a row, rather than a column level. The results are here: 0 White 1 Hispanic 2 White 3 White 4 Other 5 White 6 Two Or More 7 White 8 Haw/Pac Isl. 9 White If you're happy with those results, then run it again, saving the results into a new column in your original dataframe. df['race_label'] = df.apply(label_race, axis=1) The resultant dataframe looks like this (scroll to the right to see the new column): lname fname rno_cd eri_afr_amer eri_asian eri_hawaiian eri_hispanic eri_nat_amer eri_white rno_defined race_label 0 MOST JEFF E 0 0 0 0 0 1 White White 1 CRUISE TOM E 0 0 0 1 0 0 White Hispanic 2 DEPP JOHNNY NaN 0 0 0 0 0 1 Unknown White 3 DICAP LEO NaN 0 0 0 0 0 1 Unknown White 4 BRANDO MARLON E 0 0 0 0 0 0 White Other 5 HANKS TOM NaN 0 0 0 0 0 1 Unknown White 6 DENIRO ROBERT E 0 1 0 0 0 1 White Two Or More 7 PACINO AL E 0 0 0 0 0 1 White White 8 WILLIAMS ROBIN E 0 0 1 0 0 0 White Haw/Pac Isl. 9 EASTWOOD CLINT E 0 0 0 0 0 1 White White"} +{"question_id": 6344076, "score": 645, "creation_date": 1308057447, "tags": ["python", "packaging", "setuptools", "distutils", "distribute"], "instruction": "Differences between distribute, distutils, setuptools and distutils2?\n\nThe Situation I\u2019m trying to port an open-source library to Python 3. (SymPy, if anyone is wondering.) So, I need to run 2to3 automatically when building for Python 3. To do that, I need to use distribute. Therefore, I need to port the current system, which (according to the doctest) is distutils. The Problem Unfortunately, I\u2019m not sure what\u2019s the difference between these modules\u2014distutils, distribute, setuptools. The documentation is sketchy as best, as they all seem to be a fork of one another, intended to be compatible in most circumstances (but actually, not all)\u2026and so on, and so forth. The Question Could someone explain the differences? What am I supposed to use? What is the most modern solution? (As an aside, I\u2019d also appreciate some guide on porting to Distribute, but that\u2019s a tad beyond the scope of the question\u2026)", "output": "As of May 2022, most of the other answers to this question are several years out-of-date. When you come across advice on Python packaging issues, remember to look at the date of publication, and don't trust out-of-date information. The Python Packaging User Guide is worth a read. Every page has a \"last updated\" date displayed, so you can check the recency of the manual, and it's quite comprehensive. The fact that it's hosted on a subdomain of python.org of the Python Software Foundation just adds credence to it. The Project Summaries page is especially relevant here. Summary of tools: Here's a summary of the Python packaging landscape: Supported tools: setuptools was developed to overcome Distutils' limitations, and is not included in the standard library. It introduced a command-line utility called easy_install. It also introduced the setuptools Python package that can be imported in your setup.py script, and the pkg_resources Python package that can be imported in your code to locate data files installed with a distribution. One of its gotchas is that it monkey-patches the distutils Python package. It should work well with pip. It sees regular releases. Official docs | Pypi page | GitHub repo | setuptools section of Python Package User Guide scikit-build is an improved build system generator that internally uses CMake to build compiled Python extensions. 
Because scikit-build isn't based on distutils, it doesn't really have any of its limitations. When ninja-build is present, scikit-build can compile large projects over three times faster than the alternatives. It should work well with pip. Official docs | Pypi page | GitHub repo | scikit-build section of Python Package User Guide distlib is a library that provides functionality that is used by higher level tools like pip. Official Docs | Pypi page | Bitbucket repo | distlib section of Python Package User Guide packaging is also a library that provides functionality used by higher level tools like pip and setuptools Official Docs | Pypi page | GitHub repo | packaging section of Python Package User Guide Deprecated/abandoned tools: distutils is still included in the standard library of Python, but is considered deprecated as of Python 3.10. It is useful for simple Python distributions, but lacks features. It introduces the distutils Python package that can be imported in your setup.py script. Official docs | distutils section of Python Package User Guide distribute was a fork of setuptools. It shared the same namespace, so if you had Distribute installed, import setuptools would actually import the package distributed with Distribute. Distribute was merged back into Setuptools 0.7, so you don't need to use Distribute any more. In fact, the version on Pypi is just a compatibility layer that installs Setuptools. distutils2 was an attempt to take the best of distutils, setuptools and distribute and become the standard tool included in Python's standard library. The idea was that distutils2 would be distributed for old Python versions, and that distutils2 would be renamed to packaging for Python 3.3, which would include it in its standard library. These plans did not go as intended, however, and currently, distutils2 is an abandoned project. The latest release was in March 2012, and its Pypi home page has finally been updated to reflect its death. Others: There are other tools, if you are interested, read Project Summaries in the Python Packaging User Guide. I won't list them all, to not repeat that page, and to keep the answer matching the question, which was only about distribute, distutils, setuptools and distutils2. Recommendation: If all of this is new to you, and you don't know where to start, I would recommend learning setuptools, along with pip and virtualenv, which all work very well together. If you're looking into virtualenv, you might be interested in this question: What is the difference between venv, pyvenv, pyenv, virtualenv, virtualenvwrapper, etc?. (Yes, I know, I groan with you.)"} +{"question_id": 33225947, "score": 645, "creation_date": 1445299737, "tags": ["javascript", "python", "google-chrome", "selenium", "selenium-chromedriver"], "instruction": "Can a website detect when you are using Selenium with chromedriver?\n\nI've been testing out Selenium with Chromedriver and I noticed that some pages can detect that you're using Selenium even though there's no automation at all. Even when I'm just browsing manually just using Chrome through Selenium and Xephyr I often get a page saying that suspicious activity was detected. I've checked my user agent, and my browser fingerprint, and they are all exactly identical to the normal Chrome browser. When I browse to these sites in normal Chrome everything works fine, but the moment I use Selenium I'm detected. In theory, chromedriver and Chrome should look literally exactly the same to any web server, but somehow they can detect it. 
If you want some test code try out this: from pyvirtualdisplay import Display from selenium import webdriver display = Display(visible=1, size=(1600, 902)) display.start() chrome_options = webdriver.ChromeOptions() chrome_options.add_argument('--disable-extensions') chrome_options.add_argument('--profile-directory=Default') chrome_options.add_argument(\"--incognito\") chrome_options.add_argument(\"--disable-plugins-discovery\"); chrome_options.add_argument(\"--start-maximized\") driver = webdriver.Chrome(chrome_options=chrome_options) driver.delete_all_cookies() driver.set_window_size(800,800) driver.set_window_position(0,0) print 'arguments done' driver.get('http://stubhub.com') If you browse around stubhub you'll get redirected and 'blocked' within one or two requests. I've been investigating this and I can't figure out how they can tell that a user is using Selenium. How do they do it? I installed the Selenium IDE plugin in Firefox and I got banned when I went to stubhub.com in the normal Firefox browser with only the additional plugin. When I use Fiddler to view the HTTP requests being sent back and forth I've noticed that the 'fake browser's' requests often have 'no-cache' in the response header. Results like this Is there a way to detect that I'm in a Selenium Webdriver page from JavaScript? suggest that there should be no way to detect when you are using a webdriver. But this evidence suggests otherwise. The site uploads a fingerprint to their servers, but I checked and the fingerprint of Selenium is identical to the fingerprint when using Chrome. This is one of the fingerprint payloads that they send to their servers: {\"appName\":\"Netscape\",\"platform\":\"Linuxx86_64\",\"cookies\":1,\"syslang\":\"en-US\",\"userlang\":\"en- US\",\"cpu\":\"\",\"productSub\":\"20030107\",\"setTimeout\":1,\"setInterval\":1,\"plugins\": {\"0\":\"ChromePDFViewer\",\"1\":\"ShockwaveFlash\",\"2\":\"WidevineContentDecryptionMo dule\",\"3\":\"NativeClient\",\"4\":\"ChromePDFViewer\"},\"mimeTypes\": {\"0\":\"application/pdf\",\"1\":\"ShockwaveFlashapplication/x-shockwave- flash\",\"2\":\"FutureSplashPlayerapplication/futuresplash\",\"3\":\"WidevineContent DecryptionModuleapplication/x-ppapi-widevine- cdm\",\"4\":\"NativeClientExecutableapplication/x- nacl\",\"5\":\"PortableNativeClientExecutableapplication/x- pnacl\",\"6\":\"PortableDocumentFormatapplication/x-google-chrome- pdf\"},\"screen\":{\"width\":1600,\"height\":900,\"colorDepth\":24},\"fonts\": {\"0\":\"monospace\",\"1\":\"DejaVuSerif\",\"2\":\"Georgia\",\"3\":\"DejaVuSans\",\"4\":\"Trebu chetMS\",\"5\":\"Verdana\",\"6\":\"AndaleMono\",\"7\":\"DejaVuSansMono\",\"8\":\"LiberationM ono\",\"9\":\"NimbusMonoL\",\"10\":\"CourierNew\",\"11\":\"Courier\"}} It's identical in Selenium and in Chrome. VPNs work for a single use, but they get detected after I load the first page. Clearly some JavaScript code is being run to detect Selenium.", "output": "Replacing cdc_ string You can use Vim or Perl to replace the cdc_ string in chromedriver. See the answer by @Erti-Chris Eelmaa to learn more about that string and how it's a detection point. Using Vim or Perl prevents you from having to recompile source code or use a hex editor. Make sure to make a copy of the original chromedriver before attempting to edit it. Our goal is to alter the cdc_ string, which looks something like $cdc_lasutopfhvcZLmcfl. The methods below were tested on chromedriver version 2.41.578706. 
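If you would rather script the replacement than open an editor, a rough Python sketch of the same idea is shown here (treat it as an untested illustration: the paths are placeholders, and as noted the replacement must be exactly as long as cdc_):
with open('/path/to/chromedriver', 'rb') as f: data = f.read()
# same-length swap: 'dog_' has exactly as many characters as 'cdc_'
with open('/path/to/chromedriver_patched', 'wb') as f: f.write(data.replace(b'cdc_', b'dog_'))
The verification steps in the Wrapping Up section below apply to the patched copy as well.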
Using Vim vim -b /path/to/chromedriver After running the line above, you'll probably see a bunch of gibberish. Do the following: Replace all instances of cdc_ with dog_ by typing :%s/cdc_/dog_/g. dog_ is just an example. You can choose anything as long as it has the same amount of characters as the search string (e.g., cdc_), otherwise the chromedriver will fail. To save the changes and quit, type :wq! and press return. If you need to quit without saving changes, type :q! and press return. The -b option tells vim upfront to open the file as a binary, so it won't mess with things like (missing) line endings (especially at the end of the file). Using Perl The line below replaces all cdc_ occurrences with dog_. Credit to Vic Seedoubleyew: perl -pi -e 's/cdc_/dog_/g' /path/to/chromedriver Make sure that the replacement string (e.g., dog_) has the same number of characters as the search string (e.g., cdc_), otherwise the chromedriver will fail. Wrapping Up To verify that all occurrences of cdc_ were replaced: grep \"cdc_\" /path/to/chromedriver If no output was returned, the replacement was successful. Go to the altered chromedriver and double click on it. A terminal window should open up. If you don't see killed in the output, you've successfully altered the driver. Make sure that the name of the altered chromedriver binary is chromedriver, and that the original binary is either moved from its original location or renamed. My Experience With This Method I was previously being detected on a website while trying to log in, but after replacing cdc_ with an equal sized string, I was able to log in. Like others have said though, if you've already been detected, you might get blocked for a plethora of other reasons even after using this method. So you may have to try accessing the site that was detecting you using a VPN, different network, etc."} +{"question_id": 40208051, "score": 644, "creation_date": 1477258753, "tags": ["python", "selenium", "firefox", "selenium-firefoxdriver", "geckodriver"], "instruction": "Selenium using Python - Geckodriver executable needs to be in PATH\n\nI am going over Sweigart's Automate the Boring Stuff with Python text. I'm using IDLE and already installed the Selenium module and the Firefox browser. 
Whenever I tried to run the webdriver function, I get this: from selenium import webdriver browser = webdriver.Firefox() Exception: Exception ignored in: > Traceback (most recent call last): File \"C:\\Python\\Python35\\lib\\site-packages\\selenium\\webdriver\\common\\service.py\", line 163, in __del__ self.stop() File \"C:\\Python\\Python35\\lib\\site-packages\\selenium\\webdriver\\common\\service.py\", line 135, in stop if self.process is None: AttributeError: 'Service' object has no attribute 'process' Exception ignored in: > Traceback (most recent call last): File \"C:\\Python\\Python35\\lib\\site-packages\\selenium\\webdriver\\common\\service.py\", line 163, in __del__ self.stop() File \"C:\\Python\\Python35\\lib\\site-packages\\selenium\\webdriver\\common\\service.py\", line 135, in stop if self.process is None: AttributeError: 'Service' object has no attribute 'process' Traceback (most recent call last): File \"C:\\Python\\Python35\\lib\\site-packages\\selenium\\webdriver\\common\\service.py\", line 64, in start stdout=self.log_file, stderr=self.log_file) File \"C:\\Python\\Python35\\lib\\subprocess.py\", line 947, in __init__ restore_signals, start_new_session) File \"C:\\Python\\Python35\\lib\\subprocess.py\", line 1224, in _execute_child startupinfo) FileNotFoundError: [WinError 2] The system cannot find the file specified During handling of the above exception, another exception occurred: Traceback (most recent call last): File \"\", line 1, in browser = webdriver.Firefox() File \"C:\\Python\\Python35\\lib\\site-packages\\selenium\\webdriver\\firefox\\webdriver.py\", line 135, in __init__ self.service.start() File \"C:\\Python\\Python35\\lib\\site-packages\\selenium\\webdriver\\common\\service.py\", line 71, in start os.path.basename(self.path), self.start_error_message) selenium.common.exceptions.WebDriverException: Message: 'geckodriver' executable needs to be in PATH. I think I need to set the path for geckodriver, but I am not sure how, so how would I do this?", "output": "selenium.common.exceptions.WebDriverException: Message: 'geckodriver' executable needs to be in PATH. First of all you will need to download latest executable geckodriver from here to run latest Firefox using Selenium Actually, the Selenium client bindings tries to locate the geckodriver executable from the system PATH. You will need to add the directory containing the executable to the system path. On Unix systems you can do the following to append it to your system\u2019s search path, if you\u2019re using a Bash-compatible shell: export PATH=$PATH:/path/to/directory/of/executable/downloaded/in/previous/step On Windows you will need to update the Path system variable to add the full directory path to the executable geckodriver manually or command line** (don't forget to restart your system after adding executable geckodriver into system PATH to take effect)**. The principle is the same as on Unix. Now you can run your code same as you're doing as below :- from selenium import webdriver browser = webdriver.Firefox() selenium.common.exceptions.WebDriverException: Message: Expected browser binary location, but unable to find binary in default location, no 'moz:firefoxOptions.binary' capability provided, and no binary flag set on the command line The exception clearly states you have installed Firefox some other location while Selenium is trying to find Firefox and launch from the default location, but it couldn't find it. 
You need to provide explicitly Firefox installed binary location to launch Firefox as below :- from selenium import webdriver from selenium.webdriver.firefox.firefox_binary import FirefoxBinary binary = FirefoxBinary('path/to/installed firefox binary') browser = webdriver.Firefox(firefox_binary=binary)"} +{"question_id": 2361426, "score": 643, "creation_date": 1267513889, "tags": ["python", "iterator"], "instruction": "Get the first item from an iterable that matches a condition\n\nI would like to get the first item from a list matching a condition. It's important that the resulting method not process the entire list, which could be quite large. For example, the following function is adequate: def first(the_iterable, condition = lambda x: True): for i in the_iterable: if condition(i): return i This function could be used something like this: >>> first(range(10)) 0 >>> first(range(10), lambda i: i > 3) 4 However, I can't think of a good built-in / one-liner to let me do this. I don't particularly want to copy this function around if I don't have to. Is there a built-in way to get the first item matching a condition?", "output": "Python 2.6+ and Python 3: If you want StopIteration to be raised if no matching element is found: next(x for x in the_iterable if x > 3) If you want default_value (e.g. None) to be returned instead: next((x for x in the_iterable if x > 3), default_value) Note that you need an extra pair of parentheses around the generator expression in this case \u2212 they are needed whenever the generator expression isn't the only argument. I see most answers resolutely ignore the next built-in and so I assume that for some mysterious reason they're 100% focused on versions 2.5 and older -- without mentioning the Python-version issue (but then I don't see that mention in the answers that do mention the next built-in, which is why I thought it necessary to provide an answer myself -- at least the \"correct version\" issue gets on record this way;-). Python <= 2.5 The .next() method of iterators immediately raises StopIteration if the iterator immediately finishes -- i.e., for your use case, if no item in the iterable satisfies the condition. If you don't care (i.e., you know there must be at least one satisfactory item) then just use .next() (best on a genexp, line for the next built-in in Python 2.6 and better). If you do care, wrapping things in a function as you had first indicated in your Q seems best, and while the function implementation you proposed is just fine, you could alternatively use itertools, a for...: break loop, or a genexp, or a try/except StopIteration as the function's body, as various answers suggested. There's not much added value in any of these alternatives so I'd go for the starkly-simple version you first proposed."} +{"question_id": 48152674, "score": 642, "creation_date": 1515423013, "tags": ["python", "memory-management", "gpu", "nvidia", "pytorch"], "instruction": "How do I check if PyTorch is using the GPU?\n\nHow do I check if PyTorch is using the GPU? The nvidia-smi command can detect GPU activity, but I want to check it directly from inside a Python script.", "output": "These functions should help: >>> import torch >>> torch.cuda.is_available() True >>> torch.cuda.device_count() 1 >>> torch.cuda.current_device() 0 >>> torch.cuda.device(0) >>> torch.cuda.get_device_name(0) 'GeForce GTX 950M' This tells us: CUDA is available and can be used by one device. 
Device 0 refers to the GPU GeForce GTX 950M, and it is currently chosen by PyTorch."} +{"question_id": 42231764, "score": 642, "creation_date": 1487091076, "tags": ["python", "anaconda", "conda"], "instruction": "How can I rename a conda environment?\n\nI have a conda environment named old_name, how can I change its name to new_name without breaking references?", "output": "New answer: From Conda 4.14 you will be able to use just: conda rename -n old_name new_name Although, under the hood, conda rename still uses [1][2] the undermentioned combination of conda create and conda remove. Use the -d flag for dry-run (not destination, as of v22.11.0) conda rename -n old_name -d new_name Old answer: You can't. One workaround is to clone it to a new environment and then remove the original one. First, remember to deactivate your current environment. You can do this with the commands: deactivate on Windows or source deactivate on macOS/Linux. Then: conda create --name new_name --clone old_name conda remove --name old_name --all # or its alias: `conda env remove --name old_name` Notice there are several drawbacks of this method: It redownloads packages (you can use the --offline flag to disable it) Time consumed on copying the environment's files Temporary double disk usage There is an open issue requesting this feature."} +{"question_id": 150505, "score": 641, "creation_date": 1222720195, "tags": ["python", "django", "url", "get", "url-parameters"], "instruction": "How to get GET request values in Django?\n\nI am currently defining regular expressions in order to capture parameters in a URL, as described in the tutorial. How do I access parameters from the URL as part of the HttpRequest object? My HttpRequest.GET currently returns an empty QueryDict object. I'd like to learn how to do this without a library, so I can get to know Django better.", "output": "Given the URL domain/search/?q=haha, use: request.GET.get('q', 'default'). q is the parameter, and 'default' is the default value if q isn't found. However, if you are instead just configuring your URLconf, then your captures from the regex are passed to the function as arguments (or named arguments). Such as: (r'^user/(?P<username>\\w{0,50})/$', views.profile_page,), Then in your views.py you would have def profile_page(request, username): # Rest of the method"} +{"question_id": 72852, "score": 632, "creation_date": 1221575042, "tags": ["python", "python-import", "python-module"], "instruction": "How can I do relative imports in Python?\n\nImagine this directory structure: app/ __init__.py sub1/ __init__.py mod1.py sub2/ __init__.py mod2.py I'm coding mod1, and I need to import something from mod2. How should I do it? I tried from ..sub2 import mod2, but I'm getting an \"Attempted relative import in non-package\". I googled around, but I found only \"sys.path manipulation\" hacks. Isn't there a clean way? All my __init__.py's are currently empty. I'm trying to do this because sub2 contains classes that are shared across sub packages (sub1, subX, etc.). The behaviour I'm looking for is the same as described in PEP 366 (thanks John B).", "output": "The problem is that you're running the module as '__main__' by passing mod1.py as an argument to the interpreter. From PEP 328: Relative imports use a module's __name__ attribute to determine that module's position in the package hierarchy. If the module's name does not contain any package information (e.g.
it is set to '__main__') then relative imports are resolved as if the module were a top level module, regardless of where the module is actually located on the file system. In Python 2.6, they're adding the ability to reference modules relative to the main module. PEP 366 describes the change."} +{"question_id": 47152691, "score": 631, "creation_date": 1510041628, "tags": ["python", "pandas", "group-by", "pivot", "pivot-table"], "instruction": "How can I pivot a dataframe?\n\nHow do I pivot the pandas dataframe below such that the col values become columns, row values become the index, and mean of val0 becomes the values? (In some cases this is called transforming from long-format to wide-format.) Consider a dataframe df with columns 'key', 'row', 'item', 'col', and random float values 'val0', 'val1'. I conspicuously named the columns and relevant column values to correspond with how I want to pivot them. (Setup code at bottom.) key row item col val0 val1 0 key0 row3 item1 col3 0.81 0.04 1 key1 row2 item1 col2 0.44 0.07 2 key1 row0 item1 col0 0.77 0.01 3 key0 row4 item0 col2 0.15 0.59 4 key1 row0 item2 col1 0.81 0.64 5 key1 row2 item2 col4 0.13 0.88 6 key2 row4 item1 col3 0.88 0.39 7 key1 row4 item1 col1 0.10 0.07 8 key1 row0 item2 col4 0.65 0.02 9 key1 row2 item0 col2 0.35 0.61 10 key2 row0 item2 col1 0.40 0.85 11 key2 row4 item1 col2 0.64 0.25 12 key0 row2 item2 col3 0.50 0.44 13 key0 row4 item1 col4 0.24 0.46 14 key1 row3 item2 col3 0.28 0.11 15 key0 row3 item1 col1 0.31 0.23 16 key0 row0 item2 col3 0.86 0.01 17 key0 row4 item0 col3 0.64 0.21 18 key2 row2 item2 col0 0.13 0.45 19 key0 row2 item0 col4 0.37 0.70 Subquestions How to avoid getting ValueError: Index contains duplicate entries, cannot reshape? How do I pivot df such that the col values become columns, row values become the index, and mean of val0 are the values? col col0 col1 col2 col3 col4 row row0 0.77 0.605 NaN 0.860 0.65 row2 0.13 NaN 0.395 0.500 0.25 row3 NaN 0.310 NaN 0.545 NaN row4 NaN 0.100 0.395 0.760 0.24 How do I pivot... ... so that missing values are 0? col col0 col1 col2 col3 col4 row row0 0.77 0.605 0.000 0.860 0.65 row2 0.13 0.000 0.395 0.500 0.25 row3 0.00 0.310 0.000 0.545 0.00 row4 0.00 0.100 0.395 0.760 0.24 ... to do an aggregate function other than mean, like sum? col col0 col1 col2 col3 col4 row row0 0.77 1.21 0.00 0.86 0.65 row2 0.13 0.00 0.79 0.50 0.50 row3 0.00 0.31 0.00 1.09 0.00 row4 0.00 0.10 0.79 1.52 0.24 ... to do more that one aggregation at a time? sum mean col col0 col1 col2 col3 col4 col0 col1 col2 col3 col4 row row0 0.77 1.21 0.00 0.86 0.65 0.77 0.605 0.000 0.860 0.65 row2 0.13 0.00 0.79 0.50 0.50 0.13 0.000 0.395 0.500 0.25 row3 0.00 0.31 0.00 1.09 0.00 0.00 0.310 0.000 0.545 0.00 row4 0.00 0.10 0.79 1.52 0.24 0.00 0.100 0.395 0.760 0.24 ... to aggregate over multiple 'value' columns? val0 val1 col col0 col1 col2 col3 col4 col0 col1 col2 col3 col4 row row0 0.77 0.605 0.000 0.860 0.65 0.01 0.745 0.00 0.010 0.02 row2 0.13 0.000 0.395 0.500 0.25 0.45 0.000 0.34 0.440 0.79 row3 0.00 0.310 0.000 0.545 0.00 0.00 0.230 0.00 0.075 0.00 row4 0.00 0.100 0.395 0.760 0.24 0.00 0.070 0.42 0.300 0.46 ... to subdivide by multiple columns? (item0,item1,item2..., col0,col1,col2...) 
item item0 item1 item2 col col2 col3 col4 col0 col1 col2 col3 col4 col0 col1 col3 col4 row row0 0.00 0.00 0.00 0.77 0.00 0.00 0.00 0.00 0.00 0.605 0.86 0.65 row2 0.35 0.00 0.37 0.00 0.00 0.44 0.00 0.00 0.13 0.000 0.50 0.13 row3 0.00 0.00 0.00 0.00 0.31 0.00 0.81 0.00 0.00 0.000 0.28 0.00 row4 0.15 0.64 0.00 0.00 0.10 0.64 0.88 0.24 0.00 0.000 0.00 0.00 ... to subdivide by multiple rows: (key0,key1... row0,row1,row2...) item item0 item1 item2 col col2 col3 col4 col0 col1 col2 col3 col4 col0 col1 col3 col4 key row key0 row0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.86 0.00 row2 0.00 0.00 0.37 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.50 0.00 row3 0.00 0.00 0.00 0.00 0.31 0.00 0.81 0.00 0.00 0.00 0.00 0.00 row4 0.15 0.64 0.00 0.00 0.00 0.00 0.00 0.24 0.00 0.00 0.00 0.00 key1 row0 0.00 0.00 0.00 0.77 0.00 0.00 0.00 0.00 0.00 0.81 0.00 0.65 row2 0.35 0.00 0.00 0.00 0.00 0.44 0.00 0.00 0.00 0.00 0.00 0.13 row3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.28 0.00 row4 0.00 0.00 0.00 0.00 0.10 0.00 0.00 0.00 0.00 0.00 0.00 0.00 key2 row0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.40 0.00 0.00 row2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.13 0.00 0.00 0.00 row4 0.00 0.00 0.00 0.00 0.00 0.64 0.88 0.00 0.00 0.00 0.00 0.00 ... to aggregate the frequency in which the column and rows occur together, aka \"cross tabulation\"? col col0 col1 col2 col3 col4 row row0 1 2 0 1 1 row2 1 0 2 1 2 row3 0 1 0 2 0 row4 0 1 2 2 1 ... to convert a DataFrame from long-to-wide by pivoting on ONLY two columns? Given: np.random.seed([3, 1415]) df2 = pd.DataFrame({'A': list('aaaabbbc'), 'B': np.random.choice(15, 8)}) df2 A B 0 a 0 1 a 11 2 a 2 3 a 11 4 b 10 5 b 10 6 b 14 7 c 7 The expected should look something like a b c 0 0.0 10.0 7.0 1 11.0 10.0 NaN 2 2.0 14.0 NaN 3 11.0 NaN NaN How do I flatten the multi-index to single index after pivot? From: 1 2 1 1 2 a 2 1 1 b 2 1 0 c 1 0 0 To: 1|1 2|1 2|2 a 2 1 1 b 2 1 0 c 1 0 0 Setup import numpy as np import pandas as pd from numpy.core.defchararray import add np.random.seed([3,1415]) n = 20 cols = np.array(['key', 'row', 'item', 'col']) arr1 = (np.random.randint(5, size=(n, 4)) // [2, 1, 2, 1]).astype(str) df = pd.DataFrame( add(cols, arr1), columns=cols ).join( pd.DataFrame(np.random.rand(n, 2).round(2)).add_prefix('val') ) print(df) Why is this question not a duplicate? and more useful than the following autosuggestions: How to pivot a dataframe in Pandas? only covers the specific case of 'Country' to row-index, values of 'Indicator' for 'Year' to multiple columns and no aggregation of values. When pivoting a Pandas dataframe, how do I make the column names the same as in R? (flat, label in column names) asks how to pivot in pandas like in R, i.e. autogenerate an individual column for each value of strength... pandas pivoting a dataframe, duplicate rows asks about the syntax for pivoting multiple columns, without needing to list them all. None of the existing questions and answers are comprehensive, so this is an attempt at a canonical question and answer that encompasses all aspects of pivoting.", "output": "Here is a list of idioms we can use to pivot pd.DataFrame.pivot_table A glorified version of groupby with more intuitive API. For many people, this is the preferred approach. And it is the intended approach by the developers. Specify row level, column levels, values to be aggregated, and function(s) to perform aggregations. 
pd.DataFrame.groupby + pd.DataFrame.unstack Good general approach for doing just about any type of pivot You specify all columns that will constitute the pivoted row levels and column levels in one group by. You follow that by selecting the remaining columns you want to aggregate and the function(s) you want to perform the aggregation. Finally, you unstack the levels that you want to be in the column index. pd.DataFrame.set_index + pd.DataFrame.unstack Convenient and intuitive for some (myself included). Cannot handle duplicate grouped keys. Similar to the groupby paradigm, we specify all columns that will eventually be either row or column levels and set those to be the index. We then unstack the levels we want in the columns. If either the remaining index levels or column levels are not unique, this method will fail. pd.DataFrame.pivot Very similar to set_index in that it shares the duplicate key limitation. The API is very limited as well. It only takes scalar values for index, columns, values. Similar to the pivot_table method in that we select rows, columns, and values on which to pivot. However, we cannot aggregate and if either rows or columns are not unique, this method will fail. pd.crosstab This a specialized version of pivot_table and in its purest form is the most intuitive way to perform several tasks. pd.factorize + np.bincount This is a highly advanced technique that is very obscure but is very fast. It cannot be used in all circumstances, but when it can be used and you are comfortable using it, you will reap the performance rewards. pd.get_dummies + pd.DataFrame.dot I use this for cleverly performing cross tabulation. See also: Reshaping and pivot tables \u2014 pandas User Guide Question 1 Why do I get ValueError: Index contains duplicate entries, cannot reshape This occurs because pandas is attempting to reindex either a columns or index object with duplicate entries. There are varying methods to use that can perform a pivot. Some of them are not well suited to when there are duplicates of the keys on which it is being asked to pivot. For example: Consider pd.DataFrame.pivot. I know there are duplicate entries that share the row and col values: df.duplicated(['row', 'col']).any() True So when I pivot using df.pivot(index='row', columns='col', values='val0') I get the error mentioned above. In fact, I get the same error when I try to perform the same task with: df.set_index(['row', 'col'])['val0'].unstack() Examples What I'm going to do for each subsequent question is to answer it using pd.DataFrame.pivot_table. Then I'll provide alternatives to perform the same task. Questions 2 and 3 How do I pivot df such that the col values are columns, row values are the index, and mean of val0 are the values? pd.DataFrame.pivot_table df.pivot_table( values='val0', index='row', columns='col', aggfunc='mean') col col0 col1 col2 col3 col4 row row0 0.77 0.605 NaN 0.860 0.65 row2 0.13 NaN 0.395 0.500 0.25 row3 NaN 0.310 NaN 0.545 NaN row4 NaN 0.100 0.395 0.760 0.24 aggfunc='mean' is the default and I didn't have to set it. I included it to be explicit. How do I make it so that missing values are 0? pd.DataFrame.pivot_table fill_value is not set by default. I tend to set it appropriately. In this case I set it to 0. 
df.pivot_table( values='val0', index='row', columns='col', fill_value=0, aggfunc='mean') col col0 col1 col2 col3 col4 row row0 0.77 0.605 0.000 0.860 0.65 row2 0.13 0.000 0.395 0.500 0.25 row3 0.00 0.310 0.000 0.545 0.00 row4 0.00 0.100 0.395 0.760 0.24 pd.DataFrame.groupby df.groupby(['row', 'col'])['val0'].mean().unstack(fill_value=0) pd.crosstab pd.crosstab( index=df['row'], columns=df['col'], values=df['val0'], aggfunc='mean').fillna(0) Question 4 Can I get something other than mean, like maybe sum? pd.DataFrame.pivot_table df.pivot_table( values='val0', index='row', columns='col', fill_value=0, aggfunc='sum') col col0 col1 col2 col3 col4 row row0 0.77 1.21 0.00 0.86 0.65 row2 0.13 0.00 0.79 0.50 0.50 row3 0.00 0.31 0.00 1.09 0.00 row4 0.00 0.10 0.79 1.52 0.24 pd.DataFrame.groupby df.groupby(['row', 'col'])['val0'].sum().unstack(fill_value=0) pd.crosstab pd.crosstab( index=df['row'], columns=df['col'], values=df['val0'], aggfunc='sum').fillna(0) Question 5 Can I do more that one aggregation at a time? Notice that for pivot_table and crosstab I needed to pass list of callables. On the other hand, groupby.agg is able to take strings for a limited number of special functions. groupby.agg would also have taken the same callables we passed to the others, but it is often more efficient to leverage the string function names as there are efficiencies to be gained. pd.DataFrame.pivot_table df.pivot_table( values='val0', index='row', columns='col', fill_value=0, aggfunc=[np.size, np.mean]) size mean col col0 col1 col2 col3 col4 col0 col1 col2 col3 col4 row row0 1 2 0 1 1 0.77 0.605 0.000 0.860 0.65 row2 1 0 2 1 2 0.13 0.000 0.395 0.500 0.25 row3 0 1 0 2 0 0.00 0.310 0.000 0.545 0.00 row4 0 1 2 2 1 0.00 0.100 0.395 0.760 0.24 pd.DataFrame.groupby df.groupby(['row', 'col'])['val0'].agg(['size', 'mean']).unstack(fill_value=0) pd.crosstab pd.crosstab( index=df['row'], columns=df['col'], values=df['val0'], aggfunc=[np.size, np.mean]).fillna(0, downcast='infer') Question 6 Can I aggregate over multiple value columns? pd.DataFrame.pivot_table we pass values=['val0', 'val1'] but we could've left that off completely df.pivot_table( values=['val0', 'val1'], index='row', columns='col', fill_value=0, aggfunc='mean') val0 val1 col col0 col1 col2 col3 col4 col0 col1 col2 col3 col4 row row0 0.77 0.605 0.000 0.860 0.65 0.01 0.745 0.00 0.010 0.02 row2 0.13 0.000 0.395 0.500 0.25 0.45 0.000 0.34 0.440 0.79 row3 0.00 0.310 0.000 0.545 0.00 0.00 0.230 0.00 0.075 0.00 row4 0.00 0.100 0.395 0.760 0.24 0.00 0.070 0.42 0.300 0.46 pd.DataFrame.groupby df.groupby(['row', 'col'])['val0', 'val1'].mean().unstack(fill_value=0) Question 7 Can I subdivide by multiple columns? pd.DataFrame.pivot_table df.pivot_table( values='val0', index='row', columns=['item', 'col'], fill_value=0, aggfunc='mean') item item0 item1 item2 col col2 col3 col4 col0 col1 col2 col3 col4 col0 col1 col3 col4 row row0 0.00 0.00 0.00 0.77 0.00 0.00 0.00 0.00 0.00 0.605 0.86 0.65 row2 0.35 0.00 0.37 0.00 0.00 0.44 0.00 0.00 0.13 0.000 0.50 0.13 row3 0.00 0.00 0.00 0.00 0.31 0.00 0.81 0.00 0.00 0.000 0.28 0.00 row4 0.15 0.64 0.00 0.00 0.10 0.64 0.88 0.24 0.00 0.000 0.00 0.00 pd.DataFrame.groupby df.groupby( ['row', 'item', 'col'] )['val0'].mean().unstack(['item', 'col']).fillna(0).sort_index(1) Question 8 Can I subdivide by multiple columns? 
pd.DataFrame.pivot_table df.pivot_table( values='val0', index=['key', 'row'], columns=['item', 'col'], fill_value=0, aggfunc='mean') item item0 item1 item2 col col2 col3 col4 col0 col1 col2 col3 col4 col0 col1 col3 col4 key row key0 row0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.86 0.00 row2 0.00 0.00 0.37 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.50 0.00 row3 0.00 0.00 0.00 0.00 0.31 0.00 0.81 0.00 0.00 0.00 0.00 0.00 row4 0.15 0.64 0.00 0.00 0.00 0.00 0.00 0.24 0.00 0.00 0.00 0.00 key1 row0 0.00 0.00 0.00 0.77 0.00 0.00 0.00 0.00 0.00 0.81 0.00 0.65 row2 0.35 0.00 0.00 0.00 0.00 0.44 0.00 0.00 0.00 0.00 0.00 0.13 row3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.28 0.00 row4 0.00 0.00 0.00 0.00 0.10 0.00 0.00 0.00 0.00 0.00 0.00 0.00 key2 row0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.40 0.00 0.00 row2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.13 0.00 0.00 0.00 row4 0.00 0.00 0.00 0.00 0.00 0.64 0.88 0.00 0.00 0.00 0.00 0.00 pd.DataFrame.groupby df.groupby( ['key', 'row', 'item', 'col'] )['val0'].mean().unstack(['item', 'col']).fillna(0).sort_index(1) pd.DataFrame.set_index because the set of keys are unique for both rows and columns df.set_index( ['key', 'row', 'item', 'col'] )['val0'].unstack(['item', 'col']).fillna(0).sort_index(1) Question 9 Can I aggregate the frequency in which the column and rows occur together, aka \"cross tabulation\"? pd.DataFrame.pivot_table df.pivot_table(index='row', columns='col', fill_value=0, aggfunc='size') col col0 col1 col2 col3 col4 row row0 1 2 0 1 1 row2 1 0 2 1 2 row3 0 1 0 2 0 row4 0 1 2 2 1 pd.DataFrame.groupby df.groupby(['row', 'col'])['val0'].size().unstack(fill_value=0) pd.crosstab pd.crosstab(df['row'], df['col']) pd.factorize + np.bincount # get integer factorization `i` and unique values `r` # for column `'row'` i, r = pd.factorize(df['row'].values) # get integer factorization `j` and unique values `c` # for column `'col'` j, c = pd.factorize(df['col'].values) # `n` will be the number of rows # `m` will be the number of columns n, m = r.size, c.size # `i * m + j` is a clever way of counting the # factorization bins assuming a flat array of length # `n * m`. Which is why we subsequently reshape as `(n, m)` b = np.bincount(i * m + j, minlength=n * m).reshape(n, m) # BTW, whenever I read this, I think 'Bean, Rice, and Cheese' pd.DataFrame(b, r, c) col3 col2 col0 col1 col4 row3 2 0 0 1 0 row2 1 2 1 0 2 row0 1 0 1 2 1 row4 2 2 0 1 1 pd.get_dummies pd.get_dummies(df['row']).T.dot(pd.get_dummies(df['col'])) col0 col1 col2 col3 col4 row0 1 2 0 1 1 row2 1 0 2 1 2 row3 0 1 0 2 0 row4 0 1 2 2 1 Question 10 How do I convert a DataFrame from long to wide by pivoting on ONLY two columns? DataFrame.pivot The first step is to assign a number to each row - this number will be the row index of that value in the pivoted result. This is done using GroupBy.cumcount: df2.insert(0, 'count', df2.groupby('A').cumcount()) df2 count A B 0 0 a 0 1 1 a 11 2 2 a 2 3 3 a 11 4 0 b 10 5 1 b 10 6 2 b 14 7 0 c 7 The second step is to use the newly created column as the index to call DataFrame.pivot. df2.pivot(*df2) # df2.pivot(index='count', columns='A', values='B') A a b c count 0 0.0 10.0 7.0 1 11.0 10.0 NaN 2 2.0 14.0 NaN 3 11.0 NaN NaN DataFrame.pivot_table Whereas DataFrame.pivot only accepts columns, DataFrame.pivot_table also accepts arrays, so the GroupBy.cumcount can be passed directly as the index without creating an explicit column. 
df2.pivot_table(index=df2.groupby('A').cumcount(), columns='A', values='B') A a b c 0 0.0 10.0 7.0 1 11.0 10.0 NaN 2 2.0 14.0 NaN 3 11.0 NaN NaN Question 11 How do I flatten the multiple index to single index after pivot If columns type object with string join df.columns = df.columns.map('|'.join) else format df.columns = df.columns.map('{0[0]}|{0[1]}'.format)"} +{"question_id": 132988, "score": 628, "creation_date": 1222345629, "tags": ["python", "reference", "equality", "semantics"], "instruction": "Is there a difference between \"==\" and \"is\"?\n\nMy Google-fu has failed me. In Python, are the following two tests for equality equivalent? n = 5 # Test one. if n == 5: print 'Yay!' # Test two. if n is 5: print 'Yay!' Does this hold true for objects where you would be comparing instances (a list say)? Okay, so this kind of answers my question: L = [] L.append(1) if L == [1]: print 'Yay!' # Holds true, but... if L is [1]: print 'Yay!' # Doesn't. So == tests value where is tests to see if they are the same object?", "output": "is will return True if two variables point to the same object (in memory), == if the objects referred to by the variables are equal. >>> a = [1, 2, 3] >>> b = a >>> b is a True >>> b == a True # Make a new copy of list `a` via the slice operator, # and assign it to variable `b` >>> b = a[:] >>> b is a False >>> b == a True In your case, the second test works only because Python caches small integer objects, which is an implementation detail. For larger integers, this does not work: >>> 1000 is 10**3 False >>> 1000 == 10**3 True The same holds true for string literals: >>> \"a\" is \"a\" True >>> \"aa\" is \"a\" * 2 True >>> x = \"a\" >>> \"aa\" is x * 2 False >>> \"aa\" is intern(x*2) True Please see this question as well."} +{"question_id": 269795, "score": 628, "creation_date": 1225996612, "tags": ["python", "module"], "instruction": "How do I find the location of Python module sources?\n\nHow do I learn where the source file for a given Python module is installed? Is the method different on Windows than on Linux? I'm trying to look for the source of the datetime module in particular, but I'm interested in a more general answer as well.", "output": "For a pure python module you can find the source by looking at themodule.__file__. The datetime module, however, is written in C, and therefore datetime.__file__ points to a .so file (there is no datetime.__file__ on Windows), and therefore, you can't see the source. If you download a python source tarball and extract it, the modules' code can be found in the Modules subdirectory. For example, if you want to find the datetime code for python 2.6, you can look at Python-2.6/Modules/datetimemodule.c You can also find the latest version of this file on github on the web at https://github.com/python/cpython/blob/main/Modules/_datetimemodule.c"} +{"question_id": 306313, "score": 626, "creation_date": 1227205276, "tags": ["python", "integer", "operators", "identity", "python-internals"], "instruction": "Why does Python's \"is\" operator behave unexpectedly with integers?\n\nWhy does the following behave unexpectedly in Python? >>> a = 256 >>> b = 256 >>> a is b # This produces an expected result True >>> a = 257 >>> b = 257 >>> a is b # Why is this False? What is happening here? False >>> 257 is 257 # Yet the literal numbers compare properly True I am using Python 2.5.2. Trying some different versions of Python, it appears that Python 2.3.3 shows the above behaviour between 99 and 100. 
Based on the above, I can hypothesize that Python is internally implemented such that \"small\" integers are stored in a different way than larger integers and the is operator can tell the difference. Why the leaky abstraction? What is a better way of comparing two arbitrary objects to see whether they are the same when I don't know in advance whether they are numbers or not?", "output": "Take a look at this: >>> a = 256 >>> b = 256 >>> id(a) == id(b) True >>> a = 257 >>> b = 257 >>> id(a) == id(b) False Here's what I found in the documentation for \"Plain Integer Objects\": The current implementation keeps an array of integer objects for all integers between -5 and 256. When you create an int in that range you actually just get back a reference to the existing object. So, integers 256 are identical, but 257 are not. This is a CPython implementation detail, and not guaranteed for other Python implementations."} +{"question_id": 865115, "score": 619, "creation_date": 1242327852, "tags": ["python", "destructor"], "instruction": "How do I correctly clean up a Python object?\n\nclass Package: def __init__(self): self.files = [] # ... def __del__(self): for file in self.files: os.unlink(file) __del__(self) above fails with an AttributeError exception. I understand Python doesn't guarantee the existence of \"global variables\" (member data in this context?) when __del__() is invoked. If that is the case and this is the reason for the exception, how do I make sure the object destructs properly?", "output": "I'd recommend using Python's with statement for managing resources that need to be cleaned up. The problem with using an explicit close() statement is that you have to worry about people forgetting to call it at all or forgetting to place it in a finally block to prevent a resource leak when an exception occurs. To use the with statement, create a class with the following methods: def __enter__(self) def __exit__(self, exc_type, exc_value, traceback) In your example above, you'd use class Package: def __init__(self): self.files = [] def __enter__(self): return self # ... def __exit__(self, exc_type, exc_value, traceback): for file in self.files: os.unlink(file) Then, when someone wanted to use your class, they'd do the following: with Package() as package_obj: # use package_obj The variable package_obj will be an instance of type Package (it's the value returned by the __enter__ method). Its __exit__ method will automatically be called, regardless of whether or not an exception occurs. You could even take this approach a step further. In the example above, someone could still instantiate Package using its constructor without using the with clause. You don't want that to happen. You can fix this by creating a PackageResource class that defines the __enter__ and __exit__ methods. Then, the Package class would be defined strictly inside the __enter__ method and returned. That way, the caller never could instantiate the Package class without using a with statement: class PackageResource: def __enter__(self): class Package: ... 
self.package_obj = Package() return self.package_obj def __exit__(self, exc_type, exc_value, traceback): self.package_obj.cleanup() You'd use this as follows: with PackageResource() as package_obj: # use package_obj"} +{"question_id": 19339, "score": 616, "creation_date": 1219292947, "tags": ["python", "list", "matrix", "transpose"], "instruction": "Transpose/Unzip Function (inverse of zip)?\n\nI have a list of 2-item tuples and I'd like to convert them to 2 lists where the first contains the first item in each tuple and the second list holds the second item. For example: original = [('a', 1), ('b', 2), ('c', 3), ('d', 4)] # and I want to become... result = (['a', 'b', 'c', 'd'], [1, 2, 3, 4]) Is there a builtin function that does that? See also: Transpose list of lists if the specific choice of lists vs. tuples in the result matters. Most answers here assume it doesn't.", "output": "In 2.x, zip is its own inverse! Provided you use the special * operator. >>> zip(*[('a', 1), ('b', 2), ('c', 3), ('d', 4)]) [('a', 'b', 'c', 'd'), (1, 2, 3, 4)] This is equivalent to calling zip with each element of the list as a separate argument: zip(('a', 1), ('b', 2), ('c', 3), ('d', 4)) except the arguments are passed to zip directly (after being converted to a tuple), so there's no need to worry about the number of arguments getting too big. In 3.x, zip returns a lazy iterator, but this is trivially converted: >>> list(zip(*[('a', 1), ('b', 2), ('c', 3), ('d', 4)])) [('a', 'b', 'c', 'd'), (1, 2, 3, 4)]"} +{"question_id": 1984325, "score": 613, "creation_date": 1262243238, "tags": ["python", "oop", "with-statement", "contextmanager"], "instruction": "Explaining Python's '__enter__' and '__exit__'\n\nI saw this in someone's code. What does it mean? def __enter__(self): return self def __exit__(self, type, value, tb): self.stream.close() Here is the complete code. from __future__ import with_statement#for python2.5 class a(object): def __enter__(self): print 'sss' return 'sss111' def __exit__(self ,type, value, traceback): print 'ok' return False with a() as s: print s print s", "output": "Using these magic methods (__enter__, __exit__) allows you to implement objects which can be used easily with the with statement. The idea is that it makes it easy to build code which needs some 'cleandown' code executed (think of it as a try-finally block). Some more explanation here. A useful example could be a database connection object (which then automagically closes the connection once the corresponding 'with'-statement goes out of scope): class DatabaseConnection(object): def __enter__(self): # make a database connection and return it ... return self.dbconn def __exit__(self, exc_type, exc_val, exc_tb): # make sure the dbconnection gets closed self.dbconn.close() ... As explained above, use this object with the with statement (you may need to do from __future__ import with_statement at the top of the file if you're on Python 2.5). 
with DatabaseConnection() as mydbconn: # do stuff PEP 343 -- The 'with' statement has a nice writeup as well."} +{"question_id": 18837262, "score": 612, "creation_date": 1379365342, "tags": ["python", "pandas", "dataframe", "dictionary"], "instruction": "Convert Python dict into a dataframe\n\nI have a Python dictionary: {u'2012-07-01': 391, u'2012-07-02': 392, u'2012-07-03': 392, u'2012-07-04': 392, u'2012-07-05': 392, u'2012-07-06': 392} I would like to convert this into a pandas dataframe by having the dates and their corresponding values as two separate columns; the expected result looks like: Date DateValue 0 2012-07-01 391 1 2012-07-02 392 2 2012-07-03 392 . 2012-07-04 392 . ... ... Is there a direct way to do this?", "output": "The error here occurs because you're calling the DataFrame constructor with scalar values (where it expects values to be a list/dict/... i.e. have multiple columns): pd.DataFrame(d) ValueError: If using all scalar values, you must pass an index You could take the items from the dictionary (i.e. the key-value pairs): In [11]: pd.DataFrame(d.items()) # or list(d.items()) in python 3 Out[11]: 0 1 0 2012-07-01 391 1 2012-07-02 392 2 2012-07-03 392 3 2012-07-04 392 4 2012-07-05 392 5 2012-07-06 392 In [12]: pd.DataFrame(d.items(), columns=['Date', 'DateValue']) Out[12]: Date DateValue 0 2012-07-01 391 1 2012-07-02 392 2 2012-07-03 392 3 2012-07-04 392 4 2012-07-05 392 5 2012-07-06 392 But I think it makes more sense to pass it to the Series constructor: In [20]: s = pd.Series(d, name='DateValue') In [21]: s Out[21]: 2012-07-01 391 2012-07-02 392 2012-07-03 392 2012-07-04 392 2012-07-05 392 2012-07-06 392 Name: DateValue, dtype: int64 In [22]: s.index.name = 'Date' In [23]: s.reset_index() Out[23]: Date DateValue 0 2012-07-01 391 1 2012-07-02 392 2 2012-07-03 392 3 2012-07-04 392 4 2012-07-05 392 5 2012-07-06 392"} +{"question_id": 6294179, "score": 612, "creation_date": 1307628756, "tags": ["python", "list", "indices"], "instruction": "How to find all occurrences of an element in a list\n\nindex() will give the first occurrence of an item in a list. Is there a neat trick which returns all indices in a list for an element?", "output": "You can use a list comprehension with enumerate: indices = [i for i, x in enumerate(my_list) if x == \"whatever\"] The iterator enumerate(my_list) yields pairs (index, item) for each item in the list. Using i, x as the loop variable target unpacks these pairs into the index i and the list item x. We filter down to all x that match our criterion, and select the indices i of these elements."} +{"question_id": 27981545, "score": 603, "creation_date": 1421402839, "tags": ["python", "python-2.6", "suppress-warnings", "urllib3", "pyvmomi"], "instruction": "Suppress InsecureRequestWarning: Unverified HTTPS request is being made in Python2.6\n\nI am writing scripts in Python2.6 with use of pyVmomi and while using one of the connection methods: service_instance = connect.SmartConnect(host=args.ip, user=args.user, pwd=args.password) I get the following warning: /usr/lib/python2.6/site-packages/requests/packages/urllib3/connectionpool.py:734: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.org/en/latest/security.html InsecureRequestWarning) What's interesting is that I do not have urllib3 installed with pip (but it's there in /usr/lib/python2.6/site-packages/requests/packages/urllib3/). I have tried as suggested here import urllib3 ...
urllib3.disable_warnings() but that didn't change anything.", "output": "You can disable any Python warnings via the PYTHONWARNINGS environment variable. In this case, you want: export PYTHONWARNINGS=\"ignore:Unverified HTTPS request\" To disable using Python code (requests >= 2.16.0): import urllib3 urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning) For requests < 2.16.0, see original answer below. Original answer The reason doing urllib3.disable_warnings() didn't work for you is because it looks like you're using a separate instance of urllib3 vendored inside of requests. I gather this based on the path here: /usr/lib/python2.6/site-packages/requests/packages/urllib3/connectionpool.py To disable warnings in requests' vendored urllib3, you'll need to import that specific instance of the module: import requests from requests.packages.urllib3.exceptions import InsecureRequestWarning requests.packages.urllib3.disable_warnings(InsecureRequestWarning)"} +{"question_id": 6541123, "score": 600, "creation_date": 1309468151, "tags": ["python", "pandas", "matplotlib", "seaborn", "subplot"], "instruction": "Improve subplot size/spacing with many subplots\n\nI need to generate a whole bunch of vertically-stacked plots in matplotlib. The result will be saved using savefig and viewed on a webpage, so I don't care how tall the final image is, as long as the subplots are spaced so they don't overlap. No matter how big I allow the figure to be, the subplots always seem to overlap. My code currently looks like import matplotlib.pyplot as plt import my_other_module titles, x_lists, y_lists = my_other_module.get_data() fig = plt.figure(figsize=(10,60)) for i, y_list in enumerate(y_lists): plt.subplot(len(titles), 1, i) plt.xlabel(\"Some X label\") plt.ylabel(\"Some Y label\") plt.title(titles[i]) plt.plot(x_lists[i],y_list) fig.savefig('out.png', dpi=100)", "output": "Please review matplotlib: Tight Layout guide and try using matplotlib.pyplot.tight_layout, or matplotlib.figure.Figure.tight_layout As a quick example: import matplotlib.pyplot as plt fig, axes = plt.subplots(nrows=4, ncols=4, figsize=(8, 8)) fig.tight_layout() # Or equivalently, \"plt.tight_layout()\" plt.show() Without Tight Layout With Tight Layout"} +{"question_id": 3392354, "score": 597, "creation_date": 1280788805, "tags": ["python", "set", "append"], "instruction": "Append values to a set in Python\n\nHow do I add values to an existing set?", "output": "your_set.update(your_sequence_of_values) e.g, your_set.update([1, 2, 3, 4]). Or, if you have to produce the values in a loop for some other reason, for value in ...: your_set.add(value) But, of course, doing it in bulk with a single .update call is faster and handier, when otherwise feasible."} +{"question_id": 3160699, "score": 593, "creation_date": 1278009576, "tags": ["python", "progress-bar"], "instruction": "Python Progress Bar\n\nHow do I use a progress bar when my script is doing some task that is likely to take time? For example, a function which takes some time to complete and returns True when done. How can I display a progress bar during the time the function is being executed? Note that I need this to be in real time, so I can't figure out what to do about it. Do I need a thread for this? I have no idea. Right now I am not printing anything while the function is being executed, however a progress bar would be nice. 
Also I am more interested in how this can be done from a code point of view.", "output": "There are specific libraries (like this one here) but maybe something very simple would do: import time import sys toolbar_width = 40 # setup toolbar sys.stdout.write(\"[%s]\" % (\" \" * toolbar_width)) sys.stdout.flush() sys.stdout.write(\"\\b\" * (toolbar_width+1)) # return to start of line, after '[' for i in range(toolbar_width): time.sleep(0.1) # do real work here # update the bar sys.stdout.write(\"-\") sys.stdout.flush() sys.stdout.write(\"]\\n\") # this ends the progress bar Note: progressbar2 is a fork of progressbar which hasn't been maintained in years."} +{"question_id": 3987041, "score": 593, "creation_date": 1287661465, "tags": ["python", "function", "command-line"], "instruction": "Run function from the command line\n\nI have this code: def hello(): return 'Hi :)' How would I run this directly from the command line? See also: What does if __name__ == \"__main__\": do? to explain the standard idiom for getting the code started; Why doesn't the main() function run when I start a Python script? Where does the script start running (what is its entry point)? for why things like this are necessary", "output": "With the -c (command) argument (assuming your file is named foo.py): $ python -c 'import foo; print foo.hello()' Alternatively, if you don't care about namespace pollution: $ python -c 'from foo import *; print hello()' And the middle ground: $ python -c 'from foo import hello; print hello()'"} +{"question_id": 2428092, "score": 592, "creation_date": 1268336435, "tags": ["python", "django", "json"], "instruction": "Creating a JSON response using Django and Python\n\nI'm trying to convert a server side Ajax response script into a Django HttpResponse, but apparently it's not working. This is the server-side script: /* RECEIVE VALUE */ $validateValue=$_POST['validateValue']; $validateId=$_POST['validateId']; $validateError=$_POST['validateError']; /* RETURN VALUE */ $arrayToJs = array(); $arrayToJs[0] = $validateId; $arrayToJs[1] = $validateError; if($validateValue ==\"Testuser\"){ // Validate?? $arrayToJs[2] = \"true\"; // RETURN TRUE echo '{\"jsonValidateReturn\":'.json_encode($arrayToJs).'}'; // RETURN ARRAY WITH success } else{ for($x=0;$x<1000000;$x++){ if($x == 990000){ $arrayToJs[2] = \"false\"; echo '{\"jsonValidateReturn\":'.json_encode($arrayToJs).'}'; // RETURNS ARRAY WITH ERROR. } } } And this is the converted code def validate_user(request): if request.method == 'POST': vld_value = request.POST.get('validateValue') vld_id = request.POST.get('validateId') vld_error = request.POST.get('validateError') array_to_js = [vld_id, vld_error, False] if vld_value == \"TestUser\": array_to_js[2] = True x = simplejson.dumps(array_to_js) return HttpResponse(x) else: array_to_js[2] = False x = simplejson.dumps(array_to_js) error = 'Error' return render_to_response('index.html',{'error':error},context_instance=RequestContext(request)) return render_to_response('index.html',context_instance=RequestContext(request)) I'm using simplejson to encode the Python list (so it will return a JSON array). I couldn't figure out the problem yet. But I think that I did something wrong about the 'echo'.", "output": "I usually use a dictionary, not a list to return JSON content. 
import json from django.http import HttpResponse response_data = {} response_data['result'] = 'error' response_data['message'] = 'Some error message' Pre-Django 1.7 you'd return it like this: return HttpResponse(json.dumps(response_data), content_type=\"application/json\") For Django 1.7+, use JsonResponse as shown in this SO answer like so: from django.http import JsonResponse return JsonResponse({'foo':'bar'})"} +{"question_id": 18265935, "score": 590, "creation_date": 1376628185, "tags": ["python", "list"], "instruction": "How do I create a list with numbers between two values?\n\nHow do I create a list of numbers between two values? For example, a list between 11 and 16: [11, 12, 13, 14, 15, 16]", "output": "Use range. In Python 2, it returns a list directly: >>> range(11, 17) [11, 12, 13, 14, 15, 16] In Python 3, range is an iterator. To convert it to a list: >>> list(range(11, 17)) [11, 12, 13, 14, 15, 16] Note: The second number in range(start, stop) is exclusive. So, stop = 16+1 = 17. To increment by steps of 0.5, consider using numpy's arange() and .tolist(): >>> import numpy as np >>> np.arange(11, 17, 0.5).tolist() [11.0, 11.5, 12.0, 12.5, 13.0, 13.5, 14.0, 14.5, 15.0, 15.5, 16.0, 16.5] See: How do I use a decimal step value for range()?"} +{"question_id": 7999935, "score": 589, "creation_date": 1320344291, "tags": ["python", "datetime", "datetime-format"], "instruction": "Python datetime to string without microsecond component\n\nI'm adding UTC time strings to Bitbucket API responses that currently only contain Amsterdam (!) time strings. For consistency with the UTC time strings returned elsewhere, the desired format is 2011-11-03 11:07:04 (followed by +00:00, but that's not germane). What's the best way to create such a string (without a microsecond component) from a datetime instance with a microsecond component? >>> import datetime >>> print unicode(datetime.datetime.now()) 2011-11-03 11:13:39.278026 I'll add the best option that's occurred to me as a possible answer, but there may well be a more elegant solution. Edit: I should mention that I'm not actually printing the current time \u2013 I used datetime.now to provide a quick example. So the solution should not assume that any datetime instances it receives will include microsecond components.", "output": "If you want to format a datetime object in a specific format that is different from the standard format, it's best to explicitly specify that format: >>> import datetime >>> datetime.datetime.now().strftime(\"%Y-%m-%d %H:%M:%S\") '2011-11-03 18:21:26' See the documentation of datetime.strftime() for an explanation of the % directives. Starting from Python 3.6, the isoformat() method is flexible enough to also produce this format: datetime.datetime.now().isoformat(sep=\" \", timespec=\"seconds\")"} +{"question_id": 34962104, "score": 588, "creation_date": 1453543467, "tags": ["python", "pandas", "dataframe", "numpy", "apply"], "instruction": "How can I use the apply() function for a single column?\n\nI have a pandas dataframe with multiple columns. I want to change the values of only the first column without affecting the other columns.
How can I do that using apply() in pandas?", "output": "Given a sample dataframe df as: a b 0 1 2 1 2 3 2 3 4 3 4 5 what you want is: df['a'] = df['a'].apply(lambda x: x + 1) that returns: a b 0 2 2 1 3 3 2 4 4 3 5 5"} +{"question_id": 4830856, "score": 585, "creation_date": 1296233552, "tags": ["python", "git", "github", "pip"], "instruction": "Is it possible to use pip to install a package from a private GitHub repository?\n\nI am trying to install a Python package from a private GitHub repository. For a public repository, I can issue the following command which works fine: pip install git+git://github.com/django/django.git However, if I try this for a private repository: pip install git+git://github.com/echweb/echweb-utils.git I get the following output: Downloading/unpacking git+git://github.com/echweb/echweb-utils.git Cloning Git repository git://github.com/echweb/echweb-utils.git to /var/folders/cB/cB85g9P7HM4jcPn7nrvWRU+++TI/-Tmp-/pip-VRsIoo-build Complete output from command /usr/local/bin/git clone git://github.com/echweb/echweb-utils.git /var/folders/cB/cB85g9P7HM4jcPn7nrvWRU+++TI/-Tmp-/pip-VRsIoo-build: fatal: The remote end hung up unexpectedly Cloning into /var/folders/cB/cB85g9P7HM4jcPn7nrvWRU+++TI/-Tmp-/pip-VRsIoo-build... ---------------------------------------- Command /usr/local/bin/git clone git://github.com/echweb/echweb-utils.git /var/folders/cB/cB85g9P7HM4jcPn7nrvWRU+++TI/-Tmp-/pip-VRsIoo-build failed with error code 128 I guess this is because I am trying to access a private repository without providing any authentication. I therefore tried to use Git + ssh hoping that pip would use my SSH public key to authenticate: pip install git+ssh://github.com/echweb/echweb-utils.git This gives the following output: Downloading/unpacking git+ssh://github.com/echweb/echweb-utils.git Cloning Git repository ssh://github.com/echweb/echweb-utils.git to /var/folders/cB/cB85g9P7HM4jcPn7nrvWRU+++TI/-Tmp-/pip-DQB8s4-build Complete output from command /usr/local/bin/git clone ssh://github.com/echweb/echweb-utils.git /var/folders/cB/cB85g9P7HM4jcPn7nrvWRU+++TI/-Tmp-/pip-DQB8s4-build: Cloning into /var/folders/cB/cB85g9P7HM4jcPn7nrvWRU+++TI/-Tmp-/pip-DQB8s4-build... Permission denied (publickey). fatal: The remote end hung up unexpectedly ---------------------------------------- Command /usr/local/bin/git clone ssh://github.com/echweb/echweb-utils.git /var/folders/cB/cB85g9P7HM4jcPn7nrvWRU+++TI/-Tmp-/pip-DQB8s4-build failed with error code 128 Is what I am trying to achieve even possible? If so, how can I do it?", "output": "You can use the git+ssh URI scheme, but you must set a username. Notice the git@ part in the URI: pip install git+ssh://git@github.com/echweb/echweb-utils.git Also read about deploy keys. PS: In my installation, the \"git+ssh\" URI scheme works only with \"editable\" requirements: pip install -e URI#egg=EggName Remember: Change the : character that git remote -v prints to a / character before using the remote's address in the pip command: $ git remote -v origin git@github.com:echweb/echweb-utils.git (fetch) # ^ change this to a '/' character If you forget, you will get this error: ssh: Could not resolve hostname github.com:echweb: nodename nor servname provided, or not known"} +{"question_id": 4978738, "score": 583, "creation_date": 1297523089, "tags": ["python", "null-coalescing-operator"], "instruction": "Is there a Python equivalent of the C# null-coalescing operator?\n\nIn C# there's a null-coalescing operator (written as ??) 
that allows for easy (short) null checking during assignment: string s = null; var other = s ?? \"some default value\"; Is there a python equivalent? I know that I can do: s = None other = s if s else \"some default value\" But is there an even shorter way (where I don't need to repeat s)?", "output": "other = s or \"some default value\" Ok, it must be clarified how the or operator works. It is a boolean operator, so it works in a boolean context. If the values are not boolean, they are converted to boolean for the purposes of the operator. Note that the or operator does not return only True or False. Instead, it returns the first operand if the first operand evaluates to true, and it returns the second operand if the first operand evaluates to false. In this case, the expression x or y returns x if it is True or evaluates to true when converted to boolean. Otherwise, it returns y. For most cases, this will serve for the very same purpose of C\u266f's null-coalescing operator, but keep in mind: 42 or \"something\" # returns 42 0 or \"something\" # returns \"something\" None or \"something\" # returns \"something\" False or \"something\" # returns \"something\" \"\" or \"something\" # returns \"something\" If you use your variable s to hold something that is either a reference to the instance of a class or None (as long as your class does not define members __bool__() and __len__()), it is secure to use the same semantics as the null-coalescing operator. NB. Python 2.7 will look for __nonzero__() instead of __bool__(). In fact, it may even be useful to have this side-effect of Python. Since you know what values evaluates to false, you can use this to trigger the default value without using None specifically (an error object, for example). In some languages this behavior is referred to as the Elvis operator."} +{"question_id": 2860153, "score": 581, "creation_date": 1274208941, "tags": ["python"], "instruction": "How do I get the parent directory in Python?\n\nCould someone tell me how to get the parent directory of a path in Python in a cross platform way. E.g. C:\\Program Files ---> C:\\ and C:\\ ---> C:\\ If the directory doesn't have a parent directory, it returns the directory itself. The question might seem simple but I couldn't dig it up through Google.", "output": "Python 3.4 Use the pathlib module. from pathlib import Path path = Path(\"/here/your/path/file.txt\") print(path.parent.absolute()) Old answer Try this: import os print os.path.abspath(os.path.join(yourpath, os.pardir)) where yourpath is the path you want the parent for."} +{"question_id": 3518778, "score": 581, "creation_date": 1282192913, "tags": ["python", "numpy", "scipy", "genfromtxt"], "instruction": "How do I read CSV data into a record array in NumPy?\n\nIs there a direct way to import the contents of a CSV file into a record array, just like how R's read.table(), read.delim(), and read.csv() import data into R dataframes? Or should I use csv.reader() and then apply numpy.core.records.fromrecords()?", "output": "Use numpy.genfromtxt() by setting the delimiter kwarg to a comma: from numpy import genfromtxt my_data = genfromtxt('my_file.csv', delimiter=',')"} +{"question_id": 1228299, "score": 580, "creation_date": 1249400919, "tags": ["python", "string", "indexing", "replace"], "instruction": "Changing a character in a string\n\nWhat is the easiest way in Python to replace a character in a string? For example: text = \"abcdefg\"; text[1] = \"Z\"; ^", "output": "Don't modify strings. 
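(Just as a sketch of the idea that you always build a new string rather than mutate the old one:)
text = 'abcdefg'
text = text[:1] + 'Z' + text[2:]   # build a brand new string with position 1 replaced
print(text)  # aZcdefg
The list approach below scales better when you need to make many edits, since each concatenation above creates yet another new string.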
Work with them as lists; turn them into strings only when needed. >>> s = list(\"Hello zorld\") >>> s ['H', 'e', 'l', 'l', 'o', ' ', 'z', 'o', 'r', 'l', 'd'] >>> s[6] = 'W' >>> s ['H', 'e', 'l', 'l', 'o', ' ', 'W', 'o', 'r', 'l', 'd'] >>> \"\".join(s) 'Hello World' Python strings are immutable (i.e. they can't be modified). There are a lot of reasons for this. Use lists until you have no choice, only then turn them into strings."} +{"question_id": 31237042, "score": 579, "creation_date": 1436149900, "tags": ["python", "django", "django-models", "django-select-related", "django-prefetch-related"], "instruction": "What's the difference between select_related and prefetch_related in Django ORM?\n\nIn Django doc: select_related() \"follows\" foreign-key relationships, selecting additional related-object data when it executes its query. prefetch_related() does a separate lookup for each relationship, and does the \"joining\" in Python. What does it mean by \"doing the joining in python\"? Can someone illustrate with an example? My understanding is that for foreign key relationship, use select_related; and for M2M relationship, use prefetch_related. Is this correct?", "output": "Your understanding is mostly correct: select_related: when the object that you're going to be selecting is a single object, so OneToOneField or a ForeignKey prefetch_related: when you're going to get a \"set\" of things, so ManyToManyFields as you stated or reverse ForeignKeys. Just to clarify what I mean by reverse ForeignKeys, here's an example: class ModelA(models.Model): pass class ModelB(models.Model): a = ForeignKey(ModelA) # Forward ForeignKey relationship ModelB.objects.select_related('a').all() # Reverse ForeignKey relationship ModelA.objects.prefetch_related('modelb_set').all() The difference is that: select_related does an SQL join and therefore gets the results back as part of the table from the SQL server prefetch_related on the other hand executes another query and therefore reduces the redundant columns in the original object (ModelA in the above example) You may use prefetch_related for anything that you can use select_related for. The tradeoffs are that prefetch_related has to create and send a list of IDs to select back to the server, this can take a while. I'm not sure if there's a nice way of doing this in a transaction, but my understanding is that Django always just sends a list and says SELECT ... WHERE pk IN (...,...,...) basically. In this case if the prefetched data is sparse (let's say U.S. State objects linked to people's addresses) this can be very good, however if it's closer to one-to-one, this can waste a lot of communications. If in doubt, try both and see which performs better. Everything discussed above is basically about the communications with the database. On the Python side however prefetch_related has the extra benefit that a single object is used to represent each object in the database. With select_related duplicate objects will be created in Python for each \"parent\" object. 
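A rough sketch of that identity difference, assuming hypothetical Author and Book models in a Django project where Book has a ForeignKey to Author (these models are not from the question):
# hypothetical models, e.g.:
#   class Author(models.Model): name = models.CharField(max_length=50)
#   class Book(models.Model): author = models.ForeignKey(Author, on_delete=models.CASCADE)
b1, b2 = Book.objects.select_related('author')[:2]
print(b1.author is b2.author)   # False even if both books share an author: each row builds its own Author instance
b1, b2 = Book.objects.prefetch_related('author')[:2]
print(b1.author is b2.author)   # True when both books point at the same author row: one shared instance per distinct author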
Since objects in Python have a decent bit of memory overhead this can also be a consideration."} +{"question_id": 5870537, "score": 577, "creation_date": 1304430858, "tags": ["python", "django", "django-models", "foreign-keys"], "instruction": "OneToOneField() vs ForeignKey() in Django\n\nWhat's the difference between Django OneToOneField and ForeignKey?", "output": "Differences between OneToOneField(SomeModel) and ForeignKey(SomeModel, unique=True) as stated in The Definitive Guide to Django: OneToOneField A one-to-one relationship. Conceptually, this is similar to a ForeignKey with unique=True, but the \"reverse\" side of the relation will directly return a single object. In contrast to the OneToOneField \"reverse\" relation, a ForeignKey \"reverse\" relation returns a QuerySet. Example For example, if we have the following two models (full model code below): Car model uses OneToOneField(Engine) Car2 model uses ForeignKey(Engine2, unique=True) From within python manage.py shell execute the following: OneToOneField Example >>> from testapp.models import Car, Engine >>> c = Car.objects.get(name='Audi') >>> e = Engine.objects.get(name='Diesel') >>> e.car ForeignKey with unique=True Example >>> from testapp.models import Car2, Engine2 >>> c2 = Car2.objects.get(name='Mazda') >>> e2 = Engine2.objects.get(name='Wankel') >>> e2.car2_set.all() [] Model Code from django.db import models class Engine(models.Model): name = models.CharField(max_length=25) def __unicode__(self): return self.name class Car(models.Model): name = models.CharField(max_length=25) engine = models.OneToOneField(Engine) def __unicode__(self): return self.name class Engine2(models.Model): name = models.CharField(max_length=25) def __unicode__(self): return self.name class Car2(models.Model): name = models.CharField(max_length=25) engine = models.ForeignKey(Engine2, unique=True, on_delete=models.CASCADE) def __unicode__(self): return self.name"} +{"question_id": 21971449, "score": 574, "creation_date": 1393175298, "tags": ["python", "jupyter-notebook", "ipython", "jupyter"], "instruction": "How do I increase the cell width of the Jupyter/ipython notebook in my browser?\n\nI would like to increase the width of the ipython notebook in my browser. I have a high-resolution screen, and I would like to expand the cell width/size to make use of this extra space.", "output": "If you don't want to change your default settings, and you only want to change the width of the current notebook you're working on, you can enter the following into a cell: from IPython.display import display, HTML display(HTML(\"\"))"} +{"question_id": 1060279, "score": 572, "creation_date": 1246306562, "tags": ["python", "date", "datetime", "iteration", "date-range"], "instruction": "Iterating through a range of dates in Python\n\nI have the following code to do this, but how can I do it better? Right now I think it's better than nested loops, but it starts to get Perl-one-linerish when you have a generator in a list comprehension. day_count = (end_date - start_date).days + 1 for single_date in [d for d in (start_date + timedelta(n) for n in range(day_count)) if d <= end_date]: print strftime(\"%Y-%m-%d\", single_date.timetuple()) Notes I'm not actually using this to print. That's just for demo purposes. The start_date and end_date variables are datetime.date objects because I don't need the timestamps. (They're going to be used to generate a report). 
Sample Output For a start date of 2009-05-30 and an end date of 2009-06-09: 2009-05-30 2009-05-31 2009-06-01 2009-06-02 2009-06-03 2009-06-04 2009-06-05 2009-06-06 2009-06-07 2009-06-08 2009-06-09", "output": "Why are there two nested iterations? For me it produces the same list of data with only one iteration: for single_date in (start_date + timedelta(n) for n in range(day_count)): print ... And no list gets stored, only one generator is iterated over. Also the \"if\" in the generator seems to be unnecessary. After all, a linear sequence should only require one iterator, not two. Update after discussion with John Machin: Maybe the most elegant solution is using a generator function to completely hide/abstract the iteration over the range of dates: from datetime import date, timedelta def daterange(start_date: date, end_date: date): days = int((end_date - start_date).days) for n in range(days): yield start_date + timedelta(n) start_date = date(2013, 1, 1) end_date = date(2015, 6, 2) for single_date in daterange(start_date, end_date): print(single_date.strftime(\"%Y-%m-%d\")) NB: For consistency with the built-in range() function this iteration stops before reaching the end_date. So for inclusive iteration use the next day, as you would with range()."} +{"question_id": 6501121, "score": 572, "creation_date": 1309230331, "tags": ["python", "exit"], "instruction": "Difference between exit() and sys.exit() in Python\n\nIn Python, there are two similarly-named functions, exit() and sys.exit(). What's the difference and when should I use one over the other?", "output": "exit is a helper for the interactive shell - sys.exit is intended for use in programs. The site module (which is imported automatically during startup, except if the -S command-line option is given) adds several constants to the built-in namespace (e.g. exit). They are useful for the interactive interpreter shell and should not be used in programs. Technically, they do mostly the same: raising SystemExit. sys.exit does so in sysmodule.c: static PyObject * sys_exit(PyObject *self, PyObject *args) { PyObject *exit_code = 0; if (!PyArg_UnpackTuple(args, \"exit\", 0, 1, &exit_code)) return NULL; /* Raise SystemExit so callers may catch it or clean up. */ PyErr_SetObject(PyExc_SystemExit, exit_code); return NULL; } While exit is defined in site.py and _sitebuiltins.py, respectively. class Quitter(object): def __init__(self, name): self.name = name def __repr__(self): return 'Use %s() or %s to exit' % (self.name, eof) def __call__(self, code=None): # Shells like IDLE catch the SystemExit, but listen when their # stdin wrapper is closed. try: sys.stdin.close() except: pass raise SystemExit(code) __builtin__.quit = Quitter('quit') __builtin__.exit = Quitter('exit') Note that there is a third exit option, namely os._exit, which exits without calling cleanup handlers, flushing stdio buffers, etc. (and which should normally only be used in the child process after a fork())."} +{"question_id": 2158395, "score": 571, "creation_date": 1264716942, "tags": ["python", "list", "optimization", "nested-lists", "flatten"], "instruction": "Flatten an irregular (arbitrarily nested) list of lists\n\nYes, I know this subject has been covered before: Python idiom to chain (flatten) an infinite iterable of finite iterables? Flattening a shallow list in Python Comprehension for flattening a sequence of sequences? How do I make a flat list out of a list of lists? 
but as far as I know, all solutions, except for one, fail on a list like [[[1, 2, 3], [4, 5]], 6], where the desired output is [1, 2, 3, 4, 5, 6] (or perhaps even better, an iterator). The only solution I saw that works for an arbitrary nesting is found in this question: def flatten(x): result = [] for el in x: if hasattr(el, \"__iter__\") and not isinstance(el, basestring): result.extend(flatten(el)) else: result.append(el) return result Is this the best approach? Did I overlook something? Any problems?", "output": "Using generator functions can make your example easier to read and improve performance. Python 2 Using the Iterable ABC added in 2.6: from collections import Iterable def flatten(xs): for x in xs: if isinstance(x, Iterable) and not isinstance(x, basestring): for item in flatten(x): yield item else: yield x Python 3 In Python 3, basestring is no more, but the tuple (str, bytes) gives the same effect. Also, the yield from operator returns an item from a generator one at a time. from collections.abc import Iterable def flatten(xs): for x in xs: if isinstance(x, Iterable) and not isinstance(x, (str, bytes)): yield from flatten(x) else: yield x"} +{"question_id": 1157106, "score": 568, "creation_date": 1248145930, "tags": ["python", "list"], "instruction": "Remove all occurrences of a value from a list?\n\nIn Python remove() will remove the first occurrence of value in a list. How to remove all occurrences of a value from a list? This is what I have in mind: >>> remove_values_from_list([1, 2, 3, 4, 2, 2, 3], 2) [1, 3, 4, 3]", "output": "Functional approach: Python 3.x >>> x = [1,2,3,2,2,2,3,4] >>> list(filter((2).__ne__, x)) [1, 3, 3, 4] or >>> x = [1,2,3,2,2,2,3,4] >>> list(filter(lambda a: a != 2, x)) [1, 3, 3, 4] or >>> [i for i in x if i != 2] Python 2.x >>> x = [1,2,3,2,2,2,3,4] >>> filter(lambda a: a != 2, x) [1, 3, 3, 4]"} +{"question_id": 1373164, "score": 566, "creation_date": 1251981468, "tags": ["python", "variable-variables"], "instruction": "How do I create variable variables?\n\nI know that some other languages, such as PHP, support a concept of \"variable variable names\" - that is, the contents of a string can be used as part of a variable name. I heard that this is a bad idea in general, but I think it would solve some problems I have in my Python code. Is it possible to do something like this in Python? What can go wrong? If you are just trying to look up an existing variable by its name, see How can I select a variable by (string) name?. However, first consider whether you can reorganize the code to avoid that need, following the advice in this question.", "output": "You can use dictionaries to accomplish this. Dictionaries are stores of keys and values. >>> dct = {'x': 1, 'y': 2, 'z': 3} >>> dct {'x': 1, 'y': 2, 'z': 3} >>> dct[\"y\"] 2 You can use variable key names to achieve the effect of variable variables without the security risk. >>> x = \"spam\" >>> z = {x: \"eggs\"} >>> z[\"spam\"] 'eggs' For cases where you're thinking of doing something like var1 = 'foo' var2 = 'bar' var3 = 'baz' ... a list may be more appropriate than a dict. 
A list represents an ordered sequence of objects, with integer indices: lst = ['foo', 'bar', 'baz'] print(lst[1]) # prints bar, because indices start at 0 lst.append('potatoes') # lst is now ['foo', 'bar', 'baz', 'potatoes'] For ordered sequences, lists are more convenient than dicts with integer keys, because lists support iteration in index order, slicing, append, and other operations that would require awkward key management with a dict."} +{"question_id": 749796, "score": 561, "creation_date": 1239753941, "tags": ["python", "xml", "pretty-print"], "instruction": "Pretty printing XML in Python\n\nWhat is the best way (or are the various ways) to pretty print XML in Python?", "output": "import xml.dom.minidom dom = xml.dom.minidom.parse(xml_fname) # or xml.dom.minidom.parseString(xml_string) pretty_xml_as_string = dom.toprettyxml()"} +{"question_id": 1871549, "score": 560, "creation_date": 1260332327, "tags": ["python", "virtualenv"], "instruction": "How to determine if Python is running inside a virtualenv?\n\nIs it possible to determine if the current script is running inside a virtualenv environment?", "output": "The reliable and documented way is to compare sys.prefix and sys.base_prefix. If they're equal, you're not in a virtual environment, otherwise you are. Inside a venv, sys.prefix points to the directory of the virtual environment, and sys.base_prefix to the Python interpreter used to create the environment. This is documented under How venvs work: It is sufficient to check sys.prefix != sys.base_prefix to determine if the current interpreter is running from a virtual environment. This works for Python stdlib venv and for virtualenv (since version 20): def in_venv(): return sys.prefix != sys.base_prefix Older versions of virtualenv used sys.real_prefix instead of sys.base_prefix, and sys.real_prefix did not exist outside a virtual environment. In Python 3.3 and earlier sys.base_prefix did not ever exist. So a check that also handles some legacy cases could look like this: import sys def get_base_prefix_compat(): \"\"\"Get base/real prefix, or sys.prefix if there is none.\"\"\" return ( getattr(sys, \"base_prefix\", None) or getattr(sys, \"real_prefix\", None) or sys.prefix ) def in_virtualenv(): return sys.prefix != get_base_prefix_compat() Using the VIRTUAL_ENV environment variable is not reliable. It is set by the virtualenv activate shell script, but a virtualenv can be used without activation by directly running an executable from the virtualenv's bin/ (or Scripts) directory, in which case $VIRTUAL_ENV will not be set. Or a non-virtualenv Python binary can be executed directly while a virtualenv is activated in the shell, in which case $VIRTUAL_ENV may be set in a Python process that is not actually running in that virtualenv."} +{"question_id": 12517451, "score": 559, "creation_date": 1348160800, "tags": ["python", "file-io"], "instruction": "Automatically creating directories with file output\n\nSay I want to make a file: filename = \"/foo/bar/baz.txt\" with open(filename, \"w\") as f: f.write(\"FOOBAR\") This gives an IOError, since /foo/bar does not exist. What is the most pythonic way to generate those directories automatically? 
Is it necessary for me explicitly call os.path.exists and os.mkdir on every single one (i.e., /foo, then /foo/bar)?", "output": "In Python 3.2+, using the APIs requested by the OP, you can elegantly do the following: import os filename = \"/foo/bar/baz.txt\" os.makedirs(os.path.dirname(filename), exist_ok=True) with open(filename, \"w\") as f: f.write(\"FOOBAR\") With the Pathlib module (introduced in Python 3.4), there is an alternate syntax (thanks David258): from pathlib import Path output_file = Path(\"/foo/bar/baz.txt\") output_file.parent.mkdir(exist_ok=True, parents=True) output_file.write_text(\"FOOBAR\") In older python, there is a less elegant way: The os.makedirs function does this. Try the following: import os import errno filename = \"/foo/bar/baz.txt\" if not os.path.exists(os.path.dirname(filename)): try: os.makedirs(os.path.dirname(filename)) except OSError as exc: # Guard against race condition if exc.errno != errno.EEXIST: raise with open(filename, \"w\") as f: f.write(\"FOOBAR\") The reason to add the try-except block is to handle the case when the directory was created between the os.path.exists and the os.makedirs calls, so that to protect us from race conditions."} +{"question_id": 10695139, "score": 558, "creation_date": 1337654882, "tags": ["python", "list", "tuples"], "instruction": "Sort a list of tuples by 2nd item (integer value)\n\nI have a list of tuples that looks something like this: [('abc', 121),('abc', 231),('abc', 148), ('abc',221)] I want to sort this list in ascending order by the integer value inside the tuples. Is it possible?", "output": "Try using the key keyword argument of sorted(), which sorts in increasing order by default: sorted( [('abc', 121), ('abc', 231), ('abc', 148), ('abc', 221)], key=lambda x: x[1] ) key should be a function that identifies how to retrieve the comparable element from your data structure. In your case, it is the second element of the tuple, so we access [1]. For optimization, see jamylak's response using operator.itemgetter(1), which is essentially a faster version of lambda x: x[1]."} +{"question_id": 7286365, "score": 553, "creation_date": 1314979978, "tags": ["python", "list", "range", "reverse"], "instruction": "Print a list in reverse order with range()?\n\nHow can you produce the following list with range() in Python? [9, 8, 7, 6, 5, 4, 3, 2, 1, 0]", "output": "Use reversed() function (efficient since range implements __reversed__): reversed(range(10)) It's much more meaningful. Update: list cast If you want it to be a list (as @btk pointed out): list(reversed(range(10))) Update: range-only solution If you want to use only range to achieve the same result, you can use all its parameters. range(start, stop, step) For example, to generate a list [3, 2, 1, 0], you can use the following: range(3, -1, -1) It may be less intuitive, but it works the same with less text. This answer by @Wolf indicates this approach is slightly faster than reversed."} +{"question_id": 492519, "score": 553, "creation_date": 1233248909, "tags": ["python", "multithreading", "timeout", "python-multithreading"], "instruction": "Timeout on a function call\n\nI'm calling a function in Python which I know may stall and force me to restart the script. 
How do I call the function or what do I wrap it in so that if it takes longer than 5 seconds the script cancels it and does something else?", "output": "You may use the signal package if you are running on UNIX: In [1]: import signal # Register an handler for the timeout In [2]: def handler(signum, frame): ...: print(\"Forever is over!\") ...: raise Exception(\"end of time\") ...: # This function *may* run for an indetermined time... In [3]: def loop_forever(): ...: import time ...: while 1: ...: print(\"sec\") ...: time.sleep(1) ...: ...: # Register the signal function handler In [4]: signal.signal(signal.SIGALRM, handler) Out[4]: 0 # Define a timeout for your function In [5]: signal.alarm(10) Out[5]: 0 In [6]: try: ...: loop_forever() ...: except Exception, exc: ...: print(exc) ....: sec sec sec sec sec sec sec sec Forever is over! end of time # Cancel the timer if the function returned before timeout # (ok, mine won't but yours maybe will :) In [7]: signal.alarm(0) Out[7]: 0 10 seconds after the call signal.alarm(10), the handler is called. This raises an exception that you can intercept from the regular Python code. This module doesn't play well with threads (but then, who does?) Note that since we raise an exception when timeout happens, it may end up caught and ignored inside the function, for example of one such function: def loop_forever(): while 1: print('sec') try: time.sleep(10) except: continue"} +{"question_id": 533905, "score": 549, "creation_date": 1234295657, "tags": ["python", "list", "cartesian-product"], "instruction": "How to get the Cartesian product of multiple lists\n\nHow can I get the Cartesian product (every possible combination of values) from a group of lists? For example, given somelists = [ [1, 2, 3], ['a', 'b'], [4, 5] ] How do I get this? [(1, 'a', 4), (1, 'a', 5), (1, 'b', 4), (1, 'b', 5), (2, 'a', 4), (2, 'a', 5), ...] One common application for this technique is to avoid deeply nested loops. See Avoiding nested for loops for a more specific duplicate. Similarly, this technique might be used to \"explode\" a dictionary with list values; see Combine Python Dictionary Permutations into List of Dictionaries . If you want a Cartesian product of the same list with itself multiple times, itertools.product can handle that elegantly. See Operation on every pair of element in a list or How can I get \"permutations with repetitions\" from a list (Cartesian product of a list with itself)?. Many people who already know about itertools.product struggle with the fact that it expects separate arguments for each input sequence, rather than e.g. a list of lists. The accepted answer shows how to handle this with *. However, the use of * here to unpack arguments is fundamentally not different from any other time it's used in a function call. Please see Expanding tuples into arguments for this topic (and use that instead to close duplicate questions, as appropriate).", "output": "Use itertools.product, which has been available since Python 2.6. import itertools somelists = [ [1, 2, 3], ['a', 'b'], [4, 5] ] for element in itertools.product(*somelists): print(element) This is the same as: for element in itertools.product([1, 2, 3], ['a', 'b'], [4, 5]): print(element)"} +{"question_id": 1405913, "score": 548, "creation_date": 1252595641, "tags": ["python", "32bit-64bit"], "instruction": "How do I determine if my python shell is executing in 32bit or 64bit?\n\nHow can I tell what mode the shell is in, from within the shell? 
I've tried looking at the platform module, but it seems only to tell you about \"the bit architecture and the linkage format used for the executable\". My binary is compiled as 64bit (I'm running on OS X 10.6), so it seems to always report 64bit even though I'm using the methods described here to force 32bit mode.", "output": "One way is to look at sys.maxsize as documented here: $ python-32 -c 'import sys;print(\"%x\" % sys.maxsize, sys.maxsize > 2**32)' ('7fffffff', False) $ python-64 -c 'import sys;print(\"%x\" % sys.maxsize, sys.maxsize > 2**32)' ('7fffffffffffffff', True) On Windows, run the same commands formatted as follows: python -c \"import sys;print(\\\"%x\\\" % sys.maxsize, sys.maxsize > 2**32)\" sys.maxsize was introduced in Python 2.6. If you need a test for older systems, this slightly more complicated test should work on all Python 2 and 3 releases: $ python-32 -c 'import struct;print( 8 * struct.calcsize(\"P\"))' 32 $ python-64 -c 'import struct;print( 8 * struct.calcsize(\"P\"))' 64 BTW, you might be tempted to use platform.architecture() for this. Unfortunately, its results are not always reliable, particularly in the case of OS X universal binaries. $ arch -x86_64 /usr/bin/python2.6 -c 'import sys,platform; print platform.architecture()[0], sys.maxsize > 2**32' 64bit True $ arch -i386 /usr/bin/python2.6 -c 'import sys,platform; print platform.architecture()[0], sys.maxsize > 2**32' 64bit False"} +{"question_id": 15445981, "score": 545, "creation_date": 1363412338, "tags": ["python", "https", "python-requests"], "instruction": "How do I disable the security certificate check in Python's Requests?\n\nI am using import requests requests.post(url='https://foo.example', data={'bar':'baz'}) but I get a request.exceptions.SSLError. The website has an expired certificate, but I am not sending sensitive data, so it doesn't matter to me. I would imagine there is an argument like 'verifiy=False' that I could use, but I can't seem to find it.", "output": "From the documentation: requests can also ignore verifying the SSL certificate if you set verify to False. >>> requests.get('https://kennethreitz.com', verify=False) If you're using a third-party module and want to disable the checks, here's a context manager that monkey patches requests and changes it so that verify=False is the default and suppresses the warning. import warnings import contextlib import requests from urllib3.exceptions import InsecureRequestWarning old_merge_environment_settings = requests.Session.merge_environment_settings @contextlib.contextmanager def no_ssl_verification(): opened_adapters = set() def merge_environment_settings(self, url, proxies, stream, verify, cert): # Verification happens only once per connection so we need to close # all the opened adapters once we're done. Otherwise, the effects of # verify=False persist beyond the end of this context manager. 
opened_adapters.add(self.get_adapter(url)) settings = old_merge_environment_settings(self, url, proxies, stream, verify, cert) settings['verify'] = False return settings requests.Session.merge_environment_settings = merge_environment_settings try: with warnings.catch_warnings(): warnings.simplefilter('ignore', InsecureRequestWarning) yield finally: requests.Session.merge_environment_settings = old_merge_environment_settings for adapter in opened_adapters: try: adapter.close() except: pass Here's how you use it: with no_ssl_verification(): requests.get('https://wrong.host.badssl.example/') print('It works') requests.get('https://wrong.host.badssl.example/', verify=True) print('Even if you try to force it to') requests.get('https://wrong.host.badssl.example/', verify=False) print('It resets back') session = requests.Session() session.verify = True with no_ssl_verification(): session.get('https://wrong.host.badssl.example/', verify=True) print('Works even here') try: requests.get('https://wrong.host.badssl.example/') except requests.exceptions.SSLError: print('It breaks') try: session.get('https://wrong.host.badssl.example/') except requests.exceptions.SSLError: print('It breaks here again') Note that this code closes all open adapters that handled a patched request once you leave the context manager. This is because requests maintains a per-session connection pool and certificate validation happens only once per connection so unexpected things like this will happen: >>> import requests >>> session = requests.Session() >>> session.get('https://wrong.host.badssl.example/', verify=False) /usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py:857: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings InsecureRequestWarning) >>> session.get('https://wrong.host.badssl.example/', verify=True) /usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py:857: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings InsecureRequestWarning) "} +{"question_id": 23944657, "score": 545, "creation_date": 1401406052, "tags": ["python", "methods", "arguments", "self"], "instruction": "\"TypeError: method() takes 1 positional argument but 2 were given\" but I only passed one\n\nIf I have a class ... class MyClass: def method(arg): print(arg) ... which I use to create an object ... my_object = MyClass() ... on which I call method(\"foo\") like so ... >>> my_object.method(\"foo\") Traceback (most recent call last): File \"\", line 1, in TypeError: method() takes exactly 1 positional argument (2 given) ... why does Python tell me I gave it two arguments, when I only gave one?", "output": "In Python, this: my_object.method(\"foo\") ... is syntactic sugar, which the interpreter translates behind the scenes into: MyClass.method(my_object, \"foo\") ... which, as you can see, does indeed have two arguments - it's just that the first one is implicit, from the point of view of the caller. This is because most methods do some work with the object they're called on, so there needs to be some way for that object to be referred to inside the method. 
By convention, this first argument is called self inside the method definition: class MyNewClass: def method(self, arg): print(self) print(arg) If you call method(\"foo\") on an instance of MyNewClass, it works as expected: >>> my_new_object = MyNewClass() >>> my_new_object.method(\"foo\") <__main__.MyNewClass object at 0x29045d0> foo Occasionally (but not often), you really don't care about the object that your method is bound to, and in that circumstance, you can decorate the method with the builtin staticmethod() function to say so: class MyOtherClass: @staticmethod def method(arg): print(arg) ... in which case you don't need to add a self argument to the method definition, and it still works: >>> my_other_object = MyOtherClass() >>> my_other_object.method(\"foo\") foo"} +{"question_id": 9786102, "score": 543, "creation_date": 1332243762, "tags": ["python", "parallel-processing"], "instruction": "How do I parallelize a simple Python loop?\n\nThis is probably a trivial question, but how do I parallelize the following loop in python? # setup output lists output1 = list() output2 = list() output3 = list() for j in range(0, 10): # calc individual parameter value parameter = j * offset # call the calculation out1, out2, out3 = calc_stuff(parameter = parameter) # put results into correct output list output1.append(out1) output2.append(out2) output3.append(out3) I know how to start single threads in Python but I don't know how to \"collect\" the results. Multiple processes would be fine too - whatever is easiest for this case. I'm using currently Linux but the code should run on Windows and Mac as-well. What's the easiest way to parallelize this code?", "output": "The CPython implementation currently has a global interpreter lock (GIL) that prevents threads of the same interpreter from concurrently executing Python code. This means CPython threads are useful for concurrent I/O-bound workloads, but usually not for CPU-bound workloads. The naming calc_stuff() indicates that your workload is CPU-bound, so you want to use multiple processes here (which is often the better solution for CPU-bound workloads anyway, regardless of the GIL). There are two easy ways of creating a process pool into the Python standard library. The first one is the multiprocessing module, which can be used like this: pool = multiprocessing.Pool(4) out1, out2, out3 = zip(*pool.map(calc_stuff, range(0, 10 * offset, offset))) Note that this won't work in the interactive interpreter due to the way multiprocessing is implemented. The second way to create a process pool is concurrent.futures.ProcessPoolExecutor: with concurrent.futures.ProcessPoolExecutor() as pool: out1, out2, out3 = zip(*pool.map(calc_stuff, range(0, 10 * offset, offset))) This uses the multiprocessing module under the hood, so it behaves identically to the first version."} +{"question_id": 11941817, "score": 540, "creation_date": 1344889807, "tags": ["python", "list", "dictionary", "loops"], "instruction": "How to avoid \"RuntimeError: dictionary changed size during iteration\" error?\n\nSuppose I have a dictionary of lists: d = {'a': [1], 'b': [1, 2], 'c': [], 'd':[]} Now I want to remove key-value pairs where the values are empty lists. I tried this code: for i in d: if not d[i]: d.pop(i) but this gives an error: RuntimeError: dictionary changed size during iteration I understand that entries can't be added or removed from a dictionary while iterating through it. How can I work around this limitation in order to solve the problem? 
See Modifying a Python dict while iterating over it for citations that this can cause problems, and why.", "output": "In Python 3.x and 2.x you can use use list to force a copy of the keys to be made: for i in list(d): In Python 2.x calling .keys made a copy of the keys that you could iterate over while modifying the dict: for i in d.keys(): but on Python 3.x, .keys returns a view object instead, so it won't fix your error."} +{"question_id": 2399307, "score": 537, "creation_date": 1268022917, "tags": ["python", "class", "inheritance", "constructor", "superclass"], "instruction": "How to invoke the super constructor in Python?\n\nclass A: def __init__(self): print(\"world\") class B(A): def __init__(self): print(\"hello\") B() # output: hello In all other languages I've worked with the super constructor is invoked implicitly. How does one invoke it in Python? I would expect super(self) but this doesn't work.", "output": "In line with the other answers, there are multiple ways to call super class methods (including the constructor), however in Python 3 the process has been simplified: Python 3 class A(object): def __init__(self): print(\"world\") class B(A): def __init__(self): print(\"hello\") super().__init__() Python 2 In Python 2, you have to call the slightly more verbose version super(, self), which is equivalent to super()as per the docs. class A(object): def __init__(self): print \"world\" class B(A): def __init__(self): print \"hello\" super(B, self).__init__()"} +{"question_id": 6578986, "score": 535, "creation_date": 1309849315, "tags": ["python", "json", "django"], "instruction": "How to convert JSON data into a Python object?\n\nI want to convert JSON data into a Python object. I receive JSON data objects from the Facebook API, which I want to store in my database. My current View in Django (Python) (request.POST contains the JSON): response = request.POST user = FbApiUser(user_id = response['id']) user.name = response['name'] user.username = response['username'] user.save() This works fine, but how do I handle complex JSON data objects? Wouldn't it be much better if I could somehow convert this JSON object into a Python object for easy use?", "output": "UPDATE With Python3, you can do it in one line, using SimpleNamespace and object_hook: import json from types import SimpleNamespace data = '{\"name\": \"John Smith\", \"hometown\": {\"name\": \"New York\", \"id\": 123}}' # Parse JSON into an object with attributes corresponding to dict keys. x = json.loads(data, object_hook=lambda d: SimpleNamespace(**d)) # Or, in Python 3.13+: # json.loads(data, object_hook=SimpleNamespace) print(x.name, x.hometown.name, x.hometown.id) OLD ANSWER (Python2) In Python2, you can do it in one line, using namedtuple and object_hook (but it's very slow with many nested objects): import json from collections import namedtuple data = '{\"name\": \"John Smith\", \"hometown\": {\"name\": \"New York\", \"id\": 123}}' # Parse JSON into an object with attributes corresponding to dict keys. 
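# (The object_hook callable is invoked for every JSON object the decoder
# finds, innermost objects first, so nested dicts come out as nested
# namedtuples too.)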
x = json.loads(data, object_hook=lambda d: namedtuple('X', d.keys())(*d.values())) print x.name, x.hometown.name, x.hometown.id or, to reuse this easily: def _json_object_hook(d): return namedtuple('X', d.keys())(*d.values()) def json2obj(data): return json.loads(data, object_hook=_json_object_hook) x = json2obj(data) If you want it to handle keys that aren't good attribute names, check out namedtuple's rename parameter."} +{"question_id": 9031783, "score": 535, "creation_date": 1327659518, "tags": ["python", "jupyter-notebook", "ipython", "warnings"], "instruction": "Hide all warnings in IPython\n\nI need to produce a screencast of an IPython session, and to avoid confusing viewers, I want to disable all warnings emitted by warnings.warn calls from different packages. Is there a way to configure the ipythonrc file to automatically disable all such warnings?", "output": "Place: import warnings warnings.filterwarnings('ignore') inside ~/.ipython/profile_default/startup/disable-warnings.py. Quite often it is useful to see a warning once. This can be set by: warnings.filterwarnings(action='once')"} +{"question_id": 12332975, "score": 531, "creation_date": 1347125582, "tags": ["python", "pip", "python-module", "pypi"], "instruction": "How can I Install a Python module with Pip programmatically (from my code)?\n\nI need to install a package from PyPI straight within my script. Is there maybe some module or distutils (distribute, pip, etc.) feature which allows me to just execute something like pypi.install('requests') and requests will be installed into my virtualenv?", "output": "The officially recommended way to install packages from a script is by calling pip's command-line interface via a subprocess. Most other answers presented here are not supported by pip. Furthermore since pip 10.x, all code has been moved to pip._internal precisely in order to make it clear to users that programmatic use of pip is not supported. Use sys.executable to ensure that you will call the same pip associated with the current runtime. For example: import subprocess import sys def install(package): subprocess.check_call([sys.executable, \"-m\", \"pip\", \"install\", package])"} +{"question_id": 276052, "score": 531, "creation_date": 1226246690, "tags": ["python", "system", "cpu", "status", "ram"], "instruction": "How can I get current CPU and RAM usage in Python?\n\nHow can I get the current system status (current CPU, RAM, free disk space, etc.) in Python? Ideally, it would work for both Unix-like and Windows platforms. There seems to be a few possible ways of extracting that from my search: Using a library such as PSI (that currently seems not actively developed and not supported on multiple platforms) or something like pystatgrab (again no activity since 2007 it seems and no support for Windows). Using platform specific code such as using a os.popen(\"ps\") or similar for the Unix-like systems and MEMORYSTATUS in ctypes.windll.kernel32 (see this recipe on ActiveState) for the Windows platform. One could put a Python class together with all those code snippets. 
It's not that those methods are bad, but is there already a well-supported, multi-platform way of doing the same thing?", "output": "The psutil library gives you information about CPU, RAM, etc., on a variety of platforms: psutil is a module providing an interface for retrieving information on running processes and system utilization (CPU, memory) in a portable way by using Python, implementing many functionalities offered by tools like ps, top and Windows task manager. It currently supports Linux, Windows, OSX, Sun Solaris, FreeBSD, OpenBSD and NetBSD, both 32-bit and 64-bit architectures, with Python versions from 2.6 to 3.5 (users of Python 2.4 and 2.5 may use 2.1.3 version). Some examples: #!/usr/bin/env python import psutil # gives a single float value psutil.cpu_percent() # gives an object with many fields psutil.virtual_memory() # you can convert that object to a dictionary dict(psutil.virtual_memory()._asdict()) # you can have the percentage of used RAM psutil.virtual_memory().percent 79.2 # you can calculate percentage of available memory psutil.virtual_memory().available * 100 / psutil.virtual_memory().total 20.8 Here's other documentation that provides more concepts and interest concepts: psutil documentation"} +{"question_id": 1261875, "score": 527, "creation_date": 1250012184, "tags": ["python", "closures", "global", "nested-function", "python-nonlocal"], "instruction": "What does \"nonlocal\" do in Python 3?\n\nWhat does nonlocal do in Python 3.x? To close debugging questions where OP needs nonlocal and doesn't realize it, please use Is it possible to modify variable in python that is in outer, but not global, scope? instead. Although Python 2 is officially unsupported as of January 1, 2020, if for some reason you are forced to maintain a Python 2.x codebase and need an equivalent to nonlocal, see nonlocal keyword in Python 2.x.", "output": "Compare this, without using nonlocal: x = 0 def outer(): x = 1 def inner(): x = 2 print(\"inner:\", x) inner() print(\"outer:\", x) outer() print(\"global:\", x) # inner: 2 # outer: 1 # global: 0 To this, using nonlocal, where inner()'s x is now also outer()'s x: x = 0 def outer(): x = 1 def inner(): nonlocal x x = 2 print(\"inner:\", x) inner() print(\"outer:\", x) outer() print(\"global:\", x) # inner: 2 # outer: 2 # global: 0 If we were to use global, it would bind x to the properly \"global\" value: x = 0 def outer(): x = 1 def inner(): global x x = 2 print(\"inner:\", x) inner() print(\"outer:\", x) outer() print(\"global:\", x) # inner: 2 # outer: 1 # global: 2"} +{"question_id": 796008, "score": 526, "creation_date": 1240885688, "tags": ["python", "postgresql", "datetime", "timezone"], "instruction": "Can't subtract offset-naive and offset-aware datetimes\n\nI have a timezone aware timestamptz field in PostgreSQL. When I pull data from the table, I then want to subtract the time right now so I can get it's age. The problem I'm having is that both datetime.datetime.now() and datetime.datetime.utcnow() seem to return timezone unaware timestamps, which results in me getting this error: TypeError: can't subtract offset-naive and offset-aware datetimes Is there a way to avoid this (preferably without a third-party module being used). EDIT: Thanks for the suggestions, however trying to adjust the timezone seems to give me errors.. 
so I'm just going to use timezone unaware timestamps in PG and always insert using: NOW() AT TIME ZONE 'UTC' That way all my timestamps are UTC by default (even though it's more annoying to do this).", "output": "Have you tried to remove the timezone awareness? From http://pytz.sourceforge.net/ naive = dt.replace(tzinfo=None) may have to add time zone conversion as well. edit: Please be aware the age of this answer. An answer involving ADDing the timezone info instead of removing it in python 3 is below. https://stackoverflow.com/a/25662061/93380"} +{"question_id": 4530069, "score": 526, "creation_date": 1293274789, "tags": ["python", "datetime", "date", "timezone"], "instruction": "How do I get a value of datetime.today() in Python that is \"timezone aware\"?\n\nI am trying to subtract one date value from the value of datetime.datetime.today() to calculate how long ago something was. But it complains: TypeError: can't subtract offset-naive and offset-aware datetimes The return value from datetime.datetime.today() doesn't seem to be \"timezone aware\", while my other date value is. How do I get a return value from datetime.datetime.today() that is timezone aware? The ideal solution would be for it to automatically know the timezone. Right now, it's giving me the time in local time, which happens to be PST, i.e. UTC - 8 hours. Worst case, is there a way I can manually enter a timezone value into the datetime object returned by datetime.datetime.today() and set it to UTC-8?", "output": "In the standard library, there is no cross-platform way to create aware timezones without creating your own timezone class. (Edit: Python 3.9 introduces zoneinfo in the standard library which does provide this functionality.) On Windows, there's win32timezone.utcnow(), but that's part of pywin32. I would rather suggest to use the pytz library, which has a constantly updated database of most timezones. Working with local timezones can be very tricky (see \"Further reading\" links below), so you may rather want to use UTC throughout your application, especially for arithmetic operations like calculating the difference between two time points. You can get the current date/time like so: import pytz from datetime import datetime datetime.utcnow().replace(tzinfo=pytz.utc) Mind that datetime.today() and datetime.now() return the local time, not the UTC time, so applying .replace(tzinfo=pytz.utc) to them would not be correct. Another nice way to do it is: datetime.now(pytz.utc) which is a bit shorter and does the same. Further reading/watching why to prefer UTC in many cases: pytz documentation What Every Developer Should Know About Time \u2013 development hints for many real-life use cases The Problem with Time & Timezones - Computerphile \u2013 funny, eye-opening explanation about the complexity of working with timezones (video)"} +{"question_id": 11475885, "score": 525, "creation_date": 1342202630, "tags": ["python", "regex"], "instruction": "python .replace() regex\n\nI am trying to do a grab everything after the '' tag and delete it, but my code doesn't seem to be doing anything. Does .replace() not support regex? z.write(article.replace('.+', ''))", "output": "No. Regular expressions in Python are handled by the re module. 
article = re.sub(r'(?is).+', '', article) In general: str_output = re.sub(regex_search_term, regex_replacement, str_input)"} +{"question_id": 14261614, "score": 523, "creation_date": 1357832523, "tags": ["python", "python-2.7", "yaml", "pip", "pyyaml"], "instruction": "How do I install the yaml package for Python?\n\nI have a Python program that uses YAML. I attempted to install it on a new server using pip install yaml and it returns the following: $ sudo pip install yaml Downloading/unpacking yaml Could not find any downloads that satisfy the requirement yaml No distributions at all found for yaml Storing complete log in /home/pa/.pip/pip.log How do I install the yaml package for Python? I'm running Python 2.7. (OS: Debian Wheezy)", "output": "You could try the search the feature on https://pypi.org/search (via a browser) and look for packages in PyPI with yaml in the short description. That reveals various packages, including PyYaml, yamltools, and PySyck, among others (Note that PySyck docs recommend using PyYaml, since syck is out of date). Now you know a specific package name, you can install it: $ pip install pyyaml If you want to install python yaml system-wide in linux, you can also use a package manager, like aptitude or yum: $ sudo apt-get install python-yaml $ sudo yum install python-yaml"} +{"question_id": 334655, "score": 522, "creation_date": 1228236551, "tags": ["python", "function", "dictionary", "parameters", "keyword"], "instruction": "Passing a dictionary to a function as keyword parameters\n\nI'd like to call a function in python using a dictionary with matching key-value pairs for the parameters. Here is some code: d = dict(param='test') def f(param): print(param) f(d) This prints {'param': 'test'} but I'd like it to just print test. I'd like it to work similarly for more parameters: d = dict(p1=1, p2=2) def f2(p1, p2): print(p1, p2) f2(d) Is this possible?", "output": "Figured it out for myself in the end. It is simple, I was just missing the ** operator to unpack the dictionary So my example becomes: d = dict(p1=1, p2=2) def f2(p1,p2): print(p1, p2) f2(**d)"} +{"question_id": 20001229, "score": 520, "creation_date": 1384518919, "tags": ["python", "json", "post", "flask"], "instruction": "How to get POSTed JSON in Flask?\n\nI'm trying to build a simple API using Flask, in which I now want to read some POSTed JSON. I do the POST with the Postman Chrome extension, and the JSON I POST is simply {\"text\":\"hello world\"}. I try to read the JSON using the following method: @app.route('/api/add_message/', methods=['GET', 'POST']) def add_message(uuid): content = request.json print(content) return uuid On the browser it correctly returns the UUID I put in the GET, but on the console, it just prints out None (where I expect it to print out the {\"text\":\"hello world\"}. Does anybody know how I can get the posted JSON from within the Flask method?", "output": "First of all, the .json attribute is a property that delegates to the request.get_json() method, which documents why you see None here. You need to set the request content type to application/json for the .json property and .get_json() method (with no arguments) to work as either will produce None otherwise. See the Flask Request documentation: The parsed JSON data if mimetype indicates JSON (application/json, see .is_json). You can tell request.get_json() to skip the content type requirement by passing it the force=True keyword argument. 
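For example, a minimal sketch (the route name and handler here are made up for illustration, not taken from the question):
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/api/echo', methods=['POST'])
def echo():
    # force=True parses the body as JSON even if the client forgot
    # the Content-Type: application/json header
    content = request.get_json(force=True)
    return jsonify(content)

if __name__ == '__main__':
    app.run()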
Note that if an exception is raised at this point (possibly resulting in a 415 Unsupported Media Type response), your JSON data is invalid. It is in some way malformed; you may want to check it with a JSON validator."} +{"question_id": 3853722, "score": 518, "creation_date": 1286181600, "tags": ["python", "argparse"], "instruction": "How to insert newlines on argparse help text?\n\nI'm using argparse in Python 2.7 for parsing input options. One of my options is a multiple choice. I want to make a list in its help text, e.g. from argparse import ArgumentParser parser = ArgumentParser(description='test') parser.add_argument('-g', choices=['a', 'b', 'g', 'd', 'e'], default='a', help=\"Some option, where\\n\" \" a = alpha\\n\" \" b = beta\\n\" \" g = gamma\\n\" \" d = delta\\n\" \" e = epsilon\") parser.parse_args() However, argparse strips all newlines and consecutive spaces. The result looks like ~/Downloads:52$ python2.7 x.py -h usage: x.py [-h] [-g {a,b,g,d,e}] test optional arguments: -h, --help show this help message and exit -g {a,b,g,d,e} Some option, where a = alpha b = beta g = gamma d = delta e = epsilon How to insert newlines in the help text?", "output": "Try using RawTextHelpFormatter to preserve all of your formatting: from argparse import RawTextHelpFormatter parser = ArgumentParser(description='test', formatter_class=RawTextHelpFormatter) It's similar to RawDescriptionHelpFormatter but instead of only applying to the description and epilog, RawTextHelpFormatter also applies to all help text (including arguments)."} +{"question_id": 716477, "score": 514, "creation_date": 1238817635, "tags": ["python"], "instruction": "join list of lists in python\n\nIs the a short syntax for joining a list of lists into a single list( or iterator) in python? For example I have a list as follows and I want to iterate over a,b and c. x = [[\"a\",\"b\"], [\"c\"]] The best I can come up with is as follows. result = [] [ result.extend(el) for el in x] for el in result: print el", "output": "import itertools a = [['a','b'], ['c']] print(list(itertools.chain.from_iterable(a))) This gives ['a', 'b', 'c']"} +{"question_id": 15705630, "score": 513, "creation_date": 1364568493, "tags": ["python", "pandas", "dataframe", "group-by", "max"], "instruction": "Get the row(s) which have the max value in groups using groupby\n\nHow do I find all rows in a pandas DataFrame which have the max value for count column, after grouping by ['Sp','Mt'] columns? 
Example 1: the following DataFrame: Sp Mt Value count 0 MM1 S1 a **3** 1 MM1 S1 n 2 2 MM1 S3 cb **5** 3 MM2 S3 mk **8** 4 MM2 S4 bg **10** 5 MM2 S4 dgd 1 6 MM4 S2 rd 2 7 MM4 S2 cb 2 8 MM4 S2 uyi **7** Expected output is to get the result rows whose count is max in each group, like this: Sp Mt Value count 0 MM1 S1 a **3** 2 MM1 S3 cb **5** 3 MM2 S3 mk **8** 4 MM2 S4 bg **10** 8 MM4 S2 uyi **7** Example 2: Sp Mt Value count 4 MM2 S4 bg 10 5 MM2 S4 dgd 1 6 MM4 S2 rd 2 7 MM4 S2 cb 8 8 MM4 S2 uyi 8 Expected output: Sp Mt Value count 4 MM2 S4 bg 10 7 MM4 S2 cb 8 8 MM4 S2 uyi 8", "output": "Firstly, we can get the max count for each group like this: In [1]: df Out[1]: Sp Mt Value count 0 MM1 S1 a 3 1 MM1 S1 n 2 2 MM1 S3 cb 5 3 MM2 S3 mk 8 4 MM2 S4 bg 10 5 MM2 S4 dgd 1 6 MM4 S2 rd 2 7 MM4 S2 cb 2 8 MM4 S2 uyi 7 In [2]: df.groupby(['Sp', 'Mt'])['count'].max() Out[2]: Sp Mt MM1 S1 3 S3 5 MM2 S3 8 S4 10 MM4 S2 7 Name: count, dtype: int64 To get the indices of the original DF you can do: In [3]: idx = df.groupby(['Sp', 'Mt'])['count'].transform(max) == df['count'] In [4]: df[idx] Out[4]: Sp Mt Value count 0 MM1 S1 a 3 2 MM1 S3 cb 5 3 MM2 S3 mk 8 4 MM2 S4 bg 10 8 MM4 S2 uyi 7 Note that if you have multiple max values per group, all will be returned. Update On a Hail Mary chance that this is what the OP is requesting: In [5]: df['count_max'] = df.groupby(['Sp', 'Mt'])['count'].transform(max) In [6]: df Out[6]: Sp Mt Value count count_max 0 MM1 S1 a 3 3 1 MM1 S1 n 2 3 2 MM1 S3 cb 5 5 3 MM2 S3 mk 8 8 4 MM2 S4 bg 10 10 5 MM2 S4 dgd 1 10 6 MM4 S2 rd 2 7 7 MM4 S2 cb 2 7 8 MM4 S2 uyi 7 7"} +{"question_id": 19913659, "score": 511, "creation_date": 1384195926, "tags": ["python", "pandas", "dataframe", "numpy"], "instruction": "How do I create a new column where the values are selected based on an existing column?\n\nHow do I add a color column to the following dataframe so that color='green' if Set == 'Z', and color='red' otherwise? Type Set 1 A Z 2 B Z 3 B X 4 C Y", "output": "If you only have two choices to select from then use np.where: df['color'] = np.where(df['Set']=='Z', 'green', 'red') For example, import pandas as pd import numpy as np df = pd.DataFrame({'Type':list('ABBC'), 'Set':list('ZZXY')}) df['color'] = np.where(df['Set']=='Z', 'green', 'red') print(df) yields Set Type color 0 Z A green 1 Z B green 2 X B red 3 Y C red If you have more than two conditions then use np.select. For example, if you want color to be yellow when (df['Set'] == 'Z') & (df['Type'] == 'A') otherwise blue when (df['Set'] == 'Z') & (df['Type'] == 'B') otherwise purple when (df['Type'] == 'B') otherwise black, then use df = pd.DataFrame({'Type':list('ABBC'), 'Set':list('ZZXY')}) conditions = [ (df['Set'] == 'Z') & (df['Type'] == 'A'), (df['Set'] == 'Z') & (df['Type'] == 'B'), (df['Type'] == 'B')] choices = ['yellow', 'blue', 'purple'] df['color'] = np.select(conditions, choices, default='black') print(df) which yields Set Type color 0 Z A yellow 1 Z B blue 2 X B purple 3 Y C black"} +{"question_id": 41060382, "score": 508, "creation_date": 1481286055, "tags": ["python", "pip", "anaconda", "environment"], "instruction": "Using Pip to install packages to an Anaconda environment\n\nOn Conda 4.2.13 Mac OS X v10.12.1 (Sierra) I am trying to install packages from pip to a fresh environment (virtual) created using anaconda. In the Anaconda documentation it says this is perfectly fine. It is done the same way as for virtualenv. Activate the environment where you want to put the program, then pip install a program... 
I created an empty environment in Anaconda like this: conda create -n shrink_venv Activate it: source activate shrink_venv I then can see in the terminal that I am working in my environment (shrink_venv). A problem is coming up when I try to install a package using pip: pip install Pillow Output: Requirement already satisfied (use --upgrade to upgrade): Pillow in /Library/Python/2.7/site-packages So I can see it thinks the requirement is satisfied from the system-wide package. So it seems the environment is not working correctly, definitely not like it said in the documentation. Am I doing something wrong here? Just a note, I know you can use conda install for the packages, but I have had an issue with Pillow from anaconda, so I wanted to get it from pip, and since the docs say that is fine. Output of which -a pip: /usr/local/bin/pip /Users/my_user/anaconda/bin/pip I see this is pretty common issue. I have found that the Conda environment doesn't play well with the PYTHONPATH. The system seems to always look in the PYTHONPATH locations even when you're using a Conda environment. Now, I always run unset PYTHONPATH when using a Conda environment, and it works much better. I'm on a Mac.", "output": "For others who run into this situation, I found this to be the most straightforward solution: Run conda create -n venv_name and conda activate venv_name, where venv_name is the name of your virtual environment. Run conda install pip. This will install pip to your venv directory. Find your anaconda directory, and find the actual venv folder. It should be somewhere like /anaconda/envs/venv_name/; or, you could also run conda activate venv_name. Install new packages by doing /anaconda/envs/venv_name/bin/pip install package_name; or, simply run pip install package_name. This should now successfully install packages using that virtual environment's pip!"} +{"question_id": 474528, "score": 507, "creation_date": 1232744825, "tags": ["python", "timer"], "instruction": "How to repeatedly execute a function every x seconds?\n\nI want to repeatedly execute a function in Python every 60 seconds forever (just like an NSTimer in Objective C or setTimeout in JS). This code will run as a daemon and is effectively like calling the python script every minute using a cron, but without requiring that to be set up by the user. In this question about a cron implemented in Python, the solution appears to effectively just sleep() for x seconds. I don't need such advanced functionality so perhaps something like this would work while True: # Code executed here time.sleep(60) Are there any foreseeable problems with this code?", "output": "If your program doesn't have a event loop already, use the sched module, which implements a general purpose event scheduler. 
import sched, time def do_something(scheduler): # schedule the next call first scheduler.enter(60, 1, do_something, (scheduler,)) print(\"Doing stuff...\") # then do your stuff my_scheduler = sched.scheduler(time.time, time.sleep) my_scheduler.enter(60, 1, do_something, (my_scheduler,)) my_scheduler.run() If you're already using an event loop library like asyncio, trio, tkinter, PyQt5, gobject, kivy, and many others - just schedule the task using your existing event loop library's methods, instead."} +{"question_id": 9295026, "score": 507, "creation_date": 1329315533, "tags": ["python", "matplotlib"], "instruction": "How to remove axis, legends, and white padding\n\nI would like to apply colormap to an image, and write the resulting image, without using axes, labels, titles, or anything automatically added by matplotlib. Here is what I did: def make_image(inputname,outputname): data = mpimg.imread(inputname)[:,:,0] fig = plt.imshow(data) fig.set_cmap('hot') fig.axes.get_xaxis().set_visible(False) fig.axes.get_yaxis().set_visible(False) plt.savefig(outputname) It successfully removes the axis of the figure, but the figure saved, presents a white padding, and a frame around the actual image. How can I remove them (at least the white padding)?", "output": "The axis('off') method resolves one of the problems more succinctly than separately changing each axis and border. It still leaves the white space around the border however. Adding bbox_inches='tight' to the savefig command almost gets you there; you can see in the example below that the white space left is much smaller, but still present. Newer versions of matplotlib may require bbox_inches=0 instead of the string 'tight' (via @episodeyang and @kadrach) from numpy import random import matplotlib.pyplot as plt data = random.random((5,5)) img = plt.imshow(data, interpolation='nearest') img.set_cmap('hot') plt.axis('off') plt.savefig(\"test.png\", bbox_inches='tight')"} +{"question_id": 47955263, "score": 507, "creation_date": 1514056516, "tags": ["python", "class", "python-3.7", "python-dataclasses"], "instruction": "What are data classes and how are they different from common classes?\n\nPEP 557 introduces data classes into the Python standard library. It says that by applying the @dataclass decorator shown below, it will generate \"among other things, an __init__()\". from dataclasses import dataclass @dataclass class InventoryItem: \"\"\"Class for keeping track of an item in inventory.\"\"\" name: str unit_price: float quantity_on_hand: int = 0 def total_cost(self) -> float: return self.unit_price * self.quantity_on_hand It also says dataclasses are \"mutable namedtuples with default\", but I don't understand what this means, nor how data classes are different from common classes. What are data classes and when is it best to use them?", "output": "Data classes are just regular classes that are geared towards storing state, rather than containing a lot of logic. Every time you create a class that mostly consists of attributes, you make a data class. What the dataclasses module does is to make it easier to create data classes. It takes care of a lot of boilerplate for you. This is especially useful when your data class must be hashable; because this requires a __hash__ method as well as an __eq__ method. 
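As a hedged aside (not part of the comparison that follows): if immutability is acceptable for your use case, frozen=True alone already gives you hashability, because frozen=True together with the default eq=True makes the generated class define __hash__. A minimal sketch, where the Point class is just an illustration:

from dataclasses import dataclass

@dataclass(frozen=True)
class Point:
    # frozen=True forbids assignment after __init__ and, together with the
    # default eq=True, makes the generated class hashable
    x: float
    y: float

print(hash(Point(1.0, 2.0)) == hash(Point(1.0, 2.0)))  # True

The unsafe_hash=True variant used further below keeps the class mutable instead.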
If you add a custom __repr__ method for ease of debugging, that can become quite verbose: class InventoryItem: '''Class for keeping track of an item in inventory.''' name: str unit_price: float quantity_on_hand: int = 0 def __init__( self, name: str, unit_price: float, quantity_on_hand: int = 0 ) -> None: self.name = name self.unit_price = unit_price self.quantity_on_hand = quantity_on_hand def total_cost(self) -> float: return self.unit_price * self.quantity_on_hand def __repr__(self) -> str: return ( 'InventoryItem(' f'name={self.name!r}, unit_price={self.unit_price!r}, ' f'quantity_on_hand={self.quantity_on_hand!r})' ) def __hash__(self) -> int: return hash((self.name, self.unit_price, self.quantity_on_hand)) def __eq__(self, other) -> bool: if not isinstance(other, InventoryItem): return NotImplemented return ( (self.name, self.unit_price, self.quantity_on_hand) == (other.name, other.unit_price, other.quantity_on_hand)) With dataclasses you can reduce it to: from dataclasses import dataclass @dataclass(unsafe_hash=True) class InventoryItem: '''Class for keeping track of an item in inventory.''' name: str unit_price: float quantity_on_hand: int = 0 def total_cost(self) -> float: return self.unit_price * self.quantity_on_hand (Example based on the PEP example). The same class decorator can also generate comparison methods (__lt__, __gt__, etc.) and handle immutability. namedtuple classes are also data classes, but are immutable by default (as well as being sequences). dataclasses are much more flexible in this regard, and can easily be structured such that they can fill the same role as a namedtuple class. The PEP was inspired by the attrs project, which can do even more (including slots, validators, converters, metadata, etc.). If you want to see some examples, I recently used dataclasses for several of my Advent of Code solutions, see the solutions for day 7, day 8, day 11 and day 20. If you want to use dataclasses module in Python versions < 3.7, then you could install the backported module (requires 3.6) or use the attrs project mentioned above."} +{"question_id": 49836676, "score": 506, "creation_date": 1523743950, "tags": ["python", "pip"], "instruction": "Error after upgrading pip: cannot import name 'main'\n\nWhenever I am trying to install any package using pip, I am getting this import error: guru@guru-notebook:~$ pip3 install numpy Traceback (most recent call last): File \"/usr/bin/pip3\", line 9, in from pip import main ImportError: cannot import name 'main' guru@guru-notebook:~$ cat `which pip3` #!/usr/bin/python3 # GENERATED BY DEBIAN import sys # Run the main entry point, similarly to how setuptools does it, but because # we didn't install the actual entry point from setup.py, don't use the # pkg_resources API. from pip import main if __name__ == '__main__': sys.exit(main()) It was working fine earlier, I am not sure why it is throwing this error. I have searched about this error, but can't find anything to fix it. Please let me know if you need any further detail, I will update my question.", "output": "You must have inadvertently upgraded your system pip (probably through something like sudo pip install pip --upgrade) pip 10.x adjusts where its internals are situated. The pip3 command you're seeing is one provided by your package maintainer (presumably debian based here?) and is not a file managed by pip. You can read more about this on pip's issue tracker You'll probably want to not upgrade your system pip and instead use a virtualenv. 
To recover the pip3 binary you'll need to sudo python3 -m pip uninstall pip && sudo apt install python3-pip --reinstall If you want to continue in \"unsupported territory\" (upgrading a system package outside of the system package manager), you can probably get away with python3 -m pip ... instead of pip3."} +{"question_id": 1732438, "score": 503, "creation_date": 1258153299, "tags": ["python", "unit-testing", "testing", "python-unittest"], "instruction": "How do I run all Python unit tests in a directory?\n\nI have a directory that contains my Python unit tests. Each unit test module is of the form test_*.py. I am attempting to make a file called all_test.py that will, you guessed it, run all files in the aforementioned test form and return the result. I have tried two methods so far; both have failed. I will show the two methods, and I hope someone out there knows how to actually do this correctly. For my first valiant attempt, I thought \"If I just import all my testing modules in the file, and then call this unittest.main() doodad, it will work, right?\" Well, turns out I was wrong. import glob import unittest testSuite = unittest.TestSuite() test_file_strings = glob.glob('test_*.py') module_strings = [str[0:len(str)-3] for str in test_file_strings] if __name__ == \"__main__\": unittest.main() This did not work, the result I got was: $ python all_test.py ---------------------------------------------------------------------- Ran 0 tests in 0.000s OK For my second try, I though, ok, maybe I will try to do this whole testing thing in a more \"manual\" fashion. So I attempted to do that below: import glob import unittest testSuite = unittest.TestSuite() test_file_strings = glob.glob('test_*.py') module_strings = [str[0:len(str)-3] for str in test_file_strings] [__import__(str) for str in module_strings] suites = [unittest.TestLoader().loadTestsFromName(str) for str in module_strings] [testSuite.addTest(suite) for suite in suites] print testSuite result = unittest.TestResult() testSuite.run(result) print result #Ok, at this point I have a result #How do I display it as the normal unit test command line output? if __name__ == \"__main__\": unittest.main() This also did not work, but it seems so close! $ python all_test.py ]>]>]> ---------------------------------------------------------------------- Ran 0 tests in 0.000s OK I seem to have a suite of some sort, and I can execute the result. I am a little concerned about the fact that it says I have only run=1, seems like that should be run=2, but it is progress. But how do I pass and display the result to main? Or how do I basically get it working so I can just run this file, and in doing so, run all the unit tests in this directory?", "output": "With Python 2.7 and higher you don't have to write new code or use third-party tools to do this; recursive test execution via the command line is built-in. Put an __init__.py in your test directory and: python -m unittest discover # or python -m unittest discover -s -p '*_test.py' You can read more in the python 2.7 or python 3.x unittest documentation."} +{"question_id": 9834038, "score": 502, "creation_date": 1332475529, "tags": ["python", "django", "sorting"], "instruction": "django order_by query set, ascending and descending\n\nHow can I order by descending my query set in django by date? 
Reserved.objects.all().filter(client=client_id).order_by('check_in') I just want to filter from descending all the Reserved by check_in date.", "output": "Reserved.objects.filter(client=client_id).order_by('-check_in') Notice the - before check_in. - before column name mean \"descending order\", while without - mean \"ascending\". Django Documentation"} +{"question_id": 26763344, "score": 501, "creation_date": 1415208274, "tags": ["python", "pandas", "dataframe", "datetime", "type-conversion"], "instruction": "Convert Pandas Column to DateTime\n\nI have one field in a pandas DataFrame that was imported as string format. It should be a datetime variable. How do I convert it to a datetime column, and then filter based on date? Example: raw_data = pd.DataFrame({'Mycol': ['05SEP2014:00:00:00.000']})", "output": "Use the to_datetime function, specifying a format to match your data. df['Mycol'] = pd.to_datetime(df['Mycol'], format='%d%b%Y:%H:%M:%S.%f')"} +{"question_id": 2566412, "score": 499, "creation_date": 1270208303, "tags": ["python", "search", "numpy"], "instruction": "Find nearest value in numpy array\n\nHow do I find the nearest value in a numpy array? Example: np.find_nearest(array, value)", "output": "import numpy as np def find_nearest(array, value): array = np.asarray(array) idx = (np.abs(array - value)).argmin() return array[idx] Example usage: array = np.random.random(10) print(array) # [ 0.21069679 0.61290182 0.63425412 0.84635244 0.91599191 0.00213826 # 0.17104965 0.56874386 0.57319379 0.28719469] print(find_nearest(array, value=0.5)) # 0.568743859261"} +{"question_id": 384759, "score": 497, "creation_date": 1229883692, "tags": ["python", "image", "numpy", "python-imaging-library", "numpy-ndarray"], "instruction": "How do I convert a PIL Image into a NumPy array?\n\nHow do I convert a PIL Image back and forth to a NumPy array so that I can do faster pixel-wise transformations than PIL's PixelAccess allows? I can convert it to a NumPy array via: pic = Image.open(\"foo.jpg\") pix = numpy.array(pic.getdata()).reshape(pic.size[0], pic.size[1], 3) But how do I load it back into the PIL Image after I've modified the array? pic.putdata() isn't working well.", "output": "You're not saying how exactly putdata() is not behaving. I'm assuming you're doing >>> pic.putdata(a) Traceback (most recent call last): File \"...blablabla.../PIL/Image.py\", line 1185, in putdata self.im.putdata(data, scale, offset) SystemError: new style getargs format but argument is not a tuple This is because putdata expects a sequence of tuples and you're giving it a numpy array. This >>> data = list(tuple(pixel) for pixel in pix) >>> pic.putdata(data) will work but it is very slow. As of PIL 1.1.6, the \"proper\" way to convert between images and numpy arrays is simply >>> pix = numpy.array(pic) although the resulting array is in a different format than yours (3-d array or rows/columns/rgb in this case). 
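For instance, a quick sketch of what that layout looks like, reusing the question's hypothetical foo.jpg:

import numpy
from PIL import Image

pic = Image.open("foo.jpg")   # any RGB image
pix = numpy.array(pic)        # rows x columns x channels
print(pix.shape)              # e.g. (height, width, 3)
print(pix.dtype)              # typically uint8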
Then, after you make your changes to the array, you should be able to do either pic.putdata(pix) or create a new image with Image.fromarray(pix)."} +{"question_id": 2891790, "score": 495, "creation_date": 1274619269, "tags": ["python", "numpy", "pretty-print"], "instruction": "Pretty-print a NumPy array without scientific notation and with given precision\n\nHow do I print formatted NumPy arrays in a way similar to this: x = 1.23456 print('%.3f' % x) If I want to print the numpy.ndarray of floats, it prints several decimals, often in 'scientific' format, which is rather hard to read even for low-dimensional arrays. However, numpy.ndarray apparently has to be printed as a string, i.e., with %s. Is there a solution for this?", "output": "Use numpy.set_printoptions to set the precision of the output: import numpy as np x = np.random.random(10) print(x) # [ 0.07837821 0.48002108 0.41274116 0.82993414 0.77610352 0.1023732 # 0.51303098 0.4617183 0.33487207 0.71162095] np.set_printoptions(precision=3) print(x) # [ 0.078 0.48 0.413 0.83 0.776 0.102 0.513 0.462 0.335 0.712] And suppress suppresses the use of scientific notation for small numbers: y = np.array([1.5e-10, 1.5, 1500]) print(y) # [ 1.500e-10 1.500e+00 1.500e+03] np.set_printoptions(suppress=True) print(y) # [ 0. 1.5 1500. ] To apply print options locally, using NumPy 1.15.0 or later, you could use the numpy.printoptions context manager. For example, inside the with-suite precision=3 and suppress=True are set: x = np.random.random(10) with np.printoptions(precision=3, suppress=True): print(x) # [ 0.073 0.461 0.689 0.754 0.624 0.901 0.049 0.582 0.557 0.348] But outside the with-suite the print options are back to default settings: print(x) # [ 0.07334334 0.46132615 0.68935231 0.75379645 0.62424021 0.90115836 # 0.04879837 0.58207504 0.55694118 0.34768638] If you are using an earlier version of NumPy, you can create the context manager yourself. For example, import numpy as np import contextlib @contextlib.contextmanager def printoptions(*args, **kwargs): original = np.get_printoptions() np.set_printoptions(*args, **kwargs) try: yield finally: np.set_printoptions(**original) x = np.random.random(10) with printoptions(precision=3, suppress=True): print(x) # [ 0.073 0.461 0.689 0.754 0.624 0.901 0.049 0.582 0.557 0.348] To prevent zeros from being stripped from the end of floats: np.set_printoptions now has a formatter parameter which allows you to specify a format function for each type. np.set_printoptions(formatter={'float': '{: 0.3f}'.format}) print(x) which prints [ 0.078 0.480 0.413 0.830 0.776 0.102 0.513 0.462 0.335 0.712] instead of [ 0.078 0.48 0.413 0.83 0.776 0.102 0.513 0.462 0.335 0.712]"} +{"question_id": 8949252, "score": 491, "creation_date": 1327102680, "tags": ["python", "attributeerror", "nonetype"], "instruction": "Why do I get AttributeError: 'NoneType' object has no attribute 'something'?\n\nI got this error: AttributeError: 'NoneType' object has no attribute 'something' What general scenarios might cause such an AttributeError, and how can I identify the problem? This is a special case of AttributeErrors. It merits separate treatment because there are a lot of ways to get an unexpected None value from the code, so it's typically a different problem; for other AttributeErrors, the problem might just as easily be the attribute name. See also What is a None value? and What is a 'NoneType' object? 
for an understanding of None and its type, NoneType.", "output": "NoneType means that instead of an instance of whatever Class or Object you think you're working with, you've actually got None. That usually means that an assignment or function call up above failed or returned an unexpected result."} +{"question_id": 21925671, "score": 490, "creation_date": 1392958956, "tags": ["python", "django", "dictionary", "django-models", "django-orm"], "instruction": "Convert Django Model object to dict with all of the fields intact\n\nHow does one convert a django Model object to a dict with all of its fields? All ideally includes foreign keys and fields with editable=False. Let me elaborate. Let's say I have a django model like the following: from django.db import models class OtherModel(models.Model): pass class SomeModel(models.Model): normal_value = models.IntegerField() readonly_value = models.IntegerField(editable=False) auto_now_add = models.DateTimeField(auto_now_add=True) foreign_key = models.ForeignKey(OtherModel, related_name=\"ref1\") many_to_many = models.ManyToManyField(OtherModel, related_name=\"ref2\") In the terminal, I have done the following: other_model = OtherModel() other_model.save() instance = SomeModel() instance.normal_value = 1 instance.readonly_value = 2 instance.foreign_key = other_model instance.save() instance.many_to_many.add(other_model) instance.save() I want to convert this to the following dictionary: {'auto_now_add': datetime.datetime(2015, 3, 16, 21, 34, 14, 926738, tzinfo=), 'foreign_key': 1, 'id': 1, 'many_to_many': [1], 'normal_value': 1, 'readonly_value': 2} Questions with unsatisfactory answers: Django: Converting an entire set of a Model's objects into a single dictionary How can I turn Django Model objects into a dictionary and still have their foreign keys?", "output": "There are many ways to convert an instance to a dictionary, with varying degrees of corner case handling and closeness to the desired result. 1. instance.__dict__ instance.__dict__ which returns {'_foreign_key_cache': , '_state': , 'auto_now_add': datetime.datetime(2018, 12, 20, 21, 34, 29, 494827, tzinfo=), 'foreign_key_id': 2, 'id': 1, 'normal_value': 1, 'readonly_value': 2} This is by far the simplest, but is missing many_to_many, foreign_key is misnamed, and it has two unwanted extra things in it. 2. model_to_dict from django.forms.models import model_to_dict model_to_dict(instance) which returns {'foreign_key': 2, 'id': 1, 'many_to_many': [], 'normal_value': 1} This is the only one with many_to_many, but is missing the uneditable fields. 3. model_to_dict(..., fields=...) from django.forms.models import model_to_dict model_to_dict(instance, fields=[field.name for field in instance._meta.fields]) which returns {'foreign_key': 2, 'id': 1, 'normal_value': 1} This is strictly worse than the standard model_to_dict invocation. 4. query_set.values() SomeModel.objects.filter(id=instance.id).values()[0] which returns {'auto_now_add': datetime.datetime(2018, 12, 20, 21, 34, 29, 494827, tzinfo=), 'foreign_key_id': 2, 'id': 1, 'normal_value': 1, 'readonly_value': 2} This is the same output as instance.__dict__ but without the extra fields. foreign_key_id is still wrong and many_to_many is still missing. 5. Custom Function The code for django's model_to_dict had most of the answer. 
It explicitly removed non-editable fields, so removing that check and getting the ids of foreign keys for many to many fields results in the following code which behaves as desired: from itertools import chain def to_dict(instance): opts = instance._meta data = {} for f in chain(opts.concrete_fields, opts.private_fields): data[f.name] = f.value_from_object(instance) for f in opts.many_to_many: data[f.name] = [i.id for i in f.value_from_object(instance)] return data While this is the most complicated option, calling to_dict(instance) gives us exactly the desired result: {'auto_now_add': datetime.datetime(2018, 12, 20, 21, 34, 29, 494827, tzinfo=), 'foreign_key': 2, 'id': 1, 'many_to_many': [2], 'normal_value': 1, 'readonly_value': 2} 6. Use Serializers Django Rest Framework's ModelSerializer allows you to build a serializer automatically from a model. from rest_framework import serializers class SomeModelSerializer(serializers.ModelSerializer): class Meta: model = SomeModel fields = \"__all__\" SomeModelSerializer(instance).data returns {'auto_now_add': '2018-12-20T21:34:29.494827Z', 'foreign_key': 2, 'id': 1, 'many_to_many': [2], 'normal_value': 1, 'readonly_value': 2} This is almost as good as the custom function, but auto_now_add is a string instead of a datetime object. Bonus Round: better model printing If you want a django model that has a better python command-line display, have your models child-class the following: from django.db import models from itertools import chain class PrintableModel(models.Model): def __repr__(self): return str(self.to_dict()) def to_dict(instance): opts = instance._meta data = {} for f in chain(opts.concrete_fields, opts.private_fields): data[f.name] = f.value_from_object(instance) for f in opts.many_to_many: data[f.name] = [i.id for i in f.value_from_object(instance)] return data class Meta: abstract = True So, for example, if we define our models as such: class OtherModel(PrintableModel): pass class SomeModel(PrintableModel): normal_value = models.IntegerField() readonly_value = models.IntegerField(editable=False) auto_now_add = models.DateTimeField(auto_now_add=True) foreign_key = models.ForeignKey(OtherModel, related_name=\"ref1\") many_to_many = models.ManyToManyField(OtherModel, related_name=\"ref2\") Calling SomeModel.objects.first() now gives output like this: {'auto_now_add': datetime.datetime(2018, 12, 20, 21, 34, 29, 494827, tzinfo=), 'foreign_key': 2, 'id': 1, 'many_to_many': [2], 'normal_value': 1, 'readonly_value': 2}"} +{"question_id": 17134716, "score": 490, "creation_date": 1371395698, "tags": ["python", "pandas", "dataframe", "datetime", "type-conversion"], "instruction": "Convert DataFrame column type from string to datetime\n\nHow can I convert a DataFrame column of strings (in dd/mm/yyyy format) to datetime dtype?", "output": "The easiest way is to use to_datetime: df['col'] = pd.to_datetime(df['col']) It also offers a dayfirst argument for European times (but beware this isn't strict). 
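As a small hedged sketch of dayfirst with a dd/mm/yyyy string (made-up value):

import pandas as pd

s = pd.Series(['23/05/2005'])
print(pd.to_datetime(s, dayfirst=True))  # 0   2005-05-23, dtype: datetime64[ns]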
Here it is in action: In [11]: pd.to_datetime(pd.Series(['05/23/2005'])) Out[11]: 0 2005-05-23 00:00:00 dtype: datetime64[ns] You can pass a specific format: In [12]: pd.to_datetime(pd.Series(['05/23/2005']), format=\"%m/%d/%Y\") Out[12]: 0 2005-05-23 dtype: datetime64[ns]"} +{"question_id": 17098654, "score": 489, "creation_date": 1371164736, "tags": ["python", "pandas", "dataframe"], "instruction": "How to reversibly store and load a Pandas dataframe to/from disk\n\nRight now I'm importing a fairly large CSV as a dataframe every time I run the script. Is there a good solution for keeping that dataframe constantly available in between runs so I don't have to spend all that time waiting for the script to run?", "output": "The easiest way is to pickle it using to_pickle: df.to_pickle(file_name) # where to save it, usually as a .pkl Then you can load it back using: df = pd.read_pickle(file_name) Note: before 0.11.1 save and load were the only way to do this (they are now deprecated in favor of to_pickle and read_pickle respectively). Another popular choice is to use HDF5 (pytables) which offers very fast access times for large datasets: import pandas as pd store = pd.HDFStore('store.h5') store['df'] = df # save it store['df'] # load it More advanced strategies are discussed in the cookbook. Since 0.13 there's also msgpack which may be be better for interoperability, as a faster alternative to JSON, or if you have python object/text-heavy data (see this question)."} +{"question_id": 13035764, "score": 488, "creation_date": 1351012264, "tags": ["python", "pandas", "dataframe", "duplicates"], "instruction": "Remove pandas rows with duplicate indices\n\nHow to remove rows with duplicate index values? In the weather DataFrame below, sometimes a scientist goes back and corrects observations -- not by editing the erroneous rows, but by appending a duplicate row to the end of a file. I'm reading some automated weather data from the web (observations occur every 5 minutes, and compiled into monthly files for each weather station.) After parsing a file, the DataFrame looks like: Sta Precip1hr Precip5min Temp DewPnt WindSpd WindDir AtmPress Date 2001-01-01 00:00:00 KPDX 0 0 4 3 0 0 30.31 2001-01-01 00:05:00 KPDX 0 0 4 3 0 0 30.30 2001-01-01 00:10:00 KPDX 0 0 4 3 4 80 30.30 2001-01-01 00:15:00 KPDX 0 0 3 2 5 90 30.30 2001-01-01 00:20:00 KPDX 0 0 3 2 10 110 30.28 Example of a duplicate case: import pandas as pd import datetime startdate = datetime.datetime(2001, 1, 1, 0, 0) enddate = datetime.datetime(2001, 1, 1, 5, 0) index = pd.date_range(start=startdate, end=enddate, freq='H') data1 = {'A' : range(6), 'B' : range(6)} data2 = {'A' : [20, -30, 40], 'B' : [-50, 60, -70]} df1 = pd.DataFrame(data=data1, index=index) df2 = pd.DataFrame(data=data2, index=index[:3]) df3 = df2.append(df1) df3 A B 2001-01-01 00:00:00 20 -50 2001-01-01 01:00:00 -30 60 2001-01-01 02:00:00 40 -70 2001-01-01 03:00:00 3 3 2001-01-01 04:00:00 4 4 2001-01-01 05:00:00 5 5 2001-01-01 00:00:00 0 0 2001-01-01 01:00:00 1 1 2001-01-01 02:00:00 2 2 And so I need df3 to eventually become: A B 2001-01-01 00:00:00 0 0 2001-01-01 01:00:00 1 1 2001-01-01 02:00:00 2 2 2001-01-01 03:00:00 3 3 2001-01-01 04:00:00 4 4 2001-01-01 05:00:00 5 5 I thought that adding a column of row numbers (df3['rownum'] = range(df3.shape[0])) would help me select the bottom-most row for any value of the DatetimeIndex, but I am stuck on figuring out the group_by or pivot (or ???) 
statements to make that work.", "output": "I would suggest using the duplicated method on the Pandas Index itself: df3 = df3.loc[~df3.index.duplicated(keep='first'), :] While all the other methods work, .drop_duplicates is by far the least performant for the provided example. Furthermore, while the groupby method is only slightly less performant, I find the duplicated method to be more readable. Using the sample data provided: >>> %timeit df3.reset_index().drop_duplicates(subset='index', keep='first').set_index('index') 1000 loops, best of 3: 1.54 ms per loop >>> %timeit df3.groupby(df3.index).first() 1000 loops, best of 3: 580 \u00b5s per loop >>> %timeit df3.loc[~df3.index.duplicated(keep='first'), :] 1000 loops, best of 3: 307 \u00b5s per loop Note that you can keep the last element by changing the keep argument to 'last'. It should also be noted that this method works with MultiIndex as well (using df1 as specified in Paul's example): >>> %timeit df1.groupby(level=df1.index.names).last() 1000 loops, best of 3: 771 \u00b5s per loop >>> %timeit df1.loc[~df1.index.duplicated(keep='last'), :] 1000 loops, best of 3: 365 \u00b5s per loop Edit: While the .loc is not necessary (per @lingjiankong's comment) I agree with @shadowtalker that being explicit rather than implicit about row selection can be helpful (especially in large codebases)."} +{"question_id": 647515, "score": 487, "creation_date": 1237108158, "tags": ["python", "windows", "path"], "instruction": "How can I find where Python is installed on Windows?\n\nI want to find out my Python installation path on Windows. For example: C:\\Python25 How can I find where Python is installed?", "output": "In your Python interpreter, type the following commands: >>> import os, sys >>> os.path.dirname(sys.executable) 'C:\\\\Python25' Also, you can club all these and use a single line command. Open cmd and enter following command python -c \"import os, sys; print(os.path.dirname(sys.executable))\""} +{"question_id": 34240703, "score": 487, "creation_date": 1449929007, "tags": ["python", "machine-learning", "tensorflow"], "instruction": "What are logits? What is the difference between softmax and softmax_cross_entropy_with_logits?\n\nIn the tensorflow API docs they use a keyword called logits. What is it? A lot of methods are written like: tf.nn.softmax(logits, name=None) If logits is just a generic Tensor input, why is it named logits? Secondly, what is the difference between the following two methods? tf.nn.softmax(logits, name=None) tf.nn.softmax_cross_entropy_with_logits(logits, labels, name=None) I know what tf.nn.softmax does, but not the other. An example would be really helpful.", "output": "The softmax+logits simply means that the function operates on the unscaled output of earlier layers and that the relative scale to understand the units is linear. It means, in particular, the sum of the inputs may not equal 1, that the values are not probabilities (you might have an input of 5). Internally, it first applies softmax to the unscaled output, and then computes the cross entropy of those values vs. what they \"should\" be as defined by the labels. tf.nn.softmax produces the result of applying the softmax function to an input tensor. The softmax \"squishes\" the inputs so that sum(input) = 1, and it does the mapping by interpreting the inputs as log-probabilities (logits) and then converting them back into raw probabilities between 0 and 1. 
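For intuition, a minimal NumPy sketch of that squashing (just the math, not TensorFlow's actual implementation):

import numpy as np

def softmax(logits):
    # shift by the max for numerical stability; the result is unchanged
    e = np.exp(logits - np.max(logits))
    return e / e.sum()

print(softmax(np.array([.1, .3, .5, .9])))  # ~[0.168 0.206 0.251 0.375], sums to 1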
The shape of output of a softmax is the same as the input: a = tf.constant(np.array([[.1, .3, .5, .9]])) print s.run(tf.nn.softmax(a)) [[ 0.16838508 0.205666 0.25120102 0.37474789]] See this answer for more about why softmax is used extensively in DNNs. tf.nn.softmax_cross_entropy_with_logits combines the softmax step with the calculation of the cross-entropy loss after applying the softmax function, but it does it all together in a more mathematically careful way. It's similar to the result of: sm = tf.nn.softmax(x) ce = cross_entropy(sm) The cross entropy is a summary metric: it sums across the elements. The output of tf.nn.softmax_cross_entropy_with_logits on a shape [2,5] tensor is of shape [2,1] (the first dimension is treated as the batch). If you want to do optimization to minimize the cross entropy AND you're softmaxing after your last layer, you should use tf.nn.softmax_cross_entropy_with_logits instead of doing it yourself, because it covers numerically unstable corner cases in the mathematically right way. Otherwise, you'll end up hacking it by adding little epsilons here and there. Edited 2016-02-07: If you have single-class labels, where an object can only belong to one class, you might now consider using tf.nn.sparse_softmax_cross_entropy_with_logits so that you don't have to convert your labels to a dense one-hot array. This function was added after release 0.6.0."} +{"question_id": 5352546, "score": 487, "creation_date": 1300454463, "tags": ["python", "python-3.x", "dictionary", "associative-array"], "instruction": "Extract a subset of key-value pairs from dictionary?\n\nI have a big dictionary object that has several key value pairs (about 16), but I am only interested in 3 of them. What is the best way (shortest/efficient/most elegant) to subset such dictionary? The best I know is: bigdict = {'a':1,'b':2,....,'z':26} subdict = {'l':bigdict['l'], 'm':bigdict['m'], 'n':bigdict['n']} I am sure there is a more elegant way than this.", "output": "You could try: {k: bigdict[k] for k in ('l', 'm', 'n')} Or in Python versions before 2.7: dict((k, bigdict[k]) for k in ('l', 'm', 'n')) I'm assuming that you know the keys are going to be in the dictionary. See the answer by H\u00e5vard S if you don't. Alternatively, as timbo points out in the comments, if you want a key that's missing in bigdict to map to None, you can do: {k: bigdict.get(k, None) for k in ('l', 'm', 'n')} If you're using Python 3, and you only want keys in the new dict that actually exist in the original one, you can use the fact to view objects implement some set operations: {k: bigdict[k] for k in bigdict.keys() & {'l', 'm', 'n'}}"} +{"question_id": 34162443, "score": 485, "creation_date": 1449596379, "tags": ["python", "matplotlib"], "instruction": "Why do many examples use `fig, ax = plt.subplots()`\n\nI'm learning to use matplotlib by studying examples, and a lot of examples seem to include a line like the following before creating a single plot... fig, ax = plt.subplots() Here are some examples... Modify tick label text http://matplotlib.org/examples/pylab_examples/boxplot_demo2.html I see this function used a lot, even though the example is only attempting to create a single chart. Is there some other advantage? The official demo for subplots() also uses f, ax = subplots when creating a single chart, and it only ever references ax after that. This is the code they use. 
# Just a figure and one subplot f, ax = plt.subplots() ax.plot(x, y) ax.set_title('Simple plot')", "output": "plt.subplots() is a function that returns a tuple containing a figure and axes object(s). Thus when using fig, ax = plt.subplots() you unpack this tuple into the variables fig and ax. Having fig is useful if you want to change figure-level attributes or save the figure as an image file later (e.g. with fig.savefig('yourfilename.png')). You certainly don't have to use the returned figure object but many people do use it later so it's common to see. Also, all axes objects (the objects that have plotting methods), have a parent figure object anyway, thus: fig, ax = plt.subplots() is more concise than this: fig = plt.figure() ax = fig.add_subplot(111)"} +{"question_id": 3431825, "score": 484, "creation_date": 1281210652, "tags": ["python", "md5", "checksum", "hashlib"], "instruction": "How to generate an MD5 checksum of a file in Python?\n\nIs there any simple way of generating (and checking) MD5 checksums of a list of files in Python? (I have a small program I'm working on, and I'd like to confirm the checksums of the files).", "output": "You can use hashlib.md5() Note that sometimes you won't be able to fit the whole file in memory. In that case, you'll have to read chunks sequentially and feed them to the md5 update method. In this example I'm using a 4096-byte (4 KiB) buffer, but you can adjust this size import hashlib def md5(fname): hash_md5 = hashlib.md5() with open(fname, \"rb\") as f: for chunk in iter(lambda: f.read(4096), b\"\"): hash_md5.update(chunk) return hash_md5.hexdigest() Note: hash_md5.hexdigest() will return the hex string representation for the digest, if you just need the packed bytes use return hash_md5.digest(), so you don't have to convert back."} +{"question_id": 6116978, "score": 483, "creation_date": 1306271723, "tags": ["python", "text", "replace"], "instruction": "How to replace multiple substrings of a string?\n\nI use the .replace function to replace multiple strings: string.replace(\"condition1\", \"\").replace(\"condition2\", \"text\") although that feels like bad syntax what is the proper way? like how in grep/regex you can do \\1 and \\2 to replace fields to certain search strings", "output": "Here is a short example that should do the trick with regular expressions: import re rep = {\"condition1\": \"\", \"condition2\": \"text\"} # define desired replacements here # use these three lines to do the replacement rep = dict((re.escape(k), v) for k, v in rep.items()) pattern = re.compile(\"|\".join(rep.keys())) text = pattern.sub(lambda m: rep[re.escape(m.group(0))], text) For example: >>> pattern.sub(lambda m: rep[re.escape(m.group(0))], \"(condition1) and --condition2--\") '() and --text--'"} +{"question_id": 8220801, "score": 479, "creation_date": 1321924500, "tags": ["python", "timeit"], "instruction": "How to use the timeit module?\n\nHow do I use timeit to compare the performance of my own functions such as insertion_sort and tim_sort?", "output": "The way timeit works is to run setup code once and then make repeated calls to a series of statements. So, if you want to test sorting, some care is required so that one pass at an in-place sort doesn't affect the next pass with already sorted data (that, of course, would make the Timsort really shine because it performs best when the data already partially ordered). 
Here is an example of how to set up a test for sorting: >>> import timeit >>> setup = ''' import random random.seed('slartibartfast') s = [random.random() for i in range(1000)] timsort = list.sort ''' >>> print(min(timeit.Timer('a=s[:]; timsort(a)', setup=setup).repeat(7, 1000))) 0.04485079200821929 Note that the series of statements makes a fresh copy of the unsorted data on every pass. Also, note the timing technique of running the measurement suite seven times and keeping only the best time \u2014 this can really help reduce measurement distortions due to other processes running on your system."} +{"question_id": 18425225, "score": 479, "creation_date": 1377399506, "tags": ["python", "variables"], "instruction": "Getting the name of a variable as a string\n\nI already read How to get a function name as a string?. How can I do the same for a variable? As opposed to functions, Python variables do not have the __name__ attribute. In other words, if I have a variable such as: foo = dict() foo['bar'] = 2 I am looking for a function/attribute, e.g. retrieve_name, where: >>> retrieve_name(foo) 'foo' This is in order to create a DataFrame in Pandas from this list, where the column names are given by the names of the actual dictionaries: # List of dictionaries for my DataFrame list_of_dicts = [n_jobs, users, queues, priorities] columns = [retrieve_name(d) for d in list_of_dicts]", "output": "TL;DR Use the Wrapper helper from python-varname: from varname.helpers import Wrapper foo = Wrapper(dict()) # foo.name == 'foo' # foo.value == {} foo.value['bar'] = 2 For list comprehension part, you can do: n_jobs = Wrapper() users = Wrapper() queues = Wrapper() priorities = Wrapper() list_of_dicts = [n_jobs, users, queues, priorities] columns = [d.name for d in list_of_dicts] # ['n_jobs', 'users', 'queues', 'priorities'] # REMEMBER that you have to access the by d.value I am the author of the python-varname package. Please let me know if you have any questions or you can submit issues on Github. The long answer Is it even possible? Yes and No. We are retrieving the variable names at runtime, so we need a function to be called to enable us to access the previous frames to retrieve the variable names. That's why we need a Wrapper there. In that function, at runtime, we are parsing the source code/AST nodes in the previous frames to get the exact variable name. However, the source code/AST nodes in the previous frames are not always available, or they could be modified by other environments (e.g: pytest's assert statement). One simple example is that the codes run via exec(). Even though we are still able to retrieve some information from the bytecode, it needs too much effort and it is also error-prone. How to do it? First of all, we need to identify which frame the variable is given. It's not always simply the direct previous frame. For example, we may have another wrapper for the function: from varname import varname def func(): return varname() def wrapped(): return func() x = wrapped() In the above example, we have to skip the frame inside wrapped to get to the right frame x = wrapped() so that we are able to locate x. The arguments frame and ignore of varname allow us to skip some of these intermediate frames. See more details in the README file and the API docs of the package. Then we need to parse the AST node to locate where the variable is assigned value (function call) to. It's not always just a simple assignment. Sometimes there could be complex AST nodes, for example, x = [wrapped()]. 
We need to identify the correct assignment by traversing the AST tree. How reliable is it? Once we identify the assignment node, it is reliable. varname is all depending on executing package to look for the node. The node executing detects is ensured to be the correct one (see also this). It partially works with environments where other AST magics apply, including pytest, ipython, macropy, birdseye, reticulate with R, etc. Neither executing nor varname is 100% working with those environments. Do we need a package to do it? Well, yes and no, again. If your scenario is simple, the code provided by @juan Isaza or @scohe001 probably is enough for you to work with the case where a variable is defined at the direct previous frame and the AST node is a simple assignment. You just need to go one frame back and retrieve the information there. However, if the scenario becomes complicated, or we need to adopt different application scenarios, you probably need a package like python-varname, to handle them. These scenarios may include to: present more friendly messages when the source code is not available or AST nodes are not accessible skip intermediate frames (allows the function to be wrapped or called in other intermediate frames) automatically ignores calls from built-in functions or libraries. For example: x = str(func()) retrieve multiple variable names on the left-hand side of the assignment etc. How about the f-string? Like the answer provided by @Aivar Paalberg. It's definitely fast and reliable. However, it's not at runtime, meaning that you have to know it's foo before you print the name out. But with varname, you don't have to know that variable is coming: from varname import varname def func(): return varname() # In external uses x = func() # 'x' y = func() # 'y' Finally python-varname is not only able to detect the variable name from an assignment, but also: Retrieve variable names directly, using nameof Detect next immediate attribute name, using will Fetch argument names/sources passed to a function using argname Read more from its documentation. However, the final word I want to say is that, try to avoid using it whenever you can. Because you can't make sure that the client code will run in an environment where the source node is available or AST node is accessible. And of course, it costs resources to parse the source code, identify the environment, retrieve the AST nodes and evaluate them when needed."} +{"question_id": 1074212, "score": 477, "creation_date": 1246539787, "tags": ["python", "sql", "django", "database", "django-database"], "instruction": "How can I see the raw SQL queries Django is running?\n\nIs there a way to show the SQL that Django is running while performing a query?", "output": "See the docs FAQ: \"How can I see the raw SQL queries Django is running?\" django.db.connection.queries contains a list of the SQL queries: from django.db import connection print(connection.queries) Querysets also have a query attribute containing the query to be executed: print(MyModel.objects.filter(name=\"my name\").query) Note that the output of the query is not valid SQL, because: \"Django never actually interpolates the parameters: it sends the query and the parameters separately to the database adapter, which performs the appropriate operations.\" From Django bug report #17741. Because of that, you should not send query output directly to a database. 
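As a small illustration (assuming DEBUG is True, since queries are only recorded then): each entry in connection.queries is a dict with 'sql' and 'time' keys, so you can tally them like this:

from django.db import connection

print(len(connection.queries))                            # number of queries recorded so far
print(sum(float(q['time']) for q in connection.queries))  # total reported time, in seconds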
If you need to reset the queries to, for example, see how many queries are running in a given period, you can use reset_queries from django.db: from django.db import reset_queries from django.db import connection reset_queries() # Run your query here print(connection.queries) >>> []"} +{"question_id": 9159757, "score": 477, "creation_date": 1328529560, "tags": ["python", "comments", "pip"], "instruction": "Can I add comments to a pip requirements file?\n\nI'd like to add comments for a few packages in a pip requirements file. (Just to explain why that package is on the list.) Can I do this? I'm imagining something like Babel==0.9.5 # translation CherryPy==3.2.0 # web server Creoleparser==0.7.1 # wiki formatting Genshi==0.5.1 # templating", "output": "Sure, you can, just use # pip docs: A line that begins with # is treated as a comment and ignored. Whitespace followed by a # causes the # and the remainder of the line to be treated as a comment."} +{"question_id": 938733, "score": 474, "creation_date": 1243936240, "tags": ["python", "memory-management"], "instruction": "Total memory used by Python process?\n\nIs there a way for a Python program to determine how much memory it's currently using? I've seen discussions about memory usage for a single object, but what I need is total memory usage for the process, so that I can determine when it's necessary to start discarding cached data.", "output": "Here is a useful solution that works for various operating systems, including Linux, Windows, etc.: import psutil process = psutil.Process() print(process.memory_info().rss) # in bytes Notes: do pip install psutil if it is not installed yet handy one-liner if you quickly want to know how many MiB your process takes: import os, psutil; print(psutil.Process(os.getpid()).memory_info().rss / 1024 ** 2) with Python 2.7 and psutil 5.6.3, it was process.memory_info()[0] instead (there was a change in the API later)."} +{"question_id": 6444548, "score": 474, "creation_date": 1308766628, "tags": ["python", "python-imaging-library", "image"], "instruction": "How do I get the picture size with PIL?\n\nHow do I get a size of a pictures sides with PIL or any other Python library?", "output": "from PIL import Image im = Image.open('whatever.png') width, height = im.size According to the documentation."} +{"question_id": 295135, "score": 469, "creation_date": 1226912527, "tags": ["python", "filenames", "slug", "sanitize"], "instruction": "Turn a string into a valid filename?\n\nI have a string that I want to use as a filename, so I want to remove all characters that wouldn't be allowed in filenames, using Python. I'd rather be strict than otherwise, so let's say I want to retain only letters, digits, and a small set of other characters like \"_-.() \". What's the most elegant solution? The filename needs to be valid on multiple operating systems (Windows, Linux and Mac OS) - it's an MP3 file in my library with the song title as the filename, and is shared and backed up between 3 machines.", "output": "You can look at the Django framework (but take their licence into account!) for how they create a \"slug\" from arbitrary text. A slug is URL- and filename- friendly. The Django text utils define a function, slugify(), that's probably the gold standard for this kind of thing. Essentially, their code is the following. import unicodedata import re def slugify(value, allow_unicode=False): \"\"\" Taken from https://github.com/django/django/blob/master/django/utils/text.py Convert to ASCII if 'allow_unicode' is False. 
Convert spaces or repeated dashes to single dashes. Remove characters that aren't alphanumerics, underscores, or hyphens. Convert to lowercase. Also strip leading and trailing whitespace, dashes, and underscores. \"\"\" value = str(value) if allow_unicode: value = unicodedata.normalize('NFKC', value) else: value = unicodedata.normalize('NFKD', value).encode('ascii', 'ignore').decode('ascii') value = re.sub(r'[^\\w\\s-]', '', value.lower()) return re.sub(r'[-\\s]+', '-', value).strip('-_') And the older version: def slugify(value): \"\"\" Normalizes string, converts to lowercase, removes non-alpha characters, and converts spaces to hyphens. \"\"\" import unicodedata value = unicodedata.normalize('NFKD', value).encode('ascii', 'ignore') value = unicode(re.sub('[^\\w\\s-]', '', value).strip().lower()) value = unicode(re.sub('[-\\s]+', '-', value)) # ... return value There's more, but I left it out, since it doesn't address slugification, but escaping."} +{"question_id": 2960772, "score": 467, "creation_date": 1275505552, "tags": ["python", "string", "variables", "string-interpolation"], "instruction": "How do I put a variable\u2019s value inside a string (interpolate it into the string)?\n\nI would like to put an int into a string. This is what I am doing at the moment: num = 40 plot.savefig('hanning40.pdf') #problem line I have to run the program for several different numbers, so I'd like to do a loop. But inserting the variable like this doesn't work: plot.savefig('hanning', num, '.pdf') How do I insert a variable into a Python string? See also If you are trying to create a file path, see How can I create a full path to a file from parts (e.g. path to the folder, name and extension)? for additional techniques. It will usually be better to use code that is specific to creating paths. If you are trying to construct an URL with variable data, do not use ordinary string formatting, because it is error-prone and more difficult than necessary. Specialized tools are available. See Add params to given URL in Python. If you are trying to construct a SQL query, do not use ordinary string formatting, because it is a major security risk. This is the cause of \"SQL injection\" which costs real companies huge amounts of money every year. See for example How to use variables in SQL statement in Python? for proper techniques. If you just want to print (output) the string, you can prepare it this way first, or if you don't need the string for anything else, print each piece of the output individually using a single call to print. See How can I print multiple things (fixed text and/or variable values) on the same line, all at once? for details on both approaches. See How can I concatenate str and int objects? for bugs caused by trying to use + to join \"strings\" when one of them isn't a string.", "output": "plot.savefig('hanning(%d).pdf' % num) The % operator, when following a string, allows you to insert values into that string via format codes (the %d in this case). 
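If you are on Python 3.6 or newer, an equivalent sketch with an f-string (or str.format on older versions) reads a little more directly:

num = 40
filename = f'hanning{num}.pdf'           # f-string, Python 3.6+
print(filename)                          # hanning40.pdf
filename = 'hanning{}.pdf'.format(num)   # str.format works on older versions too

The rest of this answer sticks with the %-style operator.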
For more details, see the Python documentation: printf-style String Formatting"} +{"question_id": 17534345, "score": 467, "creation_date": 1373311302, "tags": ["python", "constructor", "typeerror", "positional-argument"], "instruction": "Why do I get \"TypeError: Missing 1 required positional argument: 'self'\"?\n\nI have some code like: class Pump: def __init__(self): print(\"init\") def getPumps(self): pass p = Pump.getPumps() print(p) But I get an error like: Traceback (most recent call last): File \"C:\\Users\\Dom\\Desktop\\test\\test.py\", line 7, in p = Pump.getPumps() TypeError: getPumps() missing 1 required positional argument: 'self' Why doesn't __init__ seem to be called, and what does this exception mean? My understanding is that self is passed to the constructor and methods automatically. What am I doing wrong here? See Why do I get 'takes exactly 1 argument (2 given)' when trying to call a method? for the opposite problem.", "output": "To use the class, first create an instance, like so: p = Pump() p.getPumps() A full example: >>> class TestClass: ... def __init__(self): ... print(\"init\") ... def testFunc(self): ... print(\"Test Func\") ... >>> testInstance = TestClass() init >>> testInstance.testFunc() Test Func"} +{"question_id": 25445439, "score": 466, "creation_date": 1408705131, "tags": ["python", "python-3.x"], "instruction": "What does \"SyntaxError: Missing parentheses in call to 'print'\" mean in Python?\n\nWhen I try to use a print statement in Python, it gives me this error: >>> print \"Hello, World!\" File \"\", line 1 print \"Hello, World!\" ^ SyntaxError: Missing parentheses in call to 'print' What does that mean? See Getting SyntaxError for print with keyword argument end=' ' for the opposite problem. See Python 3 print without parenthesis for workarounds, and confirmation that print cannot be made to work like a statement in Python 3.", "output": "The error message SyntaxError: Missing parentheses in call to 'print' occurs when you attempt to use Python 3 syntax with the Python 2 print statement. Example: print \"Hello, World!\" In Python 3, the print statement was replaced with a print() function, requiring parentheses around the value to be printed. Solution print(\"Hello, World!\") In Python 3, the print statement was replaced with a print() function, requiring parentheses around the value to be printed. >>> print(\"Hello, World!\") Hello, World! In earlier versions of Python 3, the interpreter just reports a generic syntax error, without providing any useful hints as to what might be going wrong: >>> print \"Hello, World!\" File \"\", line 1 print \"Hello, World!\" ^ SyntaxError: invalid syntax As for why print became an ordinary function in Python 3, that didn't relate to the basic form of the statement, but rather to how you did more complicated things like printing multiple items to stderr with a trailing space rather than ending the line. In Python 2: >>> import sys >>> print >> sys.stderr, 1, 2, 3,; print >> sys.stderr, 4, 5, 6 1 2 3 4 5 6 In Python 3: >>> import sys >>> print(1, 2, 3, file=sys.stderr, end=\" \"); print(4, 5, 6, file=sys.stderr) 1 2 3 4 5 6 Starting with the Python 3.6.3 release in September 2017, some error messages related to the Python 2.x print syntax have been updated to recommend their Python 3.x counterparts: >>> print \"Hello!\" File \"\", line 1 print \"Hello!\" ^ SyntaxError: Missing parentheses in call to 'print'. Did you mean print(\"Hello!\")? 
Since the \"Missing parentheses in call to print\" case is a compile time syntax error and hence has access to the raw source code, it's able to include the full text on the rest of the line in the suggested replacement. However, it doesn't currently try to work out the appropriate quotes to place around that expression (that's not impossible, just sufficiently complicated that it hasn't been done). The TypeError raised for the right shift operator has also been customised: >>> print >> sys.stderr Traceback (most recent call last): File \"\", line 1, in TypeError: unsupported operand type(s) for >>: 'builtin_function_or_method' and '_io.TextIOWrapper'. Did you mean \"print(, file=)\"? Since this error is raised when the code runs, rather than when it is compiled, it doesn't have access to the raw source code, and hence uses meta-variables ( and ) in the suggested replacement expression instead of whatever the user actually typed. Unlike the syntax error case, it's straightforward to place quotes around the Python expression in the custom right shift error message."} +{"question_id": 5384914, "score": 464, "creation_date": 1300749809, "tags": ["python", "dictionary"], "instruction": "How to delete items from a dictionary while iterating over it?\n\nCan I delete items from a dictionary in Python while iterating over it? I want to remove elements that don't meet a certain condition from the dictionary, instead of creating an entirely new dictionary. Is the following a good solution, or are there better ways? for k, v in mydict.items(): if k == val: del mydict[k]", "output": "For Python 3+: >>> mydict {'four': 4, 'three': 3, 'one': 1} >>> for k in list(mydict.keys()): ... if mydict[k] == 3: ... del mydict[k] >>> mydict {'four': 4, 'one': 1} The other answers work fine with Python 2 but raise a RuntimeError for Python 3: RuntimeError: dictionary changed size during iteration. This happens because mydict.keys() returns an iterator not a list. As pointed out in comments simply convert mydict.keys() to a list by list(mydict.keys()) and it should work. For Python 2: A simple test in the console shows you cannot modify a dictionary while iterating over it: >>> mydict = {'one': 1, 'two': 2, 'three': 3, 'four': 4} >>> for k, v in mydict.iteritems(): ... if k == 'two': ... del mydict[k] ------------------------------------------------------------ Traceback (most recent call last): File \"\", line 1, in RuntimeError: dictionary changed size during iteration As stated in delnan's answer, deleting entries causes problems when the iterator tries to move onto the next entry. Instead, use the keys() method to get a list of the keys and work with that: >>> for k in mydict.keys(): ... if k == 'two': ... del mydict[k] >>> mydict {'four': 4, 'three': 3, 'one': 1} If you need to delete based on the items value, use the items() method instead: >>> for k, v in mydict.items(): ... if v == 3: ... del mydict[k] >>> mydict {'four': 4, 'one': 1}"} +{"question_id": 51710037, "score": 463, "creation_date": 1533566033, "tags": ["python", "python-typing"], "instruction": "How should I use the Optional type hint?\n\nI'm trying to understand how to use the Optional type hint. From PEP-484, I know I can use Optional for def test(a: int = None) either as def test(a: Union[int, None]) or def test(a: Optional[int]). But how about following examples? 
def test(a : dict = None): #print(a) ==> {'a': 1234} #or #print(a) ==> None def test(a : list = None): #print(a) ==> [1,2,3,4, 'a', 'b'] #or #print(a) ==> None If Optional[type] seems to mean the same thing as Union[type, None], why should I use Optional[] at all?", "output": "If your code is designed to work with Python 3.10 or newer, you want to use the PEP 604 syntax, using ... | None union syntax, and not use typing.Optional: def test(a: dict[Any, Any] | None = None) -> None: #print(a) ==> {'a': 1234} #or #print(a) ==> None def test(a: list[Any] | None = None) -> None: #print(a) ==> [1, 2, 3, 4, 'a', 'b'] #or #print(a) ==> None Code that still supports older Python versions can still stick to using Optional. Optional[...] is a shorthand notation for Union[..., None], telling the type checker that either an object of the specific type is required, or None is required. ... stands for any valid type hint, including complex compound types or a Union[] of more types. Whenever you have a keyword argument with default value None, you should use Optional. So for your two examples, you have dict and list container types, but the default value for the a keyword argument shows that None is permitted too so use Optional[...]: from typing import Optional def test(a: Optional[dict] = None) -> None: #print(a) ==> {'a': 1234} #or #print(a) ==> None def test(a: Optional[list] = None) -> None: #print(a) ==> [1, 2, 3, 4, 'a', 'b'] #or #print(a) ==> None There is technically no difference between using Optional[] on a Union[], or just adding None to the Union[]. So Optional[Union[str, int]] and Union[str, int, None] are exactly the same thing. Personally, I'd stick with always using Optional[] when setting the type for a keyword argument that uses = None to set a default value, this documents the reason why None is allowed better. Moreover, it makes it easier to move the Union[...] part into a separate type alias, or to later remove the Optional[...] part if an argument becomes mandatory. For example, say you have from typing import Optional, Union def api_function(optional_argument: Optional[Union[str, int]] = None) -> None: \"\"\"Frob the fooznar. If optional_argument is given, it must be an id of the fooznar subwidget to filter on. The id should be a string, or for backwards compatibility, an integer is also accepted. \"\"\" then documentation is improved by pulling out the Union[str, int] into a type alias: from typing import Optional, Union # subwidget ids used to be integers, now they are strings. Support both. SubWidgetId = Union[str, int] def api_function(optional_argument: Optional[SubWidgetId] = None) -> None: \"\"\"Frob the fooznar. If optional_argument is given, it must be an id of the fooznar subwidget to filter on. The id should be a string, or for backwards compatibility, an integer is also accepted. \"\"\" The refactor to move the Union[] into an alias was made all the much easier because Optional[...] was used instead of Union[str, int, None]. The None value is not a 'subwidget id' after all, it's not part of the value, None is meant to flag the absence of a value. Side note: Unless your code only has to support Python 3.9 or newer, you want to avoid using the standard library container types in type hinting, as you can't say anything about what types they must contain. So instead of dict and list, use typing.Dict and typing.List, respectively. 
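For instance (a minimal sketch added here for illustration, not part of the original answer; the function names are made up), the parameterised aliases from typing let you state what the containers hold: from typing import Dict, List, Optional

def frob_counts(counts: Optional[Dict[str, int]] = None) -> None:
    # a dict mapping string keys to integer values, or None
    ...

def frob_items(items: Optional[List[int]] = None) -> None:
    # a list of integers, or None
    ...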
And when only reading from a container type, you may just as well accept any immutable abstract container type; lists and tuples are Sequence objects, while dict is a Mapping type: from typing import Mapping, Optional, Sequence, Union def test(a: Optional[Mapping[str, int]] = None) -> None: \"\"\"accepts an optional map with string keys and integer values\"\"\" # print(a) ==> {'a': 1234} # or # print(a) ==> None def test(a: Optional[Sequence[Union[int, str]]] = None) -> None: \"\"\"accepts an optional sequence of integers and strings\"\"\" # print(a) ==> [1, 2, 3, 4, 'a', 'b'] # or # print(a) ==> None In Python 3.9 and up, the standard container types have all been updated to support using them in type hints, see PEP 585. But, while you now can use dict[str, int] or list[Union[int, str]], you still may want to use the more expressive Mapping and Sequence annotations to indicate that a function won't be mutating the contents (they are treated as 'read only'), and that the functions would work with any object that works as a mapping or sequence, respectively. Python 3.10 introduces the | union operator into type hinting, see PEP 604. Instead of Union[str, int] you can write str | int. In line with other type-hinted languages, the preferred (and more concise) way to denote an optional argument in Python 3.10 and up, is now Type | None, e.g. str | None or list | None."} +{"question_id": 14432557, "score": 462, "creation_date": 1358742517, "tags": ["python", "matplotlib", "text", "scatter-plot", "annotate"], "instruction": "Scatter plot with different text at each data point\n\nI am trying to make a scatter plot and annotate data points with different numbers from a list. So, for example, I want to plot y vs x and annotate with corresponding numbers from n. y = [2.56422, 3.77284, 3.52623, 3.51468, 3.02199] x = [0.15, 0.3, 0.45, 0.6, 0.75] n = [58, 651, 393, 203, 123] ax = fig.add_subplot(111) ax1.scatter(z, y, fmt='o') Any ideas?", "output": "I'm not aware of any plotting method which takes arrays or lists but you could use annotate() while iterating over the values in n. import matplotlib.pyplot as plt x = [0.15, 0.3, 0.45, 0.6, 0.75] y = [2.56422, 3.77284, 3.52623, 3.51468, 3.02199] n = [58, 651, 393, 203, 123] fig, ax = plt.subplots() ax.scatter(x, y) for i, txt in enumerate(n): ax.annotate(txt, (x[i], y[i])) There are a lot of formatting options for annotate(), see the matplotlib website:"} +{"question_id": 11887762, "score": 462, "creation_date": 1344529370, "tags": ["python", "sorting", "version", "string-comparison"], "instruction": "How do I compare version numbers in Python?\n\nI am walking a directory that contains eggs to add those eggs to the sys.path. If there are two versions of the same .egg in the directory, I want to add only the latest one. I have a regular expression r\"^(?P<name>\\w+)-(?P<version>[\\d\\.]+)-.+\\.egg$\" to extract the name and version from the filename. The problem is comparing the version number, which is a string like 2.3.1. Since I'm comparing strings, 2 sorts above 10, but that's not correct for versions. >>> \"2.3.1\" > \"10.1.1\" True I could do some splitting, parsing, casting to int, etc., and I would eventually get a workaround. But this is Python, not Java. Is there an elegant way to compare version strings?", "output": "Use packaging.version.Version which supports PEP 440 style ordering of version strings. 
>>> # pip install packaging >>> from packaging.version import Version >>> Version(\"2.3.1\") < Version(\"10.1.2\") True >>> Version(\"1.3.a4\") < Version(\"10.1.2\") True An ancient and now deprecated method you might encounter is distutils.version, it's undocumented and conforms only to the superseded PEP 386; >>> from distutils.version import LooseVersion, StrictVersion >>> LooseVersion(\"2.3.1\") < LooseVersion(\"10.1.2\") True >>> StrictVersion(\"2.3.1\") < StrictVersion(\"10.1.2\") True >>> StrictVersion(\"1.3.a4\") Traceback (most recent call last): ... ValueError: invalid version number '1.3.a4' As you can see it sees valid PEP 440 versions as \u201cnot strict\u201d and therefore doesn\u2019t match modern Python\u2019s notion of what a valid version is. As distutils.version is undocumented, here are the relevant docstrings."} +{"question_id": 436198, "score": 461, "creation_date": 1231781016, "tags": ["python", "python-3.x", "include", "execfile"], "instruction": "What alternative is there to execfile in Python 3? / How to include a Python file?\n\nIt seems like in Python 3 they've removed all of the easy ways to quickly load a script, by removing execfile(). What alternative is there to include a Python file in another one, and execute it?", "output": "According to the Python documentation, instead of this: execfile(\"./filename\") Use this: exec(open(\"./filename\").read()) See Python's docs for: What\u2019s New In Python 3.0 execfile exec"} +{"question_id": 28930465, "score": 461, "creation_date": 1425840584, "tags": ["python", "numpy", "multidimensional-array", "flatten", "numpy-ndarray"], "instruction": "What is the difference between flatten and ravel functions in numpy?\n\nimport numpy as np y = np.array(((1,2,3),(4,5,6),(7,8,9))) OUTPUT: print(y.flatten()) [1 2 3 4 5 6 7 8 9] print(y.ravel()) [1 2 3 4 5 6 7 8 9] Both function return the same list. Then what is the need of two different functions performing same job.", "output": "The current API is that: flatten always returns a copy. ravel returns a contiguous view of the original array whenever possible. This isn't visible in the printed output, but if you modify the array returned by ravel, it may modify the entries in the original array. If you modify the entries in an array returned from flatten this will never happen. ravel will often be faster since no memory is copied, but you have to be more careful about modifying the array it returns. reshape((-1,)) gets a view whenever the strides of the array allow it even if that means you don't always get a contiguous array."} +{"question_id": 2827623, "score": 460, "creation_date": 1273761243, "tags": ["python", "class", "object", "attributes"], "instruction": "How can I create an object and add attributes to it?\n\nI want to create a dynamic object in Python and then add attributes to it. This didn't work: obj = object() obj.somefield = \"somevalue\" AttributeError: 'object' object has no attribute 'somefield' For details on why it doesn't work, see Can't set attributes on instance of \"object\" class.", "output": "You could use my ancient Bunch recipe, but if you don't want to make a \"bunch class\", a very simple one already exists in Python -- all functions can have arbitrary attributes (including lambda functions). 
So, the following works: obj = lambda: None obj.somefield = 'somevalue' Whether the loss of clarity compared to the venerable Bunch recipe is OK, is a style decision I will of course leave up to you."} +{"question_id": 568962, "score": 458, "creation_date": 1235123891, "tags": ["python", "arrays", "numpy"], "instruction": "How do I create an empty array and then append to it in NumPy?\n\nI want to create an empty array and append items to it, one at a time. xs = [] for item in data: xs.append(item) Can I use this list-style notation with NumPy arrays?", "output": "That is the wrong mental model for using NumPy efficiently. NumPy arrays are stored in contiguous blocks of memory. To append rows or columns to an existing array, the entire array needs to be copied to a new block of memory, creating gaps for the new elements to be stored. This is very inefficient if done repeatedly. Instead of appending rows, allocate a suitably sized array, and then assign to it row-by-row: >>> import numpy as np >>> a = np.zeros(shape=(3, 2)) >>> a array([[ 0., 0.], [ 0., 0.], [ 0., 0.]]) >>> a[0] = [1, 2] >>> a[1] = [3, 4] >>> a[2] = [5, 6] >>> a array([[ 1., 2.], [ 3., 4.], [ 5., 6.]])"} +{"question_id": 37139786, "score": 458, "creation_date": 1462886379, "tags": ["python", "python-3.x", "package"], "instruction": "Is __init__.py not required for packages in Python 3.3+\n\nI am using Python 3.5.1. I read the document and the package section here: https://docs.python.org/3/tutorial/modules.html#packages Now, I have the following structure: /home/wujek/Playground/a/b/module.py module.py: class Foo: def __init__(self): print('initializing Foo') Now, while in /home/wujek/Playground: ~/Playground $ python3 >>> import a.b.module >>> a.b.module.Foo() initializing Foo Similarly, now in home, superfolder of Playground: ~ $ PYTHONPATH=Playground python3 >>> import a.b.module >>> a.b.module.Foo() initializing Foo Actually, I can do all kinds of stuff: ~ $ PYTHONPATH=Playground python3 >>> import a >>> import a.b >>> import Playground.a.b Why does this work? I though there needed to be __init__.py files (empty ones would work) in both a and b for module.py to be importable when the Python path points to the Playground folder? This seems to have changed from Python 2.7: ~ $ PYTHONPATH=Playground python >>> import a ImportError: No module named a >>> import a.b ImportError: No module named a.b >>> import a.b.module ImportError: No module named a.b.module With __init__.py in both ~/Playground/a and ~/Playground/a/b it works fine.", "output": "Overview @Mike's answer is correct but too imprecise. It is true that Python 3.3+ supports Implicit Namespace Packages that allows it to create a package without an __init__.py file. This is called a namespace package in contrast to a regular package which does have an __init__.py file (empty or not empty). However, creating a namespace package should ONLY be done if there is a need for it. For most use cases and developers out there, this doesn't apply so you should stick with EMPTY __init__.py files regardless. 
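As a minimal sketch of that recommendation (directory and module names invented purely for illustration, not taken from the original answer), a regular package is just a folder with an empty __init__.py: mypackage/
    __init__.py        # empty file; marks mypackage as a regular package
    helpers.py         # contains: def greet(): return "hello"

# used as:
from mypackage.helpers import greet
print(greet())  # hello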
Namespace package use case To demonstrate the difference between the two types of python packages, lets look at the following example: google_pubsub/ <- Package 1 google/ <- Namespace package (there is no __init__.py) cloud/ <- Namespace package (there is no __init__.py) pubsub/ <- Regular package (with __init__.py) __init__.py <- Required to make the package a regular package foo.py google_storage/ <- Package 2 google/ <- Namespace package (there is no __init__.py) cloud/ <- Namespace package (there is no __init__.py) storage/ <- Regular package (with __init__.py) __init__.py <- Required to make the package a regular package bar.py google_pubsub and google_storage are separate packages but they share the same namespace google/cloud. In order to share the same namespace, it is required to make each directory of the common path a namespace package, i.e. google/ and cloud/. This should be the only use case for creating namespace packages, otherwise, there is no need for it. It's crucial that there are no __init__py files in the google and google/cloud directories so that both directories can be interpreted as namespace packages. In Python 3.3+ any directory on the sys.path with a name that matches the package name being looked for will be recognized as contributing modules and subpackages to that package. As a result, when you import both from google_pubsub and google_storage, the Python interpreter will be able to find them. This is different from regular packages which are self-contained meaning all parts live in the same directory hierarchy. When importing a package and the Python interpreter encounters a subdirectory on the sys.path with an __init__.py file, then it will create a single directory package containing only modules from that directory, rather than finding all appropriately named subdirectories outside that directory. This is perfectly fine for packages that don't want to share a namespace. I highly recommend taking a look at Traps for the Unwary in Python\u2019s Import System to get a better understanding of how Python importing behaves with regular and namespace package and what __init__.py traps to watch out for. Summary Only skip __init__.py files if you want to create namespace packages. Only create namespace packages if you have different libraries that reside in different locations and you want them each to contribute a subpackage to the parent package, i.e. the namespace package. Keep on adding empty __init__.py to your directories because 99% of the time you just want to create regular packages. Also, Python tools out there such as mypy and pytest require empty __init__.py files to interpret the code structure accordingly. This can lead to weird errors if not done with care. Resources My answer only touches the surface of how regular packages and namespace packages work, so take a look at the following resources for further information: PEP 420 -- Implicit Namespace Packages The import system - Regular packages The import system - Namespace packages Traps for the Unwary in Python\u2019s Import System"} +{"question_id": 1035340, "score": 458, "creation_date": 1245792412, "tags": ["python", "file-io", "binary"], "instruction": "Reading binary file and looping over each byte\n\nIn Python, how do I read in a binary file and loop over each byte of that file?", "output": "Python >= 3.8 Thanks to the walrus operator (:=) the solution is quite short. 
We read bytes objects from the file and assign them to the variable byte with open(\"myfile\", \"rb\") as f: while (byte := f.read(1)): # Do stuff with byte. Python >= 3 In older Python 3 versions, we get have to use a slightly more verbose way: with open(\"myfile\", \"rb\") as f: byte = f.read(1) while byte != b\"\": # Do stuff with byte. byte = f.read(1) Or as benhoyt says, skip the not equal and take advantage of the fact that b\"\" evaluates to false. This makes the code compatible between 2.6 and 3.x without any changes. It would also save you from changing the condition if you go from byte mode to text or the reverse. with open(\"myfile\", \"rb\") as f: byte = f.read(1) while byte: # Do stuff with byte. byte = f.read(1) Python >= 2.5 In Python 2, it's a bit different. Here we don't get bytes objects, but raw characters: with open(\"myfile\", \"rb\") as f: byte = f.read(1) while byte != \"\": # Do stuff with byte. byte = f.read(1) Note that the with statement is not available in versions of Python below 2.5. To use it in v 2.5 you'll need to import it: from __future__ import with_statement In 2.6 this is not needed. Python 2.4 and Earlier f = open(\"myfile\", \"rb\") try: byte = f.read(1) while byte != \"\": # Do stuff with byte. byte = f.read(1) finally: f.close()"} +{"question_id": 2231227, "score": 458, "creation_date": 1265738116, "tags": ["python", "subprocess", "popen"], "instruction": "Python subprocess/Popen with a modified environment\n\nI believe that running an external command with a slightly modified environment is a very common case. That's how I tend to do it: import subprocess, os my_env = os.environ my_env[\"PATH\"] = \"/usr/sbin:/sbin:\" + my_env[\"PATH\"] subprocess.Popen(my_command, env=my_env) I've got a gut feeling that there's a better way; does it look alright?", "output": "I think os.environ.copy() is better if you don't intend to modify the os.environ for the current process: import subprocess, os my_env = os.environ.copy() my_env[\"PATH\"] = f\"/usr/sbin:/sbin:{my_env['PATH']}\" subprocess.Popen(my_command, env=my_env)"} +{"question_id": 646644, "score": 456, "creation_date": 1237061984, "tags": ["python", "list", "slice"], "instruction": "How to get last items of a list in Python?\n\nI need the last 9 numbers of a list and I'm sure there is a way to do it with slicing, but I can't seem to get it. I can get the first 9 like this: num_list[0:9]", "output": "You can use negative integers with the slicing operator for that. Here's an example using the python CLI interpreter: >>> a = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12] >>> a[-9:] [4, 5, 6, 7, 8, 9, 10, 11, 12] the important line is a[-9:]"} +{"question_id": 19048732, "score": 455, "creation_date": 1380278353, "tags": ["python", "setuptools"], "instruction": "Python setup.py develop vs install\n\nTwo options in setup.py develop and install are confusing me. According to this site, using develop creates a special link to site-packages directory. People have suggested that I use python setup.py install for a fresh installation and python setup.py develop after any changes have been made to the setup file. Can anyone shed some light on the usage of these commands?", "output": "python setup.py install is used to install (typically third party) packages that you're not going to develop/modify/debug yourself. 
For your own stuff, you want to first install your package and then be able to frequently edit the code without having to re-install the package every time \u2014 and that is exactly what python setup.py develop does: it installs the package (typically just a source folder) in a way that allows you to conveniently edit your code after it\u2019s installed to the (virtual) environment, and have the changes take effect immediately. Note: It is highly recommended to use pip install . (regular install) and pip install -e . (developer install) to install packages, as invoking setup.py directly will do the wrong things for many dependencies, such as pull prereleases and incompatible package versions, or make the package hard to uninstall with pip. Update: The develop counterpart for the latest python -m build approach is as follows (as per):"} +{"question_id": 14399534, "score": 454, "creation_date": 1358514014, "tags": ["python", "pip", "setuptools", "requirements.txt"], "instruction": "Reference requirements.txt for the install_requires kwarg in setuptools setup.py file\n\nI have a requirements.txt file that I'm using with Travis-CI. It seems silly to duplicate the requirements in both requirements.txt and setup.py, so I was hoping to pass a file handle to the install_requires kwarg in setuptools.setup. Is this possible? If so, how should I go about doing it? Here is my requirements.txt file: guessit>=0.5.2 tvdb_api>=1.8.2 hachoir-metadata>=1.3.3 hachoir-core>=1.3.3 hachoir-parser>=1.3.4", "output": "Update 12/2024: This does not work in recent versions of Python, primarily because the function has been moved around in the pip module. Either way, as stated below by maintainers of the library, it is not recommended to import and use this function, as it is an internal function which is subject to change and being moved around. Old answer: You can flip it around and list the dependencies in setup.py and have a single character \u2014 a dot . \u2014 in requirements.txt instead. Alternatively, even if not advised, it is still possible to parse the requirements.txt file (if it doesn't refer any external requirements by URL) with the following hack (tested with pip 9.0.1): install_reqs = parse_requirements('requirements.txt', session='hack') This doesn't filter environment markers though. In old versions of pip, more specifically older than 6.0, there is a public API that can be used to achieve this. A requirement file can contain comments (#) and can include some other files (--requirement or -r). Thus, if you really want to parse a requirements.txt you can use the pip parser: from pip.req import parse_requirements # parse_requirements() returns generator of pip.req.InstallRequirement objects install_reqs = parse_requirements() # reqs is a list of requirement # e.g. ['django==1.5.1', 'mezzanine==1.4.6'] reqs = [str(ir.req) for ir in install_reqs] setup( ... install_requires=reqs )"} +{"question_id": 3061761, "score": 452, "creation_date": 1276779351, "tags": ["python", "arrays", "numpy", "dimensions"], "instruction": "Numpy array dimensions\n\nHow do I get the dimensions of an array? For instance, this is 2x2: a = np.array([[1, 2], [3, 4]])", "output": "Use .shape to obtain a tuple of array dimensions: >>> a.shape (2, 2)"} +{"question_id": 4836710, "score": 449, "creation_date": 1296302153, "tags": ["python", "sorting"], "instruction": "Is there a built in function for string natural sort?\n\nI have a list of strings for which I would like to perform a natural alphabetical sort. 
For instance, the following list is naturally sorted (what I want): ['elm0', 'elm1', 'Elm2', 'elm9', 'elm10', 'Elm11', 'Elm12', 'elm13'] And here's the \"sorted\" version of the above list (what I get using sorted()): ['Elm11', 'Elm12', 'Elm2', 'elm0', 'elm1', 'elm10', 'elm13', 'elm9'] I'm looking for a sort function which behaves like the first one.", "output": "There is a third party library for this on PyPI called natsort (full disclosure, I am the package's author). For your case, you can do either of the following: >>> from natsort import natsorted, ns >>> x = ['Elm11', 'Elm12', 'Elm2', 'elm0', 'elm1', 'elm10', 'elm13', 'elm9'] >>> natsorted(x, key=lambda y: y.lower()) ['elm0', 'elm1', 'Elm2', 'elm9', 'elm10', 'Elm11', 'Elm12', 'elm13'] >>> natsorted(x, alg=ns.IGNORECASE) # or alg=ns.IC ['elm0', 'elm1', 'Elm2', 'elm9', 'elm10', 'Elm11', 'Elm12', 'elm13'] You should note that natsort uses a general algorithm so it should work for just about any input that you throw at it. If you want more details on why you might choose a library to do this rather than rolling your own function, check out the natsort documentation's How It Works page, in particular the Special Cases Everywhere! section. If you need a sorting key instead of a sorting function, use either of the below formulas. >>> from natsort import natsort_keygen, ns >>> l1 = ['elm0', 'elm1', 'Elm2', 'elm9', 'elm10', 'Elm11', 'Elm12', 'elm13'] >>> l2 = l1[:] >>> natsort_key1 = natsort_keygen(key=lambda y: y.lower()) >>> l1.sort(key=natsort_key1) >>> l1 ['elm0', 'elm1', 'Elm2', 'elm9', 'elm10', 'Elm11', 'Elm12', 'elm13'] >>> natsort_key2 = natsort_keygen(alg=ns.IGNORECASE) >>> l2.sort(key=natsort_key2) >>> l2 ['elm0', 'elm1', 'Elm2', 'elm9', 'elm10', 'Elm11', 'Elm12', 'elm13'] Update November 2020 Given that a popular request/question is \"how to sort like Windows Explorer?\" (or whatever is your operating system's file system browser), as of natsort version 7.1.0 there is a function called os_sorted to do exactly this. On Windows, it will sort in the same order as Windows Explorer, and on other operating systems it should sort like whatever is the local file system browser. >>> from natsort import os_sorted >>> os_sorted(list_of_paths) # your paths sorted like your file system browser For those needing a sort key, you can use os_sort_keygen (or os_sort_key if you just need the defaults). Caveat - Please read the API documentation for this function before you use to understand the limitations and how to get best results."} +{"question_id": 25231989, "score": 449, "creation_date": 1407697312, "tags": ["python", "dictionary"], "instruction": "How to check if a variable is a dictionary in Python?\n\nHow would you check if a variable is a dictionary in Python? For example, I'd like it to loop through the values in the dictionary until it finds a dictionary. 
Then, loop through the one it finds: dict = {'abc': 'abc', 'def': {'ghi': 'ghi', 'jkl': 'jkl'}} for k, v in dict.items(): if ###check if v is a dictionary: for k, v in v.iteritems(): print(k, ' ', v) else: print(k, ' ', v)", "output": "Use isinstance(ele, dict), which will return true for dict objects as well as subclasses of dict, such as OrderedDict and defaultdict: d = {'abc': 'abc', 'def': {'ghi': 'ghi', 'jkl': 'jkl'}} for element in d.values(): if isinstance(element, dict): for k, v in element.items(): print(k, ' ', v) You can do if type(ele) is dict if you want to check strictly for instances of dict and not subclasses of it, though this is generally not recommended."} +{"question_id": 15746675, "score": 448, "creation_date": 1364831333, "tags": ["python", "python-2.7", "upload", "python-module", "python-packaging"], "instruction": "How to write a Python module/package?\n\nI've been making Python scripts for simple tasks at work and never really bothered packaging them for others to use. Now I have been assigned to make a Python wrapper for a REST API. I have absolutely no idea on how to start and I need help. What I have: (Just want to be specific as possible) I have the virtualenv ready, it's also up in github, the .gitignore file for python is there as well, plus, the requests library for interacting with the REST API. That's it. Here's the current directory tree . \u251c\u2500\u2500 bin \u2502 \u2514\u2500\u2500 /the usual stuff/ \u251c\u2500\u2500 include \u2502 \u2514\u2500\u2500 /the usual stuff/ \u251c\u2500\u2500 lib \u2502 \u2514\u2500\u2500 python2.7 \u2502 \u2514\u2500\u2500 /the usual stuff/ \u251c\u2500\u2500 local \u2502 \u2514\u2500\u2500 /the usual stuff/ \u2514\u2500\u2500 README.md 27 directories, 280 files I don't even know where to put the .py files, if I ever make one. What I wanted to do: Make a python module install-able with \"pip install ...\" If possible, I want a general step by step process on writing Python modules.", "output": "A module is a file containing Python definitions and statements. The file name is the module name with the suffix .py create hello.py then write the following function as its content: def helloworld(): print(\"hello\") Then you can import hello: >>> import hello >>> hello.helloworld() 'hello' To group many .py files put them in a folder. Any folder with an __init__.py is considered a module by python and you can call them a package |-HelloModule |_ __init__.py |_ hellomodule.py You can go about with the import statement on your module the usual way. For more information, see 6.4. Packages."} +{"question_id": 4675728, "score": 447, "creation_date": 1294879860, "tags": ["python", "stdout"], "instruction": "Redirect stdout to a file in Python?\n\nHow do I redirect stdout to an arbitrary file in Python? When a long-running Python script (e.g, web application) is started from within the ssh session and backgounded, and the ssh session is closed, the application will raise IOError and fail the moment it tries to write to stdout. I needed to find a way to make the application and modules output to a file rather than stdout to prevent failure due to IOError. Currently, I employ nohup to redirect output to a file, and that gets the job done, but I was wondering if there was a way to do it without using nohup, out of curiosity. I have already tried sys.stdout = open('somefile', 'w'), but this does not seem to prevent some external modules from still outputting to terminal (or maybe the sys.stdout = ... line did not fire at all). 
I know it should work from simpler scripts I've tested on, but I also didn't have time yet to test on a web application yet.", "output": "If you want to do the redirection within the Python script, setting sys.stdout to a file object does the trick: # for python3 import sys with open('file', 'w') as sys.stdout: print('test') A far more common method is to use shell redirection when executing (same on Windows and Linux): $ python3 foo.py > file"} +{"question_id": 42809096, "score": 447, "creation_date": 1489579052, "tags": ["python", "boto3"], "instruction": "Difference in Boto3 between resource, client, and session?\n\nI'm learning how to use AWS SDK for Python (Boto3) from the following resource: https://boto3.readthedocs.io/en/latest/guide/quickstart.html#using-boto-3. My doubt is when to use resource, client, or session, and their respective functionality. I am using Python 2.7.12 in Ubuntu 16.04 LTS.", "output": "Client and Resource are two different abstractions within the boto3 SDK for making AWS service requests. If you want to make API calls to an AWS service with boto3, then you do so via a Client or a Resource. You would typically choose to use either the Client abstraction or the Resource abstraction, but you can use both, as needed. I've outlined the differences below to help readers decide which to use. Session is largely orthogonal to the concepts of Client and Resource (but is used by both). Here's some more detailed information on what Client, Resource, and Session are all about. Client this is the original boto3 API abstraction it provides low-level AWS service access all AWS service operations are supported by clients it exposes botocore client to the developer it typically maps 1:1 with the AWS service API it exposes snake-cased method names (e.g. ListBuckets API => list_buckets method) typically yields primitive, non-marshalled data (e.g. DynamoDB attributes are dicts representing primitive DynamoDB values) requires you to code result pagination it is generated from an AWS service description Here's an example of client-level access to an S3 bucket's objects: import boto3 client = boto3.client('s3') response = client.list_objects_v2(Bucket='mybucket') for content in response['Contents']: obj_dict = client.get_object(Bucket='mybucket', Key=content['Key']) print(content['Key'], obj_dict['LastModified']) Note: this client-level code is limited to listing at most 1000 objects. You would have to use a paginator, or implement your own loop, calling list_objects_v2() repeatedly with a continuation marker if there were more than 1000 objects. OK, so that's the low-level Client interface. Now onto the higher-level (more abstract) Resource interface. Resource this is the newer boto3 API abstraction it provides a high-level, object-oriented API it does not provide 100% API coverage of AWS services it uses identifiers and attributes it has actions (operations on resources) it exposes sub-resources and collections of AWS resources typically yields marshalled data, not primitive AWS data (e.g. 
DynamoDB attributes are native Python values representing primitive DynamoDB values) does result pagination for you it is generated from an AWS resource description Here's the equivalent example using resource-level access to an S3 bucket's objects: import boto3 s3 = boto3.resource('s3') bucket = s3.Bucket('mybucket') for obj in bucket.objects.all(): print(obj.key, obj.last_modified) Note: in this case you do not have to make a second API call to get the objects; they're available to you as a collection on the bucket. These collections of sub-resources are lazily-loaded. You can see that the Resource version of the code is much simpler, more compact, and has more capability (for example it does pagination for you and it exposes properties instead of a raw dictionary). The Client version of the code would actually be more complicated than shown above if you wanted to include pagination. Finally, onto Session which is fundamental to both Client and Resource and how both get access to AWS credentials, for example. Session stores configuration information (primarily credentials and selected region) allows you to create service clients and resources boto3 creates a default session for you when needed A useful resource to learn more about these boto3 concepts is the introductory re:Invent video. Update January 2023 Per the Resources page in the boto3 documentation: The AWS Python SDK team does not intend to add new features to the resources interface in boto3. Existing interfaces will continue to operate during boto3's lifecycle. Customers can find access to newer service features through the client interface. You can read more about the plans to maintain but no longer enhance resources at boto3/discussions/3563."} +{"question_id": 20763012, "score": 444, "creation_date": 1387897747, "tags": ["python", "pandas", "dataframe", "list", "numpy"], "instruction": "Creating a Pandas DataFrame from a Numpy array: How do I specify the index column and column headers?\n\nI have a Numpy array consisting of a list of lists, representing a two-dimensional array with row labels and column names as shown below: data = np.array([['','Col1','Col2'],['Row1',1,2],['Row2',3,4]]) I'd like the resulting DataFrame to have Row1 and Row2 as index values, and Col1, Col2 as header values. I can specify the index as follows: df = pd.DataFrame(data, index=data[:,0]) However, I am unsure how to best assign column headers.", "output": "Specify data, index and columns to the DataFrame constructor, as follows: >>> pd.DataFrame(data=data[1:,1:], # values ... index=data[1:,0], # 1st column as index ... columns=data[0,1:]) # 1st row as the column names As @joris mentions, you may need to change above to np.int_(data[1:,1:]) to have the correct data type."} +{"question_id": 1966207, "score": 444, "creation_date": 1261927795, "tags": ["python", "list", "numpy", "numpy-ndarray"], "instruction": "Convert NumPy array to Python list\n\nHow do I convert a NumPy array into a Python List?", "output": "Use tolist(): >>> import numpy as np >>> np.array([[1,2,3],[4,5,6]]).tolist() [[1, 2, 3], [4, 5, 6]] Note that this converts the values from whatever numpy type they may have (e.g. np.int32 or np.float32) to the \"nearest compatible Python type\" (in a list). If you want to preserve the numpy data types, you could call list() on your array instead, and you'll end up with a list of numpy scalars. 
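A quick way to see that difference (a small check added for illustration, not from the original answer): import numpy as np

a = np.array([1, 2, 3], dtype=np.int32)
print(type(a.tolist()[0]))  # <class 'int'> -- plain Python integers
print(type(list(a)[0]))     # <class 'numpy.int32'> -- numpy scalars are preserved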
(Thanks to Mr_and_Mrs_D for pointing that out in a comment.)"} +{"question_id": 79797, "score": 443, "creation_date": 1221623562, "tags": ["python", "datetime", "utc", "localtime"], "instruction": "How to convert local time string to UTC?\n\nHow do I convert a datetime string in local time to a string in UTC time? I'm sure I've done this before, but can't find it and SO will hopefully help me (and others) do that in future. Clarification: For example, if I have 2008-09-17 14:02:00 in my local timezone (+10), I'd like to generate a string with the equivalent UTC time: 2008-09-17 04:02:00. Also, from http://lucumr.pocoo.org/2011/7/15/eppur-si-muove/, note that in general this isn't possible as with DST and other issues there is no unique conversion from local time to UTC time.", "output": "Thanks @rofly, the full conversion from string to string is as follows: import time time.strftime(\"%Y-%m-%d %H:%M:%S\", time.gmtime(time.mktime(time.strptime(\"2008-09-17 14:04:00\", \"%Y-%m-%d %H:%M:%S\")))) My summary of the time/calendar functions: time.strptime string --> tuple (no timezone applied, so matches string) time.mktime local time tuple --> seconds since epoch (always local time) time.gmtime seconds since epoch --> tuple in UTC and calendar.timegm tuple in UTC --> seconds since epoch time.localtime seconds since epoch --> tuple in local timezone"} +{"question_id": 7771011, "score": 443, "creation_date": 1318611625, "tags": ["python", "json", "parsing"], "instruction": "How can I parse (read) and use JSON in Python?\n\nMy Python program receives JSON data, and I need to get bits of information out of it. How can I parse the data and use the result? I think I need to use json.loads for this task, but I can't understand how to do it. For example, suppose that I have jsonStr = '{\"one\" : \"1\", \"two\" : \"2\", \"three\" : \"3\"}'. Given this JSON, and an input of \"two\", how can I get the corresponding data, \"2\"? Beware that .load is for files; .loads is for strings. See also: Reading JSON from a file. Occasionally, a JSON document is intended to represent tabular data. If you have something like this and are trying to use it with Pandas, see Python - How to convert JSON File to Dataframe. Some data superficially looks like JSON, but is not JSON. For example, sometimes the data comes from applying repr to native Python data structures. The result may use quotes differently, use title-cased True and False rather than JSON-mandated true and false, etc. For such data, see Convert a String representation of a Dictionary to a dictionary or How to convert string representation of list to a list. Another common variant format puts separate valid JSON-formatted data on each line of the input. (Proper JSON cannot be parsed line by line, because it uses balanced brackets that can be many lines apart.) This format is called JSONL. See Loading JSONL file as JSON objects. Sometimes JSON data from a web source is padded with some extra text. In some contexts, this works around security restrictions in browsers. This is called JSONP and is described at What is JSONP, and why was it created?. In other contexts, the extra text implements a security measure, as described at Why does Google prepend while(1); to their JSON responses?. 
Either way, handling this in Python is straightforward: simply identify and remove the extra text, and proceed as before.", "output": "Very simple: import json data = json.loads('{\"one\" : \"1\", \"two\" : \"2\", \"three\" : \"3\"}') print(data['two']) # or `print data['two']` in Python 2"} +{"question_id": 3503879, "score": 442, "creation_date": 1282057437, "tags": ["python"], "instruction": "Assign output of os.system to a variable and prevent it from being displayed on the screen\n\nI want to assign the output of a command I run using os.system to a variable and prevent it from being output to the screen. But, in the below code, the output is sent to the screen and the value printed for var is 0, which I guess signifies whether the command ran successfully or not. Is there any way to assign the command output to the variable and also stop it from being displayed on the screen? var = os.system(\"cat /etc/services\") print var #Prints 0", "output": "From this question which I asked a long time ago, what you may want to use is popen: os.popen('cat /etc/services').read() From the docs for Python 3.6: This is implemented using subprocess.Popen; see that class\u2019s documentation for more powerful ways to manage and communicate with subprocesses. Here's the corresponding code for subprocess: import subprocess proc = subprocess.Popen([\"cat\", \"/etc/services\"], stdout=subprocess.PIPE) (out, err) = proc.communicate() print(\"program output:\", out)"} +{"question_id": 12400256, "score": 439, "creation_date": 1347516051, "tags": ["python"], "instruction": "Converting Epoch time into the datetime\n\nI am getting a response from a REST API in Epoch time format, like start_time = 1234566 end_time = 1234578 I want to convert those epoch seconds into MySQL format time so that I can store the differences in my MySQL database. I tried: >>> import time >>> time.gmtime(123456) time.struct_time(tm_year=1970, tm_mon=1, tm_mday=2, tm_hour=10, tm_min=17, tm_sec=36, tm_wday=4, tm_yday=2, tm_isdst=0) The above result is not what I am expecting. I want it to be like 2012-09-12 21:00:00 Please suggest how I can achieve this? Also, why am I getting TypeError: a float is required for >>> getbbb_class.end_time = 1347516459425 >>> mend = time.gmtime(getbbb_class.end_time).tm_hour Traceback (most recent call last): ... TypeError: a float is required", "output": "To convert your time value (float or int) to a formatted string, use: strftime('%Y-%m-%d %H:%M:%S', localtime(1347517370)) preceded by this import: from time import strftime, localtime"} +{"question_id": 4504662, "score": 439, "creation_date": 1292971546, "tags": ["python", "range"], "instruction": "Why does range(start, end) not include end?\n\n>>> range(1,11) gives you [1,2,3,4,5,6,7,8,9,10] Why not 1-11? Did they just decide to do it like that at random or does it have some value I am not seeing?", "output": "Because it's more common to call range(0, 10) which returns [0,1,2,3,4,5,6,7,8,9] which contains 10 elements which equals len(range(0, 10)). There's a tendency in programming to use 0-based indexing. Also, consider the following common code snippet: for i in range(len(li)): pass Could you see that if range() went up to exactly len(li) that this would be problematic? The programmer would need to explicitly subtract 1. This also follows the common trend of programmers preferring for(int i = 0; i < 10; i++) over for(int i = 0; i <= 9; i++). 
If you are calling range with a start of 1 frequently, you might want to define your own function: >>> def range1(start, end): ... return range(start, end+1) ... >>> range1(1, 10) [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]"} +{"question_id": 16344756, "score": 436, "creation_date": 1367517850, "tags": ["python", "apache", "flask"], "instruction": "Auto reloading python Flask app upon code changes\n\nI'm investigating how to develop a decent web app with Python. Since I don't want some high-order structures to get in my way, my choice fell on the lightweight Flask framework. Time will tell if this was the right choice. So, now I've set up an Apache server with mod_wsgi, and my test site is running fine. However, I'd like to speed up the development routine by making the site automatically reload upon any changes in py or template files I make. I see that any changes in site's .wsgi file causes reloading (even without WSGIScriptReloading On in the apache config file), but I still have to prod it manually (ie, insert extra linebreak, save). Is there some way how to cause reload when I edit some of the app's py files? Or, I am expected to use IDE that refreshes the .wsgi file for me?", "output": "Run the flask run CLI command with debug mode enabled, which will automatically enable the reloader. As of Flask 2.2, you can pass --app and --debug options on the command line. $ flask --app main.py --debug run --app can also be set to module:app or module:create_app instead of module.py. See the docs for a full explanation. More options are available with: $ flask run --help Prior to Flask 2.2, you needed to set the FLASK_APP and FLASK_ENV=development environment variables. $ export FLASK_APP=main.py $ export FLASK_ENV=development $ flask run It is still possible to set FLASK_APP and FLASK_DEBUG=1 in Flask 2.2."} +{"question_id": 29815129, "score": 436, "creation_date": 1429769538, "tags": ["python", "pandas", "dataframe"], "instruction": "Pandas DataFrame to List of Dictionaries\n\nI have the following DataFrame: customer item1 item2 item3 1 apple milk tomato 2 water orange potato 3 juice mango chips which I want to translate it to list of dictionaries per row rows = [ { 'customer': 1, 'item1': 'apple', 'item2': 'milk', 'item3': 'tomato' }, { 'customer': 2, 'item1': 'water', 'item2': 'orange', 'item3': 'potato' }, { 'customer': 3, 'item1': 'juice', 'item2': 'mango', 'item3': 'chips' } ]", "output": "Use df.to_dict('records') -- gives the output without having to transpose externally. In [2]: df.to_dict('records') Out[2]: [{'customer': 1L, 'item1': 'apple', 'item2': 'milk', 'item3': 'tomato'}, {'customer': 2L, 'item1': 'water', 'item2': 'orange', 'item3': 'potato'}, {'customer': 3L, 'item1': 'juice', 'item2': 'mango', 'item3': 'chips'}]"} +{"question_id": 1306631, "score": 434, "creation_date": 1250778942, "tags": ["python", "list", "set"], "instruction": "Add list to set\n\nHow do I add a list of values to an existing set?", "output": "You can't add a list to a set because lists are mutable, meaning that you can change the contents of the list after adding it to the set. You can however add tuples to the set, because you cannot change the contents of a tuple: >>> a.add(('f', 'g')) >>> print a set(['a', 'c', 'b', 'e', 'd', ('f', 'g')]) Edit: some explanation: The documentation defines a set as an unordered collection of distinct hashable objects. 
The objects have to be hashable so that finding, adding and removing elements can be done faster than looking at each individual element every time you perform these operations. The specific algorithms used are explained in the Wikipedia article. Pythons hashing algorithms are explained on effbot.org and pythons __hash__ function in the python reference. Some facts: Set elements as well as dictionary keys have to be hashable Some unhashable datatypes: list: use tuple instead set: use frozenset instead dict: has no official counterpart, but there are some recipes Object instances are hashable by default with each instance having a unique hash. You can override this behavior as explained in the python reference."} +{"question_id": 22149584, "score": 434, "creation_date": 1393857664, "tags": ["python", "pandas", "dataframe"], "instruction": "What does axis in pandas mean?\n\nHere is my code to generate a dataframe: import pandas as pd import numpy as np dff = pd.DataFrame(np.random.randn(1, 2), columns=list('AB')) then I got the dataframe: A B 0 0.626386 1.52325 When I type the command dff.mean(axis=1), I get: 0 1.074821 dtype: float64 According to the reference of pandas, axis=1 stands for columns and I expect the result of the command to be A 0.626386 B 1.523255 dtype: float64 So what does axis in pandas mean?", "output": "It specifies the axis along which the means are computed. By default axis=0. This is consistent with the numpy.mean usage when axis is specified explicitly (in numpy.mean, axis==None by default, which computes the mean value over the flattened array) , in which axis=0 along the rows (namely, index in pandas), and axis=1 along the columns. For added clarity, one may choose to specify axis='index' (instead of axis=0) or axis='columns' (instead of axis=1). A B 0 0.626386 1.52325 \u2192 \u2192 axis=1 \u2192 \u2192 \u2193 \u2193 \u2193 axis=0 \u2193 \u2193 \u2193"} +{"question_id": 1695183, "score": 434, "creation_date": 1257648225, "tags": ["python", "url", "encoding", "urllib", "urlencode"], "instruction": "How can I percent-encode URL parameters in Python?\n\nIf I do url = \"http://example.com?p=\" + urllib.quote(query) It doesn't encode / to %2F (breaks OAuth normalization) It doesn't handle Unicode (it throws an exception) Is there a better library?", "output": "From the Python 3 documentation: urllib.parse.quote(string, safe='/', encoding=None, errors=None) Replace special characters in string using the %xx escape. Letters, digits, and the characters '_.-~' are never quoted. By default, this function is intended for quoting the path section of a URL. The optional safe parameter specifies additional ASCII characters that should not be quoted \u2014 its default value is '/'. That means passing '' for safe will solve your first issue: >>> import urllib.parse >>> urllib.parse.quote('/test') '/test' >>> urllib.parse.quote('/test', safe='') '%2Ftest' (The function quote was moved from urllib to urllib.parse in Python 3.) By the way, have a look at urlencode. About the second issue, there was a bug report about it and it was fixed in Python 3. 
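For example, in Python 3 (a quick check added here for illustration, not part of the original answer), non-ASCII text is percent-encoded as UTF-8 by default: >>> from urllib.parse import quote, unquote
>>> quote("Müller")
'M%C3%BCller'
>>> unquote('M%C3%BCller')
'Müller'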
For Python 2, you can work around it by encoding as UTF-8 like this: >>> query = urllib.quote(u\"M\u00fcller\".encode('utf8')) >>> print urllib.unquote(query).decode('utf8') M\u00fcller"} +{"question_id": 491921, "score": 432, "creation_date": 1233241275, "tags": ["python", "unicode", "utf-8", "io"], "instruction": "Unicode (UTF-8) reading and writing to files in Python\n\nI'm having some brain failure in understanding reading and writing text to a file (Python 2.4). # The string, which has an a-acute in it. ss = u'Capit\\xe1n' ss8 = ss.encode('utf8') repr(ss), repr(ss8) (\"u'Capit\\xe1n'\", \"'Capit\\xc3\\xa1n'\") print ss, ss8 print >> open('f1','w'), ss8 >>> file('f1').read() 'Capit\\xc3\\xa1n\\n' So I type in Capit\\xc3\\xa1n into my favorite editor, in file f2. Then: >>> open('f1').read() 'Capit\\xc3\\xa1n\\n' >>> open('f2').read() 'Capit\\\\xc3\\\\xa1n\\n' >>> open('f1').read().decode('utf8') u'Capit\\xe1n\\n' >>> open('f2').read().decode('utf8') u'Capit\\\\xc3\\\\xa1n\\n' What am I not understanding here? Clearly there is some vital bit of magic (or good sense) that I'm missing. What does one type into text files to get proper conversions? What I'm truly failing to grok here, is what the point of the UTF-8 representation is, if you can't actually get Python to recognize it, when it comes from outside. Maybe I should just JSON dump the string, and use that instead, since that has an asciiable representation! More to the point, is there an ASCII representation of this Unicode object that Python will recognize and decode, when coming in from a file? If so, how do I get it? >>> print simplejson.dumps(ss) '\"Capit\\u00e1n\"' >>> print >> file('f3','w'), simplejson.dumps(ss) >>> simplejson.load(open('f3')) u'Capit\\xe1n'", "output": "In the notation u'Capit\\xe1n\\n' (should be just 'Capit\\xe1n\\n' in 3.x, and must be in 3.0 and 3.1), the \\xe1 represents just one character. \\x is an escape sequence, indicating that e1 is in hexadecimal. Writing Capit\\xc3\\xa1n into the file in a text editor means that it actually contains \\xc3\\xa1. Those are 8 bytes and the code reads them all. We can see this by displaying the result: # Python 3.x - reading the file as bytes rather than text, # to ensure we see the raw data >>> open('f2', 'rb').read() b'Capit\\\\xc3\\\\xa1n\\n' # Python 2.x >>> open('f2').read() 'Capit\\\\xc3\\\\xa1n\\n' Instead, just input characters like \u00e1 in the editor, which should then handle the conversion to UTF-8 and save it. In 2.x, a string that actually contains these backslash-escape sequences can be decoded using the string_escape codec: # Python 2.x >>> print 'Capit\\\\xc3\\\\xa1n\\n'.decode('string_escape') Capit\u00e1n The result is a str that is encoded in UTF-8 where the accented character is represented by the two bytes that were written \\\\xc3\\\\xa1 in the original string. To get a unicode result, decode again with UTF-8. In 3.x, the string_escape codec is replaced with unicode_escape, and it is strictly enforced that we can only encode from a str to bytes, and decode from bytes to str. unicode_escape needs to start with a bytes in order to process the escape sequences (the other way around, it adds them); and then it will treat the resulting \\xc3 and \\xa1 as character escapes rather than byte escapes. 
As a result, we have to do a bit more work: # Python 3.x >>> 'Capit\\\\xc3\\\\xa1n\\n'.encode('ascii').decode('unicode_escape').encode('latin-1').decode('utf-8') 'Capit\u00e1n\\n'"} +{"question_id": 14529838, "score": 432, "creation_date": 1359145605, "tags": ["python", "group-by", "aggregate-functions", "pandas"], "instruction": "Apply multiple functions to multiple groupby columns\n\nThe docs show how to apply multiple functions on a groupby object at a time using a dict with the output column names as the keys: In [563]: grouped['D'].agg({'result1' : np.sum, .....: 'result2' : np.mean}) .....: Out[563]: result2 result1 A bar -0.579846 -1.739537 foo -0.280588 -1.402938 However, this only works on a Series groupby object. And when a dict is similarly passed to a groupby DataFrame, it expects the keys to be the column names that the function will be applied to. What I want to do is apply multiple functions to several columns (but certain columns will be operated on multiple times). Also, some functions will depend on other columns in the groupby object (like sumif functions). My current solution is to go column by column, and doing something like the code above, using lambdas for functions that depend on other rows. But this is taking a long time, (I think it takes a long time to iterate through a groupby object). I'll have to change it so that I iterate through the whole groupby object in a single run, but I'm wondering if there's a built in way in pandas to do this somewhat cleanly. For example, I've tried something like grouped.agg({'C_sum' : lambda x: x['C'].sum(), 'C_std': lambda x: x['C'].std(), 'D_sum' : lambda x: x['D'].sum()}, 'D_sumifC3': lambda x: x['D'][x['C'] == 3].sum(), ...) but as expected I get a KeyError (since the keys have to be a column if agg is called from a DataFrame). Is there any built in way to do what I'd like to do, or a possibility that this functionality may be added, or will I just need to iterate through the groupby manually?", "output": "The second half of the currently accepted answer is outdated and has two deprecations. First and most important, you can no longer pass a dictionary of dictionaries to the agg groupby method. Second, never use .ix. If you desire to work with two separate columns at the same time I would suggest using the apply method which implicitly passes a DataFrame to the applied function. Let's use a similar dataframe as the one from above df = pd.DataFrame(np.random.rand(4,4), columns=list('abcd')) df['group'] = [0, 0, 1, 1] df a b c d group 0 0.418500 0.030955 0.874869 0.145641 0 1 0.446069 0.901153 0.095052 0.487040 0 2 0.843026 0.936169 0.926090 0.041722 1 3 0.635846 0.439175 0.828787 0.714123 1 A dictionary mapped from column names to aggregation functions is still a perfectly good way to perform an aggregation. 
df.groupby('group').agg({'a':['sum', 'max'], 'b':'mean', 'c':'sum', 'd': lambda x: x.max() - x.min()}) a b c d sum max mean sum group 0 0.864569 0.446069 0.466054 0.969921 0.341399 1 1.478872 0.843026 0.687672 1.754877 0.672401 If you don't like that ugly lambda column name, you can use a normal function and supply a custom name to the special __name__ attribute like this: def max_min(x): return x.max() - x.min() max_min.__name__ = 'Max minus Min' df.groupby('group').agg({'a':['sum', 'max'], 'b':'mean', 'c':'sum', 'd': max_min}) a b c d sum max mean sum Max minus Min group 0 0.864569 0.446069 0.466054 0.969921 0.341399 1 1.478872 0.843026 0.687672 1.754877 0.672401 Using apply and returning a Series Now, if you had multiple columns that needed to interact together then you cannot use agg, which implicitly passes a Series to the aggregating function. When using apply the entire group as a DataFrame gets passed into the function. I recommend making a single custom function that returns a Series of all the aggregations. Use the Series index as labels for the new columns: def f(x): d = {} d['a_sum'] = x['a'].sum() d['a_max'] = x['a'].max() d['b_mean'] = x['b'].mean() d['c_d_prodsum'] = (x['c'] * x['d']).sum() return pd.Series(d, index=['a_sum', 'a_max', 'b_mean', 'c_d_prodsum']) df.groupby('group').apply(f) a_sum a_max b_mean c_d_prodsum group 0 0.864569 0.446069 0.466054 0.173711 1 1.478872 0.843026 0.687672 0.630494 If you are in love with MultiIndexes, you can still return a Series with one like this: def f_mi(x): d = [] d.append(x['a'].sum()) d.append(x['a'].max()) d.append(x['b'].mean()) d.append((x['c'] * x['d']).sum()) return pd.Series(d, index=[['a', 'a', 'b', 'c_d'], ['sum', 'max', 'mean', 'prodsum']]) df.groupby('group').apply(f_mi) a b c_d sum max mean prodsum group 0 0.864569 0.446069 0.466054 0.173711 1 1.478872 0.843026 0.687672 0.630494"} +{"question_id": 9413216, "score": 432, "creation_date": 1330000642, "tags": ["python", "opencv", "numpy", "computer-vision", "ocr"], "instruction": "Simple Digit Recognition OCR in OpenCV-Python\n\nI am trying to implement a \"Digit Recognition OCR\" in OpenCV-Python (cv2). It is just for learning purposes. I would like to learn both KNearest and SVM features in OpenCV. I have 100 samples (i.e. images) of each digit. I would like to train with them. There is a sample letter_recog.py that comes with OpenCV sample. But I still couldn't figure out on how to use it. I don't understand what are the samples, responses etc. Also, it loads a txt file at first, which I didn't understand first. Later on searching a little bit, I could find a letter_recognition.data in cpp samples. I used it and made a code for cv2.KNearest in the model of letter_recog.py (just for testing): import numpy as np import cv2 fn = 'letter-recognition.data' a = np.loadtxt(fn, np.float32, delimiter=',', converters={ 0 : lambda ch : ord(ch)-ord('A') }) samples, responses = a[:,1:], a[:,0] model = cv2.KNearest() retval = model.train(samples,responses) retval, results, neigh_resp, dists = model.find_nearest(samples, k = 10) print results.ravel() It gave me an array of size 20000, I don't understand what it is. Questions: 1) What is letter_recognition.data file? How to build that file from my own data set? 2) What does results.reval() denote? 3) How we can write a simple digit recognition tool using letter_recognition.data file (either KNearest or SVM)?", "output": "Well, I decided to workout myself on my question to solve the above problem. 
What I wanted is to implement a simple OCR using KNearest or SVM features in OpenCV. And below is what I did and how. (it is just for learning how to use KNearest for simple OCR purposes). 1) My first question was about letter_recognition.data file that comes with OpenCV samples. I wanted to know what is inside that file. It contains a letter, along with 16 features of that letter. And this SOF helped me to find it. These 16 features are explained in the paper Letter Recognition Using Holland-Style Adaptive Classifiers. (Although I didn't understand some of the features at the end) 2) Since I knew, without understanding all those features, it is difficult to do that method. I tried some other papers, but all were a little difficult for a beginner. So I just decided to take all the pixel values as my features. (I was not worried about accuracy or performance, I just wanted it to work, at least with the least accuracy) I took the below image for my training data: (I know the amount of training data is less. But, since all letters are of the same font and size, I decided to try on this). To prepare the data for training, I made a small code in OpenCV. It does the following things: It loads the image. Selects the digits (obviously by contour finding and applying constraints on area and height of letters to avoid false detections). Draws the bounding rectangle around one letter and wait for key press manually. This time we press the digit key ourselves corresponding to the letter in the box. Once the corresponding digit key is pressed, it resizes this box to 10x10 and saves all 100 pixel values in an array (here, samples) and corresponding manually entered digit in another array(here, responses). Then save both the arrays in separate .txt files. At the end of the manual classification of digits, all the digits in the training data (train.png) are labeled manually by ourselves, image will look like below: Below is the code I used for the above purpose (of course, not so clean): import sys import numpy as np import cv2 im = cv2.imread('pitrain.png') im3 = im.copy() gray = cv2.cvtColor(im,cv2.COLOR_BGR2GRAY) blur = cv2.GaussianBlur(gray,(5,5),0) thresh = cv2.adaptiveThreshold(blur,255,1,1,11,2) ################# Now finding Contours ################### contours,hierarchy = cv2.findContours(thresh,cv2.RETR_LIST,cv2.CHAIN_APPROX_SIMPLE) samples = np.empty((0,100)) responses = [] keys = [i for i in range(48,58)] for cnt in contours: if cv2.contourArea(cnt)>50: [x,y,w,h] = cv2.boundingRect(cnt) if h>28: cv2.rectangle(im,(x,y),(x+w,y+h),(0,0,255),2) roi = thresh[y:y+h,x:x+w] roismall = cv2.resize(roi,(10,10)) cv2.imshow('norm',im) key = cv2.waitKey(0) if key == 27: # (escape to quit) sys.exit() elif key in keys: responses.append(int(chr(key))) sample = roismall.reshape((1,100)) samples = np.append(samples,sample,0) responses = np.array(responses,np.float32) responses = responses.reshape((responses.size,1)) print \"training complete\" np.savetxt('generalsamples.data',samples) np.savetxt('generalresponses.data',responses) Now we enter in to training and testing part. For the testing part, I used the below image, which has the same type of letters I used for the training phase. 
For training we do as follows: Load the .txt files we already saved earlier create an instance of the classifier we are using (it is KNearest in this case) Then we use KNearest.train function to train the data For testing purposes, we do as follows: We load the image used for testing process the image as earlier and extract each digit using contour methods Draw a bounding box for it, then resize it to 10x10, and store its pixel values in an array as done earlier. Then we use KNearest.find_nearest() function to find the nearest item to the one we gave. ( If lucky, it recognizes the correct digit.) I included last two steps (training and testing) in single code below: import cv2 import numpy as np ####### training part ############### samples = np.loadtxt('generalsamples.data',np.float32) responses = np.loadtxt('generalresponses.data',np.float32) responses = responses.reshape((responses.size,1)) model = cv2.KNearest() model.train(samples,responses) ############################# testing part ######################### im = cv2.imread('pi.png') out = np.zeros(im.shape,np.uint8) gray = cv2.cvtColor(im,cv2.COLOR_BGR2GRAY) thresh = cv2.adaptiveThreshold(gray,255,1,1,11,2) contours,hierarchy = cv2.findContours(thresh,cv2.RETR_LIST,cv2.CHAIN_APPROX_SIMPLE) for cnt in contours: if cv2.contourArea(cnt)>50: [x,y,w,h] = cv2.boundingRect(cnt) if h>28: cv2.rectangle(im,(x,y),(x+w,y+h),(0,255,0),2) roi = thresh[y:y+h,x:x+w] roismall = cv2.resize(roi,(10,10)) roismall = roismall.reshape((1,100)) roismall = np.float32(roismall) retval, results, neigh_resp, dists = model.find_nearest(roismall, k = 1) string = str(int((results[0][0]))) cv2.putText(out,string,(x,y+h),0,1,(0,255,0)) cv2.imshow('im',im) cv2.imshow('out',out) cv2.waitKey(0) And it worked, below is the result I got: Here it worked with 100% accuracy. I assume this is because all the digits are of the same kind and the same size. But anyway, this is a good start to go for beginners (I hope so)."} +{"question_id": 58754860, "score": 431, "creation_date": 1573150926, "tags": ["python", "windows-10", "sublimetext3"], "instruction": "CMD opens Windows Store when I type 'python'\n\nToday when I tried to run simple code on Sublime Text 3, the following message appeared: Python was not found but can be installed from the Microsoft Store: https://go.microsoft.com/fwlink?linkID=2082640 And when I type Python in CMD, it opens the Windows Store for me to download Python 3.7. This problem started today for no good reason. I didn't change or download anything about Python and already tried reinstalling Python, and the Path environment variable is correct.", "output": "Use the Windows search bar to find \"Manage app execution aliases\". There should be two aliases for Python. Unselect them, and this will allow the usual Python aliases \"python\" and \"python3\". See the image below. I think we have this problem when installing Python because in a new Windows installation the aliases are in the ON position as in image below. When turned on, Windows puts an empty or fake file named python.exe and python3.exe in the directory named %USERPROFILE%\\AppData\\Local\\Microsoft\\WindowsApps. This is the alias. Then Microsoft put that directory at the top of the list in the \"Path\" environment variables. When you enter \"python\" in cmd, it searches the directories listed in your \"Path\" environment variables page from top to bottom. 
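You can see that ordering for yourself. The where command in cmd lists every python.exe it finds on the search path, first match first, so the WindowsApps alias shows up above a manually installed interpreter. From inside any Python that does start, a small sketch (the output is machine-specific and shown only as an illustration):
import os
import sys

print(sys.executable)                                   # which python.exe actually launched
for entry in os.environ['PATH'].split(os.pathsep)[:5]:
    print(entry)                                        # the first few directories cmd searches, in order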
So if you installed Python after a new Windows 10 install then get redirected to the Windows Store, it's because there are two python.exe's: The alias in the App Execution Alias page, and the real one wherever you installed Python. But cmd finds the App execution, alias python.exe, first because that directory is at the top of the Path. I think the easiest solution is to just check the python.exe and python3.exe to OFF as I suggested before, which deletes the fake EXE file files. The first time I ran into this problem, I manually deleted the python.exe and python3.exe files but when I restarted the files regenerated. That prompted me to search for the App Execution Aliases page and uncheck the box, which solved it for me, by not allowing the files to regenerate. Based on this Microsoft Devblog, they stated they created this system partially for new Python users, specifically kids learning Python in school that had trouble installing it, and focus on learning to code. I think Windows probably deletes those aliases if you install Python from the Windows App Store. We are noticing that they do not get deleted if you manually install from another source. (Also, the empty/fake python.exe is not really empty. It says 0 KB in the screenshot, but entering \"start ms-windows-store:\" in cmd opens the Windows App Store, so it probably just has a line with that and a way to direct it to the Python page.) One alternative, as Chipjust suggested, you can create a new alias for Python using something like DOSKEY as explained in this article for example: How to set aliases for the command prompt in Windows Another alternative is to delete the user path environment variable that points to the alias files, %USERPROFILE%\\AppData\\Local\\Microsoft\\WindowsApps, but the App Execution Aliases handle more apps than just python, and deleting the path from environment variables breaks all the other apps that have execution aliases in that directory; which on my PC includes notepad, xbox game bar, spotify, monitoring software for my motherboard, paint, windows subsystem for android, to name a few. Also if you think about it, the average Windows user is unfamiliar editing environment variables and on school and business owned computers requires administrative access. So deleting the path to ...\\WindowsApps, from the path environment variable, is not ideal."} +{"question_id": 62983756, "score": 431, "creation_date": 1595181079, "tags": ["python", "pip", "packaging", "pyproject.toml"], "instruction": "What is pyproject.toml file for?\n\nBackground I was about to try Python package downloaded from GitHub, and realized that it did not have a setup.py, so I could not install it with pip install -e Instead, the package had a pyproject.toml file which seems to have very similar entries as the setup.py usually has. What I found Googling lead me into PEP-518 and it gives some critique to setup.py in Rationale section. However, it does not clearly tell that usage of setup.py should be avoided, or that pyproject.toml would as such completely replace setup.py. Questions Is the pyproject.toml something that is used to replace setup.py? Or should a package come with both, a pyproject.toml and a setup.py? How would one install a project with pyproject.toml in an editable state?", "output": "What is it for? Currently there are multiple packaging tools being popular in Python community and while setuptools still seems to be prevalent it's not a de facto standard anymore. 
This situation creates a number of hassles for both end users and developers: For setuptools-based packages installation from source / build of a distribution can fail if one doesn't have setuptools installed; pip doesn't support the installation of packages based on other packaging tools from source, so these tools had to generate a setup.py file to produce a compatible package. To build a distribution package one has to install the packaging tool first and then use tool-specific commands; If package author decides to change the packaging tool, workflows must be changed as well to use different tool-specific commands. pyproject.toml is a new configuration file introduced by PEP 517 and PEP 518 to solve these problems: ... think of the (rough) steps required to produce a built artifact for a project: The source checkout of the project. Installation of the build system. Execute the build system. This PEP [518] covers step #2. PEP 517 covers step #3 ... Any tool can also extend this file with its own section (table) to accept tool-specific options, but it's up to them and not required. PEP 621 suggests using pyproject.toml to specify package core metadata in static, tool-agnostic way. Which backends currently support this is shown in the following table: enscons flit_core hatchling pdm-backend poetry-core setuptools 0.26.0+ 3.2+ 0.3+ 0.3.0+ 2.0.0+ 61.0.0+ Does it replace setup.py? For setuptools-based packages pyproject.toml is not strictly meant to replace setup.py, but rather to ensure its correct execution if it's still needed. For other packaging tools \u2013 yes, it is: Where the build-backend key exists, this takes precedence and the source tree follows the format and conventions of the specified backend (as such no setup.py is needed unless the backend requires it). Projects may still wish to include a setup.py for compatibility with tools that do not use this spec. How to install a package in editable mode? Originally \"editable install\" was a setuptools-specific feature and as such it was not supported by PEP 517. Later on PEP 660 extended this concept to packages using pyproject.toml. There are two possible conditions for installing a package in editable mode using pip: Modern: Both the frontend (pip) and a backend must support PEP 660. pip supports it since version 21.3; Legacy: Packaging tool must provide a setup.py file which supports the develop command. Since version 21.1 pip can also install packages using only setup.cfg file in editable mode. The following table describes the support of editable installs by various backends: enscons flit_core hatchling pdm-backend poetry-core setuptools 0.28.0+ 3.4+ 0.3+ 0.8.0+ 1.0.8+ 64.0.0+"} +{"question_id": 163823, "score": 430, "creation_date": 1222971979, "tags": ["python", "django", "django-models", "django-admin", "modeladmin"], "instruction": "Can \"list_display\" in a Django ModelAdmin display attributes of ForeignKey fields?\n\nI have a Person model that has a foreign key relationship to Book, which has a number of fields, but I'm most concerned about author (a standard CharField). With that being said, in my PersonAdmin model, I'd like to display book.author using list_display: class PersonAdmin(admin.ModelAdmin): list_display = ['book.author',] I've tried all of the obvious methods for doing so, but nothing seems to work. 
Any suggestions?", "output": "As another option, you can do lookups like: #models.py class UserAdmin(admin.ModelAdmin): list_display = (..., 'get_author') def get_author(self, obj): return obj.book.author get_author.short_description = 'Author' get_author.admin_order_field = 'book__author' For Django 3.2 or higher, please refer to this answer"} +{"question_id": 16807011, "score": 429, "creation_date": 1369809222, "tags": ["python", "arrays", "variables", "scalar"], "instruction": "python: how to identify if a variable is an array or a scalar\n\nI have a function that takes the argument NBins. I want to make a call to this function with a scalar 50 or an array [0, 10, 20, 30]. How can I identify within the function, what the length of NBins is? or said differently, if it is a scalar or a vector? I tried this: >>> N=[2,3,5] >>> P = 5 >>> len(N) 3 >>> len(P) Traceback (most recent call last): File \"\", line 1, in TypeError: object of type 'int' has no len() >>> As you see, I can't apply len to P, since it's not an array.... Is there something like isarray or isscalar in python? thanks", "output": ">>> import collections.abc >>> isinstance([0, 10, 20, 30], collections.abc.Sequence) and not isinstance([0, 10, 20, 30], (str, unicode)) True >>> isinstance(50, collections.abc.Sequence) and not isinstance(50, (str, unicode)) False note: isinstance also supports a tuple of classes, check type(x) in (..., ...) should be avoided and is unnecessary. You may also wanna check not isinstance(x, (str, unicode)) As noted by @2080 and also here this won't work for numpy arrays. eg. >>> import collections.abc >>> import numpy as np >>> isinstance((1, 2, 3), collections.abc.Sequence) True >>> isinstance(np.array([1, 2, 3]), collections.abc.Sequence) False In which case you may try the answer from @jpaddison3: >>> hasattr(np.array([1, 2, 3]), \"__len__\") True >>> hasattr([1, 2, 3], \"__len__\") True >>> hasattr((1, 2, 3), \"__len__\") True However as noted here, this is not perfect either, and will incorrectly (at least according to me) classify dictionaries as sequences whereas isinstance with collections.abc.Sequence classifies correctly: >>> hasattr({\"a\": 1}, \"__len__\") True >>> from numpy.distutils.misc_util import is_sequence >>> is_sequence({\"a\": 1}) True >>> isinstance({\"a\": 1}, collections.abc.Sequence) False You could customise your solution to something like this, add more types to isinstance depending on your needs: >>> isinstance(np.array([1, 2, 3]), (collections.abc.Sequence, np.ndarray)) True >>> isinstance([1, 2, 3], (collections.abc.Sequence, np.ndarray)) True"} +{"question_id": 3279560, "score": 429, "creation_date": 1279529061, "tags": ["python", "matplotlib"], "instruction": "Reverse colormap in matplotlib\n\nI would like to know how to simply reverse the color order of a given colormap in order to use it with plot_surface.", "output": "The standard colormaps also all have reversed versions. They have the same names with _r tacked on to the end. (Documentation here.)"} +{"question_id": 6590688, "score": 429, "creation_date": 1309916524, "tags": ["python", "django", "virtualenv"], "instruction": "Is it bad to have my virtualenv directory inside my git repository?\n\nI'm thinking about putting the virtualenv for a Django web app I am making inside my git repository for the app. It seems like an easy way to keep deploy's simple and easy. 
Is there any reason why I shouldn't do this?", "output": "I use pip freeze to get the packages I need into a requirements.txt file and add that to my repository. I tried to think of a way of why you would want to store the entire virtualenv, but I could not."} +{"question_id": 39590187, "score": 428, "creation_date": 1474363085, "tags": ["python", "requirements.txt"], "instruction": "In requirements.txt, what does tilde equals (~=) mean?\n\nIn the requirements.txt for a Python library I am using, one of the requirements is specified like: mock-django~=0.6.10 What does ~= mean?", "output": "It means it will select the latest version of the package, greater than or equal to 0.6.10, but still in the 0.6.* version, so it won't download 0.7.0 for example. It ensures you will get security fixes but keep backward-compatibility, if the package maintainer respects the semantic versioning (which states that breaking changes should occur only in major versions). Or, as said by PEP 440: For a given release identifier V.N , the compatible release clause is approximately equivalent to the pair of comparison clauses: >= V.N, == V.* Definition in PEP 440 Complete example here in the documentation"} +{"question_id": 713794, "score": 427, "creation_date": 1238763625, "tags": ["python", "exception"], "instruction": "Catching an exception while using a Python 'with' statement\n\nI can't figure out how to handle exception for python 'with' statement. If I have a code: with open(\"a.txt\") as f: print f.readlines() I really want to handle 'file not found exception' in order to do something. But I can't write with open(\"a.txt\") as f: print f.readlines() except: print 'oops' and can't write with open(\"a.txt\") as f: print f.readlines() else: print 'oops' Enclosing with in a try/except statement doesn't work either, and an exception is not raised. What can I do in order to process failure inside with statement in a Pythonic way?", "output": "This solution will keep the with-block-code outside of the try-except-clause. try: f = open('foo.txt') except FileNotFoundError: print('error') else: with f: print f.readlines()"} +{"question_id": 15753390, "score": 426, "creation_date": 1364857313, "tags": ["python", "mocking", "request"], "instruction": "How can I mock requests and the response?\n\nI am trying to use Pythons mock package to mock Pythons requests module. What are the basic calls to get me working in below scenario? 
In my views.py, I have a function that makes variety of requests.get() calls with different response each time def myview(request): res1 = requests.get('aurl') res2 = request.get('burl') res3 = request.get('curl') In my test class I want to do something like this but cannot figure out exact method calls Step 1: # Mock the requests module # when mockedRequests.get('aurl') is called then return 'a response' # when mockedRequests.get('burl') is called then return 'b response' # when mockedRequests.get('curl') is called then return 'c response' Step 2: Call my view Step 3: verify response contains 'a response', 'b response' , 'c response' How can I complete Step 1 (mocking the requests module)?", "output": "Here is what worked for me: from unittest import mock @mock.patch('requests.get', mock.Mock(side_effect = lambda k:{'aurl': 'a response', 'burl' : 'b response'}.get(k, 'unhandled request %s'%k)))"} +{"question_id": 10388462, "score": 425, "creation_date": 1335809342, "tags": ["python", "matplotlib", "subplot", "figure", "matplotlib-gridspec"], "instruction": "Matplotlib different size subplots\n\nI need to add two subplots to a figure. One subplot needs to be about three times as wide as the second (same height). I accomplished this using GridSpec and the colspan argument but I would like to do this using figure so I can save to PDF. I can adjust the first figure using the figsize argument in the constructor, but how do I change the size of the second plot?", "output": "As of matplotlib 3.6.0, width_ratios and height_ratios can now be passed directly as keyword arguments to plt.subplots and subplot_mosaic, as per What's new in Matplotlib 3.6.0 (Sep 15, 2022). f, (a0, a1) = plt.subplots(1, 2, width_ratios=[3, 1]) f, (a0, a1, a2) = plt.subplots(3, 1, height_ratios=[1, 1, 3]) Another way is to use the subplots function and pass the width ratio with gridspec_kw matplotlib Tutorial: Customizing Figure Layouts Using GridSpec and Other Functions matplotlib.gridspec.GridSpec has available gridspect_kw options import numpy as np import matplotlib.pyplot as plt # generate some data x = np.arange(0, 10, 0.2) y = np.sin(x) # plot it f, (a0, a1) = plt.subplots(1, 2, gridspec_kw={'width_ratios': [3, 1]}) a0.plot(x, y) a1.plot(y, x) f.tight_layout() f.savefig('grid_figure.pdf') Because the question is canonical, here is an example with vertical subplots. # plot it f, (a0, a1, a2) = plt.subplots(3, 1, gridspec_kw={'height_ratios': [1, 1, 3]}) a0.plot(x, y) a1.plot(x, y) a2.plot(x, y) f.tight_layout()"} +{"question_id": 10588644, "score": 424, "creation_date": 1337018590, "tags": ["python", "debugging", "https", "python-requests"], "instruction": "How can I see the entire HTTP request that's being sent by my Python application?\n\nIn my case, I'm using the Requests library to call PayPal's API over HTTPS. Unfortunately, I'm getting an error from PayPal, and PayPal support cannot figure out what the error is or what's causing it. They want me to \"Please provide the entire request, headers included\". How can I do that?", "output": "A simple method: enable logging in recent versions of Requests (1.x and higher.) Requests uses the http.client and logging module configuration to control logging verbosity, as described here. Demonstration Code excerpted from the linked documentation: import requests import logging # These two lines enable debugging at httplib level (requests->urllib3->http.client) # You will see the REQUEST, including HEADERS and DATA, and RESPONSE with HEADERS but without DATA. 
# The only thing missing will be the response.body which is not logged. try: import http.client as http_client except ImportError: # Python 2 import httplib as http_client http_client.HTTPConnection.debuglevel = 1 # You must initialize logging, otherwise you'll not see debug output. logging.basicConfig() logging.getLogger().setLevel(logging.DEBUG) requests_log = logging.getLogger(\"requests.packages.urllib3\") requests_log.setLevel(logging.DEBUG) requests_log.propagate = True requests.get('https://httpbin.org/headers') Example Output python requests-logging.py Output: INFO:requests.packages.urllib3.connectionpool:Starting new HTTPS connection (1): httpbin.org send: 'GET /headers HTTP/1.1\\r\\nHost: httpbin.org\\r\\nAccept-Encoding: gzip, deflate, compress\\r\\nAccept: */*\\r\\nUser-Agent: python-requests/1.2.0 CPython/2.7.3 Linux/3.2.0-48-generic\\r\\n\\r\\n' reply: 'HTTP/1.1 200 OK\\r\\n' header: Content-Type: application/json header: Date: Sat, 29 Jun 2013 11:19:34 GMT header: Server: gunicorn/0.17.4 header: Content-Length: 226 header: Connection: keep-alive DEBUG:requests.packages.urllib3.connectionpool:\"GET /headers HTTP/1.1\" 200 226"} +{"question_id": 11497376, "score": 424, "creation_date": 1342404851, "tags": ["python", "line-breaks", "file-writing"], "instruction": "How do I specify new lines in a string in order to write multiple lines to a file?\n\nHow can I indicate a newline in a string in Python, so that I can write multiple lines to a text file?", "output": "It depends on how correct you want to be. \\n will usually do the job. If you really want to get it right, you look up the newline character in the os package. (It's actually called linesep.) Note: when writing to files using the Python API, do not use the os.linesep. Just use \\n; Python automatically translates that to the proper newline character for your platform."} +{"question_id": 18674064, "score": 423, "creation_date": 1378562341, "tags": ["python", "indexing", "pandas"], "instruction": "how do I insert a column at a specific column index in pandas?\n\nCan I insert a column at a specific column index in pandas? import pandas as pd df = pd.DataFrame({'l':['a','b','c','d'], 'v':[1,2,1,2]}) df['n'] = 0 This will put column n as the last column of df, but isn't there a way to tell df to put n at the beginning?", "output": "see docs: http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.insert.html using loc = 0 will insert at the beginning df.insert(loc, column, value) df = pd.DataFrame({'B': [1, 2, 3], 'C': [4, 5, 6]}) df Out: B C 0 1 4 1 2 5 2 3 6 idx = 0 new_col = [7, 8, 9] # can be a list, a Series, an array or a scalar df.insert(loc=idx, column='A', value=new_col) df Out: A B C 0 7 1 4 1 8 2 5 2 9 3 6"} +{"question_id": 9236926, "score": 423, "creation_date": 1328922666, "tags": ["python", "arrays", "numpy", "concatenation", "numpy-ndarray"], "instruction": "Concatenating two one-dimensional NumPy arrays\n\nHow do I concatenate two one-dimensional arrays in NumPy? I tried numpy.concatenate: import numpy as np a = np.array([1, 2, 3]) b = np.array([4, 5]) np.concatenate(a, b) But I get an error: TypeError: only length-1 arrays can be converted to Python scalars", "output": "Use: np.concatenate([a, b]) The arrays you want to concatenate need to be passed in as a sequence, not as separate arguments. From the NumPy documentation: numpy.concatenate((a1, a2, ...), axis=0) Join a sequence of arrays together. 
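Either a tuple or a list satisfies that sequence requirement; a minimal check with the arrays from the question:
import numpy as np

a = np.array([1, 2, 3])
b = np.array([4, 5])

print(np.concatenate((a, b)))   # tuple form, as in the docs -> [1 2 3 4 5]
print(np.concatenate([a, b]))   # a list works just as well
In the failing call np.concatenate(a, b), no sequence was passed at all: the two arrays arrived as two separate positional arguments.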
It was trying to interpret your b as the axis parameter, which is why it complained it couldn't convert it into a scalar."} +{"question_id": 10062954, "score": 422, "creation_date": 1333889767, "tags": ["python", "numpy"], "instruction": "ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()\n\nLet x be a NumPy array. The following: (x > 1) and (x < 3) Gives the error message: ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() How do I fix this?", "output": "If a and b are Boolean NumPy arrays, the & operation returns the elementwise-and of them: a & b That returns a Boolean array. To reduce this to a single Boolean value, use either (a & b).any() or (a & b).all() Note: if a and b are non-Boolean arrays, consider (a - b).any() or (a - b).all() instead. Rationale The NumPy developers felt there was no one commonly understood way to evaluate an array in Boolean context: it could mean True if any element is True, or it could mean True if all elements are True, or True if the array has non-zero length, just to name three possibilities. Since different users might have different needs and different assumptions, the NumPy developers refused to guess and instead decided to raise a ValueError whenever one tries to evaluate an array in Boolean context. Applying and to two numpy arrays causes the two arrays to be evaluated in Boolean context (by calling __bool__ in Python3 or __nonzero__ in Python2)."} +{"question_id": 458550, "score": 420, "creation_date": 1232388354, "tags": ["python", "setuptools", "setup.py", "python-packaging"], "instruction": "Standard way to embed version into Python package?\n\nIs there a standard way to associate version string with a Python package in such way that I could do the following? import foo print(foo.version) I would imagine there's some way to retrieve that data without any extra hardcoding, since minor/major strings are specified in setup.py already. Alternative solution that I found was to have import __version__ in my foo/__init__.py and then have __version__.py generated by setup.py.", "output": "Not directly an answer to your question, but you should consider naming it __version__, not version. This is almost a quasi-standard. Many modules in the standard library use __version__, and this is also used in lots of 3rd-party modules, so it's the quasi-standard. Usually, __version__ is a string, but sometimes it's also a float or tuple. As mentioned by S.Lott (Thank you!), PEP 8 says it explicitly: Module Level Dunder Names Module level \"dunders\" (i.e. names with two leading and two trailing underscores) such as __all__, __author__, __version__, etc. should be placed after the module docstring but before any import statements except from __future__ imports. You should also make sure that the version number conforms to the format described in PEP 440 (PEP 386 a previous version of this standard)."} +{"question_id": 3090302, "score": 419, "creation_date": 1277181522, "tags": ["python", "django", "django-queryset"], "instruction": "How do I get the object if it exists, or None if it does not exist in Django?\n\nWhen I ask the model manager to get an object, it raises DoesNotExist when there is no matching object. go = Content.objects.get(name=\"baby\") Instead of DoesNotExist, how can I have go be None instead?", "output": "There is no 'built in' way to do this as of version 1.2. Django will raise the DoesNotExist exception every time. 
The idiomatic way to handle this in Python is to wrap it in a try catch: try: go = SomeModel.objects.get(foo='bar') except SomeModel.DoesNotExist: go = None What I did was to subclass models.Manager, create a safe_get like the code above, and use that manager for my models. That way you can write one line to achieve this: SomeModel.objects.safe_get(foo='bar')"} +{"question_id": 6981717, "score": 418, "creation_date": 1312804576, "tags": ["python", "loops", "if-statement", "for-loop"], "instruction": "Pythonic way to combine for-loop and if-statement\n\nI know how to use both for loops and if statements on separate lines, such as: >>> a = [2,3,4,5,6,7,8,9,0] ... xyz = [0,12,4,6,242,7,9] ... for x in xyz: ... if x in a: ... print(x) 0,4,6,7,9 And I know I can use a list comprehension to combine these when the statements are simple, such as: print([x for x in xyz if x in a]) But what I can't find is a good example anywhere (to copy and learn from) demonstrating a complex set of commands (not just \"print x\") that occur following a combination of a for loop and some if statements. Something that I would expect looks like: for x in xyz if x not in a: print(x...) Is this just not the way python is supposed to work?", "output": "You can use generator expressions like this: gen = (x for x in xyz if x not in a) for x in gen: print(x)"} +{"question_id": 3368969, "score": 417, "creation_date": 1280469028, "tags": ["string", "python", "substring"], "instruction": "Find string between two substrings\n\nHow do I find a string between two substrings ('123STRINGabc' -> 'STRING')? My current method is like this: >>> start = 'asdf=5;' >>> end = '123jasd' >>> s = 'asdf=5;iwantthis123jasd' >>> print((s.split(start))[1].split(end)[0]) iwantthis However, this seems very inefficient and un-pythonic. What is a better way to do something like this? Forgot to mention: The string might not start and end with start and end. They may have more characters before and after.", "output": "import re s = 'asdf=5;iwantthis123jasd' result = re.search('asdf=5;(.*)123jasd', s) print(result.group(1)) # returns 'iwantthis'"} +{"question_id": 7610001, "score": 417, "creation_date": 1317383909, "tags": ["python", "command-line", "module", "package"], "instruction": "What is the purpose of the -m switch?\n\nCould you explain to me what the difference is between calling python -m mymod1 mymod2.py args and python mymod1.py mymod2.py args It seems in both cases mymod1.py is called and sys.argv is ['mymod1.py', 'mymod2.py', 'args'] So what is the -m switch for?", "output": "The first line of the Rationale section of PEP 338 says: Python 2.4 adds the command line switch -m to allow modules to be located using the Python module namespace for execution as scripts. The motivating examples were standard library modules such as pdb and profile, and the Python 2.4 implementation is fine for this limited purpose. So you can specify any module in Python's search path this way, not just files in the current directory. You're correct that python mymod1.py mymod2.py args has exactly the same effect. The first line of the Scope of this proposal section states: In Python 2.4, a module located using -m is executed just as if its filename had been provided on the command line. With -m more is possible, like working with modules which are part of a package, etc. That's what the rest of PEP 338 is about. 
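To make the package case concrete, here is a small hypothetical layout (the names mypkg, helpers and runner are invented for illustration, not taken from the PEP):
# mypkg/__init__.py   (empty file, marks mypkg as a package)

# mypkg/helpers.py
def greet():
    print('hello from the package')

# mypkg/runner.py
from .helpers import greet    # explicit relative import, needs package context
greet()
From the directory that contains mypkg/, running python mypkg/runner.py under a current Python 3 fails with an ImportError about a relative import with no known parent package, while python -m mypkg.runner locates the module through the package namespace and runs it. The same mechanism is why python -m http.server (Python 3) works from any directory. PEP 338 spells out the exact lookup rules.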
Read it for more info."} +{"question_id": 45421163, "score": 416, "creation_date": 1501519050, "tags": ["python", "anaconda", "miniconda"], "instruction": "Anaconda vs. miniconda\n\nIn the Anaconda repository, there are two types of installers: \"Anaconda installers\" and \"Miniconda installers\". What are their differences? Besides, for an installer file, Anaconda2-4.4.0.1-Linux-ppc64le.sh, what does 2-4.4.0.1 stand for?", "output": "The difference is that miniconda is just shipping the repository management system. So when you install it there is just the management system without packages. Whereas with Anaconda, it is like a distribution with some built in packages. Like with any Linux distribution, there are some releases which bundles lots of updates for the included packages. That is why there is a difference in version numbering. If you only decide to upgrade Anaconda, you are updating a whole system. EDIT there are new options now for on the package management side. mamba can be used as a drop in replacement for conda. It has a faster solver and is a complete re-write in C++. The solver is actually experimentally available in conda with --experimental-solver=libmamba. Keywords to look for if you want to learn more: mamba, mambaforge, micromamba."} +{"question_id": 3925096, "score": 415, "creation_date": 1286982195, "tags": ["python", "path", "path-manipulation"], "instruction": "How to get only the last part of a path in Python?\n\nIn python, suppose I have a path like this: /folderA/folderB/folderC/folderD/ How can I get just the folderD part?", "output": "Use os.path.normpath to strip off any trailing slashes, then os.path.basename gives you the last part of the path: >>> os.path.basename(os.path.normpath('/folderA/folderB/folderC/folderD/')) 'folderD' Using only basename gives everything after the last slash, which in this case is ''."} +{"question_id": 5466618, "score": 415, "creation_date": 1301358734, "tags": ["python"], "instruction": "'too many values to unpack', iterating over a dict. key=>string, value=>list\n\nI am getting the too many values to unpack error. Any idea how I can fix this? first_names = ['foo', 'bar'] last_names = ['gravy', 'snowman'] fields = { 'first_names': first_names, 'last_name': last_names, } for field, possible_values in fields: # error happens on this line", "output": "Python 3 Use items(). for field, possible_values in fields.items(): print(field, possible_values) Python 2 Use iteritems(). for field, possible_values in fields.iteritems(): print field, possible_values See this answer for more information on iterating through dictionaries, such as using items(), across Python versions. For reference, iteritems() was removed in Python 3."} +{"question_id": 4075190, "score": 412, "creation_date": 1288676712, "tags": ["python", "function", "built-in", "getattr", "class-attributes"], "instruction": "What is and how can I to use getattr() in Python?\n\nI read an article about the getattr function, but I still can't understand what it's for. The only thing I understand about getattr() is that getattr(li, \"pop\") is the same as calling li.pop. When and how do I use this exactly? The book said something about using it to get a reference to a function whose name isn't known until runtime, but when and why would I use this?", "output": "getattr(object, 'x') is completely equivalent to object.x. There are only two cases where getattr can be useful. you can't write object.x, because you don't know in advance which attribute you want (it comes from a string). 
It is very useful for metaprogramming. you want to provide a default value. object.y will raise an AttributeError if there's no y. But getattr(object, 'y', 5) will return 5."} +{"question_id": 949098, "score": 411, "creation_date": 1244101038, "tags": ["python", "list"], "instruction": "How can I partition (split up, divide) a list based on a condition?\n\nI have some code like: good = [x for x in mylist if x in goodvals] bad = [x for x in mylist if x not in goodvals] The goal is to split up the contents of mylist into two other lists, based on whether or not they meet a condition. How can I do this more elegantly? Can I avoid doing two separate iterations over mylist? Can I improve performance by doing so?", "output": "good = [x for x in mylist if x in goodvals] bad = [x for x in mylist if x not in goodvals] is there a more elegant way to do this? That code is perfectly readable, and extremely clear! # files looks like: [ ('file1.jpg', 33L, '.jpg'), ('file2.avi', 999L, '.avi'), ... ] IMAGE_TYPES = ('.jpg','.jpeg','.gif','.bmp','.png') images = [f for f in files if f[2].lower() in IMAGE_TYPES] anims = [f for f in files if f[2].lower() not in IMAGE_TYPES] Again, this is fine! There might be slight performance improvements using sets, but it's a trivial difference, and I find the list comprehension far easier to read, and you don't have to worry about the order being messed up, duplicates being removed as so on. In fact, I may go another step \"backward\", and just use a simple for loop: images, anims = [], [] for f in files: if f.lower() in IMAGE_TYPES: images.append(f) else: anims.append(f) The a list-comprehension or using set() is fine until you need to add some other check or another bit of logic - say you want to remove all 0-byte jpeg's, you just add something like.. if f[1] == 0: continue"} +{"question_id": 582056, "score": 410, "creation_date": 1235487402, "tags": ["python", "parameters"], "instruction": "Getting list of parameter names inside python function\n\nIs there an easy way to be inside a python function and get a list of the parameter names? For example: def func(a,b,c): print magic_that_does_what_I_want() >>> func() ['a','b','c'] Thanks", "output": "Well we don't actually need inspect here. >>> func = lambda x, y: (x, y) >>> >>> func.__code__.co_argcount 2 >>> func.__code__.co_varnames ('x', 'y') >>> >>> def func2(x,y=3): ... print(func2.__code__.co_varnames) ... pass # Other things ... >>> func2(3,3) ('x', 'y') >>> >>> func2.__defaults__ (3,)"} +{"question_id": 29370057, "score": 408, "creation_date": 1427809086, "tags": ["python", "pandas", "dataframe", "date", "datetime"], "instruction": "Select DataFrame rows between two dates\n\nI am creating a DataFrame from a csv as follows: stock = pd.read_csv('data_in/' + filename + '.csv', skipinitialspace=True) The DataFrame has a date column. Is there a way to create a new DataFrame (or just overwrite the existing one) which only contains rows with date values that fall within a specified date range or between two specified date values?", "output": "There are two possible solutions: Use a boolean mask, then use df.loc[mask] Set the date column as a DatetimeIndex, then use df[start_date : end_date] Using a boolean mask: Ensure df['date'] is a Series with dtype datetime64[ns]: df['date'] = pd.to_datetime(df['date']) Make a boolean mask. 
start_date and end_date can be datetime.datetimes, np.datetime64s, pd.Timestamps, or even datetime strings: #greater than the start date and smaller than the end date mask = (df['date'] > start_date) & (df['date'] <= end_date) Select the sub-DataFrame: df.loc[mask] or re-assign to df df = df.loc[mask] For example, import numpy as np import pandas as pd df = pd.DataFrame(np.random.random((200,3))) df['date'] = pd.date_range('2000-1-1', periods=200, freq='D') mask = (df['date'] > '2000-6-1') & (df['date'] <= '2000-6-10') print(df.loc[mask]) yields 0 1 2 date 153 0.208875 0.727656 0.037787 2000-06-02 154 0.750800 0.776498 0.237716 2000-06-03 155 0.812008 0.127338 0.397240 2000-06-04 156 0.639937 0.207359 0.533527 2000-06-05 157 0.416998 0.845658 0.872826 2000-06-06 158 0.440069 0.338690 0.847545 2000-06-07 159 0.202354 0.624833 0.740254 2000-06-08 160 0.465746 0.080888 0.155452 2000-06-09 161 0.858232 0.190321 0.432574 2000-06-10 Using a DatetimeIndex: If you are going to do a lot of selections by date, it may be quicker to set the date column as the index first. Then you can select rows by date using df.loc[start_date:end_date]. import numpy as np import pandas as pd df = pd.DataFrame(np.random.random((200,3))) df['date'] = pd.date_range('2000-1-1', periods=200, freq='D') df = df.set_index(['date']) print(df.loc['2000-6-1':'2000-6-10']) yields 0 1 2 date 2000-06-01 0.040457 0.326594 0.492136 # <- includes start_date 2000-06-02 0.279323 0.877446 0.464523 2000-06-03 0.328068 0.837669 0.608559 2000-06-04 0.107959 0.678297 0.517435 2000-06-05 0.131555 0.418380 0.025725 2000-06-06 0.999961 0.619517 0.206108 2000-06-07 0.129270 0.024533 0.154769 2000-06-08 0.441010 0.741781 0.470402 2000-06-09 0.682101 0.375660 0.009916 2000-06-10 0.754488 0.352293 0.339337 While Python list indexing, e.g. seq[start:end] includes start but not end, in contrast, Pandas df.loc[start_date : end_date] includes both end-points in the result if they are in the index. Neither start_date nor end_date has to be in the index however. Also note that pd.read_csv has a parse_dates parameter which you could use to parse the date column as datetime64s. Thus, if you use parse_dates, you would not need to use df['date'] = pd.to_datetime(df['date'])."} +{"question_id": 3798835, "score": 407, "creation_date": 1285520142, "tags": ["python", "python-descriptors"], "instruction": "Understanding __get__ and __set__ and Python descriptors\n\nI am trying to understand what Python's descriptors are and what they are useful for. I understand how they work, but here are my doubts. Consider the following code: class Celsius(object): def __init__(self, value=0.0): self.value = float(value) def __get__(self, instance, owner): return self.value def __set__(self, instance, value): self.value = float(value) class Temperature(object): celsius = Celsius() Why do I need the descriptor class? What is instance and owner here? (in __get__). What is the purpose of these parameters? How would I call/use this example?", "output": "The descriptor is how Python's property type is implemented. A descriptor simply implements __get__, __set__, etc. and is then added to another class in its definition (as you did above with the Temperature class). For example: temp=Temperature() temp.celsius #calls celsius.__get__ Accessing the property you assigned the descriptor to (celsius in the above example) calls the appropriate descriptor method. 
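Assignment is routed through the descriptor in the same way; a minimal continuation of the example, using only the classes already defined in the question:
temp = Temperature()
temp.celsius = 25        # calls Celsius.__set__ on the class attribute, which stores float(25)
print(temp.celsius)      # calls Celsius.__get__ and prints 25.0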
instance in __get__ is the instance of the class (so above, __get__ would receive temp, while owner is the class with the descriptor (so it would be Temperature). You need to use a descriptor class to encapsulate the logic that powers it. That way, if the descriptor is used to cache some expensive operation (for example), it could store the value on itself and not its class. The official Python documentation includes an article about descriptors that walks through how they work in more detail, including several examples. EDIT: As jchl pointed out in the comments, if you simply try Temperature.celsius, instance will be None."} +{"question_id": 4581789, "score": 406, "creation_date": 1294023305, "tags": ["python", "django"], "instruction": "How do I get user IP address in Django?\n\nHow do I get user's IP in Django? I have a view like this: # Create your views from django.contrib.gis.utils import GeoIP from django.template import RequestContext from django.shortcuts import render_to_response def home(request): g = GeoIP() client_ip = request.META['REMOTE_ADDR'] lat,long = g.lat_lon(client_ip) return render_to_response('home_page_tmp.html',locals()) But I get this error: KeyError at /mypage/ 'REMOTE_ADDR' Request Method: GET Request URL: http://mywebsite.example/mypage/ Django Version: 1.2.4 Exception Type: KeyError Exception Value: 'REMOTE_ADDR' Exception Location: /mysite/homepage/views.py in home, line 9 Python Executable: /usr/bin/python Python Version: 2.6.6 Python Path: ['/mysite', '/usr/local/lib/python2.6/dist-packages/flup-1.0.2-py2.6.egg', '/usr/lib/python2.6', '/usr/lib/python2.6/plat-linux2', '/usr/lib/python2.6/lib-tk', '/usr/lib/python2.6/lib-old', '/usr/lib/python2.6/lib-dynload', '/usr/local/lib/python2.6/dist-packages', '/usr/lib/python2.6/dist-packages', '/usr/lib/pymodules/python2.6'] Server time: Sun, 2 Jan 2011 20:42:50 -0600", "output": "def get_client_ip(request): x_forwarded_for = request.META.get('HTTP_X_FORWARDED_FOR') if x_forwarded_for: ip = x_forwarded_for.split(',')[0] else: ip = request.META.get('REMOTE_ADDR') return ip Make sure you have reverse proxy (if any) configured correctly (e.g. mod_rpaf installed for Apache). Note: the above uses the first item in X-Forwarded-For, but you might want to use the last item (e.g., in the case of Heroku: Get client's real IP address on Heroku) And then just pass the request as argument to it; get_client_ip(request) Django documentation for HttpRequest.META"} +{"question_id": 8177079, "score": 406, "creation_date": 1321583483, "tags": ["python", "list"], "instruction": "Take the content of a list and append it to another list\n\nI am trying to understand if it makes sense to take the content of a list and append it to another list. I have the first list created through a loop function, that will get specific lines out of a file and will save them in a list. Then a second list is used to save these lines, and start a new cycle over another file. My idea was to get the list once that the for cycle is done, dump it into the second list, then start a new cycle, dump the content of the first list again into the second but appending it, so the second list will be the sum of all the smaller list files created in my loop. The list has to be appended only if certain conditions met. 
It looks like something similar to this: # This is done for each log in my directory, i have a loop running for logs in mydir: for line in mylog: #...if the conditions are met list1.append(line) for item in list1: if \"string\" in item: #if somewhere in the list1 i have a match for a string list2.append(list1) # append every line in list1 to list2 del list1 [:] # delete the content of the list1 break else: del list1 [:] # delete the list content and start all over Does this makes sense or should I go for a different route? I need something efficient that would not take up too many cycles, since the list of logs is long and each text file is pretty big; so I thought that the lists would fit the purpose.", "output": "You probably want list2.extend(list1) instead of list2.append(list1) Here's the difference: >>> a = [1, 2, 3] >>> b = [4, 5, 6] >>> c = [7, 8, 9] >>> b.append(a) >>> b [4, 5, 6, [1, 2, 3]] >>> c.extend(a) >>> c [7, 8, 9, 1, 2, 3] Since list.extend() accepts an arbitrary iterable, you can also replace for line in mylog: list1.append(line) by list1.extend(mylog)"} +{"question_id": 48720833, "score": 406, "creation_date": 1518266144, "tags": ["python", "python-3.x", "python-2.7", "tensorflow", "pip"], "instruction": "Could not find a version that satisfies the requirement tensorflow\n\nI installed the latest version of Python (3.6.4 64-bit) and the latest version of PyCharm (2017.3.3 64-bit). Then I installed some modules in PyCharm (Numpy, Pandas, etc), but when I tried installing Tensorflow it didn't install, and I got the error message: Could not find a version that satisfies the requirement TensorFlow (from versions: ) No matching distribution found for TensorFlow. Then I tried installing TensorFlow from the command prompt and I got the same error message. I did however successfully install tflearn. I also installed Python 2.7, but I got the same error message again. I googled the error and tried some of the things which were suggested to other people, but nothing worked (this included installing Flask). How can I install Tensorflow? Thanks.", "output": "The latest requirements for running TensorFlow are documented in the installation documentation. TensorFlow only supports 64-bit Python TensorFlow only supports certain versions of Python (for example, Python 3.6 is not supported) So, if you're using an out-of-range version of Python (older or newer) or a 32-bit version, then you'll need to use a different version."} +{"question_id": 2397141, "score": 404, "creation_date": 1267982874, "tags": ["python", "multidimensional-array"], "instruction": "How to initialize a two-dimensional array (list of lists, if not using NumPy) in Python?\n\nI'm beginning python and I'm trying to use a two-dimensional list, that I initially fill up with the same variable in every place. I came up with this: def initialize_twodlist(foo): twod_list = [] new = [] for i in range (0, 10): for j in range (0, 10): new.append(foo) twod_list.append(new) new = [] It gives the desired result, but feels like a workaround. Is there an easier/shorter/more elegant way to do this?", "output": "A pattern that often came up in Python was bar = [] for item in some_iterable: bar.append(SOME EXPRESSION) which helped motivate the introduction of list comprehensions, which convert that snippet to bar = [SOME_EXPRESSION for item in some_iterable] which is shorter and sometimes clearer. Usually, you get in the habit of recognizing these and often replacing loops with comprehensions. 
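Applied to initialize_twodlist, the whole body collapses to one nested comprehension (a sketch assuming, as in the question, that foo is the fill value; the annotated version of your function just below shows the same mapping):
twod_list = [[foo for _ in range(10)] for _ in range(10)]
Each inner list is built separately here; the tempting shortcut [[foo] * 10] * 10 would instead repeat a single inner list object ten times, so changing one row would appear to change them all.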
Your code follows this pattern twice twod_list = [] \\ for i in range (0, 10): \\ new = [] \\ can be replaced } this too for j in range (0, 10): } with a list / new.append(foo) / comprehension / twod_list.append(new) /"} +{"question_id": 400739, "score": 404, "creation_date": 1230654533, "tags": ["python"], "instruction": "What does asterisk * mean in Python?\n\nDoes * have a special meaning in Python as it does in C? I saw a function like this in the Python Cookbook: def get(self, *a, **kw) Would you please explain it to me or point out where I can find an answer (Google interprets the * as wild card character and thus I cannot find a satisfactory answer).", "output": "See Function Definitions in the Language Reference. If the form *identifier is present, it is initialized to a tuple receiving any excess positional parameters, defaulting to the empty tuple. If the form **identifier is present, it is initialized to a new dictionary receiving any excess keyword arguments, defaulting to a new empty dictionary. Also, see Function Calls. Assuming that one knows what positional and keyword arguments are, here are some examples: Example 1: # Excess keyword argument (python 3) example: def foo(a, b, c, **args): print(\"a = %s\" % (a,)) print(\"b = %s\" % (b,)) print(\"c = %s\" % (c,)) print(args) foo(a=\"testa\", d=\"excess\", c=\"testc\", b=\"testb\", k=\"another_excess\") As you can see in the above example, we only have parameters a, b, c in the signature of the foo function. Since d and k are not present, they are put into the args dictionary. The output of the program is: a = testa b = testb c = testc {'k': 'another_excess', 'd': 'excess'} Example 2: # Excess positional argument (python 3) example: def foo(a, b, c, *args): print(\"a = %s\" % (a,)) print(\"b = %s\" % (b,)) print(\"c = %s\" % (c,)) print(args) foo(\"testa\", \"testb\", \"testc\", \"excess\", \"another_excess\") Here, since we're testing positional arguments, the excess ones have to be on the end, and *args packs them into a tuple, so the output of this program is: a = testa b = testb c = testc ('excess', 'another_excess') You can also unpack a dictionary or a tuple into arguments of a function: def foo(a,b,c,**args): print(\"a=%s\" % (a,)) print(\"b=%s\" % (b,)) print(\"c=%s\" % (c,)) print(\"args=%s\" % (args,)) argdict = dict(a=\"testa\", b=\"testb\", c=\"testc\", excessarg=\"string\") foo(**argdict) Prints: a=testa b=testb c=testc args={'excessarg': 'string'} And def foo(a,b,c,*args): print(\"a=%s\" % (a,)) print(\"b=%s\" % (b,)) print(\"c=%s\" % (c,)) print(\"args=%s\" % (args,)) argtuple = (\"testa\",\"testb\",\"testc\",\"excess\") foo(*argtuple) Prints: a=testa b=testb c=testc args=('excess',)"} +{"question_id": 7588511, "score": 402, "creation_date": 1317238136, "tags": ["python", "datetime", "string-formatting", "milliseconds"], "instruction": "Format a datetime into a string with milliseconds\n\nHow can I format a datetime object as a string with milliseconds?", "output": "To get a date string with milliseconds, use [:-3] to trim the last three digits of %f (microseconds): >>> from datetime import datetime >>> datetime.utcnow().strftime('%Y-%m-%d %H:%M:%S.%f')[:-3] '2022-09-24 10:18:32.926' Or shorter: >>> from datetime import datetime >>> datetime.utcnow().strftime('%F %T.%f')[:-3] '2022-09-24 10:18:32.926' See the Python docs for more \"%\" format codes and the strftime(3) man page for the full list."} +{"question_id": 1027714, "score": 402, "creation_date": 1245683100, "tags": ["python"], "instruction": "How to 
execute a file within the Python interpreter?\n\nI'm trying to execute a file with Python commands from within the interpreter. I'm trying to use variables and settings from that file, not to invoke a separate process.", "output": "Several ways. From the shell python someFile.py From inside IDLE, hit F5. If you're typing interactively, try this (Python3): >>> exec(open(\"filename.py\").read()) For Python 2: >>> variables= {} >>> execfile( \"someFile.py\", variables ) >>> print variables # globals from the someFile module"} +{"question_id": 17477979, "score": 402, "creation_date": 1372971320, "tags": ["python", "pandas", "numpy"], "instruction": "Dropping infinite values from dataframes in pandas?\n\nHow do I drop nan, inf, and -inf values from a DataFrame without resetting mode.use_inf_as_null? Can I tell dropna to include inf in its definition of missing values so that the following works? df.dropna(subset=[\"col1\", \"col2\"], how=\"all\")", "output": "First replace() infs with NaN: df.replace([np.inf, -np.inf], np.nan, inplace=True) and then drop NaNs via dropna(): df.dropna(subset=[\"col1\", \"col2\"], how=\"all\", inplace=True) For example: >>> df = pd.DataFrame({\"col1\": [1, np.inf, -np.inf], \"col2\": [2, 3, np.nan]}) >>> df col1 col2 0 1.0 2.0 1 inf 3.0 2 -inf NaN >>> df.replace([np.inf, -np.inf], np.nan, inplace=True) >>> df col1 col2 0 1.0 2.0 1 NaN 3.0 2 NaN NaN >>> df.dropna(subset=[\"col1\", \"col2\"], how=\"all\", inplace=True) >>> df col1 col2 0 1.0 2.0 1 NaN 3.0 The same method also works for Series."} +{"question_id": 15115328, "score": 402, "creation_date": 1361977276, "tags": ["python", "python-2.7", "python-requests"], "instruction": "Python Requests - No connection adapters\n\nI'm using the Requests: HTTP for Humans library and I got this error: No connection adapters were found for '192.168.1.61:8080/api/call' What does this mean, and how can I fix it?", "output": "You need to include the protocol scheme: 'http://192.168.1.61:8080/api/call' Without the http:// part, Requests doesn\u2019t have any idea how to connect to the remote server. Note that the protocol scheme must be all lowercase; if your URL starts with HTTP:// for example, it won\u2019t find the http:// connection adapter either."} +{"question_id": 847850, "score": 402, "creation_date": 1242044506, "tags": ["python", "cross-platform", "temporary-directory"], "instruction": "Cross-platform way of getting temp directory in Python\n\nIs there a cross-platform way of getting the path to the temp directory in Python 2.6? For example, under Linux that would be /tmp, while under XP C:\\Documents and settings\\[user]\\Application settings\\Temp.", "output": "That would be the tempfile module. It has functions to get the temporary directory, and also has some shortcuts to create temporary files and directories in it, either named or unnamed. Example: import tempfile print tempfile.gettempdir() # prints the current temporary directory f = tempfile.TemporaryFile() f.write('something on temporaryfile') f.seek(0) # return to beginning of file print f.read() # reads data back from the file f.close() # temporary file is automatically deleted here For completeness, here's how it searches for the temporary directory, according to the documentation: The directory named by the TMPDIR environment variable. The directory named by the TEMP environment variable. The directory named by the TMP environment variable. A platform-specific location: On RiscOS, the directory named by the Wimp$ScrapDir environment variable. 
On Windows, the directories C:\\TEMP, C:\\TMP, \\TEMP, and \\TMP, in that order. On all other platforms, the directories /tmp, /var/tmp, and /usr/tmp, in that order. As a last resort, the current working directory."} +{"question_id": 39922986, "score": 401, "creation_date": 1475861785, "tags": ["python", "pandas", "dataframe", "group-by", "aggregate"], "instruction": "How do I use Pandas group-by to get the sum?\n\nI am using this dataframe: Fruit Date Name Number Apples 10/6/2016 Bob 7 Apples 10/6/2016 Bob 8 Apples 10/6/2016 Mike 9 Apples 10/7/2016 Steve 10 Apples 10/7/2016 Bob 1 Oranges 10/7/2016 Bob 2 Oranges 10/6/2016 Tom 15 Oranges 10/6/2016 Mike 57 Oranges 10/6/2016 Bob 65 Oranges 10/7/2016 Tony 1 Grapes 10/7/2016 Bob 1 Grapes 10/7/2016 Tom 87 Grapes 10/7/2016 Bob 22 Grapes 10/7/2016 Bob 12 Grapes 10/7/2016 Tony 15 I would like to aggregate this by Name and then by Fruit to get a total number of Fruit per Name. For example: Bob,Apples,16 I tried grouping by Name and Fruit but how do I get the total number of Fruit?", "output": "Use GroupBy.sum: df.groupby(['Fruit','Name']).sum() Out[31]: Number Fruit Name Apples Bob 16 Mike 9 Steve 10 Grapes Bob 35 Tom 87 Tony 15 Oranges Bob 67 Mike 57 Tom 15 Tony 1 To specify the column to sum, use this: df.groupby(['Name', 'Fruit'])['Number'].sum()"} +{"question_id": 42703500, "score": 401, "creation_date": 1489086400, "tags": ["python", "serialization", "deep-learning", "pytorch", "tensor"], "instruction": "How do I save a trained model in PyTorch?\n\nHow do I save a trained model in PyTorch? I have read that: torch.save()/torch.load() is for saving/loading a serializable object. model.state_dict()/model.load_state_dict() is for saving/loading model state.", "output": "Found this page on their github repo: Recommended approach for saving a model There are two main approaches for serializing and restoring a model. The first (recommended) saves and loads only the model parameters: torch.save(the_model.state_dict(), PATH) Then later: the_model = TheModelClass(*args, **kwargs) the_model.load_state_dict(torch.load(PATH)) The second saves and loads the entire model: torch.save(the_model, PATH) Then later: the_model = torch.load(PATH) However in this case, the serialized data is bound to the specific classes and the exact directory structure used, so it can break in various ways when used in other projects, or after some serious refactors. See also: Save and Load the Model section from the official PyTorch tutorials."} +{"question_id": 6473679, "score": 400, "creation_date": 1308948884, "tags": ["python", "list", "transpose"], "instruction": "Transpose list of lists\n\nSuppose I have l = [[1, 2, 3], [4, 5, 6], [7, 8, 9]] How can I get a result like so? r = [[1, 4, 7], [2, 5, 8], [3, 6, 9]] I know how to get r = [(1, 4, 7), (2, 5, 8), (3, 6, 9)] but I must have lists as elements of the result.", "output": "For rectangular data (or to limit each \"column\" to the length of the shortest input \"row\") In Python 3.x, use: # short circuits at shortest nested list if table is jagged: list(map(list, zip(*l))) In Python 2.x, use: # short circuits at shortest nested list if table is jagged: map(list, zip(*l)) There are two important things to understand here: The signature of zip: zip(*iterables) This means zip expects an arbitrary number of arguments each of which must be iterable. E.g. zip([1, 2], [3, 4], [5, 6]). 
Unpacked argument lists: Given a sequence of arguments args, f(*args) will call f such that each element in args is a separate positional argument of f. Given l = [[1, 2, 3], [4, 5, 6], [7, 8, 9]], zip(*l) would be equivalent to zip([1, 2, 3], [4, 5, 6], [7, 8, 9]). See also: Expanding tuples into arguments The rest is just making sure the result is a list of lists instead of a list of tuples, by using map to create a list from each tuple. For jagged data To pad shorter rows with None values in the output, import itertools (this is in the standard library), and then: In Python 3.x, use: list(map(list, itertools.zip_longest(*l, fillvalue=None))) In Python 2.x, use: list(map(list, itertools.izip_longest(*l, fillvalue=None)))"} +{"question_id": 13332268, "score": 400, "creation_date": 1352645716, "tags": ["python", "linux", "subprocess", "pipe"], "instruction": "How to use `subprocess` command with pipes\n\nI want to use subprocess.check_output() with ps -A | grep 'process_name'. I tried various solutions but so far nothing worked. Can someone guide me how to do it?", "output": "To use a pipe with the subprocess module, you can pass shell=True but be aware of the Security Considerations. It is discouraged using shell=True. In most cases there are better solutions for the same problem. However, this isn't really advisable for various reasons, not least of which is security. Instead, create the ps and grep processes separately, and pipe the output from one into the other, like so: ps = subprocess.Popen(('ps', '-A'), stdout=subprocess.PIPE) output = subprocess.check_output(('grep', 'process_name'), stdin=ps.stdout) ps.wait() In your particular case, however, the simple solution is to call subprocess.check_output(('ps', '-A')) and then str.find on the output."} +{"question_id": 42352841, "score": 400, "creation_date": 1487619131, "tags": ["python", "django", "anaconda", "conda"], "instruction": "How to update an existing Conda environment with a .yml file\n\nHow can a pre-existing conda environment be updated with another .yml file. This is extremely helpful when working on projects that have multiple requirement files, i.e. base.yml, local.yml, production.yml, etc. For example, below is a base.yml file has conda-forge, conda, and pip packages: base.yml name: myenv channels: - conda-forge dependencies: - django=1.10.5 - pip: - django-crispy-forms==1.6.1 The actual environment is created with: conda env create -f base.yml. Later on, additional packages need to be added to base.yml. Another file, say local.yml, needs to import those updates. Previous attempts to accomplish this include: creating a local.yml file with an import definition: channels: dependencies: - pip: - boto3==1.4.4 imports: - requirements/base. And then run the command: conda install -f local.yml. This does not work. Any thoughts?", "output": "Try using conda env update: conda activate myenv conda env update --file local.yml --prune --prune uninstalls dependencies which were removed from local.yml, as pointed out in this answer by @Blink. Attention: if there is a name tag with a name other than that of your environment in local.yml, the command above will create a new environment with that name. 
To avoid this, use (thanks @NumesSanguis): conda env update --name myenv --file local.yml --prune See Updating an environment in Conda User Guide."} +{"question_id": 2847386, "score": 399, "creation_date": 1274082725, "tags": ["python", "string", "integer", "concatenation"], "instruction": "Python strings and integer concatenation\n\nI want to create a string using an integer appended to it, in a for loop. Like this: for i in range(1, 11): string = \"string\" + i But it returns an error: TypeError: unsupported operand type(s) for +: 'int' and 'str' What's the best way to concatenate the string and integer?", "output": "NOTE: The method used in this answer (backticks) is deprecated in later versions of Python 2, and removed in Python 3. Use the str() function instead. You can use: string = 'string' for i in range(11): string +=`i` print string It will print string012345678910. To get string0, string1 ..... string10 you can use this as YOU suggested: >>> string = \"string\" >>> [string+`i` for i in range(11)] For Python 3 You can use: string = 'string' for i in range(11): string += str(i) print string It will print string012345678910. To get string0, string1 ..... string10, you can use this as YOU suggested: >>> string = \"string\" >>> [string+str(i) for i in range(11)]"} +{"question_id": 10218946, "score": 399, "creation_date": 1334787703, "tags": ["python", "pip", "virtualenv", "version", "requirements"], "instruction": "Upgrade Python in a virtual environment\n\nIs there a way to upgrade the version of Python used in a virtual environment (e.g., if a bugfix release comes out)? I could pip freeze --local > requirements.txt, remove the directory, and pip install -r requirements.txt, but this requires a lot of reinstallation of large libraries, for instance, NumPy, which I use a lot. I can see this is an advantage when upgrading from, e.g., 2.6 -> 2.7, but what about 2.7.x -> 2.7.y?", "output": "Did you see this? If I haven't misunderstand that answer, you may try to create a new virtualenv on top of the old one. You just need to know which Python interpreter is going to use your virtualenv (you will need to see your virtualenv version). If your virtualenv is installed with the same Python version of the old one and upgrading your virtualenv package is not an option, you may want to read this in order to install a virtualenv with the Python version you want. I've tested this approach (the one that create a new virtualenv on top of the old one), and it worked fine for me. I think you may have some problems if you change from python 2.6 to 2.7 or 2.7 to 3.x, but if you just upgrade inside the same version (staying at 2.7 as you want) you shouldn't have any problem, as all the packages are held in the same folders for both Python versions (2.7.x and 2.7.y packages are inside your_env/lib/python2.7/). If you change your virtualenv Python version, you will need to install all your packages again for that version (or just link the packages you need into the new version packages folder, i.e., your_env/lib/python_newversion/site-packages)"} +{"question_id": 45594707, "score": 398, "creation_date": 1502292036, "tags": ["python", "docker", "pip"], "instruction": "What is pip's `--no-cache-dir` good for?\n\nI've recently seen the --no-cache-dir being used in a Docker file. I've never seen that flag before and the help is not explaining it: --no-cache-dir Disable the cache. Question: What is cached? Question: What is the cache used for? 
Question: Why would I want to disable it?", "output": "Cached is: store away in hiding or for future use Used for store the installation files(.whl, etc) of the modules that you install through pip store the source files (.tar.gz, etc) to avoid re-download when not expired Possible Reason you might want to disable cache: you don't have space on your hard drive previously run pip install with unexpected settings eg: previously run export PYCURL_SSL_LIBRARY=nss and pip install pycurl want new run export PYCURL_SSL_LIBRARY=openssl and pip install pycurl --compile --no-cache-dir you want to keep a Docker image as small as possible Links to documentation https://pip.pypa.io/en/stable/topics/caching"} +{"question_id": 6824681, "score": 397, "creation_date": 1311648362, "tags": ["python", "random"], "instruction": "Get a random boolean in python?\n\nI am looking for the best way (fast and elegant) to get a random boolean in python (flip a coin). For the moment I am using random.randint(0, 1) or random.getrandbits(1). Are there better choices that I am not aware of?", "output": "Adam's answer is quite fast, but I found that random.getrandbits(1) to be quite a lot faster. If you really want a boolean instead of a long then import random bool(random.getrandbits(1)) is still about twice as fast as random.choice([True, False]) Both solutions need to import random If utmost speed isn't to priority then random.choice definitely reads better. Note that random.choice() is slower than just choice() (after from random import choice) due to the attribute lookup. $ python3 --version Python 3.9.7 $ python3 -m timeit -s \"from random import choice\" \"choice([True, False])\" 1000000 loops, best of 5: 376 nsec per loop $ python3 -m timeit -s \"from random import choice\" \"choice((True, False))\" 1000000 loops, best of 5: 352 nsec per loop $ python3 -m timeit -s \"from random import getrandbits\" \"getrandbits(1)\" 10000000 loops, best of 5: 33.7 nsec per loop $ python3 -m timeit -s \"from random import getrandbits\" \"bool(getrandbits(1))\" 5000000 loops, best of 5: 89.5 nsec per loop $ python3 -m timeit -s \"from random import getrandbits\" \"not getrandbits(1)\" 5000000 loops, best of 5: 46.3 nsec per loop $ python3 -m timeit -s \"from random import random\" \"random() < 0.5\" 5000000 loops, best of 5: 46.4 nsec per loop"} +{"question_id": 9452775, "score": 397, "creation_date": 1330256411, "tags": ["python", "numpy"], "instruction": "Converting numpy dtypes to native python types\n\nIf I have a numpy dtype, how do I automatically convert it to its closest python data type? For example, numpy.float32 -> \"python float\" numpy.float64 -> \"python float\" numpy.uint32 -> \"python int\" numpy.int16 -> \"python int\" I could try to come up with a mapping of all of these cases, but does numpy provide some automatic way of converting its dtypes into the closest possible native python types? This mapping need not be exhaustive, but it should convert the common dtypes that have a close python analog. I think this already happens somewhere in numpy.", "output": "Use val.item() to convert most NumPy values to a native Python type: import numpy as np # for example, numpy.float32 -> python float val = np.float32(0) pyval = val.item() print(type(pyval)) # # and similar... type(np.float64(0).item()) # type(np.uint32(0).item()) # type(np.int16(0).item()) # type(np.cfloat(0).item()) # type(np.datetime64(0, 'D').item()) # type(np.datetime64('2001-01-01 00:00:00').item()) # type(np.timedelta64(0, 'D').item()) # ... 
(A related method np.asscalar(val) was deprecated with 1.16, and removed with 1.23). For the curious, to build a table of conversions of NumPy array scalars for your system: for name in dir(np): obj = getattr(np, name) if hasattr(obj, 'dtype'): try: if 'time' in name: npn = obj(0, 'D') else: npn = obj(0) nat = npn.item() print('{0} ({1!r}) -> {2}'.format(name, npn.dtype.char, type(nat))) except: pass There are a few NumPy types that have no native Python equivalent on some systems, including: clongdouble, clongfloat, complex192, complex256, float128, longcomplex, longdouble and longfloat. These need to be converted to their nearest NumPy equivalent before using .item()."} +{"question_id": 3428536, "score": 396, "creation_date": 1281138192, "tags": ["python", "list"], "instruction": "How do I subtract one list from another?\n\nI want to take the difference between lists x and y: >>> x = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] >>> y = [1, 3, 5, 7, 9] >>> x - y # should return [0, 2, 4, 6, 8]", "output": "Use a list comprehension to compute the difference while maintaining the original order from x: [item for item in x if item not in y] If you don't need list properties (e.g. ordering), use a set difference, as the other answers suggest: list(set(x) - set(y)) To allow x - y infix syntax, override __sub__ on a class inheriting from list: class MyList(list): def __init__(self, *args): super(MyList, self).__init__(args) def __sub__(self, other): return self.__class__(*[item for item in self if item not in other]) Usage: x = MyList(1, 2, 3, 4) y = MyList(2, 5, 2) z = x - y"} +{"question_id": 41407414, "score": 396, "creation_date": 1483179539, "tags": ["python", "string", "enums", "type-conversion"], "instruction": "Convert string to Enum in Python\n\nWhat's the correct way to convert a string to a corresponding instance of an Enum subclass? Seems like getattr(YourEnumType, str) does the job, but I'm not sure if it's safe enough. As an example, suppose I have an enum like class BuildType(Enum): debug = 200 release = 400 Given the string 'debug', how can I get BuildType.debug as a result?", "output": "This functionality is already built in to Enum: >>> from enum import Enum >>> class Build(Enum): ... debug = 200 ... build = 400 ... >>> Build['debug'] The member names are case sensitive, so if user-input is being converted you need to make sure case matches: an_enum = input('Which type of build?') build_type = Build[an_enum.lower()]"} +{"question_id": 12572362, "score": 395, "creation_date": 1348518266, "tags": ["python", "string", "substring"], "instruction": "How to get a string after a specific substring?\n\nHow can I get a string after a specific substring? For example, I want to get the string after \"world\" in my_string=\"hello python world, I'm a beginner\" ...which in this case is: \", I'm a beginner\")", "output": "The easiest way is probably just to split on your target word my_string=\"hello python world , i'm a beginner\" print(my_string.split(\"world\",1)[1]) split takes the word (or character) to split on and optionally a limit to the number of splits. In this example, split on \"world\" and limit it to only one split."} +{"question_id": 11620914, "score": 393, "creation_date": 1343079414, "tags": ["python", "numpy", "nan"], "instruction": "How do I remove NaN values from a NumPy array?\n\nHow do I remove NaN values from a NumPy array? 
[1, 2, NaN, 4, NaN, 8] \u27f6 [1, 2, 4, 8]", "output": "To remove NaN values from a NumPy array x: x = x[~numpy.isnan(x)] Explanation The inner function numpy.isnan returns a boolean/logical array which has the value True everywhere that x is not-a-number. Since we want the opposite, we use the logical-not operator ~ to get an array with Trues everywhere that x is a valid number. Lastly, we use this logical array to index into the original array x, in order to retrieve just the non-NaN values."} +{"question_id": 10851906, "score": 392, "creation_date": 1338560008, "tags": ["python", "scope"], "instruction": "How can I fix \"UnboundLocalError: local variable referenced before assignment\"?\n\nThe following code gives the error UnboundLocalError: local variable 'Var1' referenced before assignment: Var1 = 1 Var2 = 0 def function(): if Var2 == 0 and Var1 > 0: print(\"Result 1\") elif Var2 == 1 and Var1 > 0: print(\"Result 2\") elif Var1 < 1: print(\"Result 3\") Var1 -= 1 function() How can I fix this?", "output": "You can fix this by passing parameters rather than relying on Globals def function(Var1, Var2): if Var2 == 0 and Var1 > 0: print(\"Result One\") elif Var2 == 1 and Var1 > 0: print(\"Result Two\") elif Var1 < 1: print(\"Result Three\") return Var1 -= 1 function(1, 1)"} +{"question_id": 6930982, "score": 392, "creation_date": 1312394381, "tags": ["python", "regex", "python-3.x", "variables", "escaping"], "instruction": "How to use a variable inside a regular expression\n\nI'd like to use a variable inside a regex, how can I do this in Python? TEXTO = sys.argv[1] if re.search(r\"\\b(?=\\w)TEXTO\\b(?!\\w)\", subject, re.IGNORECASE): # Successful match else: # Match attempt failed", "output": "From python 3.6 on you can also use Literal String Interpolation, \"f-strings\". In your particular case the solution would be: if re.search(rf\"\\b(?=\\w){TEXTO}\\b(?!\\w)\", subject, re.IGNORECASE): ...do something EDIT: Since there have been some questions in the comment on how to deal with special characters I'd like to extend my answer: raw strings ('r'): One of the main concepts you have to understand when dealing with special characters in regular expressions is to distinguish between string literals and the regular expression itself. It is very well explained here: In short: Let's say instead of finding a word boundary \\b after TEXTO you want to match the string \\boundary. The you have to write: TEXTO = \"Var\" subject = r\"Var\\boundary\" if re.search(rf\"\\b(?=\\w){TEXTO}\\\\boundary(?!\\w)\", subject, re.IGNORECASE): print(\"match\") This only works because we are using a raw-string (the regex is preceded by 'r'), otherwise we must write \"\\\\\\\\boundary\" in the regex (four backslashes). Additionally, without '\\r', \\b' would not converted to a word boundary anymore but to a backspace! re.escape: Basically puts a backslash in front of any special character. Hence, if you expect a special character in TEXTO, you need to write: if re.search(rf\"\\b(?=\\w){re.escape(TEXTO)}\\b(?!\\w)\", subject, re.IGNORECASE): print(\"match\") NOTE: For any version >= python 3.7: !, \", %, ', ,, /, :, ;, <, =, >, @, and ` are not escaped. Only special characters with meaning in a regex are still escaped. _ is not escaped since Python 3.3.(s. here) Curly braces: If you want to use quantifiers within the regular expression using f-strings, you have to use double curly braces. 
Let's say you want to match TEXTO followed by exactly 2 digits: if re.search(rf\"\\b(?=\\w){re.escape(TEXTO)}\\d{{2}}\\b(?!\\w)\", subject, re.IGNORECASE): print(\"match\")"} +{"question_id": 8350853, "score": 392, "creation_date": 1322791009, "tags": ["python", "import", "module"], "instruction": "How to import module when module name has a '-' dash or hyphen in it?\n\nI want to import foo-bar.py, this works: foobar = __import__(\"foo-bar\") This does not: from \"foo-bar\" import * My question: Is there any way that I can use the above format i.e., from \"foo-bar\" import * to import a module that has a - in it?", "output": "In Python 2, you can't. foo-bar is not an identifier. rename the file to foo_bar.py It's possible since Python 3.1+, see Julien's answer. If import is not your goal (as in: you don't care what happens with sys.modules, you don't need it to import itself), just getting all of the file's globals into your own scope, you can use execfile # contents of foo-bar.py baz = 'quux' >>> execfile('foo-bar.py') >>> baz 'quux' >>>"} +{"question_id": 15307623, "score": 391, "creation_date": 1362807536, "tags": ["python", "django", "datetime", "comparison"], "instruction": "Can't compare naive and aware datetime.now() <= challenge.datetime_end\n\nI am trying to compare the current date and time with dates and times specified in models using comparison operators: if challenge.datetime_start <= datetime.now() <= challenge.datetime_end: The script errors out with: TypeError: can't compare offset-naive and offset-aware datetimes The models look like this: class Fundraising_Challenge(models.Model): name = models.CharField(max_length=100) datetime_start = models.DateTimeField() datetime_end = models.DateTimeField() I also have django using locale date and times. What I haven't been able to find is the format django uses for DateTimeField(). Is it naive or aware? And how do I get datetime.now() to recognize locale datetime?", "output": "By default, the datetime object is naive in Python, so you need to make both of them either naive or aware datetime objects. This can be done using: import datetime import pytz utc=pytz.UTC challenge.datetime_start = utc.localize(challenge.datetime_start) challenge.datetime_end = utc.localize(challenge.datetime_end) # now both the datetime objects are aware, and you can compare them Note: This would raise a ValueError if tzinfo is already set. If you are not sure about that, just use start_time = challenge.datetime_start.replace(tzinfo=utc) end_time = challenge.datetime_end.replace(tzinfo=utc) BTW, you could format a UNIX timestamp in datetime.datetime object with timezone info as following d = datetime.datetime.utcfromtimestamp(int(unix_timestamp)) d_with_tz = datetime.datetime( year=d.year, month=d.month, day=d.day, hour=d.hour, minute=d.minute, second=d.second, tzinfo=pytz.UTC)"} +{"question_id": 9483979, "score": 390, "creation_date": 1330440170, "tags": ["python", "loops", "syntax", "continue"], "instruction": "Is there a difference between \"pass\" and \"continue\" in a for loop in Python?\n\nIs there any significant difference between the two Python keywords continue and pass like in the examples for element in some_list: if not element: pass and for element in some_list: if not element: continue I should be aware of?", "output": "Yes, they do completely different things. pass simply does nothing, while continue goes on with the next loop iteration. 
In your example, the difference would become apparent if you added another statement after the if: After executing pass, this further statement would be executed. After continue, it wouldn't. >>> a = [0, 1, 2] >>> for element in a: ... if not element: ... pass ... print(element) ... 0 1 2 >>> for element in a: ... if not element: ... continue ... print(element) ... 1 2"} +{"question_id": 2241891, "score": 389, "creation_date": 1265856199, "tags": ["dictionary", "python"], "instruction": "How to initialize a dict with keys from a list and empty value in Python?\n\nI'd like to get from this: keys = [1,2,3] to this: {1: None, 2: None, 3: None} Is there a pythonic way of doing it? This is an ugly way to do it: >>> keys = [1,2,3] >>> dict([(1,2)]) {1: 2} >>> dict(zip(keys, [None]*len(keys))) {1: None, 2: None, 3: None}", "output": "dict.fromkeys directly solves the problem: >>> dict.fromkeys([1, 2, 3, 4]) {1: None, 2: None, 3: None, 4: None} This is actually a classmethod, so it works for dict-subclasses (like collections.defaultdict) as well. The optional second argument, which defaults to None, specifies the value to use for the keys. Note that the same object will be used for each key, which can cause problems with mutable values: >>> x = dict.fromkeys([1, 2, 3, 4], []) >>> x[1].append('test') >>> x {1: ['test'], 2: ['test'], 3: ['test'], 4: ['test']} If this is unacceptable, see How can I initialize a dictionary whose values are distinct empty lists? for a workaround."} +{"question_id": 2295290, "score": 388, "creation_date": 1266572791, "tags": ["python", "lambda", "closures"], "instruction": "What do lambda function closures capture?\n\nRecently I started playing around with Python and I came around something peculiar in the way closures work. Consider the following code: adders = [None, None, None, None] for i in [0, 1, 2, 3]: adders[i] = lambda a: i+a print adders[1](3) It builds a simple array of functions that take a single input and return that input added by a number. The functions are constructed in for loop where the iterator i runs from 0 to 3. For each of these numbers a lambda function is created which captures i and adds it to the function's input. The last line calls the second lambda function with 3 as a parameter. To my surprise the output was 6. I expected a 4. My reasoning was: in Python everything is an object and thus every variable is essential a pointer to it. When creating the lambda closures for i, I expected it to store a pointer to the integer object currently pointed to by i. That means that when i assigned a new integer object it shouldn't effect the previously created closures. Sadly, inspecting the adders array within a debugger shows that it does. All lambda functions refer to the last value of i, 3, which results in adders[1](3) returning 6. Which make me wonder about the following: What do the closures capture exactly? What is the most elegant way to convince the lambda functions to capture the current value of i in a way that will not be affected when i changes its value? For a more accessible, practical version of the question, specific to the case where a loop (or list comprehension, generator expression etc.) is used, see Creating functions (or lambdas) in a loop (or comprehension). This question is focused on understanding the underlying behaviour of the code in Python. If you got here trying to fix a problem with making buttons in Tkinter, try tkinter creating buttons in for loop passing command arguments for more specific advice. 
See What exactly is contained within a obj.__closure__? for technical details of how Python implements closures. See What is the difference between Early and Late Binding? for related terminology discussion.", "output": "What do the closures capture exactly? Closures in Python use lexical scoping: they remember the name and scope of the closed-over variable where it is created. However, they are still late binding: the name is looked up when the code in the closure is used, not when the closure is created. Since all the functions in your example are created in the same scope and use the same variable name, they always refer to the same variable. There are at least two ways to get early binding instead: The most concise, but not strictly equivalent way is the one recommended by Adrien Plisson. Create a lambda with an extra argument, and set the extra argument's default value to the object you want preserved. More verbosely but also more robustly, we can create a new scope for each created lambda: >>> adders = [0,1,2,3] >>> for i in [0,1,2,3]: ... adders[i] = (lambda b: lambda a: b + a)(i) ... >>> adders[1](3) 4 >>> adders[2](3) 5 The scope here is created using a new function (another lambda, for brevity), which binds its argument, and passing the value you want to bind as the argument. In real code, though, you most likely will have an ordinary function instead of the lambda to create the new scope: def createAdder(x): return lambda y: y + x adders = [createAdder(i) for i in range(4)]"} +{"question_id": 377017, "score": 386, "creation_date": 1229579736, "tags": ["python", "path"], "instruction": "Test if executable exists in Python?\n\nIn Python, is there a portable and simple way to test if an executable program exists? By simple I mean something like the which command which would be just perfect. I don't want to search PATH manually or something involving trying to execute it with Popen & al and see if it fails (that's what I'm doing now, but imagine it's launchmissiles)", "output": "Easiest way I can think of: def which(program): import os def is_exe(fpath): return os.path.isfile(fpath) and os.access(fpath, os.X_OK) fpath, fname = os.path.split(program) if fpath: if is_exe(program): return program else: for path in os.environ.get(\"PATH\", \"\").split(os.pathsep): exe_file = os.path.join(path, program) if is_exe(exe_file): return exe_file return None Edit: Updated code sample to include logic for handling case where provided argument is already a full path to the executable, i.e. \"which /bin/ls\". This mimics the behavior of the UNIX 'which' command. Edit: Updated to use os.path.isfile() instead of os.path.exists() per comments. Edit: path.strip('\"') seems like the wrong thing to do here. Neither Windows nor POSIX appear to encourage quoted PATH items."} +{"question_id": 13710631, "score": 386, "creation_date": 1354650151, "tags": ["python"], "instruction": "Is there shorthand for returning a default value if None in Python?\n\nIn C#, I can say x ?? \"\", which will give me x if x is not null, and the empty string if x is null. I've found it useful for working with databases. Is there a way to return a default value if Python finds None in a variable?", "output": "You could use the or operator: return x or \"default\" Note that this also returns \"default\" if x is any falsy value, including an empty list, 0, empty string, or even datetime.time(0) (midnight). Python 3.8+ update: bool(datetime.time(0)) now resolves to True. This was resolved with issue13936. 
Other \"empty\" values are still considered \"falsy\" as expected."} +{"question_id": 48001598, "score": 386, "creation_date": 1514435500, "tags": ["python", "neural-network", "deep-learning", "pytorch", "gradient-descent"], "instruction": "Why do we need to call zero_grad() in PyTorch?\n\nWhy does zero_grad() need to be called during training? | zero_grad(self) | Sets gradients of all model parameters to zero.", "output": "In PyTorch, for every mini-batch during the training phase, we typically want to explicitly set the gradients to zero before starting to do backpropagation (i.e., updating the Weights and biases) because PyTorch accumulates the gradients on subsequent backward passes. This accumulating behavior is convenient while training RNNs or when we want to compute the gradient of the loss summed over multiple mini-batches. So, the default action has been set to accumulate (i.e. sum) the gradients on every loss.backward() call. Because of this, when you start your training loop, ideally you should zero out the gradients so that you do the parameter update correctly. Otherwise, the gradient would be a combination of the old gradient, which you have already used to update your model parameters and the newly-computed gradient. It would therefore point in some other direction than the intended direction towards the minimum (or maximum, in case of maximization objectives). Here is a simple example: import torch from torch.autograd import Variable import torch.optim as optim def linear_model(x, W, b): return torch.matmul(x, W) + b data, targets = ... W = Variable(torch.randn(4, 3), requires_grad=True) b = Variable(torch.randn(3), requires_grad=True) optimizer = optim.Adam([W, b]) for sample, target in zip(data, targets): # clear out the gradients of all Variables # in this optimizer (i.e. W, b) optimizer.zero_grad() output = linear_model(sample, W, b) loss = (output - target) ** 2 loss.backward() optimizer.step() Alternatively, if you're doing a vanilla gradient descent, then: W = Variable(torch.randn(4, 3), requires_grad=True) b = Variable(torch.randn(3), requires_grad=True) for sample, target in zip(data, targets): # clear out the gradients of Variables # (i.e. W, b) W.grad.data.zero_() b.grad.data.zero_() output = linear_model(sample, W, b) loss = (output - target) ** 2 loss.backward() W -= learning_rate * W.grad.data b -= learning_rate * b.grad.data Note: The accumulation (i.e., sum) of gradients happens when .backward() is called on the loss tensor. As of v1.7.0, Pytorch offers the option to reset the gradients to None optimizer.zero_grad(set_to_none=True) instead of filling them with a tensor of zeroes. The docs claim that this setting reduces memory requirements and slightly improves performance, but might be error-prone if not handled carefully."} +{"question_id": 9823936, "score": 385, "creation_date": 1332425334, "tags": ["python", "exception"], "instruction": "How do I determine what type of exception occurred?\n\nsome_function() raises an exception while executing, so the program jumps to the except: try: some_function() except: print(\"exception happened!\") How do I see what caused the exception to occur?", "output": "The other answers all point out that you should not catch generic exceptions, but no one seems to want to tell you why, which is essential to understanding when you can break the \"rule\". Here is an explanation. 
Basically, it's so that you don't hide: the fact that an error occurred the specifics of the error that occurred (error hiding antipattern) So as long as you take care to do none of those things, it's OK to catch the generic exception. For instance, you could provide information about the exception to the user another way, like: Present exceptions as dialogs in a GUI Transfer exceptions from a worker thread or process to the controlling thread or process in a multithreading or multiprocessing application So how to catch the generic exception? There are several ways. If you just want the exception object, do it like this: try: someFunction() except Exception as ex: template = \"An exception of type {0} occurred. Arguments:\\n{1!r}\" message = template.format(type(ex).__name__, ex.args) print(message) Make sure message is brought to the attention of the user in a hard-to-miss way! Printing it, as shown above, may not be enough if the message is buried in lots of other messages. Failing to get the users attention is tantamount to swallowing all exceptions, and if there's one impression you should have come away with after reading the answers on this page, it's that this is not a good thing. Ending the except block with a raise statement will remedy the problem by transparently reraising the exception that was caught. The difference between the above and using just except: without any argument is twofold: A bare except: doesn't give you the exception object to inspect The exceptions SystemExit, KeyboardInterrupt and GeneratorExit aren't caught by the above code, which is generally what you want. See the exception hierarchy. If you also want the same stacktrace you get if you do not catch the exception, you can get that like this (still inside the except clause): import traceback print traceback.format_exc() If you use the logging module, you can print the exception to the log (along with a message) like this: import logging log = logging.getLogger() log.exception(\"Message for you, sir!\") If you want to dig deeper and examine the stack, look at variables etc., use the post_mortem function of the pdb module inside the except block: import pdb pdb.post_mortem() I've found this last method to be invaluable when hunting down bugs."} +{"question_id": 20658572, "score": 385, "creation_date": 1387370583, "tags": ["python", "http", "python-requests"], "instruction": "Python requests - print entire http request (raw)?\n\nWhile using the requests module, is there any way to print the raw HTTP request? I don't want just the headers, I want the request line, headers, and content printout. Is it possible to see what ultimately is constructed from HTTP request?", "output": "Since v1.2.3 Requests added the PreparedRequest object. As per the documentation \"it contains the exact bytes that will be sent to the server\". One can use this to pretty print a request, like so: import requests req = requests.Request('POST','http://stackoverflow.com',headers={'X-Custom':'Test'},data='a=1&b=2') prepared = req.prepare() def pretty_print_POST(req): \"\"\" At this point it is completely built and ready to be fired; it is \"prepared\". However pay attention at the formatting used in this function because it is programmed to be pretty printed and may differ from the actual request. 
\"\"\" print('{}\\n{}\\r\\n{}\\r\\n\\r\\n{}'.format( '-----------START-----------', req.method + ' ' + req.url, '\\r\\n'.join('{}: {}'.format(k, v) for k, v in req.headers.items()), req.body, )) pretty_print_POST(prepared) which produces: -----------START----------- POST http://stackoverflow.com/ Content-Length: 7 X-Custom: Test a=1&b=2 Then you can send the actual request with this: s = requests.Session() s.send(prepared) These links are to the latest documentation available, so they might change in content: Advanced - Prepared requests and API - Lower level classes"} +{"question_id": 21965484, "score": 384, "creation_date": 1393140980, "tags": ["python", "timeout", "python-requests"], "instruction": "Timeout for Python requests.get() entire response\n\nI'm gathering statistics on a list of websites, and I'm using Requests for it for simplicity. Here is my code: data = [] websites = ['http://google.com', 'http://bbc.co.uk'] for w in websites: r = requests.get(w, verify=False) data.append((r.url, len(r.content), r.elapsed.total_seconds(), str([(l.status_code, l.url) for l in r.history]), str(r.headers.items()), str(r.cookies.items()))) Now, I want requests.get to timeout after 10 seconds, so the loop doesn't get stuck. This question has been of interest before too, but none of the answers were clean. I hear that maybe not using Requests is a good idea, but then how should I get the nice things Requests offer (the ones in the tuple)?", "output": "What about using 'eventlet'? If you want to timeout the request after 10 seconds, even if data is being received, this snippet will work for you: import requests import eventlet eventlet.monkey_patch() with eventlet.Timeout(10): requests.get(\"http://ipv4.download.thinkbroadband.com/1GB.zip\", verify=False)"} +{"question_id": 11269575, "score": 384, "creation_date": 1341007737, "tags": ["python", "subprocess", "espeak"], "instruction": "How to hide output of subprocess\n\nI'm using eSpeak on Ubuntu and have a Python 2.7 script that prints and speaks a message: import subprocess text = 'Hello World.' print text subprocess.call(['espeak', text]) eSpeak produces the desired sounds, but clutters the shell with some errors (ALSA lib..., no socket connect) so i cannot easily read what was printed earlier. Exit code is 0. Unfortunately there is no documented option to turn off its verbosity, so I'm looking for a way to only visually silence it and keep the open shell clean for further interaction. How can I do this? See Python os.system without the output for approaches specific to os.system - although modern code should normally use the subprocess library instead.", "output": "For python >= 3.3, Redirect the output to DEVNULL: import os import subprocess retcode = subprocess.call(['echo', 'foo'], stdout=subprocess.DEVNULL, stderr=subprocess.STDOUT) For python <3.3, including 2.7 use: FNULL = open(os.devnull, 'w') retcode = subprocess.call(['echo', 'foo'], stdout=FNULL, stderr=subprocess.STDOUT) It is effectively the same as running this shell command: retcode = os.system(\"echo 'foo' &> /dev/null\")"} +{"question_id": 514371, "score": 384, "creation_date": 1233802787, "tags": ["python"], "instruction": "What's the bad magic number error?\n\nWhat's the \"Bad magic number\" ImportError in python, and how do I fix it? The only thing I can find online suggests this is caused by compiling a .py -> .pyc file and then trying to use it with the wrong version of python. 
In my case, however, the file seems to import fine some times but not others, and I'm not sure why. The information python's providing in the traceback isn't particularly helpful (which is why I was asking here...), but here it is in case it helps: Traceback (most recent call last): File \"run.py\", line 7, in from Normalization import Normalizer", "output": "The magic number comes from UNIX-type systems where the first few bytes of a file held a marker indicating the file type. Python puts a similar marker into its pyc files when it creates them. Then the python interpreter makes sure this number is correct when loading it. Anything that damages this magic number will cause your problem. This includes editing the pyc file or trying to run a pyc from a different version of python (usually later) than your interpreter. If they are your pyc files (or you have the py files for them), just delete them and let the interpreter re-compile the py files. On UNIX type systems, that could be something as simple as: rm *.pyc or: find . -name '*.pyc' -delete If they are not yours, and the original py files are not provided, you'll have to either get the py files for re-compilation, or use an interpreter that can run the pyc files with that particular magic value. One thing that might be causing the intermittent nature. The pyc that's causing the problem may only be imported under certain conditions. It's highly unlikely it would import sometimes. You should check the actual full stack trace when the import fails. As an aside, the first word of all my 2.5.1(r251:54863) pyc files is 62131, 2.6.1(r261:67517) is 62161. The list of all magic numbers can be found in Python/import.c, reproduced here for completeness (current as at the time the answer was posted, has changed since then): 1.5: 20121 1.5.1: 20121 1.5.2: 20121 1.6: 50428 2.0: 50823 2.0.1: 50823 2.1: 60202 2.1.1: 60202 2.1.2: 60202 2.2: 60717 2.3a0: 62011 2.3a0: 62021 2.3a0: 62011 2.4a0: 62041 2.4a3: 62051 2.4b1: 62061 2.5a0: 62071 2.5a0: 62081 2.5a0: 62091 2.5a0: 62092 2.5b3: 62101 2.5b3: 62111 2.5c1: 62121 2.5c2: 62131 2.6a0: 62151 2.6a1: 62161 2.7a0: 62171"} +{"question_id": 42521230, "score": 384, "creation_date": 1488328510, "tags": ["python", "python-3.x", "curly-braces", "f-string"], "instruction": "How to escape curly-brackets in f-strings?\n\nI have a string in which I would like curly-brackets, but also take advantage of the f-strings feature. Is there some syntax that works for this? Here are two ways it does not work. I would like to include the literal text {bar} as part of the string. foo = \"test\" fstring = f\"{foo} {bar}\" NameError: name 'bar' is not defined fstring = f\"{foo} \\{bar\\}\" SyntaxError: f-string expression part cannot include a backslash Desired result: 'test {bar}' Edit: Looks like this question has the same answer as How can I print literal curly-brace characters in a string and also use .format on it?, but you can only know that if you know that str.format uses the same rules as the f-string. So hopefully this question has value in tying f-string searchers to this answer.", "output": "Although there is a custom syntax error from the parser, the same trick works as for calling .format on regular strings. Use double curlies: >>> foo = 'test' >>> f'{foo} {{bar}}' 'test {bar}' To embed a value within braces, you can use triple-braces. 
>>> foo = 'test' >>> f'{{{foo}}}' '{test}' It's mentioned in the spec here and the docs here."} +{"question_id": 4527942, "score": 383, "creation_date": 1293217752, "tags": ["python", "dictionary", "comparison"], "instruction": "Comparing two dictionaries and checking how many (key, value) pairs are equal\n\nI have two dictionaries, but for simplification, I will take these two: >>> x = dict(a=1, b=2) >>> y = dict(a=2, b=2) Now, I want to compare whether each key, value pair in x has the same corresponding value in y. So I wrote this: >>> for x_values, y_values in zip(x.iteritems(), y.iteritems()): if x_values == y_values: print 'Ok', x_values, y_values else: print 'Not', x_values, y_values And it works since a tuple is returned and then compared for equality. My questions: Is this correct? Is there a better way to do this? Better not in speed, I am talking about code elegance. UPDATE: I forgot to mention that I have to check how many key, value pairs are equal.", "output": "If you want to know how many values match in both the dictionaries, you should have said that :) Maybe something like this: shared_items = {k: x[k] for k in x if k in y and x[k] == y[k]} print(len(shared_items))"} +{"question_id": 18072759, "score": 382, "creation_date": 1375768963, "tags": ["python", "list", "nested", "list-comprehension"], "instruction": "How can I use list comprehensions to process a nested list?\n\nI have this nested list: l = [['40', '20', '10', '30'], ['20', '20', '20', '20', '20', '30', '20'], ['30', '20', '30', '50', '10', '30', '20', '20', '20'], ['100', '100'], ['100', '100', '100', '100', '100'], ['100', '100', '100', '100']] I want to convert each element in l to float. I have this code: newList = [] for x in l: for y in x: newList.append(float(y)) How can I solve the problem with a nested list comprehension instead? See also: How can I get a flat result from a list comprehension instead of a nested list?", "output": "Here is how you would do this with a nested list comprehension: [[float(y) for y in x] for x in l] This would give you a list of lists, similar to what you started with except with floats instead of strings. If you want one flat list, then you would use [float(y) for x in l for y in x] Note the loop order - for x in l comes first in this one."} +{"question_id": 4529815, "score": 382, "creation_date": 1293267723, "tags": ["python", "serialization", "persistence"], "instruction": "Saving an Object (Data persistence)\n\nI've created an object like this: company1.name = 'banana' company1.value = 40 I would like to save this object. How can I do that?", "output": "You could use the pickle module in the standard library. Here's an elementary application of it to your example: import pickle class Company(object): def __init__(self, name, value): self.name = name self.value = value with open('company_data.pkl', 'wb') as outp: company1 = Company('banana', 40) pickle.dump(company1, outp, pickle.HIGHEST_PROTOCOL) company2 = Company('spam', 42) pickle.dump(company2, outp, pickle.HIGHEST_PROTOCOL) del company1 del company2 with open('company_data.pkl', 'rb') as inp: company1 = pickle.load(inp) print(company1.name) # -> banana print(company1.value) # -> 40 company2 = pickle.load(inp) print(company2.name) # -> spam print(company2.value) # -> 42 You could also define your own simple utility like the following which opens a file and writes a single object to it: def save_object(obj, filename): with open(filename, 'wb') as outp: # Overwrites any existing file. 
pickle.dump(obj, outp, pickle.HIGHEST_PROTOCOL) # sample usage save_object(company1, 'company1.pkl') Update Since this is such a popular answer, I'd like touch on a few slightly advanced usage topics. cPickle (or _pickle) vs pickle It's almost always preferable to actually use the cPickle module rather than pickle because the former is written in C and is much faster. There are some subtle differences between them, but in most situations they're equivalent and the C version will provide greatly superior performance. Switching to it couldn't be easier, just change the import statement to this: import cPickle as pickle In Python 3, cPickle was renamed _pickle, but doing this is no longer necessary since the pickle module now does it automatically\u2014see What difference between pickle and _pickle in python 3?. The rundown is you could use something like the following to ensure that your code will always use the C version when it's available in both Python 2 and 3: try: import cPickle as pickle except ModuleNotFoundError: import pickle Data stream formats (protocols) pickle can read and write files in several different, Python-specific, formats, called protocols as described in the documentation, \"Protocol version 0\" is ASCII and therefore \"human-readable\". Versions > 0 are binary and the highest one available depends on what version of Python is being used. The default also depends on Python version. In Python 2 the default was Protocol version 0, but in Python 3.8.1, it's Protocol version 4. In Python 3.x the module had a pickle.DEFAULT_PROTOCOL added to it, but that doesn't exist in Python 2. Fortunately there's shorthand for writing pickle.HIGHEST_PROTOCOL in every call (assuming that's what you want, and you usually do), just use the literal number -1 \u2014 similar to referencing the last element of a sequence via a negative index. So, instead of writing: pickle.dump(obj, outp, pickle.HIGHEST_PROTOCOL) You can just write: pickle.dump(obj, outp, -1) Either way, you'd only have specify the protocol once if you created a Pickler object for use in multiple pickle operations: pickler = pickle.Pickler(outp, -1) pickler.dump(obj1) pickler.dump(obj2) etc... Note: If you're in an environment running different versions of Python, then you'll probably want to explicitly use (i.e. hardcode) a specific protocol number that all of them can read (later versions can generally read files produced by earlier ones). Multiple Objects While a pickle file can contain any number of pickled objects, as shown in the above samples, when there's an unknown number of them, it's often easier to store them all in some sort of variably-sized container, like a list, tuple, or dict and write them all to the file in a single call: tech_companies = [ Company('Apple', 114.18), Company('Google', 908.60), Company('Microsoft', 69.18) ] save_object(tech_companies, 'tech_companies.pkl') and restore the list and everything in it later with: with open('tech_companies.pkl', 'rb') as inp: tech_companies = pickle.load(inp) The major advantage is you don't need to know how many object instances are saved in order to load them back later (although doing so without that information is possible, it requires some slightly specialized code). See the answers to the related question Saving and loading multiple objects in pickle file? for details on different ways to do this. 
Personally I liked @Lutz Prechelt's answer the best, so that's the approach used in the sample code below: class Company: def __init__(self, name, value): self.name = name self.value = value def pickle_loader(filename): \"\"\" Deserialize a file of pickled objects. \"\"\" with open(filename, \"rb\") as f: while True: try: yield pickle.load(f) except EOFError: break print('Companies in pickle file:') for company in pickle_loader('company_data.pkl'): print(' name: {}, value: {}'.format(company.name, company.value))"} +{"question_id": 15268953, "score": 381, "creation_date": 1362652773, "tags": ["python", "pip"], "instruction": "How to install Python package from GitHub?\n\nI want to use a new feature of httpie. This feature is in the github repo https://github.com/jkbr/httpie but not in the release on the python package index https://pypi.python.org/pypi/httpie How can I install the httpie package from the github repo? I tried pip install https://github.com/jkbr/httpie But I got an error 'could not unpack' In Nodejs, I can install packages from github like this npm install git+https://github.com/substack/node-optimist.git", "output": "You need to use the proper git URL: pip install git+https://github.com/jkbr/httpie.git#egg=httpie Also see the VCS Support section of the pip documentation. Don\u2019t forget to include the egg= part to explicitly name the project; this way pip can track metadata for it without having to have run the setup.py script."} +{"question_id": 41815079, "score": 381, "creation_date": 1485203529, "tags": ["python", "python-3.x", "pandas", "join", "pandas-merge"], "instruction": "pandas: merge (join) two data frames on multiple columns\n\nI am trying to join two pandas dataframes using two columns: new_df = pd.merge(A_df, B_df, how='left', left_on='[A_c1,c2]', right_on = '[B_c1,c2]') but got the following error: pandas/index.pyx in pandas.index.IndexEngine.get_loc (pandas/index.c:4164)() pandas/index.pyx in pandas.index.IndexEngine.get_loc (pandas/index.c:4028)() pandas/src/hashtable_class_helper.pxi in pandas.hashtable.PyObjectHashTable.get_item (pandas/hashtable.c:13166)() pandas/src/hashtable_class_helper.pxi in pandas.hashtable.PyObjectHashTable.get_item (pandas/hashtable.c:13120)() KeyError: '[B_1, c2]' Any idea what should be the right way to do this?", "output": "Try this new_df = pd.merge( left=A_df, right=B_df, how='left', left_on=['A_c1', 'c2'], right_on=['B_c1', 'c2'], ) https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.merge.html left_on : label or list, or array-like Field names to join on in left DataFrame. Can be a vector or list of vectors of the length of the DataFrame to use a particular vector as the join key instead of columns right_on : label or list, or array-like Field names to join on in right DataFrame or vector/list of vectors per left_on docs"} +{"question_id": 26089893, "score": 380, "creation_date": 1411940031, "tags": ["python", "arrays", "numpy", "multidimensional-array", "numpy-einsum"], "instruction": "Understanding NumPy's einsum\n\nHow does np.einsum work? Given arrays A and B, their matrix multiplication followed by transpose is computed using (A @ B).T, or equivalently, using: np.einsum(\"ij, jk -> ki\", A, B)", "output": "(Note: this answer is based on a short blog post about einsum I wrote a while ago.) What does einsum do? Imagine that we have two multi-dimensional arrays, A and B. Now let's suppose we want to... 
multiply A with B in a particular way to create new array of products; and then maybe sum this new array along particular axes; and then maybe transpose the axes of the new array in a particular order. There's a good chance that einsum will help us do this faster and more memory-efficiently than combinations of the NumPy functions like multiply, sum and transpose will allow. How does einsum work? Here's a simple (but not completely trivial) example. Take the following two arrays: A = np.array([0, 1, 2]) B = np.array([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]]) We will multiply A and B element-wise and then sum along the rows of the new array. In \"normal\" NumPy we'd write: >>> (A[:, np.newaxis] * B).sum(axis=1) array([ 0, 22, 76]) So here, the indexing operation on A lines up the first axes of the two arrays so that the multiplication can be broadcast. The rows of the array of products are then summed to return the answer. Now if we wanted to use einsum instead, we could write: >>> np.einsum('i,ij->i', A, B) array([ 0, 22, 76]) The signature string 'i,ij->i' is the key here and needs a little bit of explaining. You can think of it in two halves. On the left-hand side (left of the ->) we've labelled the two input arrays. To the right of ->, we've labelled the array we want to end up with. Here is what happens next: A has one axis; we've labelled it i. And B has two axes; we've labelled axis 0 as i and axis 1 as j. By repeating the label i in both input arrays, we are telling einsum that these two axes should be multiplied together. In other words, we're multiplying array A with each column of array B, just like A[:, np.newaxis] * B does. Notice that j does not appear as a label in our desired output; we've just used i (we want to end up with a 1D array). By omitting the label, we're telling einsum to sum along this axis. In other words, we're summing the rows of the products, just like .sum(axis=1) does. That's basically all you need to know to use einsum. It helps to play about a little; if we leave both labels in the output, 'i,ij->ij', we get back a 2D array of products (same as A[:, np.newaxis] * B). If we say no output labels, 'i,ij->, we get back a single number (same as doing (A[:, np.newaxis] * B).sum()). The great thing about einsum however, is that it does not build a temporary array of products first; it just sums the products as it goes. This can lead to big savings in memory use. A slightly bigger example To explain the dot product, here are two new arrays: A = array([[1, 1, 1], [2, 2, 2], [5, 5, 5]]) B = array([[0, 1, 0], [1, 1, 0], [1, 1, 1]]) We will compute the dot product using np.einsum('ij,jk->ik', A, B). Here's a picture showing the labelling of the A and B and the output array that we get from the function: You can see that label j is repeated - this means we're multiplying the rows of A with the columns of B. Furthermore, the label j is not included in the output - we're summing these products. Labels i and k are kept for the output, so we get back a 2D array. It might be even clearer to compare this result with the array where the label j is not summed. Below, on the left you can see the 3D array that results from writing np.einsum('ij,jk->ijk', A, B) (i.e. we've kept label j): Summing axis j gives the expected dot product, shown on the right. Some exercises To get more of a feel for einsum, it can be useful to implement familiar NumPy array operations using the subscript notation. 
Anything that involves combinations of multiplying and summing axes can be written using einsum. Let A and B be two 1D arrays with the same length. For example, A = np.arange(10) and B = np.arange(5, 15). The sum of A can be written: np.einsum('i->', A) Element-wise multiplication, A * B, can be written: np.einsum('i,i->i', A, B) The inner product or dot product, np.inner(A, B) or np.dot(A, B), can be written: np.einsum('i,i->', A, B) # or just use 'i,i' The outer product, np.outer(A, B), can be written: np.einsum('i,j->ij', A, B) For 2D arrays, C and D, provided that the axes are compatible lengths (both the same length or one of them has length 1), here are a few examples: The trace of C (sum of main diagonal), np.trace(C), can be written: np.einsum('ii', C) Element-wise multiplication of C and the transpose of D, C * D.T, can be written: np.einsum('ij,ji->ij', C, D) Multiplying each element of C by the array D (to make a 4D array), C[:, :, None, None] * D, can be written: np.einsum('ij,kl->ijkl', C, D)"} +{"question_id": 6771428, "score": 380, "creation_date": 1311224304, "tags": ["python", "numpy"], "instruction": "Most efficient way to reverse a numpy array\n\nBelieve it or not, after profiling my current code, the repetitive operation of numpy array reversion ate a giant chunk of the running time. What I have right now is the common view-based method: reversed_arr = arr[::-1] Is there any other way to do it more efficiently, or is it just an illusion from my obsession with unrealistic numpy performance?", "output": "reversed_arr = arr[::-1] gives a reversed view into the original array arr. Any changes made to the original array arr will also be immediately visible in reversed_arr. The underlying data buffers for arr and reversed_arr are shared, so creating this view is always instantaneous, and does not require any additional memory allocation or copying for the array contents. See also, this discussion on NumPy views: How do I create a view onto a NumPy array? Possible solutions to performance problems regarding views Are you re-creating the view more often than you need to? You should be able to do something like this: arr = np.array(some_sequence) reversed_arr = arr[::-1] do_something(arr) look_at(reversed_arr) do_something_else(arr) look_at(reversed_arr) I'm not a numpy expert, but this seems like it would be the fastest way to do things in numpy. If this is what you are already doing, I don't think you can improve on it."} +{"question_id": 10415028, "score": 376, "creation_date": 1335966274, "tags": ["python", "function", "multiprocessing", "return-value", "python-multiprocessing"], "instruction": "How can I get the return value of a function passed to multiprocessing.Process?\n\nIn the example code below, I'd like to get the return value of the function worker. How can I go about doing this? Where is this value stored? Example Code: import multiprocessing def worker(procnum): '''worker function''' print str(procnum) + ' represent!' return procnum if __name__ == '__main__': jobs = [] for i in range(5): p = multiprocessing.Process(target=worker, args=(i,)) jobs.append(p) p.start() for proc in jobs: proc.join() print jobs Output: 0 represent! 1 represent! 2 represent! 3 represent! 4 represent! [, , , , ] I can't seem to find the relevant attribute in the objects stored in jobs.", "output": "Use a shared variable to communicate.
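As an aside, when the workers are short-lived and only their return values matter, multiprocessing.Pool can collect the results without any explicit sharing. A minimal sketch, assuming Python 3 (the doubling is just placeholder work):

import multiprocessing

def double(n):
    return n * 2                # placeholder computation

if __name__ == '__main__':
    with multiprocessing.Pool(processes=4) as pool:
        results = pool.map(double, range(5))   # results come back in input order
    print(results)              # [0, 2, 4, 6, 8]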
For example, like this, Example Code: import multiprocessing def worker(procnum, return_dict): \"\"\"worker function\"\"\" print(str(procnum) + \" represent!\") return_dict[procnum] = procnum if __name__ == \"__main__\": manager = multiprocessing.Manager() return_dict = manager.dict() jobs = [] for i in range(5): p = multiprocessing.Process(target=worker, args=(i, return_dict)) jobs.append(p) p.start() for proc in jobs: proc.join() print(return_dict.values()) Output: 0 represent! 1 represent! 3 represent! 2 represent! 4 represent! [0, 1, 3, 2, 4]"} +{"question_id": 2136556, "score": 375, "creation_date": 1264462873, "tags": ["python", "regex"], "instruction": "In Python, how do I split a string and keep the separators?\n\nHere's the simplest way to explain this. Here's what I'm using: re.split('\\W', 'foo/bar spam\\neggs') >>> ['foo', 'bar', 'spam', 'eggs'] Here's what I want: someMethod('\\W', 'foo/bar spam\\neggs') >>> ['foo', '/', 'bar', ' ', 'spam', '\\n', 'eggs'] The reason is that I want to split a string into tokens, manipulate it, then put it back together again.", "output": "The docs of re.split mention: Split string by the occurrences of pattern. If capturing parentheses are used in pattern, then the text of all groups in the pattern are also returned as part of the resulting list. So you just need to wrap your separator with a capturing group: >>> re.split('(\\W)', 'foo/bar spam\\neggs') ['foo', '/', 'bar', ' ', 'spam', '\\n', 'eggs']"} +{"question_id": 2872512, "score": 374, "creation_date": 1274348223, "tags": ["python", "truncate"], "instruction": "Python truncate a long string\n\nHow does one truncate a string to 75 characters in Python? This is how it is done in JavaScript: const data = \"saddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddsaddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddsadddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddd\"; const info = (data.length > 75) ? data.substring(0, 75) + '..' : data; console.log(info);", "output": "info = (data[:75] + '..') if len(data) > 75 else data This code matches the JavaScript, but you should consider using data[:73] so that the total result including the .. fits in 75 characters."} +{"question_id": 15431044, "score": 374, "creation_date": 1363346342, "tags": ["python", "python-requests"], "instruction": "Can I set max_retries for requests.request?\n\nThe Python requests module is simple and elegant but one thing bugs me. It is possible to get a requests.exception.ConnectionError with a message like: Max retries exceeded with url: ... This implies that requests can attempt to access the data several times. But there is not a single mention of this possibility anywhere in the docs. Looking at the source code I didn't find any place where I could alter the default (presumably 0) value. So is it possible to somehow set the maximum number of retries for requests?", "output": "It is the underlying urllib3 library that does the retrying. 
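Because the retry machinery lives in urllib3, a Retry object from that library gives the most control, for example over backoff and which status codes are retried. A sketch (the parameter values are arbitrary, and the import path assumes a reasonably recent urllib3), to be mounted exactly as described next:

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

retries = Retry(total=5, backoff_factor=0.5, status_forcelist=[502, 503, 504])
s = requests.Session()
s.mount('https://', HTTPAdapter(max_retries=retries))  # applies to every https:// URL
s.mount('http://', HTTPAdapter(max_retries=retries))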
To set a different maximum retry count, use alternative transport adapters: from requests.adapters import HTTPAdapter s = requests.Session() s.mount('http://stackoverflow.com', HTTPAdapter(max_retries=5)) The max_retries argument takes an integer or a Retry() object; the latter gives you fine-grained control over what kinds of failures are retried (an integer value is turned into a Retry() instance which only handles connection failures; errors after a connection is made are by default not handled as these could lead to side-effects). Old answer, predating the release of requests 1.2.1: The requests library doesn't really make this configurable, nor does it intend to (see this pull request). Currently (requests 1.1), the retries count is set to 0. If you really want to set it to a higher value, you'll have to set this globally: import requests requests.adapters.DEFAULT_RETRIES = 5 This constant is not documented; use it at your own peril as future releases could change how this is handled. Update: and this did change; in version 1.2.1 the option to set the max_retries parameter on the HTTPAdapter() class was added, so that now you have to use alternative transport adapters, see above. The monkey-patch approach no longer works, unless you also patch the HTTPAdapter.__init__() defaults (very much not recommended)."} +{"question_id": 15331726, "score": 373, "creation_date": 1362979423, "tags": ["python", "functional-programming", "currying", "functools", "partial-application"], "instruction": "How does functools partial do what it does?\n\nI am not able to get my head on how the partial works in functools. I have the following code from here: >>> sum = lambda x, y : x + y >>> sum(1, 2) 3 >>> incr = lambda y : sum(1, y) >>> incr(2) 3 >>> def sum2(x, y): return x + y >>> incr2 = functools.partial(sum2, 1) >>> incr2(4) 5 Now in the line incr = lambda y : sum(1, y) I get that whatever argument I pass to incr it will be passed as y to lambda which will return sum(1, y) i.e 1 + y. I understand that. But I didn't understand this incr2(4). How does the 4 gets passed as x in partial function? To me, 4 should replace the sum2. What is the relation between x and 4?", "output": "Roughly, partial does something like this (apart from keyword args support, etc): def partial(func, *part_args): def wrapper(*extra_args): return func(*part_args, *extra_args) return wrapper So, by calling partial(sum2, 4) you create a new function (a callable, to be precise) that behaves like sum2, but has one positional argument less. That missing argument is always substituted by 4, so that partial(sum2, 4)(2) == sum2(4, 2) As for why it's needed, there's a variety of cases. Just for one, suppose you have to pass a function somewhere where it's expected to have 2 arguments: class EventNotifier(object): def __init__(self): self._listeners = [] def add_listener(self, callback): ''' callback should accept two positional arguments, event and params ''' self._listeners.append(callback) # ... 
def notify(self, event, *params): for f in self._listeners: f(event, params) But a function you already have needs access to some third context object to do its job: def log_event(context, event, params): context.log_event(\"Something happened %s, %s\", event, params) So, there are several solutions: A custom object: class Listener(object): def __init__(self, context): self._context = context def __call__(self, event, params): self._context.log_event(\"Something happened %s, %s\", event, params) notifier.add_listener(Listener(context)) Lambda: log_listener = lambda event, params: log_event(context, event, params) notifier.add_listener(log_listener) With partials: context = get_context() # whatever notifier.add_listener(partial(log_event, context)) Of those three, partial is the shortest and the fastest. (For a more complex case you might want a custom object though)."} +{"question_id": 15315452, "score": 373, "creation_date": 1362860269, "tags": ["python", "pandas", "select", "filter", "conditional-statements"], "instruction": "Selecting with complex criteria from pandas.DataFrame\n\nFor example I have simple DF: import pandas as pd from random import randint df = pd.DataFrame({'A': [randint(1, 9) for x in range(10)], 'B': [randint(1, 9)*10 for x in range(10)], 'C': [randint(1, 9)*100 for x in range(10)]}) Can I select values from 'A' for which corresponding values for 'B' will be greater than 50, and for 'C' - not equal to 900, using methods and idioms of Pandas?", "output": "Sure! Setup: >>> import pandas as pd >>> from random import randint >>> df = pd.DataFrame({'A': [randint(1, 9) for x in range(10)], 'B': [randint(1, 9)*10 for x in range(10)], 'C': [randint(1, 9)*100 for x in range(10)]}) >>> df A B C 0 9 40 300 1 9 70 700 2 5 70 900 3 8 80 900 4 7 50 200 5 9 30 900 6 2 80 700 7 2 80 400 8 5 80 300 9 7 70 800 We can apply column operations and get boolean Series objects: >>> df[\"B\"] > 50 0 False 1 True 2 True 3 True 4 False 5 False 6 True 7 True 8 True 9 True Name: B >>> (df[\"B\"] > 50) & (df[\"C\"] != 900) or >>> (df[\"B\"] > 50) & ~(df[\"C\"] == 900) 0 False 1 False 2 True 3 True 4 False 5 False 6 False 7 False 8 False 9 False [Update, to switch to new-style .loc]: And then we can use these to index into the object. For read access, you can chain indices: >>> df[\"A\"][(df[\"B\"] > 50) & (df[\"C\"] != 900)] 2 5 3 8 Name: A, dtype: int64 but you can get yourself into trouble because of the difference between a view and a copy doing this for write access. You can use .loc instead: >>> df.loc[(df[\"B\"] > 50) & (df[\"C\"] != 900), \"A\"] 2 5 3 8 Name: A, dtype: int64 >>> df.loc[(df[\"B\"] > 50) & (df[\"C\"] != 900), \"A\"].values array([5, 8], dtype=int64) >>> df.loc[(df[\"B\"] > 50) & (df[\"C\"] != 900), \"A\"] *= 1000 >>> df A B C 0 9 40 300 1 9 70 700 2 5000 70 900 3 8000 80 900 4 7 50 200 5 9 30 900 6 2 80 700 7 2 80 400 8 5 80 300 9 7 70 800"} +{"question_id": 14343812, "score": 371, "creation_date": 1358272545, "tags": ["python", "http-redirect", "flask"], "instruction": "Redirecting to URL in Flask\n\nI'm trying to do the equivalent of Response.redirect as in C# - i.e.: redirect to a specific URL - how do I go about this? Here is my code: import os from flask import Flask app = Flask(__name__) @app.route('/') def hello(): return 'Hello World!' if __name__ == '__main__': # Bind to PORT if defined, otherwise default to 5000. 
port = int(os.environ.get('PORT', 5000)) app.run(host='0.0.0.0', port=port)", "output": "You have to return a redirect: import os from flask import Flask,redirect app = Flask(__name__) @app.route('/') def hello(): return redirect(\"http://www.example.com\", code=302) if __name__ == '__main__': # Bind to PORT if defined, otherwise default to 5000. port = int(os.environ.get('PORT', 5000)) app.run(host='0.0.0.0', port=port) The default value for code is 302 so code=302 can be omitted or replaced by other redirect code (one in 301, 302, 303, 305, and 307). See the documentation on flask docs."} +{"question_id": 27673231, "score": 371, "creation_date": 1419733347, "tags": ["python", "pandas", "copy", "chained-assignment"], "instruction": "why should I make a copy of a data frame in pandas\n\nWhen selecting a sub dataframe from a parent dataframe, I noticed that some programmers make a copy of the data frame using the .copy() method. For example, X = my_dataframe[features_list].copy() ...instead of just X = my_dataframe[features_list] Why are they making a copy of the data frame? What will happen if I don't make a copy?", "output": "This answer has been deprecated in newer versions of pandas. See docs This expands on Paul's answer. In Pandas, indexing a DataFrame returns a reference to the initial DataFrame. Thus, changing the subset will change the initial DataFrame. Thus, you'd want to use the copy if you want to make sure the initial DataFrame shouldn't change. Consider the following code: df = DataFrame({'x': [1,2]}) df_sub = df[0:1] df_sub.x = -1 print(df) You'll get: x 0 -1 1 2 In contrast, the following leaves df unchanged: df_sub_copy = df[0:1].copy() df_sub_copy.x = -1"} +{"question_id": 1094841, "score": 370, "creation_date": 1247000372, "tags": ["python", "code-snippets", "filesize"], "instruction": "Get a human-readable version of a file size\n\nA function to return a human-readable size from the bytes size: >>> human_readable(2048) '2 kilobytes' How can I do this?", "output": "Addressing the above \"too small a task to require a library\" issue by a straightforward implementation (using f-strings, so Python 3.6+): def sizeof_fmt(num, suffix=\"B\"): for unit in (\"\", \"Ki\", \"Mi\", \"Gi\", \"Ti\", \"Pi\", \"Ei\", \"Zi\"): if abs(num) < 1024.0: return f\"{num:3.1f}{unit}{suffix}\" num /= 1024.0 return f\"{num:.1f}Yi{suffix}\" Supports: all currently known binary prefixes negative and positive numbers numbers larger than 1000 Yobibytes arbitrary units (maybe you like to count in Gibibits!) Example: >>> sizeof_fmt(168963795964) '157.4GiB' by Fred Cirera"} +{"question_id": 1737017, "score": 370, "creation_date": 1258274876, "tags": ["python", "django", "datetime", "django-models", "django-admin"], "instruction": "Django auto_now and auto_now_add\n\nFor Django 1.1. I have this in my models.py: class User(models.Model): created = models.DateTimeField(auto_now_add=True) modified = models.DateTimeField(auto_now=True) When updating a row I get: [Sun Nov 15 02:18:12 2009] [error] /home/ptarjan/projects/twitter-meme/django/db/backends/mysql/base.py:84: Warning: Column 'created' cannot be null [Sun Nov 15 02:18:12 2009] [error] return self.cursor.execute(query, args) The relevant part of my database is: `created` datetime NOT NULL, `modified` datetime NOT NULL, Is this cause for concern? Side question: in my admin tool, those two fields aren't showing up. 
Is that expected?", "output": "Any field with the auto_now attribute set will also inherit editable=False and therefore will not show up in the admin panel. There has been talk in the past about making the auto_now and auto_now_add arguments go away, and although they still exist, I feel you're better off just using a custom save() method. So, to make this work properly, I would recommend not using auto_now or auto_now_add and instead define your own save() method to make sure that created is only updated if id is not set (such as when the item is first created), and have it update modified every time the item is saved. I have done the exact same thing with other projects I have written using Django, and so your save() would look like this: from django.utils import timezone class User(models.Model): created = models.DateTimeField(editable=False) modified = models.DateTimeField() def save(self, *args, **kwargs): ''' On save, update timestamps ''' if not self.id: self.created = timezone.now() self.modified = timezone.now() return super(User, self).save(*args, **kwargs) Edit in response to comments: The reason why I just stick with overloading save() vs. relying on these field arguments is two-fold: The aforementioned ups and downs with their reliability. These arguments are heavily reliant on the way each type of database that Django knows how to interact with treats a date/time stamp field, and seems to break and/or change between every release. (Which I believe is the impetus behind the call to have them removed altogether). The fact that they only work on DateField, DateTimeField, and TimeField, and by using this technique you are able to automatically populate any field type every time an item is saved. Use django.utils.timezone.now() vs. datetime.datetime.now(), because it will return a TZ-aware or naive datetime.datetime object depending on settings.USE_TZ. To address why the OP saw the error, I don't know exactly, but it looks like created isn't even being populated at all, despite having auto_now_add=True. To me it stands out as a bug, and underscores item #1 in my little list above: auto_now and auto_now_add are flaky at best."} +{"question_id": 33072570, "score": 370, "creation_date": 1444618410, "tags": ["python", "oop"], "instruction": "When should I be using classes in Python?\n\nI have been programming in python for about two years; mostly data stuff (pandas, mpl, numpy), but also automation scripts and small web apps. I'm trying to become a better programmer and increase my python knowledge and one of the things that bothers me is that I have never used a class (outside of copying random flask code for small web apps). I generally understand what they are, but I can't seem to wrap my head around why I would need them over a simple function. To add specificity to my question: I write tons of automated reports which always involve pulling data from multiple data sources (mongo, sql, postgres, apis), performing a lot or a little data munging and formatting, writing the data to csv/excel/html, send it out in an email. The scripts range from ~250 lines to ~600 lines. Would there be any reason for me to use classes to do this and why?", "output": "Classes are the pillar of Object Oriented Programming. OOP is highly concerned with code organization, reusability, and encapsulation. First, a disclaimer: OOP is partially in contrast to Functional Programming, which is a different paradigm used a lot in Python. Not everyone who programs in Python (or surely most languages) uses OOP. 
You can do a lot in Java 8 that isn't very Object Oriented. If you don't want to use OOP, then don't. If you're just writing one-off scripts to process data that you'll never use again, then keep writing the way you are. However, there are a lot of reasons to use OOP. Some reasons: Organization: OOP defines well known and standard ways of describing and defining both data and procedure in code. Both data and procedure can be stored at varying levels of definition (in different classes), and there are standard ways about talking about these definitions. That is, if you use OOP in a standard way, it will help your later self and others understand, edit, and use your code. Also, instead of using a complex, arbitrary data storage mechanism (dicts of dicts or lists or dicts or lists of dicts of sets, or whatever), you can name pieces of data structures and conveniently refer to them. State: OOP helps you define and keep track of state. For instance, in a classic example, if you're creating a program that processes students (for instance, a grade program), you can keep all the info you need about them in one spot (name, age, gender, grade level, courses, grades, teachers, peers, diet, special needs, etc.), and this data is persisted as long as the object is alive, and is easily accessible. In contrast, in pure functional programming, state is never mutated in place. Encapsulation: With encapsulation, procedure and data are stored together. Methods (an OOP term for functions) are defined right alongside the data that they operate on and produce. In a language like Java that allows for access control, or in Python, depending upon how you describe your public API, this means that methods and data can be hidden from the user. What this means is that if you need or want to change code, you can do whatever you want to the implementation of the code, but keep the public APIs the same. Inheritance: Inheritance allows you to define data and procedure in one place (in one class), and then override or extend that functionality later. For instance, in Python, I often see people creating subclasses of the dict class in order to add additional functionality. A common change is overriding the method that throws an exception when a key is requested from a dictionary that doesn't exist to give a default value based on an unknown key. This allows you to extend your own code now or later, allow others to extend your code, and allows you to extend other people's code. Reusability: All of these reasons and others allow for greater reusability of code. Object oriented code allows you to write solid (tested) code once, and then reuse over and over. If you need to tweak something for your specific use case, you can inherit from an existing class and overwrite the existing behavior. If you need to change something, you can change it all while maintaining the existing public method signatures, and no one is the wiser (hopefully). Again, there are several reasons not to use OOP, and you don't need to. But luckily with a language like Python, you can use just a little bit or a lot, it's up to you. 
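As a side note, on Python 3.7+ a dataclass can trim the boilerplate from a class like the Student shown in the example below; this is only a sketch, and the method names differ slightly from that example:

from dataclasses import dataclass, field

@dataclass
class Student:
    name: str
    age: int
    gender: str
    level: int
    grades: dict = field(default_factory=dict)   # avoids a shared mutable default

    def gpa(self):
        return sum(self.grades.values()) / len(self.grades)

john = Student('John', 12, 'male', 6, {'math': 3.3})
print(john.gpa())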
An example of the student use case (no guarantee on code quality, just an example): Object Oriented class Student(object): def __init__(self, name, age, gender, level, grades=None): self.name = name self.age = age self.gender = gender self.level = level self.grades = grades or {} def setGrade(self, course, grade): self.grades[course] = grade def getGrade(self, course): return self.grades[course] def getGPA(self): return sum(self.grades.values())/len(self.grades) # Define some students john = Student(\"John\", 12, \"male\", 6, {\"math\":3.3}) jane = Student(\"Jane\", 12, \"female\", 6, {\"math\":3.5}) # Now we can get to the grades easily print(john.getGPA()) print(jane.getGPA()) Standard Dict def calculateGPA(gradeDict): return sum(gradeDict.values())/len(gradeDict) students = {} # We can set the keys to variables so we might minimize typos name, age, gender, level, grades = \"name\", \"age\", \"gender\", \"level\", \"grades\" john, jane = \"john\", \"jane\" math = \"math\" students[john] = {} students[john][age] = 12 students[john][gender] = \"male\" students[john][level] = 6 students[john][grades] = {math:3.3} students[jane] = {} students[jane][age] = 12 students[jane][gender] = \"female\" students[jane][level] = 6 students[jane][grades] = {math:3.5} # At this point, we need to remember who the students are and where the grades are stored. Not a huge deal, but avoided by OOP. print(calculateGPA(students[john][grades])) print(calculateGPA(students[jane][grades]))"} +{"question_id": 10873824, "score": 369, "creation_date": 1338756388, "tags": ["python", "numpy"], "instruction": "Convert 2D float array to 2D int array in NumPy\n\nHow do I convert a float NumPy array into an int NumPy array?", "output": "Use the astype method. >>> x = np.array([[1.0, 2.3], [1.3, 2.9]]) >>> x array([[ 1. , 2.3], [ 1.3, 2.9]]) >>> x.astype(int) array([[1, 2], [1, 2]])"} +{"question_id": 29663459, "score": 368, "creation_date": 1429145236, "tags": ["python", "docker", "dockerfile", "docker-run"], "instruction": "Why doesn't Python app print anything when run in a detached docker container?\n\nI have a Python (2.7) app which is started in my dockerfile: CMD [\"python\",\"main.py\"] main.py prints some strings when it is started and goes into a loop afterwards: print \"App started\" while True: time.sleep(1) As long as I start the container with the -it flag, everything works as expected: $ docker run --name=myapp -it myappimage > App started And I can see the same output via logs later: $ docker logs myapp > App started If I try to run the same container with the -d flag, the container seems to start normally, but I can't see any output: $ docker run --name=myapp -d myappimage > b82db1120fee5f92c80000f30f6bdc84e068bafa32738ab7adb47e641b19b4d1 $ docker logs myapp $ (empty) But the container still seems to run; $ docker ps Container Status ... myapp up 4 minutes ... Attach does not display anything either: $ docker attach --sig-proxy=false myapp (working, no output) Any ideas whats going wrong? Does \"print\" behave differently when ran in background? Docker version: Client version: 1.5.0 Client API version: 1.17 Go version (client): go1.4.2 Git commit (client): a8a31ef OS/Arch (client): linux/arm Server version: 1.5.0 Server API version: 1.17 Go version (server): go1.4.2 Git commit (server): a8a31ef", "output": "Finally I found a solution to see Python output when running daemonized in Docker, thanks to @ahmetalpbalkan over at GitHub. 
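A related in-code workaround, sketched here and only practical for a handful of messages, is to flush stdout manually after printing; setting the PYTHONUNBUFFERED environment variable to a non-empty value has the same effect as the -u flag discussed below:

import sys
import time

print('App started')
sys.stdout.flush()          # push the line through the pipe immediately
while True:
    time.sleep(1)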
Answering it here myself for further reference : Using unbuffered output with CMD [\"python\",\"-u\",\"main.py\"] instead of CMD [\"python\",\"main.py\"] solves the problem; you can see the output now (both, stderr and stdout) via docker logs myapp why -u ref - print is indeed buffered and docker logs will eventually give you that output, just after enough of it will have piled up - executing the same script with python -u gives instant output as said above - import logging + logging.warning(\"text\") gives the expected result even without -u what it means by python -u ref. > python --help | grep -- -u -u : force the stdout and stderr streams to be unbuffered;"} +{"question_id": 14661701, "score": 368, "creation_date": 1359806626, "tags": ["python", "pandas"], "instruction": "How to drop a list of rows from Pandas dataframe?\n\nI have a dataframe df : >>> df sales discount net_sales cogs STK_ID RPT_Date 600141 20060331 2.709 NaN 2.709 2.245 20060630 6.590 NaN 6.590 5.291 20060930 10.103 NaN 10.103 7.981 20061231 15.915 NaN 15.915 12.686 20070331 3.196 NaN 3.196 2.710 20070630 7.907 NaN 7.907 6.459 Then I want to drop rows with certain sequence numbers which indicated in a list, suppose here is [1,2,4], then left: sales discount net_sales cogs STK_ID RPT_Date 600141 20060331 2.709 NaN 2.709 2.245 20061231 15.915 NaN 15.915 12.686 20070630 7.907 NaN 7.907 6.459 How or what function can do that ?", "output": "Use DataFrame.drop and pass it a Series of index labels: In [65]: df Out[65]: one two one 1 4 two 2 3 three 3 2 four 4 1 In [66]: df.drop(df.index[[1,3]]) Out[66]: one two one 1 4 three 3 2"} +{"question_id": 38623138, "score": 368, "creation_date": 1469653385, "tags": ["python", "visual-studio-code", "vscode-debugger"], "instruction": "How to set the working directory for debugging a Python program in VS Code?\n\nHow do I execute a Python file with the debugger and specify the working directory for the run?", "output": "Setting \"cwd\" to ${FileDirname} in launch.json did not work for me; rather, I tried modifying settings.json, and actually got a solution. In the settings.json file, add the field: \"python.terminal.executeInFileDir\": true After adding this field, my settings.json looks something like: { \"python.terminal.executeInFileDir\": true } This will definitely set the cwd to the directory that contains your file, as opposed to the root folder of the project."} +{"question_id": 3162271, "score": 367, "creation_date": 1278025167, "tags": ["python", "for-loop"], "instruction": "Get loop count inside a for-loop\n\nThis for loop iterates over all elements in a list: for item in my_list: print item Is there a way to know within the loop how many times I've been looping so far? For instance, I want to take a list and after I've processed ten elements I want to do something with them. 
The alternatives I thought about would be something like: count=0 for item in my_list: print item count +=1 if count % 10 == 0: print 'did ten' Or: for count in range(0,len(my_list)): print my_list[count] if count % 10 == 0: print 'did ten' Is there a better way (just like the for item in my_list) to get the number of iterations so far?", "output": "The pythonic way is to use enumerate: for idx, item in enumerate(my_list):"} +{"question_id": 110362, "score": 367, "creation_date": 1221976866, "tags": ["python", "cross-platform", "platform-specific"], "instruction": "How can I find the current OS in Python?\n\nAs the title says, how can I find the current operating system in python?", "output": "I usually use sys.platform to get the platform. sys.platform will distinguish between linux, other unixes, and OS X, while os.name is \"posix\" for all of them. For much more detailed information, use the platform module. This has cross-platform functions that will give you information on the machine architecture, OS and OS version, version of Python, etc. Also it has os-specific functions to get things like the particular linux distribution."} +{"question_id": 13118029, "score": 366, "creation_date": 1351498809, "tags": ["python", "directory"], "instruction": "Deleting folders in python recursively\n\nI'm having a problem with deleting empty directories. Here is my code: for dirpath, dirnames, filenames in os.walk(dir_to_search): # other codes try: os.rmdir(dirpath) except OSError as ex: print(ex) The argument dir_to_search is where I'm passing the directory where the work needs to be done. That directory looks like this: test/20/... test/22/... test/25/... test/26/... Note that all the above folders are empty. When I run this script the folders 20,25 alone gets deleted! But the folders 25 and 26 aren't deleted, even though they are empty folders. Edit: The exception that I'm getting are: [Errno 39] Directory not empty: '/home/python-user/shell-scripts/s3logs/test' [Errno 39] Directory not empty: '/home/python-user/shell-scripts/s3logs/test/2012' [Errno 39] Directory not empty: '/home/python-user/shell-scripts/s3logs/test/2012/10' [Errno 39] Directory not empty: '/home/python-user/shell-scripts/s3logs/test/2012/10/29' [Errno 39] Directory not empty: '/home/python-user/shell-scripts/s3logs/test/2012/10/29/tmp' [Errno 39] Directory not empty: '/home/python-user/shell-scripts/s3logs/test/2012/10/28' [Errno 39] Directory not empty: '/home/python-user/shell-scripts/s3logs/test/2012/10/28/tmp' [Errno 39] Directory not empty: '/home/python-user/shell-scripts/s3logs/test/2012/10/26' [Errno 39] Directory not empty: '/home/python-user/shell-scripts/s3logs/test/2012/10/25' [Errno 39] Directory not empty: '/home/python-user/shell-scripts/s3logs/test/2012/10/27' [Errno 39] Directory not empty: '/home/python-user/shell-scripts/s3logs/test/2012/10/27/tmp' Where am I making a mistake?", "output": "Try shutil.rmtree to delete files and directories: import shutil shutil.rmtree('/path/to/your/dir/')"} +{"question_id": 6260457, "score": 366, "creation_date": 1307419395, "tags": ["python", "python-requests", "httprequest"], "instruction": "Using headers with the Python 'Requests' library's get() method\n\nI recently stumbled upon this great library for handling HTTP requests in Python; found on Requests: HTTP for Humans. 
I love working with it, but how can I add headers to my get requests?", "output": "According to the API, the headers can all be passed in with requests.get(): import requests r = requests.get(\"http://www.example.com/\", headers={\"Content-Type\":\"text\"})"} +{"question_id": 32565829, "score": 364, "creation_date": 1442236557, "tags": ["python", "jupyter-notebook", "ipython", "jupyter"], "instruction": "Simple way to measure cell execution time in ipython notebook\n\nI would like to get the time spent on the cell execution in addition to the original output from cell. To this end, I tried %%timeit -r1 -n1 but it doesn't expose the variable defined within cell. %%time works for cell which only contains 1 statement. In[1]: %%time 1 CPU times: user 4 \u00b5s, sys: 0 ns, total: 4 \u00b5s Wall time: 5.96 \u00b5s Out[1]: 1 In[2]: %%time # Notice there is no out result in this case. x = 1 x CPU times: user 3 \u00b5s, sys: 0 ns, total: 3 \u00b5s Wall time: 5.96 \u00b5s What's the best way to do it? Update I have been using Execute Time in Nbextension for quite some time now. It is great. Update 2021-03 As of now, this is the correct answer. Essentially, %%time and %%timeit both now work as one would expect.", "output": "That was only a problem in old versions. All you need to do now is put %%time at the top of the cell. %%time measures how long it took something to run. It\u2019s better for reporting on long-running operations than for doing low-level optimization. %%timeit is a benchmarking tool that runs statements over and over to give the average runtime for some statements, as well as the standard deviation. Because of the way in which the statements are repeatedly executed, the variables created in %%timeit cells are not available in other cells. %%timeit uses the python timeit module. The docs for that say, It avoids a number of common traps for measuring execution times. See also Tim Peters\u2019 introduction to the \u201cAlgorithms\u201d chapter in the Python Cookbook, published by O\u2019Reilly. I hope that that module is still relevant, as the reference it refers to describes issues such as (1) workarounds for Windows 98 only updating time.time() 18.2 times per second, and (2) jamming all the statements onto one line to avoid the bytecode overhead of incrementing the line number counter. The currently top-rated answer, as well as some of the other outdated ones\u2014which should be deleted because they are now highly misleading\u2014do have useful comments indicating that those answers are not correct: %%time works even when the last statement is not print variables in the tested cell are now taken into consideration into the next cells"} +{"question_id": 10996140, "score": 364, "creation_date": 1339502077, "tags": ["python", "arrays", "numpy"], "instruction": "How to remove specific elements in a numpy array\n\nHow can I remove some specific elements from a numpy array? Say I have import numpy as np a = np.array([1,2,3,4,5,6,7,8,9]) I then want to remove 3,4,7 from a. All I know is the index of the values (index=[2,3,6]).", "output": "Use numpy.delete(), which returns a new array with sub-arrays along an axis deleted. numpy.delete(a, index) For your specific question: import numpy as np a = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9]) index = [2, 3, 6] new_a = np.delete(a, index) print(new_a) # Output: [1, 2, 5, 6, 8, 9] Note that numpy.delete() returns a new array since array scalars are immutable, similar to strings in Python, so each time a change is made to it, a new object is created. 
I.e., to quote the delete() docs: \"A copy of arr with the elements specified by obj removed. Note that delete does not occur in-place...\" If the code I post has output, it is the result of running the code."} +{"question_id": 24853923, "score": 362, "creation_date": 1405883313, "tags": ["python", "python-typing"], "instruction": "Type hinting a collection of a specified type\n\nUsing Python 3's function annotations, it is possible to specify the type of items contained within a homogeneous list (or other collection) for the purpose of type hinting in PyCharm and other IDEs? A pseudo-python code example for a list of int: def my_func(l:list): pass I know it's possible using Docstring... def my_func(l): \"\"\" :type l: list[int] \"\"\" pass ... but I prefer the annotation style if it's possible.", "output": "As of May 2015, PEP 484 (Type Hints) has been formally accepted. The draft implementation is also available at github under ambv/typehinting. In September 2015, Python 3.5 was released with support for Type Hints and includes a new typing module. This allows for the specification of types contained within collections. As of November 2015, JetBrains PyCharm 5.0 fully supports Python 3.5 to include Type Hints as illustrated below. from typing import List def do_something(l: List[str]): for s in l: s # str Original Answer As of Aug 2014, I have confirmed that it is not possible to use Python 3 type annotations to specify types within collections (ex: a list of strings). The use of formatted docstrings such as reStructuredText or Sphinx are viable alternatives and supported by various IDEs. It also appears that Guido is mulling over the idea of extending type annotations in the spirit of mypy: http://mail.python.org/pipermail/python-ideas/2014-August/028618.html"} +{"question_id": 3024925, "score": 362, "creation_date": 1276277943, "tags": ["python", "with-statement", "contextmanager"], "instruction": "Create a \"with\" block on several context managers?\n\nSuppose you have three objects you acquire via context manager, for instance A lock, a db connection and an ip socket. You can acquire them by: with lock: with db_con: with socket: #do stuff But is there a way to do it in one block? something like with lock,db_con,socket: #do stuff Furthermore, is it possible, given an array of unknown length of objects that have context managers, is it possible to somehow do: a=[lock1, lock2, lock3, db_con1, socket, db_con2] with a as res: #now all objects in array are acquired If the answer is \"no\", is it because the need for such a feature implies bad design, or maybe I should suggest it in a pep? :-P", "output": "In Python 2.7 and 3.1 and above, you can write: with A() as X, B() as Y, C() as Z: do_something() This is normally the best method to use, but if you have an unknown-length list of context managers you'll need one of the below methods. In Python 3.3, you can enter an unknown-length list of context managers by using contextlib.ExitStack: with ExitStack() as stack: for mgr in ctx_managers: stack.enter_context(mgr) # ... This allows you to create the context managers as you are adding them to the ExitStack, which prevents the possible problem with contextlib.nested (mentioned below). contextlib2 provides a backport of ExitStack for Python 2.6 and 2.7. 
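For instance, a sketch (the filenames are made up) that opens an arbitrary list of files and guarantees every successfully opened one is closed when the block exits, even if a later open() fails:

from contextlib import ExitStack

filenames = ['a.txt', 'b.txt', 'c.txt']   # hypothetical paths
with ExitStack() as stack:
    files = [stack.enter_context(open(name)) for name in filenames]
    # work with the files here; all of them are closed on exit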
In Python 2.6 and below, you can use contextlib.nested: from contextlib import nested with nested(A(), B(), C()) as (X, Y, Z): do_something() is equivalent to: m1, m2, m3 = A(), B(), C() with m1 as X: with m2 as Y: with m3 as Z: do_something() Note that this isn't exactly the same as normally using nested with, because A(), B(), and C() will all be called initially, before entering the context managers. This will not work correctly if one of these functions raises an exception. contextlib.nested is deprecated in newer Python versions in favor of the above methods."} +{"question_id": 4768446, "score": 361, "creation_date": 1295707466, "tags": ["python", "module", "ldap", "python-ldap"], "instruction": "I can't install python-ldap\n\nWhen I run the following command: sudo pip install python-ldap I get this error: In file included from Modules/LDAPObject.c:9: Modules/errors.h:8: fatal error: lber.h: No such file or directory How can I fix this?", "output": "The python-ldap is based on OpenLDAP, so you need to have the development files (headers) in order to compile the Python module. If you're on Ubuntu, the package is called libldap2-dev. Debian/Ubuntu: sudo apt-get install libsasl2-dev python-dev-is-python3 libldap2-dev libssl-dev Red Hat/CentOS: sudo yum install python-devel openldap-devel"} +{"question_id": 28991015, "score": 361, "creation_date": 1426088345, "tags": ["python", "version-control", "delete-file", "temporary-files"], "instruction": "Remove __pycache__ folders and .pyc files from Python project\n\nWhat is the best way to clear out all the __pycache__ folders and .pyc/.pyo files from a Python project? I have seen multiple users suggest the pyclean script bundled with Debian, but this does not remove the folders. I want a simple way to clean up the project before pushing the files to my DVS.", "output": "Running py3clean . cleaned it up very nicely. Make sure you do not accidentally mistype pyclean as pycclean, otherwise you will get this error: No command 'pycclean' found, did you mean: Command 'py3clean' from package 'python3-minimal' (main) Command 'pyclean' from package 'python-minimal' (main) pycclean: command not found"} +{"question_id": 18499497, "score": 361, "creation_date": 1377729880, "tags": ["python", "daemon", "sigterm", "start-stop-daemon"], "instruction": "How to process SIGTERM signal gracefully?\n\nLet's assume we have such a trivial daemon written in python: def mainloop(): while True: # 1. do # 2. some # 3. important # 4. job # 5. sleep mainloop() and we daemonize it using start-stop-daemon which by default sends SIGTERM (TERM) signal on --stop. Let's suppose the current step performed is #2. And at this very moment we're sending TERM signal. What happens is that the execution terminates immediately. I've found that I can handle the signal event using signal.signal(signal.SIGTERM, handler) but the thing is that it still interrupts the current execution and passes the control to handler. So, my question is - is it possible to not interrupt the current execution but handle the TERM signal in a separated thread (?) 
so that I was able to set shutdown_flag = True so that mainloop() had a chance to stop gracefully?", "output": "A class based clean to use solution: import signal import time class GracefulKiller: kill_now = False def __init__(self): signal.signal(signal.SIGINT, self.exit_gracefully) signal.signal(signal.SIGTERM, self.exit_gracefully) def exit_gracefully(self, signum, frame): self.kill_now = True if __name__ == '__main__': killer = GracefulKiller() while not killer.kill_now: time.sleep(1) print(\"doing something in a loop ...\") print(\"End of the program. I was killed gracefully :)\")"} +{"question_id": 12229064, "score": 361, "creation_date": 1346513497, "tags": ["python", "dictionary", "map-function"], "instruction": "Mapping over values in a python dictionary\n\nGiven a dictionary { k1: v1, k2: v2 ... } I want to get { k1: f(v1), k2: f(v2) ... } provided I pass a function f. Is there any such built in function? Or do I have to do dict([(k, f(v)) for (k, v) in my_dictionary.iteritems()]) Ideally I would just write my_dictionary.map_values(f) or my_dictionary.mutate_values_with(f) That is, it doesn't matter to me if the original dictionary is mutated or a copy is created.", "output": "There is no such function; the easiest way to do this is to use a dict comprehension: my_dictionary = {k: f(v) for k, v in my_dictionary.items()} Note that there is no such method on lists either; you'd have to use a list comprehension or the map() function. As such, you could use the map() function for processing your dict as well: my_dictionary = dict(map(lambda kv: (kv[0], f(kv[1])), my_dictionary.items())) but that's not that readable, really. (Note that if you're still using Python 2.7, you should use the .iteritems() method instead of .items() to save memory. Also, the dict comprehension syntax wasn't introduced until Python 2.7.)"} +{"question_id": 8214932, "score": 361, "creation_date": 1321892447, "tags": ["python", "dictionary", "find"], "instruction": "How to check if a value exists in a dictionary?\n\nI have the following dictionary in python: d = {'1': 'one', '3': 'three', '2': 'two', '5': 'five', '4': 'four'} I need a way to find if a value such as \"one\" or \"two\" exists in this dictionary. 
For example, if I wanted to know if the index \"1\" existed I would simply have to type: \"1\" in d And then python would tell me if that is true or false, however I need to do that same exact thing except to find if a value exists.", "output": ">>> d = {'1': 'one', '3': 'three', '2': 'two', '5': 'five', '4': 'four'} >>> 'one' in d.values() True Out of curiosity, some comparative timing: >>> T(lambda : 'one' in d.itervalues()).repeat() [0.28107285499572754, 0.29107213020324707, 0.27941107749938965] >>> T(lambda : 'one' in d.values()).repeat() [0.38303399085998535, 0.37257885932922363, 0.37096405029296875] >>> T(lambda : 'one' in d.viewvalues()).repeat() [0.32004380226135254, 0.31716084480285645, 0.3171098232269287] >>> T(lambda : 'four' in d.itervalues()).repeat() [0.41178202629089355, 0.3959040641784668, 0.3970959186553955] >>> T(lambda : 'four' in d.values()).repeat() [0.4631338119506836, 0.43541407585144043, 0.4359898567199707] >>> T(lambda : 'four' in d.viewvalues()).repeat() [0.43414998054504395, 0.4213531017303467, 0.41684913635253906] The reason is that each of the above returns a different type of object, which may or may not be well suited for lookup operations: >>> type(d.viewvalues()) <type 'dict_values'> >>> type(d.values()) <type 'list'> >>> type(d.itervalues()) <type 'dictionary-valueiterator'>"} +{"question_id": 28583565, "score": 361, "creation_date": 1424262005, "tags": ["python", "python-3.x"], "instruction": "'str' object has no attribute 'decode'. Python 3 error?\n\nHere is my code: import imaplib from email.parser import HeaderParser conn = imaplib.IMAP4_SSL('imap.gmail.com') conn.login('example@gmail.com', 'password') conn.select() conn.search(None, 'ALL') data = conn.fetch('1', '(BODY[HEADER])') header_data = data[1][0][1].decode('utf-8') At this point I get the error message: AttributeError: 'str' object has no attribute 'decode' Python 3 doesn't have str.decode() anymore, so how can I fix this?", "output": "You are trying to decode an object that is already decoded. You have a str, there is no need to decode from UTF-8 anymore. Simply drop the .decode('utf-8') part: header_data = data[1][0][1]"} +{"question_id": 31818050, "score": 359, "creation_date": 1438717899, "tags": ["python", "rounding"], "instruction": "Round number to nearest integer\n\nI've been trying to round long float numbers like: 32.268907563; 32.268907563; 31.2396694215; 33.6206896552; ... With no success so far. I tried math.ceil(x), math.floor(x) (although that would round up or down, which is not what I'm looking for) and round(x) which didn't work either (still float numbers). What could I do? Code: for i in widthRange: for j in heightRange: r, g, b = rgb_im.getpixel((i, j)) h, s, v = colorsys.rgb_to_hsv(r/255.0, g/255.0, b/255.0) h = h * 360 int(round(h)) print(h)", "output": "TL;DR: round(x) will round it and change it to an integer. You are not assigning round(h) to any variable. When you call round(h), it returns the integer number but does nothing else; you have to change that line to: h = round(h) to assign the new value to h. As @plowman said in the comments, Python's round() doesn't work as one would normally expect, and that's because the way the number is stored as a variable is usually not the way you see it on screen. There are lots of answers that explain this behavior. One way to avoid this problem is to use the Decimal as stated by this answer. In order for this answer to work properly without using extra libraries it would be convenient to use a custom rounding function.
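For completeness, the Decimal route mentioned above looks roughly like this; a sketch using only the standard library, where ROUND_HALF_UP gives the 'school' rounding most people expect:

from decimal import Decimal, ROUND_HALF_UP

def round_half_up(x, dec=0):
    exp = Decimal(10) ** -dec                       # e.g. Decimal('0.01') when dec=2
    return float(Decimal(str(x)).quantize(exp, rounding=ROUND_HALF_UP))

print(round_half_up(2.5))        # 3.0
print(round_half_up(1.005, 2))   # 1.01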
I came up with the following solution, that as far as I tested avoided all the storing issues. It is based on using the string representation, obtained with repr() (NOT str()!). It looks hacky but it was the only way I found to solve all the cases. It works with both Python2 and Python3. def proper_round(num, dec=0): num = str(num)[:str(num).index('.')+dec+2] if num[-1]>='5': return float(num[:-2-(not dec)]+str(int(num[-2-(not dec)])+1)) return float(num[:-1]) Tests: >>> print(proper_round(1.0005,3)) 1.001 >>> print(proper_round(2.0005,3)) 2.001 >>> print(proper_round(3.0005,3)) 3.001 >>> print(proper_round(4.0005,3)) 4.001 >>> print(proper_round(5.0005,3)) 5.001 >>> print(proper_round(1.005,2)) 1.01 >>> print(proper_round(2.005,2)) 2.01 >>> print(proper_round(3.005,2)) 3.01 >>> print(proper_round(4.005,2)) 4.01 >>> print(proper_round(5.005,2)) 5.01 >>> print(proper_round(1.05,1)) 1.1 >>> print(proper_round(2.05,1)) 2.1 >>> print(proper_round(3.05,1)) 3.1 >>> print(proper_round(4.05,1)) 4.1 >>> print(proper_round(5.05,1)) 5.1 >>> print(proper_round(1.5)) 2.0 >>> print(proper_round(2.5)) 3.0 >>> print(proper_round(3.5)) 4.0 >>> print(proper_round(4.5)) 5.0 >>> print(proper_round(5.5)) 6.0 >>> >>> print(proper_round(1.000499999999,3)) 1.0 >>> print(proper_round(2.000499999999,3)) 2.0 >>> print(proper_round(3.000499999999,3)) 3.0 >>> print(proper_round(4.000499999999,3)) 4.0 >>> print(proper_round(5.000499999999,3)) 5.0 >>> print(proper_round(1.00499999999,2)) 1.0 >>> print(proper_round(2.00499999999,2)) 2.0 >>> print(proper_round(3.00499999999,2)) 3.0 >>> print(proper_round(4.00499999999,2)) 4.0 >>> print(proper_round(5.00499999999,2)) 5.0 >>> print(proper_round(1.0499999999,1)) 1.0 >>> print(proper_round(2.0499999999,1)) 2.0 >>> print(proper_round(3.0499999999,1)) 3.0 >>> print(proper_round(4.0499999999,1)) 4.0 >>> print(proper_round(5.0499999999,1)) 5.0 >>> print(proper_round(1.499999999)) 1.0 >>> print(proper_round(2.499999999)) 2.0 >>> print(proper_round(3.499999999)) 3.0 >>> print(proper_round(4.499999999)) 4.0 >>> print(proper_round(5.499999999)) 5.0 Finally, the corrected answer would be: # Having proper_round defined as previously stated h = int(proper_round(h)) Tests: >>> proper_round(6.39764125, 2) 6.31 # should be 6.4 >>> proper_round(6.9764125, 1) 6.1 # should be 7 The gotcha here is that the dec-th decimal can be 9 and if the dec+1-th digit >=5 the 9 will become a 0 and a 1 should be carried to the dec-1-th digit. If we take this into consideration, we get: def proper_round(num, dec=0): num = str(num)[:str(num).index('.')+dec+2] if num[-1]>='5': a = num[:-2-(not dec)] # integer part b = int(num[-2-(not dec)])+1 # decimal part return float(a)+b**(-dec+1) if a and b == 10 else float(a+str(b)) return float(num[:-1]) In the situation described above b = 10 and the previous version would just concatenate a and b which would result in a concatenation of 10 where the trailing 0 would disappear. This version transforms b to the right decimal place based on dec, as a proper carry."} +{"question_id": 29831489, "score": 358, "creation_date": 1429813494, "tags": ["python", "numpy", "machine-learning", "numpy-ndarray", "one-hot-encoding"], "instruction": "Convert array of indices to one-hot encoded array in NumPy\n\nGiven a 1D array of indices: a = array([1, 0, 3]) I want to one-hot encode this as a 2D array: b = array([[0,1,0,0], [1,0,0,0], [0,0,0,1]])", "output": "Create a zeroed array b with enough columns, i.e. a.max() + 1. Then, for each row i, set the a[i]th column to 1. 
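(An equivalent one-liner, offered as a sketch: index the identity matrix with a, since its rows are exactly the one-hot vectors; the explicit zeros-and-assign version follows below.)

import numpy as np

a = np.array([1, 0, 3])
b = np.eye(a.max() + 1, dtype=int)[a]   # each row of eye() is a one-hot vector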
>>> a = np.array([1, 0, 3]) >>> b = np.zeros((a.size, a.max() + 1)) >>> b[np.arange(a.size), a] = 1 >>> b array([[ 0., 1., 0., 0.], [ 1., 0., 0., 0.], [ 0., 0., 0., 1.]])"} +{"question_id": 42480111, "score": 358, "creation_date": 1488180942, "tags": ["python", "machine-learning", "deep-learning", "pytorch"], "instruction": "How do I print the model summary in PyTorch?\n\nHow do I print the summary of a model in PyTorch like what model.summary() does in Keras: Model Summary: ____________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ==================================================================================================== input_1 (InputLayer) (None, 1, 15, 27) 0 ____________________________________________________________________________________________________ convolution2d_1 (Convolution2D) (None, 8, 15, 27) 872 input_1[0][0] ____________________________________________________________________________________________________ maxpooling2d_1 (MaxPooling2D) (None, 8, 7, 27) 0 convolution2d_1[0][0] ____________________________________________________________________________________________________ flatten_1 (Flatten) (None, 1512) 0 maxpooling2d_1[0][0] ____________________________________________________________________________________________________ dense_1 (Dense) (None, 1) 1513 flatten_1[0][0] ==================================================================================================== Total params: 2,385 Trainable params: 2,385 Non-trainable params: 0", "output": "While you will not get as detailed information about the model as in Keras' model.summary, simply printing the model will give you some idea about the different layers involved and their specifications. 
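If you do want a Keras-style table with output shapes and parameter counts, third-party helpers exist. A sketch, assuming the torchsummary package is installed (exact keyword arguments can vary between versions):

from torchsummary import summary
from torchvision import models

model = models.vgg16()
summary(model, input_size=(3, 224, 224))   # layer-by-layer table with parameter counts

Plain printing, by contrast, needs nothing beyond PyTorch itself: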
For instance: from torchvision import models model = models.vgg16() print(model) The output in this case would be something as follows: VGG ( (features): Sequential ( (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (1): ReLU (inplace) (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (3): ReLU (inplace) (4): MaxPool2d (size=(2, 2), stride=(2, 2), dilation=(1, 1)) (5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (6): ReLU (inplace) (7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (8): ReLU (inplace) (9): MaxPool2d (size=(2, 2), stride=(2, 2), dilation=(1, 1)) (10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (11): ReLU (inplace) (12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (13): ReLU (inplace) (14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (15): ReLU (inplace) (16): MaxPool2d (size=(2, 2), stride=(2, 2), dilation=(1, 1)) (17): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (18): ReLU (inplace) (19): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (20): ReLU (inplace) (21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (22): ReLU (inplace) (23): MaxPool2d (size=(2, 2), stride=(2, 2), dilation=(1, 1)) (24): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (25): ReLU (inplace) (26): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (27): ReLU (inplace) (28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (29): ReLU (inplace) (30): MaxPool2d (size=(2, 2), stride=(2, 2), dilation=(1, 1)) ) (classifier): Sequential ( (0): Dropout (p = 0.5) (1): Linear (25088 -> 4096) (2): ReLU (inplace) (3): Dropout (p = 0.5) (4): Linear (4096 -> 4096) (5): ReLU (inplace) (6): Linear (4096 -> 1000) ) ) Now you could, as mentioned by Kashyap, use the state_dict method to get the weights of the different layers. But using this listing of the layers would perhaps provide more direction is creating a helper function to get that Keras like model summary!"} +{"question_id": 79968, "score": 358, "creation_date": 1221625515, "tags": ["python", "regex"], "instruction": "Split a string by spaces -- preserving quoted substrings -- in Python\n\nI have a string which is like this: this is \"a test\" I'm trying to write something in Python to split it up by space while ignoring spaces within quotes. The result I'm looking for is: ['this', 'is', 'a test'] PS. I know you are going to ask \"what happens if there are quotes within the quotes, well, in my application, that will never happen.", "output": "You want split, from the built-in shlex module. >>> import shlex >>> shlex.split('this is \"a test\"') ['this', 'is', 'a test'] This should do exactly what you want. If you want to preserve the quotation marks, then you can pass the posix=False kwarg. >>> shlex.split('this is \"a test\"', posix=False) ['this', 'is', '\"a test\"']"} +{"question_id": 10606133, "score": 358, "creation_date": 1337104124, "tags": ["python", "web-crawler", "python-requests"], "instruction": "Sending \"User-agent\" using Requests library in Python\n\nI want to send a value for \"User-agent\" while requesting a webpage using Python Requests. 
I am not sure is if it is okay to send this as a part of the header, as in the code below: debug = {'verbose': sys.stderr} user_agent = {'User-agent': 'Mozilla/5.0'} response = requests.get(url, headers = user_agent, config=debug) The debug information isn't showing the headers being sent during the request. Is it acceptable to send this information in the header? If not, how can I send it?", "output": "The user-agent should be specified as a field in the header. Here is a list of HTTP header fields, and you'd probably be interested in request-specific fields, which includes User-Agent. If you're using requests v2.13 and newer The simplest way to do what you want is to create a dictionary and specify your headers directly, like so: import requests url = 'SOME URL' headers = { 'User-Agent': 'My User Agent 1.0', 'From': 'youremail@domain.example' # This is another valid field } response = requests.get(url, headers=headers) If you're using requests v2.12.x and older Older versions of requests clobbered default headers, so you'd want to do the following to preserve default headers and then add your own to them. import requests url = 'SOME URL' # Get a copy of the default headers that requests would use headers = requests.utils.default_headers() # Update the headers with your custom ones # You don't have to worry about case-sensitivity with # the dictionary keys, because default_headers uses a custom # CaseInsensitiveDict implementation within requests' source code. headers.update( { 'User-Agent': 'My User Agent 1.0', } ) response = requests.get(url, headers=headers)"} +{"question_id": 7915998, "score": 357, "creation_date": 1319719557, "tags": ["python", "package", "pip"], "instruction": "Does uninstalling a package with \"pip\" also remove the dependent packages?\n\nWhen you use pip to install a package, all the required packages will also be installed with it (dependencies). Does uninstalling that package also remove the dependent packages?", "output": "No, it doesn't uninstall the dependencies packages. It only removes the specified package: $ pip install specloud $ pip freeze # all the packages here are dependencies of specloud package figleaf==0.6.1 nose==1.1.2 pinocchio==0.3 specloud==0.4.5 $ pip uninstall specloud $ pip freeze figleaf==0.6.1 nose==1.1.2 pinocchio==0.3 As you can see those packages are dependencies from specloud and they're still there, but not the specloud package itself. As mentioned below, you can install and use the pip-autoremove utility to remove a package plus unused dependencies."} +{"question_id": 5183672, "score": 355, "creation_date": 1299170411, "tags": ["python", "linux", "installation", "package", "m2crypto"], "instruction": "How do I update a Python package?\n\nI'm running Ubuntu 9:10 and a package called M2Crypto is installed (version is 0.19.1). I need to download, build and install the latest version of the M2Crypto package (0.20.2). The 0.19.1 package has files in a number of locations including (/usr/share/pyshared and /usr/lib/pymodules.python2.6). How can I completely uninstall version 0.19.1 from my system before installing 0.20.2?", "output": "You might want to look into a Python package manager like pip. If you don't want to use a Python package manager, you should be able to download M2Crypto and build/compile/install over the old installation."} +{"question_id": 4783810, "score": 354, "creation_date": 1295883396, "tags": ["python", "linux", "tkinter", "installation"], "instruction": "Install tkinter for Python\n\nI am trying to import Tkinter. 
However, I get an error stating that Tkinter has not been installed: ImportError: No module named _tkinter, please install the python-tk package I could probably install it using synaptic manager (can I?), however, I would have to install it on every machine I program on. Would it be possible to add the Tkinter library into my workspace and reference it from there?", "output": "It is not very easy to install Tkinter locally to use with system-provided Python. You may build it from sources, but this is usually not the best idea with a binary package-based distro you're apparently running. It's safer to apt-get install python3-tk on your machine(s). (Works on Debian-derived distributions like for Ubuntu; refer to your package manager and package list on other distributions.)"} +{"question_id": 11091623, "score": 353, "creation_date": 1340056306, "tags": ["python", "pip", "freebsd", "easy-install", "python-requests"], "instruction": "How can I install packages offline?\n\nWhat's the best way to download a Python package and its dependencies from PyPI for offline installation on another machine? Is there an easy way to do this with pip or easy_install? I'm trying to install the Requests library on a FreeBSD box that is not connected to the Internet_.", "output": "If the package is on PyPI, download it and its dependencies to some local directory. E.g., mkdir /pypi && cd /pypi ls -la Output: -rw-r--r-- 1 pavel staff 237954 Apr 19 11:31 Flask-WTF-0.6.tar.gz -rw-r--r-- 1 pavel staff 389741 Feb 22 17:10 Jinja2-2.6.tar.gz -rw-r--r-- 1 pavel staff 70305 Apr 11 00:28 MySQL-python-1.2.3.tar.gz -rw-r--r-- 1 pavel staff 2597214 Apr 10 18:26 SQLAlchemy-0.7.6.tar.gz -rw-r--r-- 1 pavel staff 1108056 Feb 22 17:10 Werkzeug-0.8.2.tar.gz -rw-r--r-- 1 pavel staff 488207 Apr 10 18:26 boto-2.3.0.tar.gz -rw-r--r-- 1 pavel staff 490192 Apr 16 12:00 flask-0.9-dev-2a6c80a.tar.gz Some packages may have to be archived into similar looking tarballs by hand. I do it a lot when I want a more recent (less stable) version of something. Some packages aren't on PyPI, so same applies to them. Suppose you have a properly formed Python application in ~/src/myapp. ~/src/myapp/setup.py will have install_requires list that mentions one or more things that you have in your /pypi directory. Like so: install_requires = [ 'boto', 'Flask', 'Werkzeug', # And so on If you want to be able to run your application with all the necessary dependencies while still hacking on it, you'll do something like this: cd ~/src/myapp python setup.py develop --always-unzip --allow-hosts=None --find-links=/pypi This way, your application will be executed straight from your source directory. You can hack on things, and then rerun the application without rebuilding anything. If you want to install your application and its dependencies into the current python environment, you'll do something like this: cd ~/src/myapp easy_install --always-unzip --allow-hosts=None --find-links=/pypi . In both cases, the build will fail if one or more dependencies aren't present in the /pypi directory. It won't attempt to promiscuously install missing things from Internet. I highly recommend to invoke setup.py develop ... and easy_install ... within an active virtual environment to avoid contaminating your global Python environment. It is (virtualenv that is) pretty much the way to go. Never install anything into global Python environment. 
If the machine on which you've built your application has the same architecture as the machine on which you want to deploy it, you can simply tarball the entire virtual environment directory into which you easy_install-ed everything. Just before tarballing though, you must make the virtual environment directory relocatable (see the --relocatable option). Note: the destination machine needs to have the same version of Python installed, and also any C-based dependencies your application may have must be preinstalled there too (e.g., say if you depend on PIL, then libpng, libjpeg, etc. must be preinstalled)."} +{"question_id": 40468069, "score": 351, "creation_date": 1478530310, "tags": ["python", "pandas", "dataframe", "merge", "concatenation"], "instruction": "Merge two dataframes by index\n\nI have the following dataframes: > df1 id begin conditional confidence discoveryTechnique 0 278 56 false 0.0 1 1 421 18 false 0.0 1 > df2 concept 0 A 1 B How do I merge on the indices to get: id begin conditional confidence discoveryTechnique concept 0 278 56 false 0.0 1 A 1 421 18 false 0.0 1 B I ask because it is my understanding that merge() i.e. df1.merge(df2) uses columns to do the matching. In fact, doing this I get: Traceback (most recent call last): File \"<stdin>\", line 1, in <module> File \"/usr/local/lib/python2.7/dist-packages/pandas/core/frame.py\", line 4618, in merge copy=copy, indicator=indicator) File \"/usr/local/lib/python2.7/dist-packages/pandas/tools/merge.py\", line 58, in merge copy=copy, indicator=indicator) File \"/usr/local/lib/python2.7/dist-packages/pandas/tools/merge.py\", line 491, in __init__ self._validate_specification() File \"/usr/local/lib/python2.7/dist-packages/pandas/tools/merge.py\", line 812, in _validate_specification raise MergeError('No common columns to perform merge on') pandas.tools.merge.MergeError: No common columns to perform merge on Is it bad practice to merge on index? Is it impossible? If so, how can I shift the index into a new column called \"index\"?", "output": "Use merge, which is an inner join by default: pd.merge(df1, df2, left_index=True, right_index=True) Or join, which is a left join by default: df1.join(df2) Or concat, which is an outer join by default: pd.concat([df1, df2], axis=1) Samples: df1 = pd.DataFrame({'a':range(6), 'b':[5,3,6,9,2,4]}, index=list('abcdef')) print (df1) a b a 0 5 b 1 3 c 2 6 d 3 9 e 4 2 f 5 4 df2 = pd.DataFrame({'c':range(4), 'd':[10,20,30, 40]}, index=list('abhi')) print (df2) c d a 0 10 b 1 20 h 2 30 i 3 40 # Default inner join df3 = pd.merge(df1, df2, left_index=True, right_index=True) print (df3) a b c d a 0 5 0 10 b 1 3 1 20 # Default left join df4 = df1.join(df2) print (df4) a b c d a 0 5 0.0 10.0 b 1 3 1.0 20.0 c 2 6 NaN NaN d 3 9 NaN NaN e 4 2 NaN NaN f 5 4 NaN NaN # Default outer join df5 = pd.concat([df1, df2], axis=1) print (df5) a b c d a 0.0 5.0 0.0 10.0 b 1.0 3.0 1.0 20.0 c 2.0 6.0 NaN NaN d 3.0 9.0 NaN NaN e 4.0 2.0 NaN NaN f 5.0 4.0 NaN NaN"} +{"question_id": 9834452, "score": 351, "creation_date": 1332479591, "tags": ["python", "matplotlib", "legend", "subplot", "legend-properties"], "instruction": "How do I make a single legend for many subplots?\n\nI am plotting the same type of information, but for different countries, with multiple subplots with Matplotlib. That is, I have nine plots on a 3x3 grid, all with the same four lines (of course, different values per line).
However, I have not figured out how to put a single legend (since all nine subplots have the same lines) on the figure just once. How do I do that?", "output": "There is also a nice function get_legend_handles_labels() you can call on the last axis (if you iterate over them) that would collect everything you need from label= arguments: handles, labels = ax.get_legend_handles_labels() fig.legend(handles, labels, loc='upper center') If the pyplot interface is being used instead of the Axes interface, use: handles, labels = plt.gca().get_legend_handles_labels() To remove legends from subplots, see Remove the legend on a matplotlib figure. To merge twinx legends, see Secondary axis with twinx(): how to add to legend."} +{"question_id": 285289, "score": 351, "creation_date": 1226522588, "tags": ["python", "exit-code"], "instruction": "What does \"script xyz.py returned exit code 0\" mean in Python?\n\nI got a message saying script xyz.py returned exit code 0. What does this mean? What do the exit codes in Python mean? How many are there? Which ones are important?", "output": "You're looking for calls to sys.exit(...) (exit(...) calls sys.exit(...)) in the script. The argument to that method is returned to the environment as the exit code. It's fairly likely that the script is never calling the exit(...) method, and that 0 is the default exit code."} +{"question_id": 42479902, "score": 351, "creation_date": 1488180070, "tags": ["python", "machine-learning", "memory", "pytorch", "tensor"], "instruction": "What does `view()` do in PyTorch?\n\nWhat does view() do to the tensor x? What do negative values mean? x = x.view(-1, 16 * 5 * 5)", "output": "view() reshapes the tensor without copying memory, similar to numpy's reshape(). Given a tensor a with 16 elements: import torch a = torch.range(1, 16) To reshape this tensor to make it a 4 x 4 tensor, use: a = a.view(4, 4) Now a will be a 4 x 4 tensor. Note that after the reshape the total number of elements need to remain the same. Reshaping the tensor a to a 3 x 5 tensor would not be appropriate. What is the meaning of parameter -1? If there is any situation that you don't know how many rows you want but are sure of the number of columns, then you can specify this with a -1. (Note that you can extend this to tensors with more dimensions. Only one of the axis value can be -1). This is a way of telling the library: \"give me a tensor that has these many columns and you compute the appropriate number of rows that is necessary to make this happen\". This can be seen in this model definition code. After the line x = self.pool(F.relu(self.conv2(x))) in the forward function, you will have a 16 depth feature map. You have to flatten this to give it to the fully connected layer. So you tell PyTorch to reshape the tensor you obtained to have specific number of columns and tell it to decide the number of rows by itself."} +{"question_id": 8560131, "score": 349, "creation_date": 1324291314, "tags": ["python", "unit-testing", "pytest", "assert"], "instruction": "pytest: assert almost equal\n\nHow to do assert almost equal with pytest for floats without resorting to something like: assert x - 0.00001 <= y <= x + 0.00001 More specifically it will be useful to know a neat solution for quickly comparing pairs of float, without unpacking them: assert (1.32, 2.4) == i_return_tuple_of_two_floats()", "output": "I noticed that this question specifically asked about pytest. pytest 3.0 includes an approx() function (well, really class) that is very useful for this purpose. 
import pytest assert 2.2 == pytest.approx(2.3) # fails, default is \u00b1 2.3e-06 assert 2.2 == pytest.approx(2.3, 0.1) # passes # also works the other way, in case you were worried: assert pytest.approx(2.3, 0.1) == 2.2 # passes"} +{"question_id": 19859282, "score": 348, "creation_date": 1383914226, "tags": ["python", "string"], "instruction": "Check if a string contains a number\n\nMost of the questions I've found are biased on the fact they're looking for letters in their numbers, whereas I'm looking for numbers in what I'd like to be a numberless string. I need to enter a string and check to see if it contains any numbers and if it does reject it. The function isdigit() only returns True if ALL of the characters are numbers. I just want to see if the user has entered a number so a sentence like \"I own 1 dog\" or something. Any ideas?", "output": "You can use any function, with the str.isdigit function, like this def has_numbers(inputString): return any(char.isdigit() for char in inputString) has_numbers(\"I own 1 dog\") # True has_numbers(\"I own no dog\") # False Alternatively you can use a Regular Expression, like this import re def has_numbers(inputString): return bool(re.search(r'\\d', inputString)) has_numbers(\"I own 1 dog\") # True has_numbers(\"I own no dog\") # False"} +{"question_id": 6386698, "score": 347, "creation_date": 1308318261, "tags": ["python", "logging", "python-logging"], "instruction": "How to write to a file, using the logging Python module?\n\nHow can I use the logging module in Python to write to a file? Every time I try to use it, it just prints out the message.", "output": "An example of using logging.basicConfig rather than logging.fileHandler() logging.basicConfig(filename=logname, filemode='a', format='%(asctime)s,%(msecs)03d %(name)s %(levelname)s %(message)s', datefmt='%Y-%m-%d %H:%M:%S', level=logging.DEBUG) logging.info(\"Running Urban Planning\") logger = logging.getLogger('urbanGUI') In order, the five parts do the following: set the output file (filename=logname) set it to append (filemode='a') rather than overwrite (filemode='w') determine the format of the output message (format=...) determine the format of the output date and time (datefmt='%Y-%m-%d %H:%M:%S') and determine the minimum message level it will accept (level=logging.DEBUG)."} +{"question_id": 34097281, "score": 347, "creation_date": 1449262554, "tags": ["python", "numpy", "tensorflow"], "instruction": "Convert a tensor to numpy array in Tensorflow?\n\nHow to convert a tensor into a numpy array when using Tensorflow with Python bindings?", "output": "TensorFlow 2.x Eager Execution is enabled by default, so just call .numpy() on the Tensor object. import tensorflow as tf a = tf.constant([[1, 2], [3, 4]]) b = tf.add(a, 1) a.numpy() # array([[1, 2], # [3, 4]], dtype=int32) b.numpy() # array([[2, 3], # [4, 5]], dtype=int32) tf.multiply(a, b).numpy() # array([[ 2, 6], # [12, 20]], dtype=int32) See NumPy Compatibility for more. It is worth noting (from the docs), Numpy array may share a memory with the Tensor object. Any changes to one may be reflected in the other. Bold emphasis mine. A copy may or may not be returned, and this is an implementation detail based on whether the data is in CPU or GPU (in the latter case, a copy has to be made from GPU to host memory). But why am I getting the AttributeError: 'Tensor' object has no attribute 'numpy'?. 
A lot of folks have commented about this issue, there are a couple of possible reasons: TF 2.0 is not correctly installed (in which case, try re-installing), or TF 2.0 is installed, but eager execution is disabled for some reason. In such cases, call tf.compat.v1.enable_eager_execution() to enable it, or see below. If Eager Execution is disabled, you can build a graph and then run it through tf.compat.v1.Session: a = tf.constant([[1, 2], [3, 4]]) b = tf.add(a, 1) out = tf.multiply(a, b) out.eval(session=tf.compat.v1.Session()) # array([[ 2, 6], # [12, 20]], dtype=int32) See also TF 2.0 Symbols Map for a mapping of the old API to the new one."} +{"question_id": 1547145, "score": 346, "creation_date": 1255152620, "tags": ["python", "function", "module", "private"], "instruction": "Defining private module functions in python\n\nAccording to http://www.faqs.org/docs/diveintopython/fileinfo_private.html: Like most languages, Python has the concept of private elements: Private functions, which can't be called from outside their module However, if I define two files: #a.py __num=1 and: #b.py import a print a.__num when i run b.py it prints out 1 without giving any exception. Is diveintopython wrong, or did I misunderstand something? And is there some way to do define a module's function as private?", "output": "In Python, \"privacy\" depends on \"consenting adults'\" levels of agreement - you can't force it. A single leading underscore means you're not supposed to access it \"from the outside\" -- two leading underscores (w/o trailing underscores) carry the message even more forcefully... but, in the end, it still depends on social convention and consensus: Python's introspection is forceful enough that you can't handcuff every other programmer in the world to respect your wishes. ((Btw, though it's a closely held secret, much the same holds for C++: with most compilers, a simple #define private public line before #includeing your .h file is all it takes for wily coders to make hash of your \"privacy\"...!-))"} +{"question_id": 28829236, "score": 346, "creation_date": 1425377913, "tags": ["python", "pylint"], "instruction": "Is it possible to ignore one single specific line with Pylint?\n\nI have the following line in my header: import config.logging_settings This actually changes my Python logging settings, but Pylint thinks it is an unused import. I do not want to remove unused-import warnings in general, so is it possible to just ignore this one specific line? I wouldn't mind having a .pylintrc for this project, so answers changing a configuration file will be accepted. Otherwise, something like this will also be appreciated: import config.logging_settings # pylint: disable-this-line-in-some-way", "output": "Message control is documented in the pylint FAQ: Is it possible to locally disable a particular message? Yes, this feature has been added in Pylint 0.11. This may be done by adding \"# pylint: disable=some-message,another-one\" at the desired block level or at the end of the desired line of code. You can use the message code or the symbolic names. For example, def test(): # Disable all the no-member violations in this function # pylint: disable=no-member ... # pylint: enable=no-member apply to a specific line only: global VAR # pylint: disable=global-statement or for less verbosity, disable the ONLY following line (pylint 2.10+): # pylint: disable-next=global-statement global VAR Pylint's manual also has further examples. 
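As a hedged illustration applied to the exact import from the question (unused-import is the symbolic name of the warning raised for an import that is never used), the end-of-line form would look like:
import config.logging_settings  # pylint: disable=unused-import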
There is a wiki that documents all pylint messages and their codes."} +{"question_id": 51575931, "score": 345, "creation_date": 1532819320, "tags": ["python", "python-dataclasses"], "instruction": "Overriding default values when using inheritance in dataclasses\n\nI'm currently trying my hands on the dataclass constructions. I am currently stuck on trying to do some inheritance of a parent class. It looks like the order of the arguments are botched by my current approach such that the bool parameter in the child class is passed before the other parameters. This is causing a type error. from dataclasses import dataclass @dataclass class Parent: name: str age: int ugly: bool = False def print_name(self): print(self.name) def print_age(self): print(self.age) def print_id(self): print(f'The Name is {self.name} and {self.name} is {self.age} year old') @dataclass class Child(Parent): school: str ugly: bool = True jack = Parent('jack snr', 32, ugly=True) jack_son = Child('jack jnr', 12, school='havard', ugly=True) jack.print_id() jack_son.print_id() When I run this code I get this TypeError: TypeError: non-default argument 'school' follows default argument How do I fix this?", "output": "The way dataclasses combines attributes prevents you from being able to use attributes with defaults in a base class and then use attributes without a default (positional attributes) in a subclass. That's because the attributes are combined by starting from the bottom of the MRO, and building up an ordered list of the attributes in first-seen order; overrides are kept in their original location. So Parent starts out with ['name', 'age', 'ugly'], where ugly has a default, and then Child adds ['school'] to the end of that list (with ugly already in the list). This means you end up with ['name', 'age', 'ugly', 'school'] and because school doesn't have a default, this results in an invalid argument listing for __init__. This is documented in PEP-557 Dataclasses, under inheritance: When the Data Class is being created by the @dataclass decorator, it looks through all of the class's base classes in reverse MRO (that is, starting at object) and, for each Data Class that it finds, adds the fields from that base class to an ordered mapping of fields. After all of the base class fields are added, it adds its own fields to the ordered mapping. All of the generated methods will use this combined, calculated ordered mapping of fields. Because the fields are in insertion order, derived classes override base classes. and under Specification: TypeError will be raised if a field without a default value follows a field with a default value. This is true either when this occurs in a single class, or as a result of class inheritance. You do have a few options here to avoid this issue. The first option is to use separate base classes to force fields with defaults into a later position in the MRO order. At all cost, avoid setting fields directly on classes that are to be used as base classes, such as Parent. The following class hierarchy works: # base classes with fields; fields without defaults separate from fields with. @dataclass class _ParentBase: name: str age: int @dataclass class _ParentDefaultsBase: ugly: bool = False @dataclass class _ChildBase(_ParentBase): school: str @dataclass class _ChildDefaultsBase(_ParentDefaultsBase): ugly: bool = True # public classes, deriving from base-with, base-without field classes # subclasses of public classes should put the public base class up front. 
@dataclass class Parent(_ParentDefaultsBase, _ParentBase): def print_name(self): print(self.name) def print_age(self): print(self.age) def print_id(self): print(f\"The Name is {self.name} and {self.name} is {self.age} year old\") @dataclass class Child(_ChildDefaultsBase, Parent, _ChildBase): pass By pulling out fields into separate base classes with fields without defaults and fields with defaults, and a carefully selected inheritance order, you can produce an MRO that puts all fields without defaults before those with defaults. The reversed MRO (ignoring object) for Child is: _ParentBase _ChildBase _ParentDefaultsBase Parent _ChildDefaultsBase Note that while Parent doesn't set any new fields, it does inherit the fields from _ParentDefaultsBase and should not end up 'last' in the field listing order; the above order puts _ChildDefaultsBase last so its fields 'win'. The dataclass rules are also satisfied; the classes with fields without defaults (_ParentBase and _ChildBase) precede the classes with fields with defaults (_ParentDefaultsBase and _ChildDefaultsBase). The result is Parent and Child classes with a sane field order, while Child is still a subclass of Parent: >>> from inspect import signature >>> signature(Parent) <Signature (name: str, age: int, ugly: bool = False) -> None> >>> signature(Child) <Signature (name: str, age: int, school: str, ugly: bool = True) -> None> >>> issubclass(Child, Parent) True and so you can create instances of both classes: >>> jack = Parent('jack snr', 32, ugly=True) >>> jack_son = Child('jack jnr', 12, school='havard', ugly=True) >>> jack Parent(name='jack snr', age=32, ugly=True) >>> jack_son Child(name='jack jnr', age=12, school='havard', ugly=True) Another option is to only use fields with defaults; you can still make it an error not to supply a school value, by raising one in __post_init__: _no_default = object() @dataclass class Child(Parent): school: str = _no_default ugly: bool = True def __post_init__(self): if self.school is _no_default: raise TypeError(\"__init__ missing 1 required argument: 'school'\") but this does alter the field order; school ends up after ugly: <Signature (name: str, age: int, ugly: bool = True, school: str = <object object at 0x...>) -> None> and a type hint checker will complain about _no_default not being a string. You can also use the attrs project, which was the project that inspired dataclasses. It uses a different inheritance merging strategy; it pulls overridden fields in a subclass to the end of the fields list, so ['name', 'age', 'ugly'] in the Parent class becomes ['name', 'age', 'school', 'ugly'] in the Child class; by overriding the field with a default, attrs allows the override without needing to do an MRO dance. attrs supports defining fields without type hints, but let's stick to the supported type hinting mode by setting auto_attribs=True: import attr @attr.s(auto_attribs=True) class Parent: name: str age: int ugly: bool = False def print_name(self): print(self.name) def print_age(self): print(self.age) def print_id(self): print(f\"The Name is {self.name} and {self.name} is {self.age} year old\") @attr.s(auto_attribs=True) class Child(Parent): school: str ugly: bool = True"} +{"question_id": 11854847, "score": 344, "creation_date": 1344377464, "tags": ["python", "jupyter-notebook", "ipython"], "instruction": "How can I display an image from a file in Jupyter Notebook?\n\nI would like to use an IPython notebook as a way to interactively analyze some genome charts I am making with Biopython's GenomeDiagram module.
While there is extensive documentation on how to use matplotlib to get graphs inline in IPython notebook, GenomeDiagram uses the ReportLab toolkit which I don't think is supported for inline graphing in IPython. I was thinking, however, that a way around this would be to write out the plot/genome diagram to a file and then open the image inline which would have the same result with something like this: gd_diagram.write(\"test.png\", \"PNG\") display(file=\"test.png\") However, I can't figure out how to do this - or know if it's possible. So does anyone know if images can be opened/displayed in IPython?", "output": "Courtesy of this post, you can do the following: from IPython.display import Image Image(filename='test.png')"} +{"question_id": 17972020, "score": 343, "creation_date": 1375277982, "tags": ["python", "sql", "sqlalchemy", "flask", "flask-sqlalchemy"], "instruction": "How to execute raw SQL in Flask-SQLAlchemy app\n\nHow do you execute raw SQL in SQLAlchemy? I have a python web app that runs on flask and interfaces to the database through SQLAlchemy. I need a way to run the raw SQL. The query involves multiple table joins along with Inline views. I've tried: connection = db.session.connection() connection.execute( ) But I keep getting gateway errors.", "output": "SQLAlchemy 2.0: with engine.connect() as connection: result = connection.execute(text('SELECT * FROM your_table')) # do something with the result.. SQLAlchemy 1.x: from sqlalchemy import text sql = text('select name from penguins') result = db.engine.execute(sql) names = [row[0] for row in result] print names Note that db.engine.execute() is \"connectionless\", which is deprecated in SQLAlchemy 2.0."} +{"question_id": 1630320, "score": 343, "creation_date": 1256644473, "tags": ["for-loop", "python", "idioms", "fencepost"], "instruction": "What is the pythonic way to detect the last element in a 'for' loop?\n\nHow can I treat the last element of the input specially, when iterating with a for loop? In particular, if there is code that should only occur \"between\" elements (and not \"after\" the last one), how can I structure the code? Currently, I write code like so: for i, data in enumerate(data_list): code_that_is_done_for_every_element if i != len(data_list) - 1: code_that_is_done_between_elements How can I simplify or improve this?", "output": "Most of the times it is easier (and cheaper) to make the first iteration the special case instead of the last one: first = True for data in data_list: if first: first = False else: between_items() item() This will work for any iterable, even for those that have no len(): file = open('/path/to/file') for line in file: process_line(line) # No way of telling if this is the last line! Apart from that, I don't think there is a generally superior solution as it depends on what you are trying to do. For example, if you are building a string from a list, it's naturally better to use str.join() than using a for loop \u201cwith special case\u201d. Using the same principle but more compact: for i, line in enumerate(data_list): if i > 0: between_items() item() Looks familiar, doesn't it? :) For @ofko, and others who really need to find out if the current value of an iterable without len() is the last one, you will need to look ahead: def lookahead(iterable): \"\"\"Pass through all values from the given iterable, augmented by the information if there are more values to come after the current one (True), or if it is the last value (False). \"\"\" # Get an iterator and pull the first value. 
it = iter(iterable) try: last = next(it) except StopIteration: return # Run the iterator to exhaustion (starting from the second value). for val in it: # Report the *previous* value (more to come). yield last, True last = val # Report the last value. yield last, False Then you can use it like this: >>> for i, has_more in lookahead(range(3)): ... print(i, has_more) 0 True 1 True 2 False"} +{"question_id": 16096754, "score": 342, "creation_date": 1366342287, "tags": ["python", "list", "filter", "list-comprehension", "nonetype"], "instruction": "remove None value from a list without removing the 0 value\n\nThis was my source I started with. My List L = [0, 23, 234, 89, None, 0, 35, 9] When I run this : L = filter(None, L) I get this results [23, 234, 89, 35, 9] But this is not what I need, what I really need is : [0, 23, 234, 89, 0, 35, 9] Because I'm calculating percentile of the data and the 0 make a lot of difference. How to remove the None value from a list without removing 0 value?", "output": ">>> L = [0, 23, 234, 89, None, 0, 35, 9] >>> [x for x in L if x is not None] [0, 23, 234, 89, 0, 35, 9] Just for fun, here's how you can adapt filter to do this without using a lambda, (I wouldn't recommend this code - it's just for scientific purposes) >>> from operator import is_not >>> from functools import partial >>> L = [0, 23, 234, 89, None, 0, 35, 9] >>> list(filter(partial(is_not, None), L)) [0, 23, 234, 89, 0, 35, 9]"} +{"question_id": 42231161, "score": 342, "creation_date": 1487089307, "tags": ["python", "asynchronous", "async-await", "python-asyncio"], "instruction": "asyncio.gather vs asyncio.wait (vs asyncio.TaskGroup)\n\nasyncio.gather and asyncio.wait seem to have similar uses: I have a bunch of async things that I want to execute/wait for (not necessarily waiting for one to finish before the next one starts). Since Python 3.11 there is yet another similar feature, asyncio.TaskGroup. They use a different syntax, and differ in some details, but it seems very un-pythonic to me to have several functions that have such a huge overlap in functionality. What am I missing?", "output": "Although similar in general cases (\"run and get results for many tasks\"), each function has some specific functionality for other cases (and see also TaskGroup for Python 3.11+ below): asyncio.gather() Returns a Future instance, allowing high level grouping of tasks: import asyncio from pprint import pprint import random async def coro(tag): print(\">\", tag) await asyncio.sleep(random.uniform(1, 3)) print(\"<\", tag) return tag loop = asyncio.get_event_loop() group1 = asyncio.gather(*[coro(\"group 1.{}\".format(i)) for i in range(1, 6)]) group2 = asyncio.gather(*[coro(\"group 2.{}\".format(i)) for i in range(1, 4)]) group3 = asyncio.gather(*[coro(\"group 3.{}\".format(i)) for i in range(1, 10)]) all_groups = asyncio.gather(group1, group2, group3) results = loop.run_until_complete(all_groups) loop.close() pprint(results) All tasks in a group can be cancelled by calling group2.cancel() or even all_groups.cancel(). 
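A side note beyond the original answer: on Python 3.7+ the explicit event-loop management shown above is usually replaced by asyncio.run; a minimal sketch of the same gather pattern, reusing a simplified coro like the one above, might look like:
import asyncio

async def coro(tag):
    await asyncio.sleep(0.1)
    return tag

async def main():
    # gather returns results in the same order as the awaitables passed in
    return await asyncio.gather(*(coro(i) for i in range(3)))

print(asyncio.run(main()))  # [0, 1, 2]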
See also .gather(..., return_exceptions=True), asyncio.wait() Supports waiting to be stopped after the first task is done, or after a specified timeout, allowing lower level precision of operations: import asyncio import random async def coro(tag): print(\">\", tag) await asyncio.sleep(random.uniform(0.5, 5)) print(\"<\", tag) return tag loop = asyncio.get_event_loop() tasks = [coro(i) for i in range(1, 11)] print(\"Get first result:\") finished, unfinished = loop.run_until_complete( asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)) for task in finished: print(task.result()) print(\"unfinished:\", len(unfinished)) print(\"Get more results in 2 seconds:\") finished2, unfinished2 = loop.run_until_complete( asyncio.wait(unfinished, timeout=2)) for task in finished2: print(task.result()) print(\"unfinished2:\", len(unfinished2)) print(\"Get all other results:\") finished3, unfinished3 = loop.run_until_complete(asyncio.wait(unfinished2)) for task in finished3: print(task.result()) loop.close() TaskGroup (Python 3.11+) Update: Python 3.11 introduces TaskGroups which can \"automatically\" await more than one task without gather() or await(): # Python 3.11+ ONLY! async def main(): async with asyncio.TaskGroup() as tg: task1 = tg.create_task(some_coro(...)) task2 = tg.create_task(another_coro(...)) print(\"Both tasks have completed now.\")"} +{"question_id": 3644902, "score": 341, "creation_date": 1283657404, "tags": ["python", "django", "authentication"], "instruction": "How to check if a user is logged in (how to properly use user.is_authenticated)?\n\nI am looking over this website but just can't seem to figure out how to do this as it's not working. I need to check if the current site user is logged in (authenticated), and am trying: request.user.is_authenticated despite being sure that the user is logged in, it returns just: > I'm able to do other requests (from the first section in the url above), such as: request.user.is_active which returns a successful response.", "output": "Update for Django 2.0 and later is_authenticated is a read-only attribute: if request.user.is_authenticated: # do something if the user is authenticated For Django 1.9 and older is_authenticated() was a function. Called like: if request.user.is_authenticated(): # do something if the user is authenticated As Peter Rowell pointed out, what may be tripping you up is that in the default Django template language, you don't tack on parenthesis to call functions. So you may have seen something like this in template code: {% if user.is_authenticated %} However, in Python code, it is indeed a method in the User class. NB: The method was removed in Django 2.0."} +{"question_id": 566746, "score": 340, "creation_date": 1235071039, "tags": ["python", "linux", "console", "terminal", "width"], "instruction": "How to get Linux console window width in Python\n\nIs there a way in python to programmatically determine the width of the console? I mean the number of characters that fits in one line without wrapping, not the pixel width of the window. Edit Looking for a solution that works on Linux", "output": "Not sure why it is in the module shutil, but it landed there in Python 3.3. See: Querying the size of the output terminal >>> import shutil >>> shutil.get_terminal_size((80, 20)) # pass fallback os.terminal_size(columns=87, lines=23) # returns a named-tuple A low-level implementation is in the os module. Cross-platform\u2014works under Linux, Mac OS, and Windows, probably other Unix-likes. 
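As a rough illustration of that lower-level os call (assuming stdout is attached to a real terminal; os.get_terminal_size raises OSError when output is redirected):
import os

try:
    size = os.get_terminal_size()   # named tuple with .columns and .lines
    print(size.columns, size.lines)
except OSError:
    # no terminal attached (e.g. piped output); fall back to a guess
    print(80, 24)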
There's a backport as well, though no longer relevant."} +{"question_id": 3570796, "score": 339, "creation_date": 1282776181, "tags": ["python", "abstract-class", "abc"], "instruction": "Why use Abstract Base Classes in Python?\n\nBecause I am used to the old ways of duck typing in Python, I fail to understand the need for ABC (abstract base classes). The help is good on how to use them. I tried to read the rationale in the PEP, but it went over my head. If I was looking for a mutable sequence container, I would check for __setitem__, or more likely try to use it (EAFP). I haven't come across a real life use for the numbers module, which does use ABCs, but that is the closest I have to understanding. Can anyone explain the rationale to me, please?", "output": "Short version ABCs offer a higher level of semantic contract between clients and the implemented classes. Long version There is a contract between a class and its callers. The class promises to do certain things and have certain properties. There are different levels to the contract. At a very low level, the contract might include the name of a method or its number of parameters. In a staticly-typed language, that contract would actually be enforced by the compiler. In Python, you can use EAFP or type introspection to confirm that the unknown object meets this expected contract. But there are also higher-level, semantic promises in the contract. For example, if there is a __str__() method, it is expected to return a string representation of the object. It could delete all contents of the object, commit the transaction and spit a blank page out of the printer... but there is a common understanding of what it should do, described in the Python manual. That's a special case, where the semantic contract is described in the manual. What should the print() method do? Should it write the object to a printer or a line to the screen, or something else? It depends - you need to read the comments to understand the full contract here. A piece of client code that simply checks that the print() method exists has confirmed part of the contract - that a method call can be made, but not that there is agreement on the higher level semantics of the call. Defining an Abstract Base Class (ABC) is a way of producing a contract between the class implementers and the callers. It isn't just a list of method names, but a shared understanding of what those methods should do. If you inherit from this ABC, you are promising to follow all the rules described in the comments, including the semantics of the print() method. Python's duck-typing has many advantages in flexibility over static-typing, but it doesn't solve all the problems. ABCs offer an intermediate solution between the free-form of Python and the bondage-and-discipline of a staticly-typed language."} +{"question_id": 11968976, "score": 339, "creation_date": 1345032440, "tags": ["python"], "instruction": "List files ONLY in the current directory\n\nIn Python, I only want to list all the files in the current directory ONLY. I do not want files listed from any sub directory or parent. There do seem to be similar solutions out there, but they don't seem to work for me. Here's my code snippet: import os for subdir, dirs, files in os.walk('./'): for file in files: do some stuff print file Let's suppose I have 2 files, holygrail.py and Tim inside my current directory. I have a folder as well and it contains two files - let's call them Arthur and Lancelot - inside it. 
When I run the script, this is what I get: holygrail.py Tim Arthur Lancelot I am happy with holygrail.py and Tim. But the two files, Arthur and Lancelot, I do not want listed.", "output": "Just use os.listdir and os.path.isfile instead of os.walk. Example: import os files = [f for f in os.listdir('.') if os.path.isfile(f)] for f in files: # do something But be careful while applying this to other directory, like files = [f for f in os.listdir(somedir) if os.path.isfile(f)] which would not work because f is not a full path but relative to the current directory. Therefore, for filtering on another directory, do os.path.isfile(os.path.join(somedir, f)) (Thanks Causality for the hint)"} +{"question_id": 26597116, "score": 339, "creation_date": 1414445119, "tags": ["python", "matplotlib", "seaborn"], "instruction": "Seaborn plots not showing up\n\nI'm sure I'm forgetting something very simple, but I cannot get certain plots to work with Seaborn. If I do: import seaborn as sns Then any plots that I create as usual with matplotlib get the Seaborn styling (with the grey grid in the background). However, if I try to do one of the examples, such as: In [1]: import seaborn as sns In [2]: sns.set() In [3]: df = sns.load_dataset('iris') In [4]: sns.pairplot(df, hue='species', size=2.5) Out[4]: The pairplot function returns a PairGrid object, but the plot doesn't show up. I'm a little confused because matplotlib seems to be functioning properly, and the Seaborn styles are applied to other matplotlib plots, but the Seaborn functions don't seem to do anything. Does anybody have any idea what might be the problem?", "output": "Plots created using seaborn need to be displayed like ordinary matplotlib plots. This can be done using the plt.show() function from matplotlib. Originally I posted the solution to use the already imported matplotlib object from seaborn (sns.plt.show()) however this is considered to be a bad practice. Therefore, simply directly import the _matplotlib.pyplot_ module and show your plots with import matplotlib.pyplot as plt plt.show() If the IPython notebook is used the inline backend can be invoked to remove the necessity of calling show after each plot. The respective magic is %matplotlib inline"} +{"question_id": 6475328, "score": 339, "creation_date": 1308967454, "tags": ["python", "file-io"], "instruction": "How can I read large text files line by line, without loading them into memory?\n\nI want to read a large file (>5GB), line by line, without loading its entire contents into memory. I cannot use readlines() since it creates a very large list in memory.", "output": "Use a for loop on a file object to read it line-by-line. Use with open(...) to let a context manager ensure that the file is closed after reading: with open(\"log.txt\") as infile: for line in infile: print(line)"} +{"question_id": 39491420, "score": 338, "creation_date": 1473858832, "tags": ["python", "json", "parsing"], "instruction": "Python/Json:Expecting property name enclosed in double quotes\n\nI've been trying to figure out a good way to load JSON objects in Python. I send this json data: {'http://example.org/about': {'http://purl.org/dc/terms/title': [{'type': 'literal', 'value': \"Anna's Homepage\"}]}} to the backend where it will be received as a string then I used json.loads(data) to parse it. 
But each time I got the same exception : ValueError: Expecting property name enclosed in double quotes: line 1 column 2 (char 1) I googled it but nothing seems to work besides this solution json.loads(json.dumps(data)) which personally seems for me not that efficient since it accept any kind of data even the ones that are not in json format. Any suggestions will be much appreciated.", "output": "This is not JSON: { 'http://example.org/about': { 'http://purl.org/dc/terms/title': [ {'type': 'literal', 'value': \"Anna's Homepage\"} ] } } This is JSON: { \"http://example.org/about\": { \"http://purl.org/dc/terms/title\": [ {\"type\": \"literal\", \"value\": \"Anna's Homepage\"} ] } } JSON specification - RFC7159 states that a string begins and ends with quotation mark. That is \". Single quote ' has no semantic meaning in JSON and is allowed only inside a string."} +{"question_id": 9856683, "score": 338, "creation_date": 1332632491, "tags": ["python"], "instruction": "Using Python's os.path, how do I go up one directory?\n\nI recently upgrade Django from v1.3.1 to v1.4. In my old settings.py I have TEMPLATE_DIRS = ( os.path.join(os.path.dirname( __file__ ), 'templates').replace('\\\\', '/'), # Put strings here, like \"/home/html/django_templates\" or \"C:/www/django/templates\". # Always use forward slashes, even on Windows. # Don't forget to use absolute paths, not relative paths. ) This will point to /Users/hobbes3/Sites/mysite/templates, but because Django v1.4 moved the project folder to the same level as the app folders, my settings.py file is now in /Users/hobbes3/Sites/mysite/mysite/ instead of /Users/hobbes3/Sites/mysite/. How do I use os.path to look at a directory one level above from __file__. In other words, I want /Users/hobbes3/Sites/mysite/mysite/settings.py to find /Users/hobbes3/Sites/mysite/templates using relative paths.", "output": "os.path.abspath(os.path.join(os.path.dirname( __file__ ), '..', 'templates')) You can also use normpath to clean up the path, rather than abspath. However, in this situation, Django expects an absolute path rather than a relative path. For cross platform compatability, use os.pardir instead of '..'."} +{"question_id": 4674473, "score": 338, "creation_date": 1294869499, "tags": ["python", "arrays", "numpy", "slice", "valueerror"], "instruction": "ValueError: setting an array element with a sequence\n\nWhy do the following code samples: np.array([[1, 2], [2, 3, 4]]) np.array([1.2, \"abc\"], dtype=float) all give the following error? ValueError: setting an array element with a sequence.", "output": "Possible reason 1: trying to create a jagged array You may be creating an array from a list that isn't shaped like a multi-dimensional array: numpy.array([[1, 2], [2, 3, 4]]) # wrong! numpy.array([[1, 2], [2, [3, 4]]]) # wrong! In these examples, the argument to numpy.array contains sequences of different lengths. Those will yield this error message because the input list is not shaped like a \"box\" that can be turned into a multidimensional array. Possible reason 2: providing elements of incompatible types For example, providing a string as an element in an array of type float: numpy.array([1.2, \"abc\"], dtype=float) # wrong! 
If you really want to have a NumPy array containing both strings and floats, you could use the dtype object, which allows the array to hold arbitrary Python objects: numpy.array([1.2, \"abc\"], dtype=object)"} +{"question_id": 7152762, "score": 337, "creation_date": 1314042446, "tags": ["python", "io", "file-writing"], "instruction": "How to redirect 'print' output to a file?\n\nI want to redirect the print to a .txt file using Python. I have a for loop, which will print the output for each of my .bam file while I want to redirect all output to one file. So I tried to put: f = open('output.txt','w') sys.stdout = f at the beginning of my script. However I get nothing in the .txt file. My script is: #!/usr/bin/python import os,sys import subprocess import glob from os import path f = open('output.txt','w') sys.stdout = f path= '/home/xxx/nearline/bamfiles' bamfiles = glob.glob(path + '/*.bam') for bamfile in bamfiles: filename = bamfile.split('/')[-1] print 'Filename:', filename samtoolsin = subprocess.Popen([\"/share/bin/samtools/samtools\",\"view\",bamfile], stdout=subprocess.PIPE,bufsize=1) linelist= samtoolsin.stdout.readlines() print 'Readlines finished!' So what's the problem? Any other way besides this sys.stdout? I need my result look like: Filename: ERR001268.bam Readlines finished! Mean: 233 SD: 10 Interval is: (213, 252)", "output": "The most obvious way to do this would be to print to a file object: with open('out.txt', 'w') as f: print('Filename:', filename, file=f) # Python 3.x # print >> f, 'Filename:', filename # Python 2.x However, redirecting stdout also works for me. It is probably fine for a one-off script such as this: import sys orig_stdout = sys.stdout f = open('out.txt', 'w') sys.stdout = f for i in range(2): print('i = ', i) sys.stdout = orig_stdout f.close() Since Python 3.4 there's a simple context manager available to do this in the standard library: from contextlib import redirect_stdout with open('out.txt', 'w') as f: with redirect_stdout(f): print('data') Redirecting externally from the shell itself is another option, and often preferable: ./script.py > out.txt Other questions: What is the first filename in your script? I don't see it initialized. My first guess is that glob doesn't find any bamfiles, and therefore the for loop doesn't run. Check that the folder exists, and print out bamfiles in your script. Also, use os.path.join and os.path.basename to manipulate paths and filenames."} +{"question_id": 2801882, "score": 337, "creation_date": 1273486953, "tags": ["python", "matplotlib", "graph"], "instruction": "Generating a PNG with matplotlib when DISPLAY is undefined\n\nI am trying to use networkx with Python. When I run this program it get this error. Is there anything missing? 
#!/usr/bin/env python import networkx as nx import matplotlib import matplotlib.pyplot import matplotlib.pyplot as plt G=nx.Graph() G.add_node(1) G.add_nodes_from([2,3,4,5,6,7,8,9,10]) #nx.draw_graphviz(G) #nx_write_dot(G, 'node.png') nx.draw(G) plt.savefig(\"/var/www/node.png\") Traceback (most recent call last): File \"graph.py\", line 13, in nx.draw(G) File \"/usr/lib/pymodules/python2.5/networkx/drawing/nx_pylab.py\", line 124, in draw cf=pylab.gcf() File \"/usr/lib/pymodules/python2.5/matplotlib/pyplot.py\", line 276, in gcf return figure() File \"/usr/lib/pymodules/python2.5/matplotlib/pyplot.py\", line 254, in figure **kwargs) File \"/usr/lib/pymodules/python2.5/matplotlib/backends/backend_tkagg.py\", line 90, in new_figure_manager window = Tk.Tk() File \"/usr/lib/python2.5/lib-tk/Tkinter.py\", line 1650, in __init__ self.tk = _tkinter.create(screenName, baseName, className, interactive, wantobjects, useTk, sync, use) _tkinter.TclError: no display name and no $DISPLAY environment variable I get a different error now: #!/usr/bin/env python import networkx as nx import matplotlib import matplotlib.pyplot import matplotlib.pyplot as plt matplotlib.use('Agg') G=nx.Graph() G.add_node(1) G.add_nodes_from([2,3,4,5,6,7,8,9,10]) #nx.draw_graphviz(G) #nx_write_dot(G, 'node.png') nx.draw(G) plt.savefig(\"/var/www/node.png\") /usr/lib/pymodules/python2.5/matplotlib/__init__.py:835: UserWarning: This call to matplotlib.use() has no effect because the the backend has already been chosen; matplotlib.use() must be called *before* pylab, matplotlib.pyplot, or matplotlib.backends is imported for the first time. if warn: warnings.warn(_use_error_msg) Traceback (most recent call last): File \"graph.py\", line 15, in nx.draw(G) File \"/usr/lib/python2.5/site-packages/networkx-1.2.dev-py2.5.egg/networkx/drawing/nx_pylab.py\", line 124, in draw cf=pylab.gcf() File \"/usr/lib/pymodules/python2.5/matplotlib/pyplot.py\", line 276, in gcf return figure() File \"/usr/lib/pymodules/python2.5/matplotlib/pyplot.py\", line 254, in figure **kwargs) File \"/usr/lib/pymodules/python2.5/matplotlib/backends/backend_tkagg.py\", line 90, in new_figure_manager window = Tk.Tk() File \"/usr/lib/python2.5/lib-tk/Tkinter.py\", line 1650, in __init__ self.tk = _tkinter.create(screenName, baseName, className, interactive, wantobjects, useTk, sync, use) _tkinter.TclError: no display name and no $DISPLAY environment variable", "output": "The main problem is that (on your system) matplotlib chooses an x-using backend by default. I just had the same problem on one of my servers. The solution for me was to add the following code in a place that gets read before any other pylab/matplotlib/pyplot import: import matplotlib # Force matplotlib to not use any Xwindows backend. matplotlib.use('Agg') The alternative is to set it in your .matplotlibrc"} +{"question_id": 3759981, "score": 336, "creation_date": 1285069777, "tags": ["python", "flask", "ip-address", "werkzeug"], "instruction": "Get IP address of visitors using Flask for Python\n\nI'm making a website where users can log on and download files, using the Flask micro-framework (based on Werkzeug) which uses Python (2.6 in my case). I need to get the IP address of users when they log on (for logging purposes). Does anyone know how to do this? Surely there is a way to do it with Python?", "output": "See the documentation on how to access the Request object and then get from this same Request object, the attribute remote_addr. Code example from flask import request from flask import jsonify @app.route(\"/get_my_ip\", methods=[\"GET\"]) def get_my_ip(): return jsonify({'ip': request.remote_addr}), 200 For more information see the Werkzeug documentation."} +{"question_id": 14257373, "score": 336, "creation_date": 1357818878, "tags": ["python", "csv", "csv-header"], "instruction": "How to skip the headers when processing a csv file using Python?\n\nI am using below referred code to edit a csv using Python. Functions called in the code form upper part of the code. Problem: I want the below referred code to start editing the csv from 2nd row, I want it to exclude 1st row which contains headers. Right now it is applying the functions on 1st row only and my header row is getting changed. in_file = open(\"tmob_notcleaned.csv\", \"rb\") reader = csv.reader(in_file) out_file = open(\"tmob_cleaned.csv\", \"wb\") writer = csv.writer(out_file) row = 1 for row in reader: row[13] = handle_color(row[10])[1].replace(\" - \",\"\").strip() row[10] = handle_color(row[10])[0].replace(\"-\",\"\").replace(\"(\",\"\").replace(\")\",\"\").strip() row[14] = handle_gb(row[10])[1].replace(\"-\",\"\").replace(\" \",\"\").replace(\"GB\",\"\").strip() row[10] = handle_gb(row[10])[0].strip() row[9] = handle_oem(row[10])[1].replace(\"Blackberry\",\"RIM\").replace(\"TMobile\",\"T-Mobile\").strip() row[15] = handle_addon(row[10])[1].strip() row[10] = handle_addon(row[10])[0].replace(\" by\",\"\").replace(\"FREE\",\"\").strip() writer.writerow(row) in_file.close() out_file.close() I tried to solve this problem by initializing row variable to 1 but it didn't work.
Please help me in solving this issue.", "output": "Your reader variable is an iterable, by looping over it you retrieve the rows. To make it skip one item before your loop, simply call next(reader, None) and ignore the return value. You can also simplify your code a little; use the opened files as context managers to have them closed automatically: with open(\"tmob_notcleaned.csv\", \"rb\") as infile, open(\"tmob_cleaned.csv\", \"wb\") as outfile: reader = csv.reader(infile) next(reader, None) # skip the headers writer = csv.writer(outfile) for row in reader: # process each row writer.writerow(row) # no need to close, the files are closed automatically when you get to this point. If you wanted to write the header to the output file unprocessed, that's easy too, pass the output of next() to writer.writerow(): headers = next(reader, None) # returns the headers or `None` if the input is empty if headers: writer.writerow(headers)"} +{"question_id": 33068055, "score": 336, "creation_date": 1444585007, "tags": ["python", "amazon-web-services", "boto", "boto3"], "instruction": "How to handle errors with boto3?\n\nI am trying to figure how to do proper error handling with boto3. I am trying to create an IAM user: def create_user(username, iam_conn): try: user = iam_conn.create_user(UserName=username) return user except Exception as e: return e When the call to create_user succeeds, I get a neat object that contains the http status code of the API call and the data of the newly created user. Example: {'ResponseMetadata': {'HTTPStatusCode': 200, 'RequestId': 'omitted' }, u'User': {u'Arn': 'arn:aws:iam::omitted:user/omitted', u'CreateDate': datetime.datetime(2015, 10, 11, 17, 13, 5, 882000, tzinfo=tzutc()), u'Path': '/', u'UserId': 'omitted', u'UserName': 'omitted' } } This works great. But when this fails (like if the user already exists), I just get an object of type botocore.exceptions.ClientError with only text to tell me what went wrong. Example: ClientError('An error occurred (EntityAlreadyExists) when calling the CreateUser operation: User with name omitted already exists.',) This (AFAIK) makes error handling very hard because I can't just switch on the resulting http status code (409 for user already exists according to the AWS API docs for IAM). This makes me think that I must be doing something the wrong way. The optimal way would be for boto3 to never throw exceptions, but juts always return an object that reflects how the API call went. Can anyone enlighten me on this issue or point me in the right direction?", "output": "Use the response contained within the exception. Here is an example: import boto3 from botocore.exceptions import ClientError try: iam = boto3.client('iam') user = iam.create_user(UserName='fred') print(\"Created user: %s\" % user) except ClientError as e: if e.response['Error']['Code'] == 'EntityAlreadyExists': print(\"User already exists\") else: print(\"Unexpected error: %s\" % e) The response dict in the exception will contain the following: ['Error']['Code'] e.g. 'EntityAlreadyExists' or 'ValidationError' ['ResponseMetadata']['HTTPStatusCode'] e.g. 400 ['ResponseMetadata']['RequestId'] e.g. 'd2b06652-88d7-11e5-99d0-812348583a35' ['Error']['Message'] e.g. \"An error occurred (EntityAlreadyExists) ...\" ['Error']['Type'] e.g. 
'Sender' For more information see: boto3 error handling botocore error handling [Updated: 2018-03-07] The AWS Python SDK has begun to expose service exceptions on clients (though not on resources) that you can explicitly catch, so it is now possible to write that code like this: import botocore import boto3 try: iam = boto3.client('iam') user = iam.create_user(UserName='fred') print(\"Created user: %s\" % user) except iam.exceptions.EntityAlreadyExistsException: print(\"User already exists\") except botocore.exceptions.ParamValidationError as e: print(\"Parameter validation error: %s\" % e) except botocore.exceptions.ClientError as e: print(\"Unexpected error: %s\" % e) Unfortunately, there is currently no documentation for these errors/exceptions but you can get a list of the core errors as follows: import botocore import boto3 [e for e in dir(botocore.exceptions) if e.endswith('Error')] Note that you must import both botocore and boto3. If you only import botocore then you will find that botocore has no attribute named exceptions. This is because the exceptions are dynamically populated into botocore by boto3. You can get a list of service-specific exceptions as follows (replace iam with the relevant service as needed): import boto3 iam = boto3.client('iam') [e for e in dir(iam.exceptions) if e.endswith('Exception')] [Updated: 2021-09-07] In addition to the aforementioned client exception method, there is also a third-party helper package named aws-error-utils."} +{"question_id": 14984119, "score": 334, "creation_date": 1361375357, "tags": ["python", "pandas"], "instruction": "python pandas remove duplicate columns\n\nWhat is the easiest way to remove duplicate columns from a dataframe? I am reading a text file that has duplicate columns via: import pandas as pd df=pd.read_table(fname) The column names are: Time, Time Relative, N2, Time, Time Relative, H2, etc... All the Time and Time Relative columns contain the same data. I want: Time, Time Relative, N2, H2 All my attempts at dropping, deleting, etc such as: df=df.T.drop_duplicates().T Result in uniquely valued index errors: Reindexing only valid with uniquely valued index objects Sorry for being a Pandas noob. Any Suggestions would be appreciated. Additional Details Pandas version: 0.9.0 Python Version: 2.7.3 Windows 7 (installed via Pythonxy 2.7.3.0) data file (note: in the real file, columns are separated by tabs, here they are separated by 4 spaces): Time Time Relative [s] N2[%] Time Time Relative [s] H2[ppm] 2/12/2013 9:20:55 AM 6.177 9.99268e+001 2/12/2013 9:20:55 AM 6.177 3.216293e-005 2/12/2013 9:21:06 AM 17.689 9.99296e+001 2/12/2013 9:21:06 AM 17.689 3.841667e-005 2/12/2013 9:21:18 AM 29.186 9.992954e+001 2/12/2013 9:21:18 AM 29.186 3.880365e-005 ... etc ... 2/12/2013 2:12:44 PM 17515.269 9.991756+001 2/12/2013 2:12:44 PM 17515.269 2.800279e-005 2/12/2013 2:12:55 PM 17526.769 9.991754e+001 2/12/2013 2:12:55 PM 17526.769 2.880386e-005 2/12/2013 2:13:07 PM 17538.273 9.991797e+001 2/12/2013 2:13:07 PM 17538.273 3.131447e-005", "output": "Here's a one line solution to remove columns based on duplicate column names: df = df.loc[:,~df.columns.duplicated()].copy() How it works: Suppose the columns of the data frame are ['alpha','beta','alpha'] df.columns.duplicated() returns a boolean array: a True or False for each column. If it is False then the column name is unique up to that point, if it is True then the column name is duplicated earlier. For example, using the given example, the returned value would be [False,False,True]. 
Pandas allows one to index using boolean values whereby it selects only the True values. Since we want to keep the unduplicated columns, we need the above boolean array to be flipped (ie [True, True, False] = ~[False,False,True]) Finally, df.loc[:,[True,True,False]] selects only the non-duplicated columns using the aforementioned indexing capability. The final .copy() is there to copy the dataframe to (mostly) avoid getting errors about trying to modify an existing dataframe later down the line. Note: the above only checks columns names, not column values. To remove duplicated indexes Since it is similar enough, do the same thing on the index: df = df.loc[~df.index.duplicated(),:].copy() To remove duplicates by checking values without transposing Update and caveat: please be careful in applying this. Per the counter-example provided by DrWhat in the comments, this solution may not have the desired outcome in all cases. df = df.loc[:,~df.apply(lambda x: x.duplicated(),axis=1).all()].copy() This avoids the issue of transposing. Is it fast? No. Does it work? In some cases. Here, try it on this: # create a large(ish) dataframe ldf = pd.DataFrame(np.random.randint(0,100,size= (736334,1312))) #to see size in gigs #ldf.memory_usage().sum()/1e9 #it's about 3 gigs # duplicate a column ldf.loc[:,'dup'] = ldf.loc[:,101] # take out duplicated columns by values ldf = ldf.loc[:,~ldf.apply(lambda x: x.duplicated(),axis=1).all()].copy()"} +{"question_id": 13957829, "score": 333, "creation_date": 1355938048, "tags": ["python", "keyword", "raise"], "instruction": "How to use \"raise\" keyword in Python\n\nI have read the official definition of \"raise\", but I still don't quite understand what it does. In simplest terms, what is \"raise\"? Example usage would help.", "output": "It has two purposes. jackcogdill has given the first one: It's used for raising your own errors. if something: raise Exception('My error!') The second is to reraise the current exception in an exception handler, so that it can be handled further up the call stack. try: generate_exception() except SomeException as e: if not can_handle(e): raise handle_exception(e)"} +{"question_id": 16597265, "score": 332, "creation_date": 1368737549, "tags": ["python", "pandas"], "instruction": "Appending to an empty DataFrame in Pandas?\n\nIs it possible to append to an empty data frame that doesn't contain any indices or columns? I have tried to do this, but keep getting an empty dataframe at the end. e.g. import pandas as pd df = pd.DataFrame() data = ['some kind of data here' --> I have checked the type already, and it is a dataframe] df.append(data) The result looks like this: Empty DataFrame Columns: [] Index: []", "output": "The answers are very useful, but since pandas.DataFrame.append was deprecated (as already mentioned by various users), and the answers using pandas.concat are not \"Runnable Code Snippets\" I would like to add the following snippet: import pandas as pd df = pd.DataFrame(columns =['name','age']) row_to_append = pd.DataFrame([{'name':\"Alice\", 'age':\"25\"},{'name':\"Bob\", 'age':\"32\"}]) df = pd.concat([df,row_to_append]) So df is now: name age 0 Alice 25 1 Bob 32"} +{"question_id": 17015230, "score": 332, "creation_date": 1370821050, "tags": ["python"], "instruction": "Are nested try/except blocks in Python a good programming practice?\n\nI'm writing my own container, which needs to give access to a dictionary inside by attribute calls. 
The typical use of the container would be like this: dict_container = DictContainer() dict_container['foo'] = bar ... print dict_container.foo I know that it might be stupid to write something like this, but that's the functionality I need to provide. I was thinking about implementing this in a following way: def __getattribute__(self, item): try: return object.__getattribute__(item) except AttributeError: try: return self.dict[item] except KeyError: print \"The object doesn't have such attribute\" I'm not sure whether nested try/except blocks are a good practice, so another way would be to use hasattr() and has_key(): def __getattribute__(self, item): if hasattr(self, item): return object.__getattribute__(item) else: if self.dict.has_key(item): return self.dict[item] else: raise AttributeError(\"some customised error\") Or to use one of them and one try catch block like this: def __getattribute__(self, item): if hasattr(self, item): return object.__getattribute__(item) else: try: return self.dict[item] except KeyError: raise AttributeError(\"some customised error\") Which option is most Pythonic and elegant?", "output": "Your first example is perfectly fine. Even the official Python documentation recommends this style known as EAFP. Personally, I prefer to avoid nesting when it's not necessary: def __getattribute__(self, item): try: return object.__getattribute__(self, item) except AttributeError: pass # Fallback to dict try: return self.dict[item] except KeyError: raise AttributeError(\"The object doesn't have such attribute\") from None PS. has_key() has been deprecated for a long time in Python 2. Use item in self.dict instead."} +{"question_id": 1175208, "score": 331, "creation_date": 1248394922, "tags": ["python", "camelcasing"], "instruction": "Elegant Python function to convert CamelCase to snake_case?\n\nExample: >>> convert('CamelCase') 'camel_case'", "output": "Camel case to snake case import re name = 'CamelCaseName' name = re.sub(r'(?<!^)(?=[A-Z])', '_', name).lower() >>> str.__bases__ (<type 'basestring'>,) Another example: >>> class A(object): ... pass ... >>> class B(object): ... pass ... >>> class C(A, B): ... pass ... >>> C.__bases__ (<class '__main__.A'>, <class '__main__.B'>)"} +{"question_id": 14657241, "score": 330, "creation_date": 1359764528, "tags": ["python", "pandas", "duplicates"], "instruction": "How do I get a list of all the duplicate items using pandas in python?\n\nI have a list of items that likely has some export issues. I would like to get a list of the duplicate items so I can manually compare them. When I try to use pandas duplicated method, it only returns the first duplicate. Is there a way to get all of the duplicates and not just the first one?
A small subsection of my dataset looks like this: ID,ENROLLMENT_DATE,TRAINER_MANAGING,TRAINER_OPERATOR,FIRST_VISIT_DATE 1536D,12-Feb-12,\"06DA1B3-Lebanon NH\",,15-Feb-12 F15D,18-May-12,\"06405B2-Lebanon NH\",,25-Jul-12 8096,8-Aug-12,\"0643D38-Hanover NH\",\"0643D38-Hanover NH\",25-Jun-12 A036,1-Apr-12,\"06CB8CF-Hanover NH\",\"06CB8CF-Hanover NH\",9-Aug-12 8944,19-Feb-12,\"06D26AD-Hanover NH\",,4-Feb-12 1004E,8-Jun-12,\"06388B2-Lebanon NH\",,24-Dec-11 11795,3-Jul-12,\"0649597-White River VT\",\"0649597-White River VT\",30-Mar-12 30D7,11-Nov-12,\"06D95A3-Hanover NH\",\"06D95A3-Hanover NH\",30-Nov-11 3AE2,21-Feb-12,\"06405B2-Lebanon NH\",,26-Oct-12 B0FE,17-Feb-12,\"06D1B9D-Hartland VT\",,16-Feb-12 127A1,11-Dec-11,\"064456E-Hanover NH\",\"064456E-Hanover NH\",11-Nov-12 161FF,20-Feb-12,\"0643D38-Hanover NH\",\"0643D38-Hanover NH\",3-Jul-12 A036,30-Nov-11,\"063B208-Randolph VT\",\"063B208-Randolph VT\", 475B,25-Sep-12,\"06D26AD-Hanover NH\",,5-Nov-12 151A3,7-Mar-12,\"06388B2-Lebanon NH\",,16-Nov-12 CA62,3-Jan-12,,, D31B,18-Dec-11,\"06405B2-Lebanon NH\",,9-Jan-12 20F5,8-Jul-12,\"0669C50-Randolph VT\",,3-Feb-12 8096,19-Dec-11,\"0649597-White River VT\",\"0649597-White River VT\",9-Apr-12 14E48,1-Aug-12,\"06D3206-Hanover NH\",, 177F8,20-Aug-12,\"063B208-Randolph VT\",\"063B208-Randolph VT\",5-May-12 553E,11-Oct-12,\"06D95A3-Hanover NH\",\"06D95A3-Hanover NH\",8-Mar-12 12D5F,18-Jul-12,\"0649597-White River VT\",\"0649597-White River VT\",2-Nov-12 C6DC,13-Apr-12,\"06388B2-Lebanon NH\",, 11795,27-Feb-12,\"0643D38-Hanover NH\",\"0643D38-Hanover NH\",19-Jun-12 17B43,11-Aug-12,,,22-Oct-12 A036,11-Aug-12,\"06D3206-Hanover NH\",,19-Jun-12 My code looks like this currently: df_bigdata_duplicates = df_bigdata[df_bigdata.duplicated(cols='ID')] There area a couple duplicate items. But, when I use the above code, I only get the first item. In the API reference, I see how I can get the last item, but I would like to have all of them so I can visually inspect them to see why I am getting the discrepancy. So, in this example I would like to get all three A036 entries and both 11795 entries and any other duplicated entries, instead of the just first one. Any help is most appreciated.", "output": "Method #1: print all rows where the ID is one of the IDs in duplicated: >>> import pandas as pd >>> df = pd.read_csv(\"dup.csv\") >>> ids = df[\"ID\"] >>> df[ids.isin(ids[ids.duplicated()])].sort_values(\"ID\") ID ENROLLMENT_DATE TRAINER_MANAGING TRAINER_OPERATOR FIRST_VISIT_DATE 24 11795 27-Feb-12 0643D38-Hanover NH 0643D38-Hanover NH 19-Jun-12 6 11795 3-Jul-12 0649597-White River VT 0649597-White River VT 30-Mar-12 18 8096 19-Dec-11 0649597-White River VT 0649597-White River VT 9-Apr-12 2 8096 8-Aug-12 0643D38-Hanover NH 0643D38-Hanover NH 25-Jun-12 12 A036 30-Nov-11 063B208-Randolph VT 063B208-Randolph VT NaN 3 A036 1-Apr-12 06CB8CF-Hanover NH 06CB8CF-Hanover NH 9-Aug-12 26 A036 11-Aug-12 06D3206-Hanover NH NaN 19-Jun-12 but I couldn't think of a nice way to prevent repeating ids so many times. I prefer method #2: groupby on the ID. 
>>> pd.concat(g for _, g in df.groupby(\"ID\") if len(g) > 1) ID ENROLLMENT_DATE TRAINER_MANAGING TRAINER_OPERATOR FIRST_VISIT_DATE 6 11795 3-Jul-12 0649597-White River VT 0649597-White River VT 30-Mar-12 24 11795 27-Feb-12 0643D38-Hanover NH 0643D38-Hanover NH 19-Jun-12 2 8096 8-Aug-12 0643D38-Hanover NH 0643D38-Hanover NH 25-Jun-12 18 8096 19-Dec-11 0649597-White River VT 0649597-White River VT 9-Apr-12 3 A036 1-Apr-12 06CB8CF-Hanover NH 06CB8CF-Hanover NH 9-Aug-12 12 A036 30-Nov-11 063B208-Randolph VT 063B208-Randolph VT NaN 26 A036 11-Aug-12 06D3206-Hanover NH NaN 19-Jun-12"} +{"question_id": 33978, "score": 330, "creation_date": 1219985971, "tags": ["python", "performance", "memory-profiling"], "instruction": "Find out how much memory is being used by an object in Python\n\nHow would you go about finding out how much memory is being used by an object? I know it is possible to find out how much is used by a block of code, but not by an instantiated object (anytime during its life), which is what I want.", "output": "There's no easy way to find out the memory size of a python object. One of the problems you may find is that Python objects - like lists and dicts - may have references to other python objects (in this case, what would your size be? The size containing the size of each object or not?). There are some pointers overhead and internal structures related to object types and garbage collection. Finally, some python objects have non-obvious behaviors. For instance, lists reserve space for more objects than they have, most of the time; dicts are even more complicated since they can operate in different ways (they have a different implementation for small number of keys and sometimes they over allocate entries). There is a big chunk of code (and an updated big chunk of code) out there to try to best approximate the size of a python object in memory. You may also want to check some old description about PyObject (the internal C struct that represents virtually all python objects)."} +{"question_id": 35898734, "score": 329, "creation_date": 1457545124, "tags": ["python", "macos", "pip", "macports"], "instruction": "pip installs packages successfully, but executables are not found from the command line\n\nI installed Python 2.7 and pip following these instructions, on Mac OS X v10.10.3 (Yosemite). I can successfully install packages and import them inside my Python environment and Python scripts. However, any executable associated with a package that can be called from the command line in the terminal are not found. For example, I tried to install a package called \"rosdep\" as instructed here. I can run sudo pip install -U rosdep which installs without errors and the corresponding files are located in /opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages. However, sudo rosdep init subsequently reports that the rosdep command is not found. I even tried adding the above site-packages path to my $PATH, but the executables are still not found on the command line, even though the packages work perfectly from within python. Why does this happen, and how can I fix it? See also: Unable to import a module that is definitely installed - these two problems often have the same root cause.", "output": "Check your $PATH environment variable. tox has a command line mode: $ pip list | grep tox tox (2.3.1) Where is it? 
(The 2.7 stuff doesn't matter much here, sub in any 3.x and pip's behaving pretty much the same way) $ which tox /opt/local/Library/Frameworks/Python.framework/Versions/2.7/bin/tox And what's in my $PATH? $ echo $PATH /opt/chefdk/bin:/opt/chefdk/embedded/bin:/opt/local/bin:..../opt/local/Library/Frameworks/Python.framework/Versions/2.7/bin... Notice the /opt/local/Library/Frameworks/Python.framework/Versions/2.7/bin? That's what allows finding my pip-installed stuff Now, to see where things are from Python, try doing this (substitute rosdep for tox). $python >>> import tox >>> tox.__file__ that prints out: '/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tox/__init__.pyc' Now, cd to the directory right above lib in the above. Do you see a bin directory? Do you see rosdep in that bin? If so, try adding the bin to your $PATH. $ cd /opt/local/Library/Frameworks/Python.framework/Versions/2.7 $ ls -1 Output: Headers Python Resources bin include lib man share"} +{"question_id": 12201577, "score": 327, "creation_date": 1346344657, "tags": ["python", "matplotlib"], "instruction": "How can I convert an RGB image into grayscale in Python?\n\nI'm trying to use matplotlib to read in an RGB image and convert it to grayscale. In matlab I use this: img = rgb2gray(imread('image.png')); In the matplotlib tutorial they don't cover it. They just read in the image import matplotlib.image as mpimg img = mpimg.imread('image.png') and then they slice the array, but that's not the same thing as converting RGB to grayscale from what I understand. lum_img = img[:,:,0] I find it hard to believe that numpy or matplotlib doesn't have a built-in function to convert from rgb to gray. Isn't this a common operation in image processing? I wrote a very simple function that works with the image imported using imread in 5 minutes. It's horribly inefficient, but that's why I was hoping for a professional implementation built-in. Sebastian has improved my function, but I'm still hoping to find the built-in one. matlab's (NTSC/PAL) implementation: import numpy as np def rgb2gray(rgb): r, g, b = rgb[:,:,0], rgb[:,:,1], rgb[:,:,2] gray = 0.2989 * r + 0.5870 * g + 0.1140 * b return gray", "output": "How about doing it with Pillow: from PIL import Image img = Image.open('image.png').convert('L') img.save('greyscale.png') If an alpha (transparency) channel is present in the input image and should be preserved, use mode LA: img = Image.open('image.png').convert('LA') Using matplotlib and the formula Y' = 0.2989 R + 0.5870 G + 0.1140 B you could do: import numpy as np import matplotlib.pyplot as plt import matplotlib.image as mpimg def rgb2gray(rgb): return np.dot(rgb[...,:3], [0.2989, 0.5870, 0.1140]) img = mpimg.imread('image.png') gray = rgb2gray(img) plt.imshow(gray, cmap=plt.get_cmap('gray'), vmin=0, vmax=1) plt.show()"} +{"question_id": 32072076, "score": 327, "creation_date": 1439899849, "tags": ["python", "pandas", "sorting", "dataframe", "unique"], "instruction": "Find the unique values in a column and then sort them\n\nI have a pandas dataframe. I want to print the unique values of one of its columns in ascending order. This is how I am doing it: import pandas as pd df = pd.DataFrame({'A':[1,1,3,2,6,2,8]}) a = df['A'].unique() print a.sort() The problem is that I am getting a None for the output.", "output": "sorted(iterable): Return a new sorted list from the items in iterable. 
CODE import pandas as pd df = pd.DataFrame({'A':[1,1,3,2,6,2,8]}) a = df['A'].unique() print(sorted(a)) OUTPUT [1, 2, 3, 6, 8]"} +{"question_id": 26502775, "score": 327, "creation_date": 1413964928, "tags": ["python", "pycharm"], "instruction": "Simplify Chained Comparison\n\nI have an integer value x, and I need to check if it is between a start and end values, so I write the following statements: if x >= start and x <= end: # do stuff This statement gets underlined, and the tooltip tells me that I must simplify chained comparison As far as I can tell, that comparison is about as simple as they come. What have I missed here?", "output": "In Python you can \"chain\" comparison operations which just means they are \"and\"ed together. In your case, it'd be like this: if start <= x <= end: Reference: https://docs.python.org/3/reference/expressions.html#comparisons"} +{"question_id": 18351951, "score": 326, "creation_date": 1377072138, "tags": ["python", "string", "list"], "instruction": "Check if string ends with one of the strings from a list\n\nWhat is the pythonic way of writing the following code? extensions = ['.mp3','.avi'] file_name = 'test.mp3' for extension in extensions: if file_name.endswith(extension): #do stuff I have a vague memory that the explicit declaration of the for loop can be avoided and be written in the if condition. Is this true?", "output": "str.endswith also accepts a tuple: >>> 'test.mp3'.endswith(('.mp3', '.avi')) True"} +{"question_id": 16579085, "score": 326, "creation_date": 1368679022, "tags": ["python", "list"], "instruction": "How can I verify if one list is a subset of another?\n\nI need to verify if a list is a subset of another - a boolean return is all I seek. Is testing equality on the smaller list after an intersection the fastest way to do this? Performance is of utmost importance given the number of datasets that need to be compared. Adding further facts based on discussions: Will either of the lists be the same for many tests? It does as one of them is a static lookup table. Does it need to be a list? It does not - the static lookup table can be anything that performs best. The dynamic one is a dict from which we extract the keys to perform a static lookup on. What would be the optimal solution given the scenario?", "output": "Use set.issubset Example: a = {1,2} b = {1,2,3} a.issubset(b) # True a = {1,2,4} b = {1,2,3} a.issubset(b) # False The performant function Python provides for this is set.issubset. It does have a few restrictions that make it unclear if it's the answer to your question, however. A list may contain items multiple times and has a specific order. A set does not. Additionally, sets only work on hashable objects. Are you asking about subset or subsequence (which means you'll want a string search algorithm)? Will either of the lists be the same for many tests? What are the datatypes contained in the list? And for that matter, does it need to be a list? Your other post intersect a dict and list made the types clearer and did get a recommendation to use dictionary key views for their set-like functionality. In that case it was known to work because dictionary keys behave like a set (so much so that before we had sets in Python we used dictionaries). 
One wonders how the issue got less specific in three hours."} +{"question_id": 7391945, "score": 325, "creation_date": 1315850506, "tags": ["python", "python-imaging-library"], "instruction": "How do I read image data from a URL in Python?\n\nWhat I'm trying to do is fairly simple when we're dealing with a local file, but the problem comes when I try to do this with a remote URL. Basically, I'm trying to create a PIL image object from a file pulled from a URL. Sure, I could always just fetch the URL and store it in a temp file, then open it into an image object, but that feels very inefficient. Here's what I have: Image.open(urlopen(url)) It flakes out complaining that seek() isn't available, so then I tried this: Image.open(urlopen(url).read()) But that didn't work either. Is there a Better Way to do this, or is writing to a temporary file the accepted way of doing this sort of thing?", "output": "The following works for Python 3: from PIL import Image import requests im = Image.open(requests.get(url, stream=True).raw) References: https://github.com/python-pillow/Pillow/pull/1151 https://github.com/python-pillow/Pillow/blob/master/CHANGES.rst#280-2015-04-01"} +{"question_id": 28218698, "score": 325, "creation_date": 1422546145, "tags": ["python", "pandas"], "instruction": "How to iterate over columns of a pandas dataframe\n\nI have this code using Pandas in Python: all_data = {} for ticker in ['FIUIX', 'FSAIX', 'FSAVX', 'FSTMX']: all_data[ticker] = web.get_data_yahoo(ticker, '1/1/2010', '1/1/2015') prices = DataFrame({tic: data['Adj Close'] for tic, data in all_data.iteritems()}) returns = prices.pct_change() I know I can run a regression like this: regs = sm.OLS(returns.FIUIX,returns.FSTMX).fit() but how can I do this for each column in the dataframe? Specifically, how can I iterate over columns, in order to run the regression on each? Specifically, I want to regress each other ticker symbol (FIUIX, FSAIX and FSAVX) on FSTMX, and store the residuals for each regression. I've tried various versions of the following, but nothing I've tried gives the desired result: resids = {} for k in returns.keys(): reg = sm.OLS(returns[k],returns.FSTMX).fit() resids[k] = reg.resid Is there something wrong with the returns[k] part of the code? How can I use the k value to access a column? Or else is there a simpler approach?", "output": "Old answer: for column in df: print(df[column]) The previous answer still works, but was added around the time of pandas 0.16.0. Better versions are available. Now you can do: for series_name, series in df.items(): print(series_name) print(series)"} +{"question_id": 1285911, "score": 324, "creation_date": 1250475155, "tags": ["python", "dictionary"], "instruction": "How do I check that multiple keys are in a dict in a single pass?\n\nI want to do something like: foo = { 'foo': 1, 'zip': 2, 'zam': 3, 'bar': 4 } if (\"foo\", \"bar\") in foo: #do stuff How do I check whether both foo and bar are in dict foo?", "output": "Well, you could do this: >>> if all(k in foo for k in (\"foo\",\"bar\")): ... print \"They're there!\" ... They're there!"} +{"question_id": 2804543, "score": 324, "creation_date": 1273510051, "tags": ["python", "subprocess"], "instruction": "read subprocess stdout line by line\n\nMy python script uses subprocess to call a linux utility that is very noisy. I want to store all of the output to a log file and show some of it to the user. 
I thought the following would work, but the output doesn't show up in my application until the utility has produced a significant amount of output. # fake_utility.py, just generates lots of output over time import time i = 0 while True: print(hex(i)*512) i += 1 time.sleep(0.5) In the parent process: import subprocess proc = subprocess.Popen(['python', 'fake_utility.py'], stdout=subprocess.PIPE) for line in proc.stdout: # the real code does filtering here print(\"test:\", line.rstrip()) The behavior I really want is for the filter script to print each line as it is received from the subprocess, like tee does but within Python code. What am I missing? Is this even possible?", "output": "I think the problem is with the statement for line in proc.stdout, which reads the entire input before iterating over it. The solution is to use readline() instead: #filters output import subprocess proc = subprocess.Popen(['python','fake_utility.py'],stdout=subprocess.PIPE) while True: line = proc.stdout.readline() if not line: break #the real code does filtering here print \"test:\", line.rstrip() Of course you still have to deal with the subprocess' buffering. Note: according to the documentation the solution with an iterator should be equivalent to using readline(), except for the read-ahead buffer, but (or exactly because of this) the proposed change did produce different results for me (Python 2.5 on Windows XP)."} +{"question_id": 23668427, "score": 322, "creation_date": 1400122300, "tags": ["python", "pandas", "join", "merge"], "instruction": "pandas three-way joining multiple dataframes on columns\n\nI have 3 CSV files. Each has the first column as the (string) names of people, while all the other columns in each dataframe are attributes of that person. How can I \"join\" together all three CSV documents to create a single CSV with each row having all the attributes for each unique value of the person's string name? The join() function in pandas specifies that I need a multiindex, but I'm confused about what a hierarchical indexing scheme has to do with making a join based on a single index.", "output": "Zero's answer is basically a reduce operation. If I had more than a handful of dataframes, I'd put them in a list like this (generated via list comprehensions or loops or whatnot): dfs = [df0, df1, df2, ..., dfN] Assuming they have a common column, like name in your example, I'd do the following: import functools as ft df_final = ft.reduce(lambda left, right: pd.merge(left, right, on='name'), dfs) That way, your code should work with whatever number of dataframes you want to merge."} +{"question_id": 4941753, "score": 322, "creation_date": 1297230673, "tags": ["python", "function", "math"], "instruction": "Is there a math nCr function in Python?\n\nIs there a built-in nCr (n choose r) function included in the Python math library like the one shown below? 
I understand that the computation can be programmed, but I thought I'd check to see if it's built-in before I do.", "output": "On Python 3.8+, use math.comb: >>> from math import comb >>> comb(10, 3) 120 For older versions of Python, you can use the following program: import operator as op from functools import reduce def ncr(n, r): r = min(r, n-r) numer = reduce(op.mul, range(n, n-r, -1), 1) denom = reduce(op.mul, range(1, r+1), 1) return numer // denom # or / in Python 2"} +{"question_id": 41535881, "score": 322, "creation_date": 1483898204, "tags": ["python", "anaconda", "conda"], "instruction": "How do I upgrade to Python 3.6 with Conda?\n\nI want to get the latest version of Python to use f-strings in my code. Currently my version is (python -V): Python 3.5.2 :: Anaconda 4.2.0 (x86_64) How would I upgrade to Python 3.6?", "output": "Anaconda had not updated Python internally to 3.6, but later versions of Anaconda has a Python 3.6 version here. a) Method 1 If you wanted to update, you will type conda update python To update Anaconda, type conda update conda If you want to upgrade between major python versions, like 3.5 to 3.6, you'll have to do conda install python=$pythonversion$ b) Method 2 - Create a new environment (the better method) conda create --name py36 python=3.6 c) To get the absolute latest Python (3.6.5 at time of writing) conda create --name py365 python=3.6.5 --channel conda-forge You can see all this from here. Also, refer to this for force upgrading."} +{"question_id": 298772, "score": 321, "creation_date": 1227016351, "tags": ["javascript", "python", "django", "google-app-engine", "django-templates"], "instruction": "How can I use the variables from \"views.py\" in JavasScript, \"\" in a Django template?\n\nWhen I render a page using the Django template renderer, I can pass in a dictionary variable containing various values to manipulate them in the page using {{ myVar }}. Is there a way to access the same variable in JavaScript, (perhaps using the DOM; I don't know how Django makes the variables accessible)? I want to be able to look up details using an Ajax lookup based on the values contained in the variables passed in.", "output": "The {{variable}} is substituted directly into the HTML. Do a view source; it isn't a \"variable\" or anything like it. It's just rendered text. Having said that, you can put this kind of substitution into your JavaScript. This gives you \"dynamic\" JavaScript code."} +{"question_id": 370357, "score": 321, "creation_date": 1229396812, "tags": ["python", "scope", "global-variables", "local-variables", "shadowing"], "instruction": "UnboundLocalError trying to use a variable (supposed to be global) that is (re)assigned (even after first use)\n\nWhen I try this code: a, b, c = (1, 2, 3) def test(): print(a) print(b) print(c) c += 1 test() I get an error from the print(c) line that says: UnboundLocalError: local variable 'c' referenced before assignment or in some older versions: UnboundLocalError: 'c' not assigned If I comment out c += 1, all the prints are successful. I don't understand: why does printing a and b work, if c does not? How did c += 1 cause print(c) to fail, even when it comes later in the code? It seems like the assignment c += 1 creates a local variable c, which takes precedence over the global c. But how can a variable \"steal\" scope before it exists? Why is c apparently local here? See also How to use a global variable in a function? 
for questions that are simply about how to reassign a global variable from within a function, and Is it possible to modify a variable in python that is in an outer (enclosing), but not global, scope? for reassigning from an enclosing function (closure). See Why isn't the 'global' keyword needed to access a global variable? for cases where OP expected an error but didn't get one, from simply accessing a global without the global keyword. See How can a name be \"unbound\" in Python? What code can cause an `UnboundLocalError`? for cases where OP expected the variable to be local, but has a logical error that prevents assignment in every case. See How can \"NameError: free variable 'var' referenced before assignment in enclosing scope\" occur in real code? for a related problem caused by the del keyword.", "output": "Python treats variables in functions differently depending on whether you assign values to them from inside or outside the function. If a variable is assigned within a function, it is treated by default as a local variable. Therefore, when you uncomment the line, you are trying to reference the local variable c before any value has been assigned to it. If you want the variable c to refer to the global c = 3 assigned before the function, put global c as the first line of the function. As for python 3, there is now nonlocal c that you can use to refer to the nearest enclosing function scope that has a c variable."} +{"question_id": 39719689, "score": 321, "creation_date": 1474964666, "tags": ["python", "json", "python-2.7"], "instruction": "What is the difference between json.load() and json.loads() functions\n\nIn Python, what is the difference between json.load() and json.loads()? I guess that the load() function must be used with a file object (I need thus to use a context manager) while the loads() function take the path to the file as a string. It is a bit confusing. Does the letter \"s\" in json.loads() stand for string?", "output": "Yes, s stands for string. The json.loads function does not take the file path, but the file contents as a string. Look at the documentation. Simple example: with open(\"file.json\") as f: data = json.load(f) # ok data = json.loads(f) # not ok, f is not a string but a file text = '{\"a\": 1, \"b\": 2}' # a string with json encoded data data = json.loads(text)"} +{"question_id": 9387928, "score": 321, "creation_date": 1329874148, "tags": ["python", "pip", "easy-install", "package-managers"], "instruction": "What's the difference between dist-packages and site-packages?\n\nI'm a bit miffed by the python package installation process. Specifically, what's the difference between packages installed in the dist-packages directory and the site-packages directory?", "output": "dist-packages is a Debian-specific convention that is also present in its derivatives, like Ubuntu. Modules are installed to dist-packages when they come from the Debian package manager into this location: /usr/lib/python2.7/dist-packages Since easy_install and pip are installed from the package manager, they also use dist-packages, but they put packages here: /usr/local/lib/python2.7/dist-packages From the Debian Python Wiki: dist-packages instead of site-packages. Third party Python software installed from Debian packages goes into dist-packages, not site-packages. This is to reduce conflict between the system Python, and any from-source Python build you might install manually. 
This means that if you manually compile and install Python interpreter from source, it uses the site-packages directory. This allows you to keep the two installations separate, especially since Debian and Ubuntu rely on the system version of Python for many system utilities."} +{"question_id": 2844516, "score": 320, "creation_date": 1274027434, "tags": ["python", "dictionary", "filter"], "instruction": "How to filter a dictionary according to an arbitrary condition function?\n\nI have a dictionary of points, say: >>> points={'a':(3,4), 'b':(1,2), 'c':(5,5), 'd':(3,3)} I want to create a new dictionary with all the points whose x and y value is smaller than 5, i.e. points 'a', 'b' and 'd'. According to the the book, each dictionary has the items() function, which returns a list of (key, pair) tuple: >>> points.items() [('a', (3, 4)), ('c', (5, 5)), ('b', (1, 2)), ('d', (3, 3))] So I have written this: >>> for item in [i for i in points.items() if i[1][0]<5 and i[1][1]<5]: ... points_small[item[0]]=item[1] ... >>> points_small {'a': (3, 4), 'b': (1, 2), 'd': (3, 3)} Is there a more elegant way? I was expecting Python to have some super-awesome dictionary.filter(f) function...", "output": "You can use a dict comprehension: {k: v for k, v in points.items() if v[0] < 5 and v[1] < 5} And in Python 2, starting from 2.7: {k: v for k, v in points.iteritems() if v[0] < 5 and v[1] < 5}"} +{"question_id": 5389507, "score": 319, "creation_date": 1300788099, "tags": ["python", "list"], "instruction": "Iterating over every two elements in a list\n\nHow do I make a for loop or a list comprehension so that every iteration gives me two elements? l = [1,2,3,4,5,6] for i,k in ???: print str(i), '+', str(k), '=', str(i+k) Output: 1+2=3 3+4=7 5+6=11", "output": "Starting with Python 3.12, you can use the batched() function provided by the itertools module: from itertools import batched for x, y in batched(l, n=2): print(\"%d + %d = %d\" % (x, y, x + y)) Otherwise, you need a pairwise() (or grouped()) implementation. def pairwise(iterable): \"s -> (s0, s1), (s2, s3), (s4, s5), ...\" a = iter(iterable) return zip(a, a) for x, y in pairwise(l): print(\"%d + %d = %d\" % (x, y, x + y)) Or, more generally: def grouped(iterable, n): \"s -> (s0,s1,s2,...sn-1), (sn,sn+1,sn+2,...s2n-1), (s2n,s2n+1,s2n+2,...s3n-1), ...\" return zip(*[iter(iterable)]*n) for x, y in grouped(l, 2): print(\"%d + %d = %d\" % (x, y, x + y)) In Python 2, you should import izip as a replacement for Python 3's built-in zip() function. All credit to martineau for his answer to my question, I have found this to be very efficient as it only iterates once over the list and does not create any unnecessary lists in the process. N.B: This should not be confused with the pairwise recipe in Python's own itertools documentation, which yields s -> (s0, s1), (s1, s2), (s2, s3), ..., as pointed out by @lazyr in the comments. 
Little addition for those who would like to do type checking with mypy on Python 3: from typing import Iterable, Tuple, TypeVar T = TypeVar(\"T\") def grouped(iterable: Iterable[T], n=2) -> Iterable[Tuple[T, ...]]: \"\"\"s -> (s0,s1,s2,...sn-1), (sn,sn+1,sn+2,...s2n-1), ...\"\"\" return zip(*[iter(iterable)] * n)"} +{"question_id": 604266, "score": 319, "creation_date": 1236032298, "tags": ["python", "django", "django-models", "django-forms"], "instruction": "Django set default form values\n\nI have a Model as follows: class TankJournal(models.Model): user = models.ForeignKey(User) tank = models.ForeignKey(TankProfile) ts = models.IntegerField(max_length=15) title = models.CharField(max_length=50) body = models.TextField() I also have a model form for the above model as follows: class JournalForm(ModelForm): tank = forms.IntegerField(widget=forms.HiddenInput()) class Meta: model = TankJournal exclude = ('user','ts') I want to know how to set the default value for that tank hidden field. Here is my function to show/save the form so far: def addJournal(request, id=0): if not request.user.is_authenticated(): return HttpResponseRedirect('/') # checking if they own the tank from django.contrib.auth.models import User user = User.objects.get(pk=request.session['id']) if request.method == 'POST': form = JournalForm(request.POST) if form.is_valid(): obj = form.save(commit=False) # setting the user and ts from time import time obj.ts = int(time()) obj.user = user obj.tank = TankProfile.objects.get(pk=form.cleaned_data['tank_id']) # saving the test obj.save() else: form = JournalForm() try: tank = TankProfile.objects.get(user=user, id=id) except TankProfile.DoesNotExist: return HttpResponseRedirect('/error/')", "output": "You can use Form.initial. You have two options either populate the value when calling form constructor: form = JournalForm(initial={'tank': 123}) or set the value in the form definition: tank = forms.IntegerField(widget=forms.HiddenInput(), initial=123)"} +{"question_id": 2374640, "score": 319, "creation_date": 1267647673, "tags": ["python", "numpy", "statistics", "quantile", "percentile"], "instruction": "How do I calculate percentiles with python/numpy?\n\nIs there a convenient way to calculate percentiles for a sequence or single-dimensional numpy array? I am looking for something similar to Excel's percentile function.", "output": "NumPy has np.percentile(). import numpy as np a = np.array([1,2,3,4,5]) p = np.percentile(a, 50) # return 50th percentile, i.e. median. >>> print(p) 3.0 SciPy has scipy.stats.scoreatpercentile(), in addition to many other statistical goodies."} +{"question_id": 8248467, "score": 319, "creation_date": 1322079166, "tags": ["python", "matplotlib", "suptitle"], "instruction": "tight_layout() doesn't take into account figure suptitle\n\nIf I add a subtitle to my matplotlib figure it gets overlaid by the subplot's titles. Does anybody know how to easily take care of that? I tried the tight_layout() function, but it only makes things worse. 
Example: import numpy as np import matplotlib.pyplot as plt f = np.random.random(100) g = np.random.random(100) fig = plt.figure() fig.suptitle('Long Suptitle', fontsize=24) plt.subplot(121) plt.plot(f) plt.title('Very Long Title 1', fontsize=20) plt.subplot(122) plt.plot(g) plt.title('Very Long Title 2', fontsize=20) plt.tight_layout() plt.show()", "output": "You can adjust the subplot geometry in the very tight_layout call as follows: fig.tight_layout(rect=[0, 0.03, 1, 0.95]) As it's stated in the documentation (https://matplotlib.org/stable/users/explain/axes/tight_layout_guide.html): tight_layout() only considers ticklabels, axis labels, and titles. Thus, other artists may be clipped and also may overlap."} +{"question_id": 18421757, "score": 319, "creation_date": 1377368838, "tags": ["python", "shell", "logging", "error-handling", "subprocess"], "instruction": "live output from subprocess command\n\nI'm using a python script as a driver for a hydrodynamics code. When it comes time to run the simulation, I use subprocess.Popen to run the code, collect the output from stdout and stderr into a subprocess.PIPE --- then I can print (and save to a log-file) the output information, and check for any errors. The problem is, I have no idea how the code is progressing. If I run it directly from the command line, it gives me output about what iteration its at, what time, what the next time-step is, etc. Is there a way to both store the output (for logging and error checking), and also produce a live-streaming output? The relevant section of my code: ret_val = subprocess.Popen( run_command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True ) output, errors = ret_val.communicate() log_file.write(output) print output if( ret_val.returncode ): print \"RUN failed\\n\\n%s\\n\\n\" % (errors) success = False if( errors ): log_file.write(\"\\n\\n%s\\n\\n\" % errors) Originally I was piping the run_command through tee so that a copy went directly to the log-file, and the stream still output directly to the terminal -- but that way I can't store any errors (to my knowlege). My temporary solution so far: ret_val = subprocess.Popen( run_command, stdout=log_file, stderr=subprocess.PIPE, shell=True ) while not ret_val.poll(): log_file.flush() then, in another terminal, run tail -f log.txt (s.t. log_file = 'log.txt').", "output": "TLDR for Python 3: import subprocess import sys with open(\"test.log\", \"wb\") as f: process = subprocess.Popen(your_command, stdout=subprocess.PIPE) for c in iter(lambda: process.stdout.read(1), b\"\"): sys.stdout.buffer.write(c) f.buffer.write(c) You have two ways of doing this, either by creating an iterator from the read or readline functions and do: import subprocess import sys # replace \"w\" with \"wb\" for Python 3 with open(\"test.log\", \"w\") as f: process = subprocess.Popen(your_command, stdout=subprocess.PIPE) # replace \"\" with b'' for Python 3 for c in iter(lambda: process.stdout.read(1), \"\"): sys.stdout.write(c) f.write(c) or import subprocess import sys # replace \"w\" with \"wb\" for Python 3 with open(\"test.log\", \"w\") as f: process = subprocess.Popen(your_command, stdout=subprocess.PIPE) # replace \"\" with b\"\" for Python 3 for line in iter(process.stdout.readline, \"\"): sys.stdout.write(line) f.write(line) Or you can create a reader and a writer file. 
Pass the writer to the Popen and read from the reader import io import time import subprocess import sys filename = \"test.log\" with io.open(filename, \"wb\") as writer, io.open(filename, \"rb\", 1) as reader: process = subprocess.Popen(command, stdout=writer) while process.poll() is None: sys.stdout.write(reader.read()) time.sleep(0.5) # Read the remaining sys.stdout.write(reader.read()) This way you will have the data written in the test.log as well as on the standard output. The only advantage of the file approach is that your code doesn't block. So you can do whatever you want in the meantime and read whenever you want from the reader in a non-blocking way. When you use PIPE, read and readline functions will block until either one character is written to the pipe or a line is written to the pipe respectively."} +{"question_id": 2990121, "score": 318, "creation_date": 1275919242, "tags": ["python", "list", "loops", "for-loop", "iteration"], "instruction": "How do I loop through a list by twos?\n\nI want to loop through a Python list and process 2 list items at a time. Something like this in another language: for(int i = 0; i < list.length(); i+=2) { // do something with list[i] and list[i + 1] } What's the best way to accomplish this?", "output": "You can use a range with a step size of 2: Python 2 for i in xrange(0,10,2): print(i) Python 3 for i in range(0,10,2): print(i) Note: Use xrange in Python 2 instead of range because it is more efficient as it generates an iterable object, and not the whole list."} +{"question_id": 9232568, "score": 318, "creation_date": 1328897062, "tags": ["python", "pip"], "instruction": "Identifying the dependency relationship for python packages installed with pip\n\nWhen I do a pip freeze I see large number of Python packages that I didn't explicitly install, e.g. $ pip freeze Cheetah==2.4.3 GnuPGInterface==0.3.2 Landscape-Client==11.01 M2Crypto==0.20.1 PAM==0.4.2 PIL==1.1.7 PyYAML==3.09 Twisted-Core==10.2.0 Twisted-Web==10.2.0 (etc.) Is there a way for me to determine why pip installed these particular dependent packages? In other words, how do I determine the parent package that had these packages as dependencies? For example, I might want to use Twisted and I don't want to depend on a package until I know more about not accidentally uninstalling it or upgrading it.", "output": "You could try pipdeptree, which displays dependencies as a tree structure e.g.: $ pipdeptree Lookupy==0.1 wsgiref==0.1.2 argparse==1.2.1 psycopg2==2.5.2 Flask-Script==0.6.6 - Flask [installed: 0.10.1] - Werkzeug [required: >=0.7, installed: 0.9.4] - Jinja2 [required: >=2.4, installed: 2.7.2] - MarkupSafe [installed: 0.18] - itsdangerous [required: >=0.21, installed: 0.23] alembic==0.6.2 - SQLAlchemy [required: >=0.7.3, installed: 0.9.1] - Mako [installed: 0.9.1] - MarkupSafe [required: >=0.9.2, installed: 0.18] ipython==2.0.0 slugify==0.0.1 redis==2.9.1 To install it, run: pip install pipdeptree As noted by @Esteban in the comments you can also list the tree in reverse with -r or for a single package with -p . So to find which module(s) Werkzeug is a dependency for, you could run: $ pipdeptree -r -p Werkzeug Werkzeug==0.11.15 - Flask==0.12 [requires: Werkzeug>=0.7]"} +{"question_id": 41274007, "score": 317, "creation_date": 1482365992, "tags": ["python", "python-3.x", "anaconda", "conda"], "instruction": "Anaconda export Environment file\n\nHow can I make anaconda environment file which could be use on other computers? 
I exported my anaconda python environment to YML using conda env export > environment.yml. The exported environment.yml contains this line prefix: /home/superdev/miniconda3/envs/juicyenv which maps to my anaconda's location which will be different on other's pcs.", "output": "I can't find anything in the conda specs which allows you to export an environment file without the prefix: ... line. However, like Alex pointed out in the comments, conda doesn't seem to care about the prefix line when creating an environment from the file. With that in mind, if you want the other user to have no knowledge of your default install path, you can remove the prefix line with grep before writing to environment.yml. conda env export | grep -v \"^prefix: \" > environment.yml Either way, the other user then runs: conda env create -f environment.yml and the environment will get installed in their default conda environment path. If you want to specify a different install path than the default for your system (not related to 'prefix' in the environment.yml), just use the -p flag followed by the required path. conda env create -f environment.yml -p /home/user/anaconda3/envs/env_name Note that Conda recommends creating the environment.yml by hand, which is especially important if you are wanting to share your environment across platforms (Windows/Linux/Mac). In this case, you can just leave out the prefix line."} +{"question_id": 2829329, "score": 317, "creation_date": 1273775701, "tags": ["python", "multithreading", "exception"], "instruction": "Catch a thread's exception in the caller thread?\n\nI'm very new to Python and multithreaded programming in general. Basically, I have a script that will copy files to another location. I would like this to be placed in another thread so I can output .... to indicate that the script is still running. The problem that I am having is that if the files cannot be copied it will throw an exception. This is OK if running in the main thread; however, having the following code does not work: try: threadClass = TheThread(param1, param2, etc.) threadClass.start() ##### **Exception takes place here** except: print \"Caught an exception\" In the thread class itself, I tried to re-throw the exception, but it does not work. I have seen people on here ask similar questions, but they all seem to be doing something more specific than what I am trying to do (and I don't quite understand the solutions offered). I have seen people mention the usage of sys.exc_info(), however I do not know where or how to use it. Edit: The code for the thread class is below: class TheThread(threading.Thread): def __init__(self, sourceFolder, destFolder): threading.Thread.__init__(self) self.sourceFolder = sourceFolder self.destFolder = destFolder def run(self): try: shul.copytree(self.sourceFolder, self.destFolder) except: raise", "output": "The problem is that thread_obj.start() returns immediately. The child thread that you spawned executes in its own context, with its own stack. Any exception that occurs there is in the context of the child thread, and it is in its own stack. One way I can think of right now to communicate this information to the parent thread is by using some sort of message passing, so you might look into that. 
Try this on for size: import sys import threading import queue class ExcThread(threading.Thread): def __init__(self, bucket): threading.Thread.__init__(self) self.bucket = bucket def run(self): try: raise Exception('An error occurred here.') except Exception: self.bucket.put(sys.exc_info()) def main(): bucket = queue.Queue() thread_obj = ExcThread(bucket) thread_obj.start() while True: try: exc = bucket.get(block=False) except queue.Empty: pass else: exc_type, exc_obj, exc_trace = exc # deal with the exception print(exc_type, exc_obj) print(exc_trace) thread_obj.join(0.1) if thread_obj.is_alive(): continue else: break if __name__ == '__main__': main()"} +{"question_id": 3777301, "score": 316, "creation_date": 1285236777, "tags": ["python", "shell"], "instruction": "How can I call a shell script from Python code?\n\nHow can I call a shell script from Python code?", "output": "The subprocess module will help you out. Blatantly trivial example: >>> import subprocess >>> subprocess.call(['sh', './test.sh']) # Thanks @Jim Dennis for suggesting the [] 0 Where test.sh is a simple shell script and 0 is its return value for this run."} +{"question_id": 33633370, "score": 316, "creation_date": 1447168798, "tags": ["python", "tensorflow", "tensor"], "instruction": "How can I print the value of a Tensor object in TensorFlow?\n\nI have been using the introductory example of matrix multiplication in TensorFlow. matrix1 = tf.constant([[3., 3.]]) matrix2 = tf.constant([[2.],[2.]]) product = tf.matmul(matrix1, matrix2) When I print the product, it is displaying it as a Tensor object: But how do I know the value of product? The following doesn't help: print product Output: Tensor(\"MatMul:0\", shape=TensorShape([Dimension(1), Dimension(1)]), dtype=float32) I know that graphs run on Sessions, but isn't there a way I can check the output of a Tensor object without running the graph in a session?", "output": "The easiest[A] way to evaluate the actual value of a Tensor object is to pass it to the Session.run() method, or call Tensor.eval() when you have a default session (i.e. in a with tf.Session(): block, or see below). In general[B], you cannot print the value of a tensor without running some code in a session. If you are experimenting with the programming model, and want an easy way to evaluate tensors, the tf.InteractiveSession lets you open a session at the start of your program, and then use that session for all Tensor.eval() (and Operation.run()) calls. This can be easier in an interactive setting, such as the shell or an IPython notebook, when it's tedious to pass around a Session object everywhere. For example, the following works in a Jupyter notebook: with tf.Session() as sess: print(product.eval()) This might seem silly for such a small expression, but one of the key ideas in TensorFlow 1.x is deferred execution: it's very cheap to build a large and complex expression, and when you want to evaluate it, the back-end (to which you connect with a Session) is able to schedule its execution more efficiently (e.g., executing independent parts in parallel and using GPUs). [A]: To print the value of a tensor without returning it to your Python program, you can use the tf.print() operator, as Andrzej suggests in another answer.
According to the official documentation: To make sure the operator runs, users need to pass the produced op to tf.compat.v1.Session's run method, or to use the op as a control dependency for executed ops by specifying with tf.compat.v1.control_dependencies([print_op]), which is printed to standard output. Also note that: In Jupyter notebooks and colabs, tf.print prints to the notebook cell outputs. It will not write to the notebook kernel's console logs. [B]: You might be able to use the tf.get_static_value() function to get the constant value of the given tensor if its value is efficiently calculable."} +{"question_id": 37087457, "score": 316, "creation_date": 1462617397, "tags": ["python", "dictionary", "python-typing"], "instruction": "Difference between defining typing.Dict and dict?\n\nI am practicing using type hints in Python 3.5. One of my colleague uses typing.Dict: import typing def change_bandwidths(new_bandwidths: typing.Dict, user_id: int, user_name: str) -> bool: print(new_bandwidths, user_id, user_name) return False def my_change_bandwidths(new_bandwidths: dict, user_id: int, user_name: str) ->bool: print(new_bandwidths, user_id, user_name) return True def main(): my_id, my_name = 23, \"Tiras\" simple_dict = {\"Hello\": \"Moon\"} change_bandwidths(simple_dict, my_id, my_name) new_dict = {\"new\": \"energy source\"} my_change_bandwidths(new_dict, my_id, my_name) if __name__ == \"__main__\": main() Both of them work just fine, there doesn't appear to be a difference. I have read the typing module documentation. Between typing.Dict or dict which one should I use in the program?", "output": "Note: typing.Dict has been deprecated as of Python 3.9, because the dict type itself can be used as a generic type directly (together with other standard containers). You can do the same in Python 3.7 or 3.8 if you use a from __future__ import annotations directive. My answer was originally written for much older Python 3 releases. There is no real difference between using a plain typing.Dict and dict, no. However, typing.Dict is a Generic type that lets you specify the type of the keys and values too, making it more flexible: def change_bandwidths(new_bandwidths: typing.Dict[str, str], user_id: int, user_name: str) -> bool: As such, it could well be that at some point in your project lifetime you want to define the dictionary argument a little more precisely, at which point expanding typing.Dict to typing.Dict[key_type, value_type] is a 'smaller' change than replacing dict. You can make this even more generic by using Mapping or MutableMapping types here; since your function doesn't need to alter the mapping, I'd stick with Mapping. A dict is one mapping, but you could create other objects that also satisfy the mapping interface, and your function might well still work with those: from collections.abc import Mapping def change_bandwidths(new_bandwidths: Mapping[str, str], user_id: int, user_name: str) -> bool: Now you are clearly telling other users of this function that your code won't actually alter the new_bandwidths mapping passed in. Your actual implementation is merely expecting an object that is printable. 
That may be a test implementation, but as it stands your code would continue to work if you used new_bandwidths: object, because any object in Python is printable."} +{"question_id": 242485, "score": 315, "creation_date": 1225179435, "tags": ["python", "debugging"], "instruction": "Starting python debugger automatically on error\n\nThis is a question I have wondered about for quite some time, yet I have never found a suitable solution. If I run a script and I come across, let's say an IndexError, python prints the line, location and quick description of the error and exits. Is it possible to automatically start pdb when an error is encountered? I am not against having an extra import statement at the top of the file, nor a few extra lines of code.", "output": "You can use traceback.print_exc to print the exceptions traceback. Then use sys.exc_info to extract the traceback and finally call pdb.post_mortem with that traceback import pdb, traceback, sys def bombs(): a = [] print a[0] if __name__ == '__main__': try: bombs() except: extype, value, tb = sys.exc_info() traceback.print_exc() pdb.post_mortem(tb) If you want to start an interactive command line with code.interact using the locals of the frame where the exception originated you can do import traceback, sys, code def bombs(): a = [] print a[0] if __name__ == '__main__': try: bombs() except: type, value, tb = sys.exc_info() traceback.print_exc() last_frame = lambda tb=tb: last_frame(tb.tb_next) if tb.tb_next else tb frame = last_frame().tb_frame ns = dict(frame.f_globals) ns.update(frame.f_locals) code.interact(local=ns)"} +{"question_id": 4382945, "score": 315, "creation_date": 1291766300, "tags": ["python", "inheritance", "abstract"], "instruction": "Abstract methods in Python\n\nI am having trouble in using inheritance with Python. While the concept seems too easy for me in Java yet up till now I have been unable to understand in Python which is surprising to me at least. I have a prototype which follow: class Shape(): def __init__(self, shape_name): self.shape = shape_name class Rectangle(Shape): def __init__(self, name): self.shape = name In the above code how can I make an abstract method that would need to be implemented for all the subclasses?", "output": "Something along these lines, using ABC import abc class Shape(object): __metaclass__ = abc.ABCMeta @abc.abstractmethod def method_to_implement(self, input): \"\"\"Method documentation\"\"\" return Also read this good tutorial: https://pymotw.com/3/abc/ You can also check out zope.interface which was used prior to introduction of ABC in python. http://pypi.python.org/pypi/zope.interface https://zopeinterface.readthedocs.io/en/latest/README.html"} +{"question_id": 11743019, "score": 315, "creation_date": 1343746797, "tags": ["python", "datetime", "utc", "epoch", "strftime"], "instruction": "Convert python datetime to epoch with strftime\n\nI have a time in UTC from which I want the number of seconds since epoch. I am using strftime to convert it to the number of seconds. Taking 1st April 2012 as an example. >>>datetime.datetime(2012,04,01,0,0).strftime('%s') '1333234800' 1st of April 2012 UTC from epoch is 1333238400 but this above returns 1333234800 which is different by 1 hour. So it looks like that strftime is taking my system time into account and applies a timezone shift somewhere. I thought datetime was purely naive? How can I get around that? If possible avoiding to import other libraries unless standard. 
(I have portability concerns).", "output": "In Python 3.3+ you can use timestamp(): >>> datetime.datetime(2012,4,1,0,0).timestamp() 1333234800.0 In Python 3.2 or earlier, you could do it explicitly: >>> (datetime.datetime(2012,4,1,0,0) - datetime.datetime(1970,1,1)).total_seconds() 1333238400.0 Why you should not use datetime.strftime('%s') Python doesn't actually support %s as an argument to strftime (if you check at http://docs.python.org/library/datetime.html#strftime-and-strptime-behavior it's not in the list), the only reason it's working is because Python is passing the information to your system's strftime, which uses your local timezone. >>> datetime.datetime(2012,04,01,0,0).strftime('%s') '1333234800'"} +{"question_id": 2933, "score": 315, "creation_date": 1217975160, "tags": ["python", "user-interface", "deployment", "tkinter", "release-management"], "instruction": "Create a directly-executable cross-platform GUI app using Python\n\nPython works on multiple platforms and can be used for desktop and web applications, thus I conclude that there is some way to compile it into an executable for Mac, Windows and Linux. The problem being I have no idea where to start or how to write a GUI with it, can anybody shed some light on this and point me in the right direction please?", "output": "First you will need some GUI library with Python bindings and then (if you want) some program that will convert your python scripts into standalone executables. Cross-platform GUI libraries with Python bindings (Windows, Linux, Mac) Of course, there are many, but the most popular that I've seen in the wild are: Tkinter - based on Tk GUI toolkit. De-facto standard GUI library for python, free for commercial projects. WxPython - based on WxWidgets. Popular, and free for commercial projects. Qt using the PyQt bindings or Qt for Python. The former is not free for commercial projects. The latter is less mature, but can be used for free. Qt itself supposedly supports Android and iOS as well, but achieving the same with its bindings should be tricky. Kivy written in Python for Python (update 2023). Supposedly supports Android and iOS as well. Note that users of WxWidgets (hence WxPython users) often need to use WxQt as well, because WxWidgets's own GUI is not yet at Qt's level (at the time of writing). Complete list is at http://wiki.python.org/moin/GuiProgramming Stand-alone/ single executables For all platforms: PyInstaller - The most active (which could also be used with PyQt) fbs - if you chose Qt above (commercial, with free plan) For Windows: py2exe - used to be the most popular For Linux: Freeze - works the same way as py2exe but targets Linux platform For MacOS: py2app - again, works like py2exe but targets Mac OS"} +{"question_id": 23377108, "score": 314, "creation_date": 1398814240, "tags": ["python", "pandas", "group-by"], "instruction": "Pandas percentage of total with groupby\n\nThis is obviously simple, but as a numpy newbie I'm getting stuck. I have a CSV file that contains 3 columns, the State, the Office ID, and the Sales for that office. I want to calculate the percentage of sales per office in a given state (total of all percentages in each state is 100%). 
df = pd.DataFrame({'state': ['CA', 'WA', 'CO', 'AZ'] * 3, 'office_id': list(range(1, 7)) * 2, 'sales': [np.random.randint(100000, 999999) for _ in range(12)]}) df.groupby(['state', 'office_id']).agg({'sales': 'sum'}) This returns: sales state office_id AZ 2 839507 4 373917 6 347225 CA 1 798585 3 890850 5 454423 CO 1 819975 3 202969 5 614011 WA 2 163942 4 369858 6 959285 I can't seem to figure out how to \"reach up\" to the state level of the groupby to total up the sales for the entire state to calculate the fraction.", "output": "Update 2022-03 This answer by caner using transform looks much better than my original answer! df['sales'] / df.groupby('state')['sales'].transform('sum') Thanks to this comment by Paul Rougieux for surfacing it. Original Answer (2014) Paul H's answer is right that you will have to make a second groupby object, but you can calculate the percentage in a simpler way -- just groupby the state_office and divide the sales column by its sum. Copying the beginning of Paul H's answer: # From Paul H import numpy as np import pandas as pd np.random.seed(0) df = pd.DataFrame({'state': ['CA', 'WA', 'CO', 'AZ'] * 3, 'office_id': list(range(1, 7)) * 2, 'sales': [np.random.randint(100000, 999999) for _ in range(12)]}) state_office = df.groupby(['state', 'office_id']).agg({'sales': 'sum'}) # Change: groupby state_office and divide by sum state_pcts = state_office.groupby(level=0).apply(lambda x: 100 * x / float(x.sum())) Returns: sales state office_id AZ 2 16.981365 4 19.250033 6 63.768601 CA 1 19.331879 3 33.858747 5 46.809373 CO 1 36.851857 3 19.874290 5 43.273852 WA 2 34.707233 4 35.511259 6 29.781508"} +{"question_id": 12974474, "score": 314, "creation_date": 1350650320, "tags": ["python", "list", "tuples", "list-comprehension"], "instruction": "How to unzip a list of tuples into individual lists?\n\nI have a list of tuples l = [(1,2), (3,4), (8,9)]. How can I, succinctly and Pythonically, unzip this list into two independent lists, to get [ [1, 3, 8], [2, 4, 9] ]? In other words, how do I get the inverse of what zip does?", "output": "Use zip(*list): >>> l = [(1,2), (3,4), (8,9)] >>> list(zip(*l)) [(1, 3, 8), (2, 4, 9)] The zip() function pairs up the elements from all inputs, starting with the first values, then the second, etc. By using *l you apply all tuples in l as separate arguments to the zip() function, so zip() pairs up 1 with 3 with 8 first, then 2 with 4 and 9. Those happen to correspond nicely with the columns, or the transposition of l. zip() produces tuples; if you must have mutable list objects, just map() the tuples to lists or use a list comprehension to produce a list of lists: map(list, zip(*l)) # keep it a generator [list(t) for t in zip(*l)] # consume the zip generator into a list of lists"} +{"question_id": 6290739, "score": 313, "creation_date": 1307611808, "tags": ["python", "logging", "time", "python-logging"], "instruction": "Python logging: use milliseconds in time format\n\nBy default logging.Formatter('%(asctime)s') prints with the following format: 2011-06-09 10:54:40,638 where 638 is the millisecond. I need to change the comma to a dot: 2011-06-09 10:54:40.638 To format the time I can use: logging.Formatter(fmt='%(asctime)s',datestr=date_format_str) however the documentation doesn't specify how to format milliseconds. 
I've found this SO question which talks about microseconds, but a) I would prefer milliseconds and b) the following doesn't work on Python 2.6 (which I'm working on) due to the %f: logging.Formatter(fmt='%(asctime)s',datefmt='%Y-%m-%d,%H:%M:%S.%f')", "output": "This should work too: logging.Formatter( fmt='%(asctime)s.%(msecs)03d', datefmt='%Y-%m-%d,%H:%M:%S' )"} +{"question_id": 21415661, "score": 313, "creation_date": 1390939444, "tags": ["python", "pandas", "dataframe", "boolean", "filtering"], "instruction": "Logical operators for Boolean indexing in Pandas\n\nI'm working with a Boolean index in Pandas. The question is why the statement: a[(a['some_column']==some_number) & (a['some_other_column']==some_other_number)] works fine whereas a[(a['some_column']==some_number) and (a['some_other_column']==some_other_number)] exits with error? Example: a = pd.DataFrame({'x':[1,1],'y':[10,20]}) In: a[(a['x']==1)&(a['y']==10)] Out: x y 0 1 10 In: a[(a['x']==1) and (a['y']==10)] Out: ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()", "output": "When you say (a['x']==1) and (a['y']==10) You are implicitly asking Python to convert (a['x']==1) and (a['y']==10) to Boolean values. NumPy arrays (of length greater than 1) and Pandas objects such as Series do not have a Boolean value -- in other words, they raise ValueError: The truth value of an array is ambiguous. Use a.empty, a.any() or a.all(). when used as a Boolean value. That's because it's unclear when it should be True or False. Some users might assume they are True if they have non-zero length, like a Python list. Others might desire for it to be True only if all its elements are True. Others might want it to be True if any of its elements are True. Because there are so many conflicting expectations, the designers of NumPy and Pandas refuse to guess, and instead raise a ValueError. Instead, you must be explicit, by calling the empty(), all() or any() method to indicate which behavior you desire. In this case, however, it looks like you do not want Boolean evaluation, you want element-wise logical-and. That is what the & binary operator performs: (a['x']==1) & (a['y']==10) returns a boolean array. By the way, as alexpmil notes, the parentheses are mandatory since & has a higher operator precedence than ==. Without the parentheses, a['x']==1 & a['y']==10 would be evaluated as a['x'] == (1 & a['y']) == 10 which would in turn be equivalent to the chained comparison (a['x'] == (1 & a['y'])) and ((1 & a['y']) == 10) That is an expression of the form Series and Series. The use of and with two Series would again trigger the same ValueError as above. That's why the parentheses are mandatory."} +{"question_id": 51244223, "score": 312, "creation_date": 1531134853, "tags": ["python", "visual-studio-code", "debugging", "vscode-debugger"], "instruction": "Visual Studio Code: How debug Python script with arguments\n\nI'm using Visual Studio Code with the inbuilt Debugger in order to debug a Python script. Following this guide, I set up the argument in the launch.json file: But when I press on Debug, it says that my argument is not recognized my shell, or rather the argparser, says: error: unrecognized arguments: --city Auckland As Visual Studio Code is using PowerShell, let's execute the same file with the same argument: So: the same file, same path, and same argument. In the terminal it is working, but not in Visual Studio Code's Debugger. 
Where am I wrong?", "output": "I think the --City and Auckland are used as a single argument. Maybe try separating them like so... Single argument \"args\": [\"--city\",\"Auckland\"] Multiple arguments and multiple values Such as: --key1 value1 value2 --key2 value3 value4 Just put them into the args list one by one in sequence: \"args\": [\"--key1\", \"value1\", \"value2\", \"--key2\", \"value3\", \"value4\"]"} +{"question_id": 39050539, "score": 312, "creation_date": 1471668049, "tags": ["python", "pandas", "dataframe"], "instruction": "How to add multiple columns to pandas dataframe in one assignment\n\nI'm trying to figure out how to add multiple columns to pandas simultaneously with Pandas. I would like to do this in one step rather than multiple repeated steps. import pandas as pd data = {'col_1': [0, 1, 2, 3], 'col_2': [4, 5, 6, 7]} df = pd.DataFrame(data) I thought this would work here... df[['column_new_1', 'column_new_2', 'column_new_3']] = [np.nan, 'dogs', 3]", "output": "I would have expected your syntax to work too. The problem arises because when you create new columns with the column-list syntax (df[[new1, new2]] = ...), pandas requires that the right hand side be a DataFrame (note that it doesn't actually matter if the columns of the DataFrame have the same names as the columns you are creating). Your syntax works fine for assigning scalar values to existing columns, and pandas is also happy to assign scalar values to a new column using the single-column syntax (df[new1] = ...). So the solution is either to convert this into several single-column assignments, or create a suitable DataFrame for the right-hand side. Here are several approaches that will work: import pandas as pd import numpy as np df = pd.DataFrame({ 'col_1': [0, 1, 2, 3], 'col_2': [4, 5, 6, 7] }) Then one of the following: 1) Three assignments in one, using iterable unpacking df['column_new_1'], df['column_new_2'], df['column_new_3'] = np.nan, 'dogs', 3 2) Use DataFrame() to expand a single row to match the index df[['column_new_1', 'column_new_2', 'column_new_3']] = pd.DataFrame([[np.nan, 'dogs', 3]], index=df.index) 3) Combine with a temporary DataFrame using pd.concat df = pd.concat( [ df, pd.DataFrame( [[np.nan, 'dogs', 3]], index=df.index, columns=['column_new_1', 'column_new_2', 'column_new_3'] ) ], axis=1 ) 4) Combine with a temporary DataFrame using .join This is similar to 3, but may be less efficient. df = df.join(pd.DataFrame( [[np.nan, 'dogs', 3]], index=df.index, columns=['column_new_1', 'column_new_2', 'column_new_3'] )) 5) Use a dictionary instead of the lists used in 3 and 4 This is a more \"natural\" way to create the temporary DataFrame than the previous two. Note that in Python 3.5 or earlier, the new columns will be sorted alphabetically. df = df.join(pd.DataFrame( { 'column_new_1': np.nan, 'column_new_2': 'dogs', 'column_new_3': 3 }, index=df.index )) 6) Use .assign() with multiple column arguments This may be the winner in Python 3.6+. But like the previous one, the new columns will be sorted alphabetically in earlier versions of Python. df = df.assign(column_new_1=np.nan, column_new_2='dogs', column_new_3=3) 7) Create new columns, then assign all values at once Based on this answer. This is interesting, but I don't know when it would be worth the trouble. 
new_cols = ['column_new_1', 'column_new_2', 'column_new_3'] new_vals = [np.nan, 'dogs', 3] df = df.reindex(columns=df.columns.tolist() + new_cols) # add empty cols df[new_cols] = new_vals # multi-column assignment works for existing cols 8) Three separate assignments In the end, it's hard to beat this. df['column_new_1'] = np.nan df['column_new_2'] = 'dogs' df['column_new_3'] = 3 Note: many of these options have already been covered in other questions: Add multiple columns to DataFrame and set them equal to an existing column Is it possible to add several columns at once to a pandas DataFrame? Add multiple empty columns to pandas DataFrame"} +{"question_id": 9157210, "score": 311, "creation_date": 1328515676, "tags": ["python", "exception", "message"], "instruction": "How do I raise the same Exception with a custom message in Python?\n\nI have this try block in my code: try: do_something_that_might_raise_an_exception() except ValueError as err: errmsg = 'My custom error message.' raise ValueError(errmsg) Strictly speaking, I am actually raising another ValueError, not the ValueError thrown by do_something...(), which is referred to as err in this case. How do I attach a custom message to err? I try the following code but fails due to err, a ValueError instance, not being callable: try: do_something_that_might_raise_an_exception() except ValueError as err: errmsg = 'My custom error message.' raise err(errmsg)", "output": "raise from We can chain the exceptions using raise from. try: 1 / 0 except ZeroDivisionError as e: raise Exception('Smelly socks') from e In this case, the exception your caller would catch has the line number of the place where we raise our exception. Traceback (most recent call last): File \"test.py\", line 2, in 1 / 0 ZeroDivisionError: division by zero The above exception was the direct cause of the following exception: Traceback (most recent call last): File \"test.py\", line 4, in raise Exception('Smelly socks') from e Exception: Smelly socks Notice the bottom exception only has the stacktrace from where we raised our exception. Your caller could still get the original exception by accessing the __cause__ attribute of the exception they catch. with_traceback Or you can use with_traceback. try: 1 / 0 except ZeroDivisionError as e: raise Exception('Smelly socks').with_traceback(e.__traceback__) Using this form, the exception your caller would catch has the traceback from where the original error occurred. Traceback (most recent call last): File \"test.py\", line 2, in 1 / 0 ZeroDivisionError: division by zero During handling of the above exception, another exception occurred: Traceback (most recent call last): File \"test.py\", line 4, in raise Exception('Smelly socks').with_traceback(e.__traceback__) File \"test.py\", line 2, in 1 / 0 Exception: Smelly socks Notice the bottom exception has the line where we performed the invalid division as well as the line where we reraise the exception."} +{"question_id": 15085348, "score": 310, "creation_date": 1361870515, "tags": ["python", "multithreading", "python-multithreading"], "instruction": "What is the use of join() in threading?\n\nI was studying the python threading and came across join(). The author told that if thread is in daemon mode then i need to use join() so that thread can finish itself before main thread terminates. 
but I have also seen him using t.join() even though t was not daemon example code is this import threading import time import logging logging.basicConfig(level=logging.DEBUG, format='(%(threadName)-10s) %(message)s', ) def daemon(): logging.debug('Starting') time.sleep(2) logging.debug('Exiting') d = threading.Thread(name='daemon', target=daemon) d.setDaemon(True) def non_daemon(): logging.debug('Starting') logging.debug('Exiting') t = threading.Thread(name='non-daemon', target=non_daemon) d.start() t.start() d.join() t.join() i don't know what is use of t.join() as it is not daemon and i can see no change even if i remove it", "output": "A somewhat clumsy ascii-art to demonstrate the mechanism: The join() is presumably called by the main-thread. It could also be called by another thread, but would needlessly complicate the diagram. join-calling should be placed in the track of the main-thread, but to express thread-relation and keep it as simple as possible, I choose to place it in the child-thread instead. without join: +---+---+------------------ main-thread | | | +........... child-thread(short) +.................................. child-thread(long) with join +---+---+------------------***********+### main-thread | | | | +...........join() | child-thread(short) +......................join()...... child-thread(long) with join and daemon thread +-+--+---+------------------***********+### parent-thread | | | | | | +...........join() | child-thread(short) | +......................join()...... child-thread(long) +,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, child-thread(long + daemonized) '-' main-thread/parent-thread/main-program execution '.' child-thread execution '#' optional parent-thread execution after join()-blocked parent-thread could continue '*' main-thread 'sleeping' in join-method, waiting for child-thread to finish ',' daemonized thread - 'ignores' lifetime of other threads; terminates when main-programs exits; is normally meant for join-independent tasks So the reason you don't see any changes is because your main-thread does nothing after your join. You could say join is (only) relevant for the execution-flow of the main-thread. If, for example, you want to concurrently download a bunch of pages to concatenate them into a single large page, you may start concurrent downloads using threads, but need to wait until the last page/thread is finished before you start assembling a single page out of many. That's when you use join()."} +{"question_id": 2953834, "score": 310, "creation_date": 1275431346, "tags": ["python", "path", "string-literals"], "instruction": "How should I write a Windows path in a Python string literal?\n\nSuppose I need to refer to the path C:\\meshes\\as. If I try writing that directly, like \"C:\\meshes\\as\", I encounter problems - either some exception, or the path just doesn't work. Is this because \\ is acting as an escape character? How should I write the paths?", "output": "You can use always: 'C:/mydir' This works both in Linux and Windows. Another possibility is: 'C:\\\\mydir' If you have problems with some names you can also try raw string literals: r'C:\\mydir' However, the best practice is to use the os.path module functions that always joins with the correct path separator (os.path.sep) for your OS: os.path.join(mydir, myfile) From python 3.4 you can also use the pathlib module. 
This is equivalent to the above: pathlib.Path(mydir, myfile) or: pathlib.Path(mydir) / myfile"} +{"question_id": 35064426, "score": 310, "creation_date": 1453992039, "tags": ["python", "pip"], "instruction": "When would the -e, --editable option be useful with pip install?\n\nWhen would the -e, or --editable option be useful with pip install? For some projects the last line in requirements.txt is -e .. What does it do exactly?", "output": "As the man page says it: -e,--editable Install a project in editable mode (i.e. setuptools \"develop mode\") from a local project path or a VCS url. So you would use this when trying to install a package locally, most often in the case when you are developing it on your system. It will just link the package to the original location, basically meaning any changes to the original package would reflect directly in your environment. Some nuggets around the same here and here. An example run can be: pip install -e . or pip install -e ~/ultimate-utils/ultimate-utils-proj-src/ note the second is the full path to where the setup.py would be at."} +{"question_id": 736043, "score": 309, "creation_date": 1239313953, "tags": ["python", "string", "type-conversion"], "instruction": "Checking if a string can be converted to float in Python\n\nI've got some Python code that runs through a list of strings and converts them to integers or floating point numbers if possible. Doing this for integers is pretty easy if element.isdigit(): newelement = int(element) Floating point numbers are more difficult. Right now I'm using partition('.') to split the string and checking to make sure that one or both sides are digits. partition = element.partition('.') if (partition[0].isdigit() and partition[1] == '.' and partition[2].isdigit()) or (partition[0] == '' and partition[1] == '.' and partition[2].isdigit()) or (partition[0].isdigit() and partition[1] == '.' and partition[2] == ''): newelement = float(element) This works, but obviously the if statement for that is a bit of a bear. The other solution I considered is to just wrap the conversion in a try/catch block and see if it succeeds, as described in this question. Anyone have any other ideas? Opinions on the relative merits of the partition and try/catch approaches?", "output": "I would just use.. try: float(element) except ValueError: print(\"Not a float\") ..it's simple, and it works. Note that it will still throw OverflowError if element is e.g. 1<<1024. Another option would be a regular expression: import re if re.match(r'^-?\\d+(?:\\.\\d+)$', element) is None: print(\"Not float\")"} +{"question_id": 1392413, "score": 309, "creation_date": 1252393602, "tags": ["python", "directory"], "instruction": "Calculating a directory's size using Python?\n\nBefore I re-invent this particular wheel, has anybody got a nice routine for calculating the size of a directory using Python? 
It would be very nice if the routine would format the size nicely in Mb/Gb etc.", "output": "This walks all sub-directories; summing file sizes: import os def get_size(start_path = '.'): total_size = 0 for dirpath, dirnames, filenames in os.walk(start_path): for f in filenames: fp = os.path.join(dirpath, f) # skip if it is symbolic link if not os.path.islink(fp): total_size += os.path.getsize(fp) return total_size print(get_size(), 'bytes') And a oneliner for fun using os.listdir (Does not include sub-directories): import os sum(os.path.getsize(f) for f in os.listdir('.') if os.path.isfile(f)) Reference: os.path.getsize - Gives the size in bytes os.walk os.path.islink Updated To use os.path.getsize, this is clearer than using the os.stat().st_size method. Thanks to ghostdog74 for pointing this out! os.stat - st_size Gives the size in bytes. Can also be used to get file size and other file related information. import os nbytes = sum(d.stat().st_size for d in os.scandir('.') if d.is_file()) Update 2018 If you use Python 3.4 or previous then you may consider using the more efficient walk method provided by the third-party scandir package. In Python 3.5 and later, this package has been incorporated into the standard library and os.walk has received the corresponding increase in performance. Update 2019 Recently I've been using pathlib more and more, here's a pathlib solution: from pathlib import Path root_directory = Path('.') sum(f.stat().st_size for f in root_directory.glob('**/*') if f.is_file())"} +{"question_id": 4929251, "score": 309, "creation_date": 1297135302, "tags": ["python", "debugging"], "instruction": "How to step through Python code to help debug issues?\n\nIn Java/C# you can easily step through code to trace what might be going wrong, and IDE's make this process very user friendly. Can you trace through python code in a similar fashion?", "output": "Yes! There's a Python debugger called pdb just for doing that! You can launch a Python program through pdb via python -m pdb myscript.py. There are a few commands you can then issue, which are documented on the pdb page. Some useful ones to remember are: b: set a breakpoint c: continue debugging until you hit a breakpoint s: step through the code n: to go to next line of code l: list source code for the current file (default: 11 lines including the line being executed) u: navigate up a stack frame d: navigate down a stack frame p: to print the value of an expression in the current context If you don't want to use a command line debugger, some IDEs like Pydev, Wing IDE or PyCharm have a GUI debugger. Wing and PyCharm are commercial products, but Wing has a free \"Personal\" edition, and PyCharm has a free community edition."} +{"question_id": 5123839, "score": 308, "creation_date": 1298676375, "tags": ["python", "django", "performance", "object", "django-models"], "instruction": "Fastest way to get the first object from a queryset in django?\n\nOften I find myself wanting to get the first object from a queryset in Django, or return None if there aren't any. There are lots of ways to do this which all work. But I'm wondering which is the most performant. qs = MyModel.objects.filter(blah = blah) if qs.count() > 0: return qs[0] else: return None Does this result in two database calls? That seems wasteful. Is this any faster? 
qs = MyModel.objects.filter(blah = blah) if len(qs) > 0: return qs[0] else: return None Another option would be: qs = MyModel.objects.filter(blah = blah) try: return qs[0] except IndexError: return None This generates a single database call, which is good. But requires creating an exception object a lot of the time, which is a very memory-intensive thing to do when all you really need is a trivial if-test. How can I do this with just a single database call and without churning memory with exception objects?", "output": "Django 1.6 (released Nov 2013) introduced the convenience methods first() and last() which swallow the resulting exception and return None if the queryset returns no objects."} +{"question_id": 3444645, "score": 308, "creation_date": 1281392590, "tags": ["python", "pdf", "file-io", "pypdf"], "instruction": "Merge PDF files\n\nIs it possible, using Python, to merge separate PDF files? Assuming so, I need to extend this a little further. I am hoping to loop through folders in a directory and repeat this procedure. And I may be pushing my luck, but is it possible to exclude a page that is contained in each of the PDFs (my report generation always creates an extra blank page)?", "output": "Use Pypdf or its successor PyPDF2: A Pure-Python library built as a PDF toolkit. It is capable of: splitting documents page by page, merging documents page by page, (and much more) Here's a sample program that works with both versions. #!/usr/bin/env python import sys try: from PyPDF2 import PdfReader, PdfWriter except ImportError: from pyPdf import PdfFileReader, PdfFileWriter def pdf_cat(input_files, output_stream): input_streams = [] try: # First open all the files, then produce the output file, and # finally close the input files. This is necessary because # the data isn't read from the input files until the write # operation. Thanks to # https://stackoverflow.com/questions/6773631/problem-with-closing-python-pypdf-writing-getting-a-valueerror-i-o-operation/6773733#6773733 for input_file in input_files: input_streams.append(open(input_file, 'rb')) writer = PdfWriter() for reader in map(PdfReader, input_streams): for n in range(len(reader.pages)): writer.add_page(reader.pages[n]) writer.write(output_stream) finally: for f in input_streams: f.close() output_stream.close() if __name__ == '__main__': if sys.platform == \"win32\": import os, msvcrt msvcrt.setmode(sys.stdout.fileno(), os.O_BINARY) pdf_cat(sys.argv[1:], sys.stdout)"} +{"question_id": 17241004, "score": 308, "creation_date": 1371835525, "tags": ["python", "pandas"], "instruction": "How do I convert a Pandas series or index to a NumPy array?\n\nHow can I get the index or column of a DataFrame as a NumPy array or Python list?", "output": "To get a NumPy array, you should use the values attribute: In [1]: df = pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]}, index=['a', 'b', 'c']); df A B a 1 4 b 2 5 c 3 6 In [2]: df.index.values Out[2]: array(['a', 'b', 'c'], dtype=object) This accesses how the data is already stored, so there isn't any need for a conversion. Note: This attribute is also available for many other pandas objects. In [3]: df['A'].values Out[3]: Out[16]: array([1, 2, 3]) To get the index as a list, call tolist: In [4]: df.index.tolist() Out[4]: ['a', 'b', 'c'] And similarly, for columns."} +{"question_id": 2357230, "score": 307, "creation_date": 1267460473, "tags": ["python"], "instruction": "What is the proper way to comment functions in Python?\n\nIs there a generally accepted way to comment functions in Python? 
Is the following acceptable? ######################################################### # Create a new user ######################################################### def add(self):", "output": "The correct way to do it is to provide a docstring. That way, help(add) will also spit out your comment. def add(self): \"\"\"Create a new user. Line 2 of comment... And so on... \"\"\" That's three double quotes to open the comment and another three double quotes to end it. You can also use any valid Python string. It doesn't need to be multiline and double quotes can be replaced by single quotes. See: PEP 257"} +{"question_id": 21174961, "score": 307, "creation_date": 1389914521, "tags": ["python", "arrays", "numpy", "boolean", "numpy-ndarray"], "instruction": "How do I create a numpy array of all True or all False?\n\nIn Python, how do I create a numpy array of arbitrary shape filled with all True or all False?", "output": "The answer: numpy.full((2, 2), True) Explanation: numpy creates arrays of all ones or all zeros very easily: e.g. numpy.ones((2, 2)) or numpy.zeros((2, 2)) Since True and False are represented in Python as 1 and 0, respectively, we have only to specify this array should be boolean using the optional dtype parameter and we are done: numpy.ones((2, 2), dtype=bool) returns: array([[ True, True], [ True, True]], dtype=bool) UPDATE: 30 October 2013 Since numpy version 1.8, we can use full to achieve the same result with syntax that more clearly shows our intent (as fmonegaglia points out): numpy.full((2, 2), True, dtype=bool) UPDATE: 16 January 2017 Since at least numpy version 1.12, full automatically casts to the dtype of the second parameter, so we can just write: numpy.full((2, 2), True)"} +{"question_id": 36797282, "score": 307, "creation_date": 1461337365, "tags": ["python", "python-typing", "nonetype"], "instruction": "Python void return type annotation\n\nIn python 3.x, it is common to use return type annotation of a function, such as: def foo() -> str: return \"bar\" What is the correct annotation for the \"void\" type? I'm considering 3 options: def foo() -> None: not logical IMO, because None is not a type, def foo() -> type(None): using the best syntax I know for obtaining NoneType, def foo(): omit explicit return type information. Option 2. seems the most logical to me, but I've already seen some instances of 1.", "output": "Use option 1 for simplicity & adherence to spec. def foo() -> None Option 1 & 2 are the 'same' as per PEP 484 -- Type Hints, ... When used in a type hint, the expression None is considered equivalent to type(None). but the type-hinting specification does not use type(...). This is why most of the examples use None as return type."} +{"question_id": 8409095, "score": 306, "creation_date": 1323219362, "tags": ["python", "matplotlib"], "instruction": "Set markers for individual points on a line\n\nI have used Matplotlib to plot lines on a figure. Now I would now like to set the style, specifically the marker, for individual points on the line. How do I do this? To clarify my question, I want to be able to set the style for individual markers on a line, not every marker on said line.", "output": "Specify the keyword args linestyle and/or marker in your call to plot. 
For example, using a dashed line and blue circle markers: plt.plot(range(10), linestyle='--', marker='o', color='b', label='line with marker') plt.legend() A shortcut call for the same thing: plt.plot(range(10), '--bo', label='line with marker') plt.legend() Here is a list of the possible line and marker styles: ================ =============================== character description ================ =============================== - solid line style -- dashed line style -. dash-dot line style : dotted line style . point marker , pixel marker o circle marker v triangle_down marker ^ triangle_up marker < triangle_left marker > triangle_right marker 1 tri_down marker 2 tri_up marker 3 tri_left marker 4 tri_right marker s square marker p pentagon marker * star marker h hexagon1 marker H hexagon2 marker + plus marker x x marker D diamond marker d thin_diamond marker | vline marker _ hline marker ================ =============================== edit: with an example of marking an arbitrary subset of points, as requested in the comments: import numpy as np import matplotlib.pyplot as plt xs = np.linspace(-np.pi, np.pi, 30) ys = np.sin(xs) markers_on = [12, 17, 18, 19] plt.plot(xs, ys, '-gD', markevery=markers_on, label='line with select markers') plt.legend() plt.show() This last example using the markevery kwarg is possible in since 1.4+, due to the merge of this feature branch. If you are stuck on an older version of matplotlib, you can still achieve the result by overlaying a scatterplot on the line plot. See the edit history for more details."} +{"question_id": 5984633, "score": 304, "creation_date": 1305235344, "tags": ["python", "regex", "numbers", "regex-group"], "instruction": "Python re.sub group: number after \\number\n\nHow can I replace foobar with foo123bar? This doesn't work: >>> re.sub(r'(foo)', r'\\1123', 'foobar') 'J3bar' This works: >>> re.sub(r'(foo)', r'\\1hi', 'foobar') 'foohibar'", "output": "The answer is: re.sub(r'(foo)', r'\\g<1>123', 'foobar') Relevant excerpt from the documentation: In addition to character escapes and backreferences as described above, \\g will use the substring matched by the group named name, as defined by the (?P...) syntax. \\g uses the corresponding group number; \\g<2> is therefore equivalent to \\2, but isn\u2019t ambiguous in a replacement such as \\g<2>0. \\20 would be interpreted as a reference to group 20, not a reference to group 2 followed by the literal character '0'. The backreference \\g<0> substitutes in the entire substring matched by the RE."} +{"question_id": 10101700, "score": 304, "creation_date": 1334129528, "tags": ["python", "matplotlib", "legend"], "instruction": "Moving matplotlib legend outside of the axis makes it cutoff by the figure box\n\nI'm familiar with the following questions: Matplotlib savefig with a legend outside the plot How to put the legend out of the plot It seems that the answers in these questions have the luxury of being able to fiddle with the exact shrinking of the axis so that the legend fits. Shrinking the axes, however, is not an ideal solution because it makes the data smaller making it actually more difficult to interpret; particularly when its complex and there are lots of things going on ... hence needing a large legend The example of a complex legend in the documentation demonstrates the need for this because the legend in their plot actually completely obscures multiple data points. 
http://matplotlib.sourceforge.net/users/legend_guide.html#legend-of-complex-plots What I would like to be able to do is dynamically expand the size of the figure box to accommodate the expanding figure legend. import matplotlib.pyplot as plt import numpy as np x = np.arange(-2*np.pi, 2*np.pi, 0.1) fig = plt.figure(1) ax = fig.add_subplot(111) ax.plot(x, np.sin(x), label='Sine') ax.plot(x, np.cos(x), label='Cosine') ax.plot(x, np.arctan(x), label='Inverse tan') lgd = ax.legend(loc=9, bbox_to_anchor=(0.5,0)) ax.grid('on') Notice how the final label 'Inverse tan' is actually outside the figure box (and looks badly cutoff - not publication quality!) Finally, I've been told that this is normal behaviour in R and LaTeX, so I'm a little confused why this is so difficult in python... Is there a historical reason? Is Matlab equally poor on this matter? I have the (only slightly) longer version of this code on pastebin http://pastebin.com/grVjc007", "output": "[EDIT - 25th Feb 2025] My day job is no longer Python, so I'm not following the recent matplotlib developments. Please read all the newer answers here as there look to be some excellent modern suggestions compared to this solution from the ancient history of 2012. Sorry EMS, but I actually just got another response from the matplotlib mailling list (Thanks goes out to Benjamin Root). The code I am looking for is adjusting the savefig call to: fig.savefig('samplefigure', bbox_extra_artists=(lgd,), bbox_inches='tight') #Note that the bbox_extra_artists must be an iterable This is apparently similar to calling tight_layout, but instead you allow savefig to consider extra artists in the calculation. This did in fact resize the figure box as desired. import matplotlib.pyplot as plt import numpy as np plt.gcf().clear() x = np.arange(-2*np.pi, 2*np.pi, 0.1) fig = plt.figure(1) ax = fig.add_subplot(111) ax.plot(x, np.sin(x), label='Sine') ax.plot(x, np.cos(x), label='Cosine') ax.plot(x, np.arctan(x), label='Inverse tan') handles, labels = ax.get_legend_handles_labels() lgd = ax.legend(handles, labels, loc='upper center', bbox_to_anchor=(0.5,-0.1)) text = ax.text(-0.2,1.05, \"Aribitrary text\", transform=ax.transAxes) ax.set_title(\"Trigonometry\") ax.grid('on') fig.savefig('samplefigure', bbox_extra_artists=(lgd,text), bbox_inches='tight') This produces: [edit] The intent of this question was to completely avoid the use of arbitrary coordinate placements of arbitrary text as was the traditional solution to these problems. Despite this, numerous edits recently have insisted on putting these in, often in ways that led to the code raising an error. I have now fixed the issues and tidied the arbitrary text to show how these are also considered within the bbox_extra_artists algorithm. [edit] Some of the comments below note that since 2019, the command has been simplified. plt.savefig('x.png', bbox_inches='tight') was sufficient. Thanks for sharing. \u2013 mateuszb Jun 27, 2019"} +{"question_id": 2764017, "score": 304, "creation_date": 1272963810, "tags": ["python", "boolean", "equality", "language-specifications"], "instruction": "Is False == 0 and True == 1 an implementation detail or is it guaranteed by the language?\n\nIs it guaranteed that False == 0 and True == 1, in Python (assuming that they are not reassigned by the user)? For instance, is it in any way guaranteed that the following code will always produce the same results, whatever the version of Python (both existing and, likely, future ones)? 
0 == False # True 1 == True # True ['zero', 'one'][False] # is 'zero' Any reference to the official documentation would be much appreciated! As noted in many answers, bool inherits from int. The question can therefore be recast as: \"Does the documentation officially say that programmers can rely on booleans inheriting from integers, with the values 0 and 1?\". This question is relevant for writing robust code that won't fail because of implementation details!", "output": "In Python 2.x this is not guaranteed as it is possible for True and False to be reassigned. However, even if this happens, boolean True and boolean False are still properly returned for comparisons. In Python 3.x True and False are keywords and will always be equal to 1 and 0. Under normal circumstances in Python 2, and always in Python 3: False object is of type bool which is a subclass of int: object | int | bool It is the only reason why in your example, ['zero', 'one'][False] does work. It would not work with an object which is not a subclass of integer, because list indexing only works with integers, or objects that define a __index__ method (thanks mark-dickinson). Edit: It is true of the current python version, and of that of Python 3. The docs for python 2 and the docs for Python 3 both say: There are two types of integers: [...] Integers (int) [...] Booleans (bool) and in the boolean subsection: Booleans: These represent the truth values False and True [...] Boolean values behave like the values 0 and 1, respectively, in almost all contexts, the exception being that when converted to a string, the strings \"False\" or \"True\" are returned, respectively. There is also, for Python 2: In numeric contexts (for example when used as the argument to an arithmetic operator), they [False and True] behave like the integers 0 and 1, respectively. So booleans are explicitly considered as integers in Python 2 and 3. So you're safe until Python 4 comes along. ;-)"} +{"question_id": 956867, "score": 303, "creation_date": 1244219537, "tags": ["python", "json", "serialization", "unicode", "python-2.x"], "instruction": "How to get string objects instead of Unicode from JSON\n\nI'm using Python 2 to parse JSON from ASCII encoded text files. When loading these files with either json or simplejson, all my string values are cast to Unicode objects instead of string objects. The problem is, I have to use the data with some libraries that only accept string objects. I can't change the libraries nor update them. Is it possible to get string objects instead of Unicode ones? Example >>> import json >>> original_list = ['a', 'b'] >>> json_list = json.dumps(original_list) >>> json_list '[\"a\", \"b\"]' >>> new_list = json.loads(json_list) >>> new_list [u'a', u'b'] # I want these to be of type `str`, not `unicode` (One easy and clean solution for 2017 is to use a recent version of Python \u2014 i.e. Python 3 and forward.)", "output": "A solution with object_hook It works for both Python 2.7 and 3.x. 
import json def json_load_byteified(file_handle): return _byteify( json.load(file_handle, object_hook=_byteify), ignore_dicts=True ) def json_loads_byteified(json_text): return _byteify( json.loads(json_text, object_hook=_byteify), ignore_dicts=True ) def _byteify(data, ignore_dicts = False): if isinstance(data, str): return data # If this is a list of values, return list of byteified values if isinstance(data, list): return [ _byteify(item, ignore_dicts=True) for item in data ] # If this is a dictionary, return dictionary of byteified keys and values # but only if we haven't already byteified it if isinstance(data, dict) and not ignore_dicts: return { _byteify(key, ignore_dicts=True): _byteify(value, ignore_dicts=True) for key, value in data.items() # changed to .items() for Python 2.7/3 } # Python 3 compatible duck-typing # If this is a Unicode string, return its string representation if str(type(data)) == \"<type 'unicode'>\": return data.encode('utf-8') # If it's anything else, return it in its original form return data Example usage: >>> json_loads_byteified('{\"Hello\": \"World\"}') {'Hello': 'World'} >>> json_loads_byteified('\"I am a top-level string\"') 'I am a top-level string' >>> json_loads_byteified('7') 7 >>> json_loads_byteified('[\"I am inside a list\"]') ['I am inside a list'] >>> json_loads_byteified('[[[[[[[[\"I am inside a big nest of lists\"]]]]]]]]') [[[[[[[['I am inside a big nest of lists']]]]]]]] >>> json_loads_byteified('{\"foo\": \"bar\", \"things\": [7, {\"qux\": \"baz\", \"moo\": {\"cow\": [\"milk\"]}}]}') {'things': [7, {'qux': 'baz', 'moo': {'cow': ['milk']}}], 'foo': 'bar'} >>> json_load_byteified(open('somefile.json')) {'more json': 'from a file'} How does this work and why would I use it? Mark Amery's function is shorter and clearer than these ones, so what's the point of them? Why would you want to use them? Purely for performance. Mark's answer decodes the JSON text fully first with Unicode strings, then recurses through the entire decoded value to convert all strings to byte strings. This has a couple of undesirable effects: A copy of the entire decoded structure gets created in memory If your JSON object is really deeply nested (500 levels or more) then you'll hit Python's maximum recursion depth This answer mitigates both of those performance issues by using the object_hook parameter of json.load and json.loads. From the documentation: object_hook is an optional function that will be called with the result of any object literal decoded (a dict). The return value of object_hook will be used instead of the dict. This feature can be used to implement custom decoders Since dictionaries nested many levels deep in other dictionaries get passed to object_hook as they're decoded, we can byteify any strings or lists inside them at that point and avoid the need for deep recursion later. Mark's answer isn't suitable for use as an object_hook as it stands, because it recurses into nested dictionaries. We prevent that recursion in this answer with the ignore_dicts parameter to _byteify, which gets passed to it at all times except when object_hook passes it a new dict to byteify. The ignore_dicts flag tells _byteify to ignore dicts since they have already been byteified. 
Finally, our implementations of json_load_byteified and json_loads_byteified call _byteify (with ignore_dicts=True) on the result returned from json.load or json.loads to handle the case where the JSON text being decoded doesn't have a dict at the top level."} +{"question_id": 1977362, "score": 303, "creation_date": 1262126056, "tags": ["python", "variables", "singleton", "module", "scope"], "instruction": "How to create module-wide variables in Python?\n\nIs there a way to set up a global variable inside of a module? When I tried to do it the most obvious way as appears below, the Python interpreter said the variable __DBNAME__ did not exist. ... __DBNAME__ = None def initDB(name): if not __DBNAME__: __DBNAME__ = name else: raise RuntimeError(\"Database name has already been set.\") ... And after importing the module in a different file ... import mymodule mymodule.initDB('mydb.sqlite') ... And the traceback was: UnboundLocalError: local variable '__DBNAME__' referenced before assignment Any ideas? I'm trying to set up a singleton by using a module, as per this fellow's recommendation.", "output": "Here is what is going on. First, the only global variables Python really has are module-scoped variables. You cannot make a variable that is truly global; all you can do is make a variable in a particular scope. (If you make a variable inside the Python interpreter, and then import other modules, your variable is in the outermost scope and thus global within your Python session.) All you have to do to make a module-global variable is just assign to a name. Imagine a file called foo.py, containing this single line: X = 1 Now imagine you import it. import foo print(foo.X) # prints 1 However, let's suppose you want to use one of your module-scope variables as a global inside a function, as in your example. Python's default is to assume that function variables are local. You simply add a global declaration in your function, before you try to use the global. def initDB(name): global __DBNAME__ # add this line! if __DBNAME__ is None: # see notes below; explicit test for None __DBNAME__ = name else: raise RuntimeError(\"Database name has already been set.\") By the way, for this example, the simple if not __DBNAME__ test is adequate, because any string value other than an empty string will evaluate true, so any actual database name will evaluate true. But for variables that might contain a number value that might be 0, you can't just say if not variablename; in that case, you should explicitly test for None using the is operator. I modified the example to add an explicit None test. The explicit test for None is never wrong, so I default to using it. Finally, as others have noted on this page, two leading underscores signals to Python that you want the variable to be \"private\" to the module. If you ever do an from mymodule import *, Python will not import names with two leading underscores into your name space. But if you just do a simple import mymodule and then say dir(mymodule) you will see the \"private\" variables in the list, and if you explicitly refer to mymodule.__DBNAME__ Python won't care, it will just let you refer to it. The double leading underscores are a major clue to users of your module that you don't want them rebinding that name to some value of their own. It is considered best practice in Python not to do import *, but to minimize the coupling and maximize explicitness by either using mymodule.something or by explicitly doing an import like from mymodule import something. 
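One caveat worth keeping in mind (a small sketch with a hypothetical set_something helper, not part of the original answer): a from-import copies the current binding, so a later rebinding of the module global is only visible through attribute access on the module:

# mymodule.py
something = None

def set_something(value):
    global something
    something = value

# main.py
import mymodule
from mymodule import something  # copies the binding at import time

mymodule.set_something("configured")
print(mymodule.something)  # "configured" -- attribute lookup sees the rebinding
print(something)           # None -- the imported name still refers to the old object
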
EDIT: If, for some reason, you need to do something like this in a very old version of Python that doesn't have the global keyword, there is an easy workaround. Instead of setting a module global variable directly, use a mutable type at the module global level, and store your values inside it. In your functions, the global variable name will be read-only; you won't be able to rebind the actual global variable name. (If you assign to that variable name inside your function it will only affect the local variable name inside the function.) But you can use that local variable name to access the actual global object, and store data inside it. You can use a list but your code will be ugly: __DBNAME__ = [None] # use length-1 list as a mutable # later, in code: if __DBNAME__[0] is None: __DBNAME__[0] = name A dict is better. But the most convenient is a class instance, and you can just use a trivial class: class Box: pass __m = Box() # m will contain all module-level values __m.dbname = None # database name global in module # later, in code: if __m.dbname is None: __m.dbname = name (You don't really need to capitalize the database name variable.) I like the syntactic sugar of just using __m.dbname rather than __m[\"DBNAME\"]; it seems the most convenient solution in my opinion. But the dict solution works fine also. With a dict you can use any hashable value as a key, but when you are happy with names that are valid identifiers, you can use a trivial class like Box in the above."} +{"question_id": 5306756, "score": 302, "creation_date": 1300155044, "tags": ["python"], "instruction": "How to print a percentage value?\n\nGiven a float between 0 and 1, how to print it as a percentage? For example, 1/3 should print as 33%.", "output": "Since Python 3.0, str.format and format support a percentage presentation type: >>> f\"{1/3:.0%}\" '33%' >>> \"{:.0%}\".format(1/3) '33%' >>> format(1/3, \".0%\") '33%' Percentage. Multiplies the number by 100 and displays in fixed ('f') format, followed by a percent sign. The .0 part of the format spec .0% indicates that you want zero digits of precision after the decimal point, because with f\"{1/3:%}\" you would get the string '33.333333%'. It works with integers, floats, and decimals. See PEP 3101."} +{"question_id": 49220022, "score": 302, "creation_date": 1520771579, "tags": ["python", "python-typing", "mypy"], "instruction": "How can mypy ignore a single line in a source file?\n\nI'm using mypy in my python project for type checking. I'm also using PyYAML for reading and writing the project configuration files. Unfortunately, when using the recommended import mechanism from the PyYAML documentation this generates a spurious error in a try/except clause that attempts to import native libraries: from yaml import load, dump try: from yaml import CLoader as Loader, CDumper as Dumper except ImportError: from yaml import Loader, Dumper On my system CLoader and CDumper aren't present, which results in the errors error: Module 'yaml' has no attribute 'CLoader' and error: Module 'yaml' has no attribute 'CDumper'. Is there a way to have mypy ignore errors on this line? 
I was hoping that I could do something like this to have mypy skip that line: from yaml import load, dump try: from yaml import CLoader as Loader, CDumper as Dumper # nomypy except ImportError: from yaml import Loader, Dumper", "output": "You can ignore type errors with # type: ignore as of version 0.2 (see issue #500, Ignore specific lines): PEP 484 uses # type: ignore for ignoring type errors on particular lines ... Also, using # type: ignore close to the top of a file [skips] checking that file altogether. Source: mypy#500. See also the mypy documentation."} +{"question_id": 60018578, "score": 302, "creation_date": 1580572695, "tags": ["python", "machine-learning", "deep-learning", "pytorch"], "instruction": "What does model.eval() do in pytorch?\n\nWhen should I use .eval()? I understand it is supposed to allow me to \"evaluate my model\". How do I turn it back off for training? Example training code using .eval().", "output": "model.eval() is a kind of switch for some specific layers/parts of the model that behave differently during training and inference (evaluating) time. For example, Dropouts Layers, BatchNorm Layers etc. You need to turn them off during model evaluation, and .eval() will do it for you. In addition, the common practice for evaluating/validation is using torch.no_grad() in pair with model.eval() to turn off gradients computation: # evaluate model: model.eval() with torch.no_grad(): ... out_data = model(data) ... BUT, don't forget to turn back to training mode after eval step: # training step ... model.train() ..."} +{"question_id": 38442897, "score": 301, "creation_date": 1468863704, "tags": ["python", "testing", "pytest"], "instruction": "How do I disable a test using pytest?\n\nLet's say I have a bunch of tests: def test_func_one(): ... def test_func_two(): ... def test_func_three(): ... Is there a decorator or something similar that I could add to the functions to prevent pytest from running just that test? The result might look something like... @pytest.disable() def test_func_one(): ... def test_func_two(): ... def test_func_three(): ...", "output": "Pytest has the skip and skipif decorators, similar to the Python unittest module (which uses skip and skipIf), which can be found in the documentation here. Examples from the link can be found here: @pytest.mark.skip(reason=\"no way of currently testing this\") def test_the_unknown(): ... import sys @pytest.mark.skipif(sys.version_info < (3,3), reason=\"requires python3.3\") def test_function(): ... The first example always skips the test, the second example allows you to conditionally skip tests (great when tests depend on the platform, executable version, or optional libraries). For example, if I want to check if someone has the library pandas installed for a test. @pytest.mark.skipif( not importlib.util.find_spec(\"pandas\"), reason=\"requires the Pandas library\" ) def test_pandas_function(): import pandas ..."} +{"question_id": 154443, "score": 301, "creation_date": 1222800959, "tags": ["python"], "instruction": "How to avoid .pyc files?\n\nCan I run the python interpreter without generating the compiled .pyc files?", "output": "From \"What\u2019s New in Python 2.6 - Interpreter Changes\": Python can now be prevented from writing .pyc or .pyo files by supplying the -B switch to the Python interpreter, or by setting the PYTHONDONTWRITEBYTECODE environment variable before running the interpreter. 
This setting is available to Python programs as the sys.dont_write_bytecode variable, and Python code can change the value to modify the interpreter\u2019s behaviour. So run your program as python -B prog.py. Update 2010-11-27: Python 3.2 addresses the issue of cluttering source folders with .pyc files by introducing a special __pycache__ subfolder, see What's New in Python 3.2 - PYC Repository Directories. NOTE: The default behavior is to generate the bytecode and is done for \"performance\" reasons (for more information see here for python2 and see here for python3). The generation of bytecode .pyc files is a form of caching (i.e. greatly improves average performance). Configuring python with PYTHONDONTWRITEBYTECODE=1 can be bad for python performance (for python2 see https://www.python.org/dev/peps/pep-0304/ and for python3 see https://www.python.org/dev/peps/pep-3147/ ). If you are interested in the performance impact please see here https://github.com/python/cpython ."} +{"question_id": 19412462, "score": 300, "creation_date": 1381952988, "tags": ["python", "geocoding", "geo", "geography"], "instruction": "Getting distance between two points based on latitude/longitude\n\nI tried implementing the formula in Finding distances based on Latitude and Longitude. The applet does good for the two points I am testing: Yet my code is not working. from math import sin, cos, sqrt, atan2 R = 6373.0 lat1 = 52.2296756 lon1 = 21.0122287 lat2 = 52.406374 lon2 = 16.9251681 dlon = lon2 - lon1 dlat = lat2 - lat1 a = (sin(dlat/2))**2 + cos(lat1) * cos(lat2) * (sin(dlon/2))**2 c = 2 * atan2(sqrt(a), sqrt(1-a)) distance = R * c print \"Result\", distance print \"Should be\", 278.546 It returns the distance 5447.05546147. Why?", "output": "Just as a note, if you just need a quick and easy way of finding the distance between two points, I strongly recommend using the approach described in Kurt's answer instead of reimplementing Haversine\u2014see his post for rationale. This answer focuses just on answering the specific bug the OP ran into. It's because in Python, all the trigonometry functions use radians, not degrees. You can either convert the numbers manually to radians, or use the radians function from the math module: from math import sin, cos, sqrt, atan2, radians # Approximate radius of earth in km R = 6373.0 lat1 = radians(52.2296756) lon1 = radians(21.0122287) lat2 = radians(52.406374) lon2 = radians(16.9251681) dlon = lon2 - lon1 dlat = lat2 - lat1 a = sin(dlat / 2)**2 + cos(lat1) * cos(lat2) * sin(dlon / 2)**2 c = 2 * atan2(sqrt(a), sqrt(1 - a)) distance = R * c print(\"Result: \", distance) print(\"Should be: \", 278.546, \"km\") The distance is now returning the correct value of 278.545589351 km."} +{"question_id": 14162723, "score": 300, "creation_date": 1357323966, "tags": ["python", "pandas", "numpy", "mysql-python"], "instruction": "Replacing Pandas or Numpy Nan with a None to use with MysqlDB\n\nI am trying to write a Pandas dataframe (or can use a numpy array) to a mysql database using MysqlDB . MysqlDB doesn't seem understand 'nan' and my database throws out an error saying nan is not in the field list. I need to find a way to convert the 'nan' into a NoneType. Any ideas?", "output": "For pandas > 1.3.0 see this answer. @bogatron has it right, you can use where, it's worth noting that you can do this natively in pandas: df1 = df.where(pd.notnull(df), None) Note: this changes the dtype of all columns to object. 
Example: In [1]: df = pd.DataFrame([1, np.nan]) In [2]: df Out[2]: 0 0 1 1 NaN In [3]: df1 = df.where(pd.notnull(df), None) In [4]: df1 Out[4]: 0 0 1 1 None Note: what you cannot do recast the DataFrames dtype to allow all datatypes types, using astype, and then the DataFrame fillna method: df1 = df.astype(object).replace(np.nan, 'None') Unfortunately neither this, nor using replace, works with None see this (closed) issue. As an aside, it's worth noting that for most use cases you don't need to replace NaN with None, see this question about the difference between NaN and None in pandas. However, in this specific case it seems you do (at least at the time of this answer)."} +{"question_id": 287085, "score": 300, "creation_date": 1226586831, "tags": ["python"], "instruction": "What do *args and **kwargs mean?\n\nWhat exactly do *args and **kwargs mean? According to the Python documentation, from what it seems, it passes in a tuple of arguments. def foo(hello, *args): print(hello) for each in args: print(each) if __name__ == '__main__': foo(\"LOVE\", [\"lol\", \"lololol\"]) This prints out: LOVE ['lol', 'lololol'] How do you effectively use them?", "output": "Putting *args and/or **kwargs as the last items in your function definition\u2019s argument list allows that function to accept an arbitrary number of arguments and/or keyword arguments. For example, if you wanted to write a function that returned the sum of all its arguments, no matter how many you supply, you could write it like this: def my_sum(*args): return sum(args) It\u2019s probably more commonly used in object-oriented programming, when you\u2019re overriding a function, and want to call the original function with whatever arguments the user passes in. You don\u2019t actually have to call them args and kwargs, that\u2019s just a convention. It\u2019s the * and ** that do the magic. There's a more in-depth look in the official Python documentation on arbitrary argument lists."} +{"question_id": 579687, "score": 299, "creation_date": 1235428682, "tags": ["python", "windows", "interop", "clipboard"], "instruction": "How do I copy a string to the clipboard?\n\nI'm trying to make a basic Windows application that builds a string out of user input and then adds it to the clipboard. How do I copy a string to the clipboard using Python?", "output": "Actually, pywin32 and ctypes seem to be an overkill for this simple task. tkinter is a cross-platform GUI framework, which ships with Python by default and has clipboard accessing methods along with other cool stuff. If all you need is to put some text to system clipboard, this will do it: from tkinter import Tk # in Python 2, use \"Tkinter\" instead r = Tk() r.withdraw() r.clipboard_clear() r.clipboard_append('i can has clipboardz?') r.update() # now it stays on the clipboard after the window is closed r.destroy() And that's all, no need to mess around with platform-specific third-party libraries. If you are using Python 2, replace tkinter with Tkinter."} +{"question_id": 12589481, "score": 299, "creation_date": 1348599926, "tags": ["python", "pandas", "dataframe", "group-by", "aggregate"], "instruction": "Multiple aggregations of the same column using pandas GroupBy.agg()\n\nIs there a pandas built-in way to apply two different aggregating functions f1, f2 to the same column df[\"returns\"], without having to call agg() multiple times? 
Example dataframe: import pandas as pd import datetime as dt import numpy as np pd.np.random.seed(0) df = pd.DataFrame({ \"date\" : [dt.date(2012, x, 1) for x in range(1, 11)], \"returns\" : 0.05 * np.random.randn(10), \"dummy\" : np.repeat(1, 10) }) The syntactically wrong, but intuitively right, way to do it would be: # Assume `f1` and `f2` are defined for aggregating. df.groupby(\"dummy\").agg({\"returns\": f1, \"returns\": f2}) Obviously, Python doesn't allow duplicate keys. Is there any other manner for expressing the input to agg()? Perhaps a list of tuples [(column, function)] would work better, to allow multiple functions applied to the same column? But agg() seems like it only accepts a dictionary. Is there a workaround for this besides defining an auxiliary function that just applies both of the functions inside of it? (How would this work with aggregation anyway?)", "output": "As of 2022-06-20, the below is the accepted practice for aggregations: df.groupby('dummy').agg( Mean=('returns', np.mean), Sum=('returns', np.sum)) see this answer for more information. Below the fold included for historical versions of pandas. You can simply pass the functions as a list: In [20]: df.groupby(\"dummy\").agg({\"returns\": [np.mean, np.sum]}) Out[20]: mean sum dummy 1 0.036901 0.369012 or as a dictionary: In [21]: df.groupby('dummy').agg({'returns': {'Mean': np.mean, 'Sum': np.sum}}) Out[21]: returns Mean Sum dummy 1 0.036901 0.369012"} +{"question_id": 50757497, "score": 299, "creation_date": 1528450083, "tags": ["python", "python-3.x", "asynchronous", "async-await", "python-asyncio"], "instruction": "Simplest async/await example possible in Python\n\nI've read many examples, blog posts, questions/answers about asyncio / async / await in Python 3.5+, many were complex, the simplest I found was probably this one. Still it uses ensure_future, and for learning purposes about asynchronous programming in Python, I would like to see an even more minimal example, and what are the minimal tools necessary to do a basic async / await example. Question: is it possible to give a simple example showing how async / await works, by using only these two keywords + code to run the async loop + other Python code but no other asyncio functions? Example: something like this: import asyncio async def async_foo(): print(\"async_foo started\") await asyncio.sleep(5) print(\"async_foo done\") async def main(): asyncio.ensure_future(async_foo()) # fire and forget async_foo() print('Do some actions 1') await asyncio.sleep(5) print('Do some actions 2') loop = asyncio.get_event_loop() loop.run_until_complete(main()) but without ensure_future, and still demonstrates how await / async works.", "output": "To answer your questions, I will provide three different solutions to the same problem. 
Case 1: just normal Python import time def sleep(): print(f'Time: {time.time() - start:.2f}') time.sleep(1) def sum_(name, numbers): total = 0 for number in numbers: print(f'Task {name}: Computing {total}+{number}') sleep() total += number print(f'Task {name}: Sum = {total}\\n') start = time.time() tasks = [ sum_(\"A\", [1, 2]), sum_(\"B\", [1, 2, 3]), ] end = time.time() print(f'Time: {end-start:.2f} sec') Output: Task A: Computing 0+1 Time: 0.00 Task A: Computing 1+2 Time: 1.00 Task A: Sum = 3 Task B: Computing 0+1 Time: 2.00 Task B: Computing 1+2 Time: 3.00 Task B: Computing 3+3 Time: 4.00 Task B: Sum = 6 Time: 5.00 sec Case 2: async/await done wrong import asyncio import time async def sleep(): print(f'Time: {time.time() - start:.2f}') time.sleep(1) async def sum_(name, numbers): total = 0 for number in numbers: print(f'Task {name}: Computing {total}+{number}') await sleep() total += number print(f'Task {name}: Sum = {total}\\n') start = time.time() loop = asyncio.new_event_loop() tasks = [ loop.create_task(sum_(\"A\", [1, 2])), loop.create_task(sum_(\"B\", [1, 2, 3])), ] loop.run_until_complete(asyncio.wait(tasks)) loop.close() end = time.time() print(f'Time: {end-start:.2f} sec') Output: Task A: Computing 0+1 Time: 0.00 Task A: Computing 1+2 Time: 1.00 Task A: Sum = 3 Task B: Computing 0+1 Time: 2.00 Task B: Computing 1+2 Time: 3.00 Task B: Computing 3+3 Time: 4.00 Task B: Sum = 6 Time: 5.00 sec Case 3: async/await done right The same as case 2, except the sleep function: async def sleep(): print(f'Time: {time.time() - start:.2f}') await asyncio.sleep(1) Output: Task A: Computing 0+1 Time: 0.00 Task B: Computing 0+1 Time: 0.00 Task A: Computing 1+2 Time: 1.01 Task B: Computing 1+2 Time: 1.01 Task A: Sum = 3 Task B: Computing 3+3 Time: 2.01 Task B: Sum = 6 Time: 3.02 sec Case 1 and case 2 give the same 5 seconds, whereas case 3 just 3 seconds. So the async/await done right is faster. The reason for the difference is within the implementation of the sleep function. # Case 1 def sleep(): ... time.sleep(1) # Case 2 async def sleep(): ... time.sleep(1) # Case 3 async def sleep(): ... await asyncio.sleep(1) In case 1 and case 2, they are the \"same\": they \"sleep\" without allowing others to use the resources. Whereas in case 3, it allows access to the resources when it is asleep. In case 2, we added async to the normal function. However the event loop will run it without interruption. Why? Because we didn't say where the loop is allowed to interrupt your function to run another task. In case 3, we told the event loop exactly where to interrupt the function to run another task. Where exactly? Right here! await asyncio.sleep(1) For more on this, read here. Consider reading A Hitchhikers Guide to Asynchronous Programming Asyncio Futures and Coroutines"} +{"question_id": 46775346, "score": 299, "creation_date": 1508172918, "tags": ["python", "pip", "dependency-management"], "instruction": "What do square brackets mean in pip install?\n\n$ pip install \"splinter[django]\" What do these square brackets do?", "output": "The syntax that you are using is: pip install \"project[extra]\" In your case, you are installing the splinter package which has the added support for django. \u2022 pip install splinter django would install two packages named splinter and django. \u2022 pip install splinter[django], on the other hand, installs splinter, but it also installs optional dependencies defined by splinter using the keyword in the brackets. 
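These extras are declared by the package being installed, not by pip. Roughly, a setuptools-based project might declare them in its setup.py like this (an illustrative sketch, not splinter's actual configuration):

```python
# setup.py  (illustrative sketch of declaring optional "extras")
from setuptools import setup

setup(
    name="splinter",
    version="0.0.0",                 # placeholder version
    install_requires=["selenium"],   # always installed with the package
    extras_require={
        # `pip install "splinter[django]"` additionally pulls in these:
        "django": ["Django", "lxml", "cssselect"],
        "flask": ["Flask"],
    },
)
```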
In this case, as of 2024-05-15 it's Django, lxml and cssselect. Note that the keyword in brackets has nothing to do with the django package itself, but is just a string defined by the splinter package for a particular set of dependencies that also get installed. How the argument django is interpreted depends on the build system, but any setuptools-based build system (including most instances of setup.py) will likely just use them as a hook for optional dependencies. It's worth noting that the syntax supports using multiple keywords, e.g.: pip install \"splinter[django,flask,selenium]\" Kudos to @chepner for adding context in the comments."} +{"question_id": 37435369, "score": 298, "creation_date": 1464174630, "tags": ["python", "image", "matplotlib"], "instruction": "How to draw a rectangle on image\n\nHow to draw a rectangle on an image, like this: import matplotlib.pyplot as plt from PIL import Image import numpy as np im = np.array(Image.open('dog.png'), dtype=np.uint8) plt.imshow(im) To make it clear, I meant to draw a rectangle on top of the image for visualization, not to change the image data. So using matplotlib.patches.Patch would be the best option.", "output": "You can add a Rectangle patch to the matplotlib Axes. For example (using the image from the tutorial here): import matplotlib.pyplot as plt import matplotlib.patches as patches from PIL import Image im = Image.open('stinkbug.png') # Create figure and axes fig, ax = plt.subplots() # Display the image ax.imshow(im) # Create a Rectangle patch rect = patches.Rectangle((50, 100), 40, 30, linewidth=1, edgecolor='r', facecolor='none') # Add the patch to the Axes ax.add_patch(rect) plt.show()"} +{"question_id": 31037298, "score": 298, "creation_date": 1435180931, "tags": ["python", "pandas"], "instruction": "pandas get column average/mean\n\nI can't get the average or mean of a column in pandas. A have a dataframe. Neither of things I tried below gives me the average of the column weight >>> allDF ID birthyear weight 0 619040 1962 0.1231231 1 600161 1963 0.981742 2 25602033 1963 1.3123124 3 624870 1987 0.94212 The following returns several values, not one: allDF[['weight']].mean(axis=1) So does this: allDF.groupby('weight').mean()", "output": "If you only want the mean of the weight column, select the column (which is a Series) and call .mean(): In [479]: df Out[479]: ID birthyear weight 0 619040 1962 0.123123 1 600161 1963 0.981742 2 25602033 1963 1.312312 3 624870 1987 0.942120 In [480]: df.loc[:, 'weight'].mean() Out[480]: 0.83982437500000007"} +{"question_id": 18195758, "score": 297, "creation_date": 1376338012, "tags": ["python", "matplotlib", "colorbar", "imshow", "figsize"], "instruction": "Set Matplotlib colorbar size to match graph\n\nI cannot get the colorbar on imshow graphs like this one to be the same height as the graph, short of using Photoshop after the fact. How do I get the heights to match?", "output": "You can do this easily with a matplotlib AxisDivider. The example from the linked page also works without using subplots: import matplotlib.pyplot as plt from mpl_toolkits.axes_grid1 import make_axes_locatable import numpy as np plt.figure() ax = plt.gca() im = ax.imshow(np.arange(100).reshape((10,10))) # create an axes on the right side of ax. The width of cax will be 5% # of ax and the padding between cax and ax will be fixed at 0.05 inch. 
divider = make_axes_locatable(ax) cax = divider.append_axes(\"right\", size=\"5%\", pad=0.05) plt.colorbar(im, cax=cax)"} +{"question_id": 12145434, "score": 297, "creation_date": 1346083340, "tags": ["python", "jinja2", "templating-engine"], "instruction": "How to output loop.counter in python jinja template?\n\nI want to be able to output the current loop iteration to my template. According to the docs, there is a loop.counter variable that I am trying to use:
    {% for user in userlist %}
    {{ user }} {{loop.counter}}
    {% if loop.counter == 1 %} This is the First user {% endif %} {% endfor %}
But is being outputed to my template. What is the correct syntax?", "output": "The counter variable inside the loop is called loop.index in Jinja2. >>> from jinja2 import Template >>> s = \"{% for element in elements %}{{loop.index}} {% endfor %}\" >>> Template(s).render(elements=[\"a\", \"b\", \"c\", \"d\"]) 1 2 3 4 In addition to loop.index, there is also loop.index0 (index starting at 0) loop.revindex (reverse index; ending at 1) loop.revindex0 (reverse index; ending at 0) Even more at http://jinja.pocoo.org/docs/templates/."} +{"question_id": 7132861, "score": 297, "creation_date": 1313855588, "tags": ["python", "filenames"], "instruction": "How can I create a full path to a file from parts (e.g. path to the folder, name and extension)?\n\nI need to pass a file path name to a module. How do I build the file path from a directory name, base filename, and a file format string? The directory may or may not exist at the time of call. For example: dir_name='/home/me/dev/my_reports' base_filename='daily_report' format = 'pdf' I need to create a string '/home/me/dev/my_reports/daily_report.pdf' Concatenating the pieces manually doesn't seem to be a good way. I tried os.path.join: join(dir_name,base_filename,format) but it gives /home/me/dev/my_reports/daily_report/pdf", "output": "This works fine: os.path.join(dir_name, base_filename + '.' + filename_suffix) Keep in mind that os.path.join() exists only because different operating systems use different path separator characters. It smooths over that difference so cross-platform code doesn't have to be cluttered with special cases for each OS. There is no need to do this for file name \"extensions\" (see footnote) because they are always preceded by a dot character, on every OS that implements them. If using a function anyway makes you feel better (and you like needlessly complicating your code), you can do this: os.path.join(dir_name, '.'.join((base_filename, filename_suffix))) If you prefer to keep your code clean, simply include the dot in the suffix: suffix = '.pdf' os.path.join(dir_name, base_filename + suffix) That approach also happens to be compatible with the suffix conventions in pathlib, which was introduced in python 3.4 a few years after this question was asked. New code that doesn't require backward compatibility can do this: suffix = '.pdf' pathlib.PurePath(dir_name, base_filename + suffix) You might be tempted to use the shorter Path() instead of PurePath() if you're only handling paths for the local OS. I would question that choice, given the cross-platform issues behind the original question. Warning: Do not use pathlib's with_suffix() for this purpose. That method will corrupt base_filename if it ever contains a dot. Footnote: Outside of Microsoft operating systems, there is no such thing as a file name \"extension\". Its presence on Windows comes from MS-DOS and FAT, which borrowed it from CP/M, which has been dead for decades. That dot-plus-three-letters that many of us are accustomed to seeing is just part of the file name on every other modern OS, where it has no built-in meaning."} +{"question_id": 3548673, "score": 296, "creation_date": 1282574965, "tags": ["python", "filenames", "scons", "strip"], "instruction": "How can I replace (or strip) an extension from a filename in Python?\n\nIs there a built-in function in Python that would replace (or remove, whatever) the extension of a filename (if it has one)? 
Example: print replace_extension('/home/user/somefile.txt', '.jpg') In my example: /home/user/somefile.txt would become /home/user/somefile.jpg I don't know if it matters, but I need this for a SCons module I'm writing. (So perhaps there is some SCons specific function I can use ?) I'd like something clean. Doing a simple string replacement of all occurrences of .txt within the string is obviously not clean. (This would fail if my filename is somefile.txt.txt.txt)", "output": "Try os.path.splitext it should do what you want. import os print os.path.splitext('/home/user/somefile.txt')[0]+'.jpg' # /home/user/somefile.jpg os.path.splitext('/home/user/somefile.txt') # returns ('/home/user/somefile', '.txt')"} +{"question_id": 7558908, "score": 296, "creation_date": 1317058430, "tags": ["python", "list", "split", "tuples"], "instruction": "Unpacking a list / tuple of pairs into two lists / tuples\n\nI have a list that looks like this: my_list = [('1','a'),('2','b'),('3','c'),('4','d')] I want to separate the list in 2 lists. list1 = ['1','2','3','4'] list2 = ['a','b','c','d'] I can do it for example with: list1 = [] list2 = [] for i in list: list1.append(i[0]) list2.append(i[1]) But I want to know if there is a more elegant solution.", "output": ">>> source_list = [('1','a'),('2','b'),('3','c'),('4','d')] >>> list1, list2 = zip(*source_list) >>> list1 ('1', '2', '3', '4') >>> list2 ('a', 'b', 'c', 'd') Edit: Note that zip(*iterable) is its own inverse: >>> list(source_list) == zip(*zip(*source_list)) True When unpacking into two lists, this becomes: >>> list1, list2 = zip(*source_list) >>> list(source_list) == zip(list1, list2) True Addition suggested by rocksportrocker."} +{"question_id": 17873384, "score": 296, "creation_date": 1374815539, "tags": ["python", "list", "copy", "deep-copy"], "instruction": "How to deep copy a list?\n\nAfter E0_copy = list(E0), I guess E0_copy is a deep copy of E0 since id(E0) is not equal to id(E0_copy). Then I modify E0_copy in the loop, but why is E0 not the same after? E0 = [[1, 2, 3], [4, 5, 6], [7, 8, 9]] for k in range(3): E0_copy = list(E0) E0_copy[k][k] = 0 #print(E0_copy) print E0 # -> [[0, 2, 3], [4, 0, 6], [7, 8, 0]]", "output": "E0_copy is not a deep copy. You don't make a deep copy using list(). (Both list(...) and testList[:] are shallow copies, as well as testList.copy().) You use copy.deepcopy(...) for deep copying a list. copy.deepcopy(x[, memo]) Return a deep copy of x. See the following snippet - >>> a = [[1, 2, 3], [4, 5, 6]] >>> b = list(a) >>> a [[1, 2, 3], [4, 5, 6]] >>> b [[1, 2, 3], [4, 5, 6]] >>> a[0][1] = 10 >>> a [[1, 10, 3], [4, 5, 6]] >>> b # b changes too -> Not a deepcopy. [[1, 10, 3], [4, 5, 6]] Now see the deepcopy operation >>> import copy >>> b = copy.deepcopy(a) >>> a [[1, 10, 3], [4, 5, 6]] >>> b [[1, 10, 3], [4, 5, 6]] >>> a[0][1] = 9 >>> a [[1, 9, 3], [4, 5, 6]] >>> b # b doesn't change -> Deep Copy [[1, 10, 3], [4, 5, 6]] To explain, list(...) does not recursively make copies of the inner objects. It only makes a copy of the outermost list, while still referencing the same inner lists, hence, when you mutate the inner lists, the change is reflected in both the original list and the shallow copy. 
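A minimal interpreter session that makes the sharing visible (sketch):

```python
>>> a = [[1, 2, 3], [4, 5, 6]]
>>> b = list(a)              # shallow copy: a new outer list...
>>> id(a) == id(b)
False
>>> id(a[0]) == id(b[0])     # ...that still refers to the very same inner lists
True
```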
You can see that shallow copying references the inner lists by checking that id(a[0]) == id(b[0]) where b = list(a)."} +{"question_id": 2161752, "score": 295, "creation_date": 1264766900, "tags": ["python", "list", "frequency"], "instruction": "How to count the frequency of the elements in an unordered list?\n\nGiven an unordered list of values like a = [5, 1, 2, 2, 4, 3, 1, 2, 3, 1, 1, 5, 2] How can I get the frequency of each value that appears in the list, like so? # `a` has 4 instances of `1`, 4 of `2`, 2 of `3`, 1 of `4,` 2 of `5` b = [4, 4, 2, 1, 2] # expected output", "output": "In Python 2.7 (or newer), you can use collections.Counter: >>> import collections >>> a = [5, 1, 2, 2, 4, 3, 1, 2, 3, 1, 1, 5, 2] >>> counter = collections.Counter(a) >>> counter Counter({1: 4, 2: 4, 5: 2, 3: 2, 4: 1}) >>> counter.values() dict_values([2, 4, 4, 1, 2]) >>> counter.keys() dict_keys([5, 1, 2, 4, 3]) >>> counter.most_common(3) [(1, 4), (2, 4), (5, 2)] >>> dict(counter) {5: 2, 1: 4, 2: 4, 4: 1, 3: 2} >>> # Get the counts in order matching the original specification, >>> # by iterating over keys in sorted order >>> [counter[x] for x in sorted(counter.keys())] [4, 4, 2, 1, 2] If you are using Python 2.6 or older, you can download an implementation here."} +{"question_id": 37372603, "score": 295, "creation_date": 1463909465, "tags": ["python"], "instruction": "How to remove specific substrings from a set of strings in Python?\n\nI have a set of strings and all the strings have one of two specific substrings which I want to remove: set1 = {'Apple.good', 'Orange.good', 'Pear.bad', 'Pear.good', 'Banana.bad', 'Potato.bad'} I want the \".good\" and \".bad\" substrings removed from all the strings. I tried this: for x in set1: x.replace('.good', '') x.replace('.bad', '') but it doesn't seem to work, set1 stays exactly the same. I tried using for x in list(set1) instead but that doesn't change anything.", "output": "Strings are immutable. str.replace creates a new string. This is stated in the documentation: str.replace(old, new[, count]) Return a copy of the string with all occurrences of substring old replaced by new. [...] This means you have to re-allocate the set or re-populate it (re-allocating is easier with a set comprehension): new_set = {x.replace('.good', '').replace('.bad', '') for x in set1} P.S. if you're using Python 3.9 or newer, use str.removeprefix() or str.removesuffix() instead: new_set = {x.removesuffix('.good').removesuffix('.bad') for x in set1}"} +{"question_id": 21562986, "score": 294, "creation_date": 1391546639, "tags": ["python", "arrays", "numpy", "vector", "matrix"], "instruction": "numpy matrix vector multiplication\n\nWhen I multiply two numpy arrays of sizes (n x n)*(n x 1), I get a matrix of size (n x n). Following normal matrix multiplication rules, an (n x 1) vector is expected, but I simply cannot find any information about how this is done in Python's Numpy module. The thing is that I don't want to implement it manually to preserve the speed of the program. Example code is shown below: a = np.array([[5, 1, 3], [1, 1, 1], [1, 2, 1]]) b = np.array([1, 2, 3]) print a*b >> [[5 2 9] [1 2 3] [1 4 3]] What I want is: print a*b >> [16 6 8]", "output": "Simplest solution Use numpy.dot or a.dot(b). See the documentation here. >>> a = np.array([[ 5, 1 ,3], [ 1, 1 ,1], [ 1, 2 ,1]]) >>> b = np.array([1, 2, 3]) >>> print a.dot(b) array([16, 6, 8]) This occurs because numpy arrays are not matrices, and the standard operations *, +, -, / work element-wise on arrays. 
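To make that concrete with the arrays from the question (a quick illustrative check):

```python
import numpy as np

a = np.array([[5, 1, 3], [1, 1, 1], [1, 2, 1]])
b = np.array([1, 2, 3])

print(a * b)     # element-wise: b is broadcast against every row of a
# [[5 2 9]
#  [1 2 3]
#  [1 4 3]]

print(a.dot(b))  # true matrix-vector product
# [16  6  8]
```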
Note that while you can use numpy.matrix (as of early 2021) where * will be treated like standard matrix multiplication, numpy.matrix is deprecated and may be removed in future releases.. See the note in its documentation (reproduced below): It is no longer recommended to use this class, even for linear algebra. Instead use regular arrays. The class may be removed in the future. Thanks @HopeKing. Other Solutions Also know there are other options: As noted below, if using python3.5+ and numpy v1.10+, the @ operator works as you'd expect: >>> print(a @ b) array([16, 6, 8]) If you want overkill, you can use numpy.einsum. The documentation will give you a flavor for how it works, but honestly, I didn't fully understand how to use it until reading this answer and just playing around with it on my own. >>> np.einsum('ji,i->j', a, b) array([16, 6, 8]) As of mid 2016 (numpy 1.10.1), you can try the experimental numpy.matmul, which works like numpy.dot with two major exceptions: no scalar multiplication but it works with stacks of matrices. >>> np.matmul(a, b) array([16, 6, 8]) numpy.inner functions the same way as numpy.dot for matrix-vector multiplication but behaves differently for matrix-matrix and tensor multiplication (see Wikipedia regarding the differences between the inner product and dot product in general or see this SO answer regarding numpy's implementations). >>> np.inner(a, b) array([16, 6, 8]) # Beware using for matrix-matrix multiplication though! >>> b = a.T >>> np.dot(a, b) array([[35, 9, 10], [ 9, 3, 4], [10, 4, 6]]) >>> np.inner(a, b) array([[29, 12, 19], [ 7, 4, 5], [ 8, 5, 6]]) If you have multiple 2D arrays to dot together, you may consider the np.linalg.multi_dot function, which simplifies the syntax of many nested np.dots. Note that this only works with 2D arrays (i.e. not for matrix-vector multiplication). >>> np.dot(np.dot(a, a.T), a).dot(a.T) array([[1406, 382, 446], [ 382, 106, 126], [ 446, 126, 152]]) >>> np.linalg.multi_dot((a, a.T, a, a.T)) array([[1406, 382, 446], [ 382, 106, 126], [ 446, 126, 152]]) Rarer options for edge cases If you have tensors (arrays of dimension greater than or equal to one), you can use numpy.tensordot with the optional argument axes=1: >>> np.tensordot(a, b, axes=1) array([16, 6, 8]) Don't use numpy.vdot if you have a matrix of complex numbers, as the matrix will be flattened to a 1D array, then it will try to find the complex conjugate dot product between your flattened matrix and vector (which will fail due to a size mismatch n*m vs n)."} +{"question_id": 14050281, "score": 294, "creation_date": 1356589042, "tags": ["python", "python-import"], "instruction": "How to check if a Python module exists without importing it\n\nHow can I know if a Python module exists, without importing it? Importing something that might not exist (not what I want) results in: try: import eggs except ImportError: pass", "output": "TL;DR) Use importlib.util.find_spec(module_name) (Python 3.4+). 
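For example, a small helper built on that TL;DR (a sketch assuming Python 3.4+; the version-specific details follow below):

```python
import importlib.util

def module_exists(name: str) -> bool:
    """Report whether top-level module `name` could be imported, without importing it."""
    return importlib.util.find_spec(name) is not None

print(module_exists("json"))  # True: part of the standard library
print(module_exists("eggs"))  # False, unless an `eggs` module is actually installed
```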
Python2: imp.find_module To check if import can find something in Python 2, using imp: import imp try: imp.find_module('eggs') found = True except ImportError: found = False To find dotted imports, you need to do more: import imp try: spam_info = imp.find_module('spam') spam = imp.load_module('spam', *spam_info) imp.find_module('eggs', spam.__path__) # __path__ is already a list found = True except ImportError: found = False You can also use pkgutil.find_loader (more or less the same as the Python 3 part: import pkgutil eggs_loader = pkgutil.find_loader('eggs') found = eggs_loader is not None Python 3 Python 3 \u2264 3.3: importlib.find_loader You should use importlib. I went about doing this like: import importlib spam_loader = importlib.find_loader('spam') found = spam_loader is not None My expectation being, if you can find a loader for it, then it exists. You can also be a bit more smart about it, like filtering out what loaders you will accept. For example: import importlib spam_loader = importlib.find_loader('spam') # only accept it as valid if there is a source file for the module - no bytecode only. found = issubclass(type(spam_loader), importlib.machinery.SourceFileLoader) Python 3 \u2265 3.4: importlib.util.find_spec In Python 3.4 importlib.find_loader Python documentation was deprecated in favour of importlib.util.find_spec. The recommended method is the importlib.util.find_spec. There are others like importlib.machinery.FileFinder, which is useful if you're after a specific file to load. Figuring out how to use them is beyond the scope of this. import importlib.util spam_spec = importlib.util.find_spec(\"spam\") found = spam_spec is not None This also works with relative imports, but you must supply the starting package, so you could also do: import importlib.util spam_spec = importlib.util.find_spec(\"..spam\", package=\"eggs.bar\") found = spam_spec is not None spam_spec.name == \"eggs.spam\" While I'm sure there exists a reason for doing this - I'm not sure what it would be. Warning When trying to find a submodule, it will import the parent module (for ALL of the above methods)! food/ |- __init__.py |- eggs.py ## __init__.py print(\"module food loaded\") ## eggs.py print(\"module eggs\") were you then to run >>> import importlib >>> spam_spec = importlib.util.find_spec(\"food.eggs\") module food loaded ModuleSpec(name='food.eggs', loader=<_frozen_importlib.SourceFileLoader object at 0x10221df28>, origin='/home/user/food/eggs.py') Comments are welcome on getting around this Acknowledgements @rvighne for importlib @lucas-guido for Python 3.3+ deprecating find_loader @enpenax for pkgutils.find_loader behaviour in Python 2.7"} +{"question_id": 774824, "score": 293, "creation_date": 1240351195, "tags": ["python", "setuptools"], "instruction": "Explain Python entry points?\n\nI've read the documentation on egg entry points in Pylons and on the Peak pages, and I still don't really understand. Could someone explain them to me?", "output": "An \"entry point\" is typically a function (or other callable function-like object) that a developer or user of your Python package might want to use, though a non-callable object can be supplied as an entry point as well (as correctly pointed out in the comments!). The most popular kind of entry point is the console_scripts entry point, which points to a function that you want made available as a command-line tool to whoever installs your package. 
This goes into your setup.py script like: entry_points={ 'console_scripts': [ 'cursive = cursive.tools.cmd:cursive_command', ], }, I have a package I've just deployed called cursive.tools, and I wanted it to make available a \"cursive\" command that someone could run from the command line, like: $ cursive --help usage: cursive ... The way to do this is define a function, like maybe a cursive_command function in the file cursive/tools/cmd.py that looks like: def cursive_command(): args = sys.argv[1:] if len(args) < 1: print \"usage: ...\" and so forth; it should assume that it's been called from the command line, parse the arguments that the user has provided, and ... well, do whatever the command is designed to do. Install the docutils package for a great example of entry-point use: it will install something like a half-dozen useful commands for converting Python documentation to other formats."} +{"question_id": 17097643, "score": 292, "creation_date": 1371159806, "tags": ["python", "pandas", "contains"], "instruction": "Search for \"does-not-contain\" on a DataFrame in pandas\n\nI've done some searching and can't figure out how to filter a dataframe by df[\"col\"].str.contains(word) however I'm wondering if there is a way to do the reverse: filter a dataframe by that set's compliment. eg: to the effect of !(df[\"col\"].str.contains(word)) Can this be done through a DataFrame method?", "output": "You can use the invert (~) operator (which acts like a not for boolean data): new_df = df[~df[\"col\"].str.contains(word)] where new_df is the copy returned by RHS. contains also accepts a regular expression... If the above throws a ValueError or TypeError, the reason is likely because you have mixed datatypes, so use na=False: new_df = df[~df[\"col\"].str.contains(word, na=False)] Or, new_df = df[df[\"col\"].str.contains(word) == False]"} +{"question_id": 41986507, "score": 291, "creation_date": 1485971826, "tags": ["python", "python-3.x", "ubuntu", "installation", "ubuntu-16.04"], "instruction": "Unable to set default python version to python3 in ubuntu\n\nI was trying to set default python version to python3 in Ubuntu 16.04. By default it is python2 (2.7). I followed below steps : update-alternatives --remove python /usr/bin/python2 update-alternatives --install /usr/bin/python python /usr/bin/python3 but I'm getting the following error for the second statement, rejeesh@rejeesh-Vostro-1015:~$ update-alternatives --install /usr/bin/python python /usr/bin/python3 update-alternatives: --install needs Use 'update-alternatives --help' for program usage information.", "output": "The second line mentioned can be changed to [sudo] update-alternatives --install /usr/bin/python python /usr/bin/python3 10 This gives a priority of 10 for the path of python3. 
The disadvantage of alternatively editing .bashrc is that using the commands with sudo will not work."} +{"question_id": 10079216, "score": 291, "creation_date": 1334002495, "tags": ["python"], "instruction": "Skip first entry in for loop in python?\n\nIn python, How do I do something like: for car in cars: # Skip first and last, do work for rest", "output": "To skip the first element in Python you can simply write for car in cars[1:]: # Do What Ever you want or to skip the last elem for car in cars[:-1]: # Do What Ever you want You can use this concept for any sequence (not for any iterable though)."} +{"question_id": 21017698, "score": 291, "creation_date": 1389263508, "tags": ["python", "python-3.x"], "instruction": "Why does \"bytes(n)\" create a length n byte string instead of converting n to a binary representation?\n\nI was trying to build this bytes object in Python 3: b'3\\r\\n' so I tried the obvious (for me), and found a weird behaviour: >>> bytes(3) + b'\\r\\n' b'\\x00\\x00\\x00\\r\\n' Apparently: >>> bytes(10) b'\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00' I've been unable to see any pointers on why the bytes conversion works this way reading the documentation. However, I did find some surprise messages in this Python issue about adding format to bytes (see also Python 3 bytes formatting): http://bugs.python.org/issue3982 This interacts even more poorly with oddities like bytes(int) returning zeroes now and: It would be much more convenient for me if bytes(int) returned the ASCIIfication of that int; but honestly, even an error would be better than this behavior. (If I wanted this behavior - which I never have - I'd rather it be a classmethod, invoked like \"bytes.zeroes(n)\".) Can someone explain me where this behaviour comes from?", "output": "Python 3.5+ introduces %-interpolation (printf-style formatting) for bytes: >>> b'%d\\r\\n' % 3 b'3\\r\\n' See PEP 0461 -- Adding % formatting to bytes and bytearray. On earlier versions, you could use str and .encode('ascii') the result: >>> s = '%d\\r\\n' % 3 >>> s.encode('ascii') b'3\\r\\n' Note: It is different from what int.to_bytes produces: >>> n = 3 >>> n.to_bytes((n.bit_length() + 7) // 8, 'big') or b'\\0' b'\\x03' >>> b'3' == b'\\x33' != b'\\x03' True"} +{"question_id": 32444138, "score": 291, "creation_date": 1441649622, "tags": ["python", "pandas", "dataframe", "concatenation"], "instruction": "Concatenate a list of pandas dataframes together\n\nI have a list of Pandas dataframes that I would like to combine into one Pandas dataframe. I am using Python 2.7.10 and Pandas 0.16.2 I created the list of dataframes from: import pandas as pd dfs = [] sqlall = \"select * from mytable\" for chunk in pd.read_sql_query(sqlall , cnxn, chunksize=10000): dfs.append(chunk) This returns a list of dataframes type(dfs[0]) Out[6]: pandas.core.frame.DataFrame type(dfs) Out[7]: list len(dfs) Out[8]: 408 Here is some sample data # sample dataframes d1 = pd.DataFrame({'one' : [1., 2., 3., 4.], 'two' : [4., 3., 2., 1.]}) d2 = pd.DataFrame({'one' : [5., 6., 7., 8.], 'two' : [9., 10., 11., 12.]}) d3 = pd.DataFrame({'one' : [15., 16., 17., 18.], 'two' : [19., 10., 11., 12.]}) # list of dataframes mydfs = [d1, d2, d3] I would like to combine d1, d2, and d3 into one pandas dataframe. 
Alternatively, a method of reading a large-ish table directly into a dataframe when using the chunksize option would be very helpful.", "output": "Given that all the dataframes have the same columns, you can simply concat them: import pandas as pd df = pd.concat(list_of_dataframes)"} +{"question_id": 5960337, "score": 291, "creation_date": 1305096564, "tags": ["python", "properties", "abstract-class", "decorator", "abstract-methods"], "instruction": "How to create abstract properties in python abstract classes?\n\nIn the following code, I create a base abstract class Base. I want all the classes that inherit from Base to provide the name property, so I made this property an @abstractmethod. Then I created a subclass of Base, called Base_1, which is meant to supply some functionality, but still remain abstract. There is no name property in Base_1, but nevertheless python instatinates an object of that class without an error. How does one create abstract properties? from abc import ABCMeta, abstractmethod class Base(object): # class Base(metaclass = ABCMeta): <- Python 3 __metaclass__ = ABCMeta def __init__(self, str_dir_config): self.str_dir_config = str_dir_config @abstractmethod def _do_stuff(self, signals): pass @property @abstractmethod def name(self): \"\"\"This property will be supplied by the inheriting classes individually. \"\"\" pass class Base1(Base): __metaclass__ = ABCMeta \"\"\"This class does not provide the name property and should raise an error. \"\"\" def __init__(self, str_dir_config): super(Base1, self).__init__(str_dir_config) # super().__init__(str_dir_config) <- Python 3 def _do_stuff(self, signals): print \"Base_1 does stuff\" # print(\"Base_1 does stuff\") <- Python 3 class C(Base1): @property def name(self): return \"class C\" if __name__ == \"__main__\": b1 = Base1(\"abc\")", "output": "Since Python 3.3 a bug was fixed meaning the property() decorator is now correctly identified as abstract when applied to an abstract method. Note: Order matters, you have to use @property above @abstractmethod Python 3.3+: (python docs): from abc import ABC, abstractmethod class C(ABC): @property @abstractmethod def my_abstract_property(self): ... Python 2: (python docs) from abc import ABCMeta, abstractproperty class C: __metaclass__ = ABCMeta @abstractproperty def my_abstract_property(self): ..."} +{"question_id": 11974318, "score": 289, "creation_date": 1345052598, "tags": ["python", "jinja2"], "instruction": "How to output a comma delimited list in jinja python template?\n\nIf I have a list of users say [\"Sam\", \"Bob\", \"Joe\"], I want to do something where I can output in my jinja template file: {% for user in userlist %} {{ user }} {% if !loop.last %} , {% endif %} {% endfor %} I want to make the output template be: Sam, Bob, Joe I tried the above code to check if it was on the last iteration of the loop and if not, then don't insert a comma, but it does not work. 
How do I do this?", "output": "You want your if check to be: {% if not loop.last %} , {% endif %} Note that you can also shorten the code by using If Expression: {{ \", \" if not loop.last else \"\" }}"} +{"question_id": 18169965, "score": 289, "creation_date": 1376208170, "tags": ["python", "time", "python-3.x"], "instruction": "How to delete last item in list?\n\nI have this program that calculates the time taken to answer a specific question, and quits out of the while loop when answer is incorrect, but i want to delete the last calculation, so i can call min() and it not be the wrong time, sorry if this is confusing. from time import time q = input('What do you want to type? ') a = ' ' record = [] while a != '': start = time() a = input('Type: ') end = time() v = end-start record.append(v) if a == q: print('Time taken to type name: {:.2f}'.format(v)) else: break for i in record: print('{:.2f} seconds.'.format(i))", "output": "If I understood the question correctly, you can use the slicing notation to keep everything except the last item: record = record[:-1] But a better way is to delete the item directly: del record[-1] Note 1: Note that using record = record[:-1] does not really remove the last element, but assign the sublist to record. This makes a difference if you run it inside a function and record is a parameter. With record = record[:-1] the original list (outside the function) is unchanged, with del record[-1] or record.pop() the list is changed. (as stated by @pltrdy in the comments) Note 2: The code could use some Python idioms. I highly recommend reading this: Code Like a Pythonista: Idiomatic Python (via wayback machine archive)."} +{"question_id": 20067636, "score": 289, "creation_date": 1384853074, "tags": ["python", "pandas", "dataframe", "group-by", "row"], "instruction": "Pandas dataframe get first row of each group\n\nI have a pandas DataFrame like following: df = pd.DataFrame({'id' : [1,1,1,2,2,3,3,3,3,4,4,5,6,6,6,7,7], 'value' : [\"first\",\"second\",\"second\",\"first\", \"second\",\"first\",\"third\",\"fourth\", \"fifth\",\"second\",\"fifth\",\"first\", \"first\",\"second\",\"third\",\"fourth\",\"fifth\"]}) I want to group this by [\"id\",\"value\"] and get the first row of each group: id value 0 1 first 1 1 second 2 1 second 3 2 first 4 2 second 5 3 first 6 3 third 7 3 fourth 8 3 fifth 9 4 second 10 4 fifth 11 5 first 12 6 first 13 6 second 14 6 third 15 7 fourth 16 7 fifth Expected outcome: id value 1 first 2 first 3 first 4 second 5 first 6 first 7 fourth I tried following, which only gives the first row of the DataFrame. In [25]: for index, row in df.iterrows(): ....: df2 = pd.DataFrame(df.groupby(['id','value']).reset_index().ix[0])", "output": "Use .first() to get the first (non-null) element. >>> df.groupby('id').first() value id 1 first 2 first 3 first 4 second 5 first 6 first 7 fourth If you need id as column: >>> df.groupby('id').first().reset_index() id value 0 1 first 1 2 first 2 3 first 3 4 second 4 5 first 5 6 first 6 7 fourth To get first n records, you can use .head(): >>> df.groupby('id').head(2).reset_index(drop=True) id value 0 1 first 1 1 second 2 2 first 3 2 second 4 3 first 5 3 third 6 4 second 7 4 fifth 8 5 first 9 6 first 10 6 second 11 7 fourth 12 7 fifth"} +{"question_id": 8897593, "score": 289, "creation_date": 1326815469, "tags": ["python", "nlp"], "instruction": "How to compute the similarity between two text documents?\n\nI want to take two documents and determine how similar they are. 
Any programming language if fine but I prefer Python.", "output": "The common way of doing this is to transform the documents into TF-IDF vectors and then compute the cosine similarity between them. Any textbook on information retrieval (IR) covers this. See esp. Introduction to Information Retrieval, which is free and available online. Computing Pairwise Similarities TF-IDF (and similar text transformations) are implemented in the Python packages Gensim and scikit-learn. In the latter package, computing cosine similarities is as easy as from sklearn.feature_extraction.text import TfidfVectorizer documents = [open(f).read() for f in text_files] tfidf = TfidfVectorizer().fit_transform(documents) # no need to normalize, since Vectorizer will return normalized tf-idf pairwise_similarity = tfidf * tfidf.T or, if the documents are plain strings, >>> corpus = [\"I'd like an apple\", ... \"An apple a day keeps the doctor away\", ... \"Never compare an apple to an orange\", ... \"I prefer scikit-learn to Orange\", ... \"The scikit-learn docs are Orange and Blue\"] >>> vect = TfidfVectorizer(min_df=1, stop_words=\"english\") >>> tfidf = vect.fit_transform(corpus) >>> pairwise_similarity = tfidf * tfidf.T though Gensim may have more options for this kind of task. See also this question. [Disclaimer: I was involved in the scikit-learn TF-IDF implementation.] Interpreting the Results From above, pairwise_similarity is a Scipy sparse matrix that is square in shape, with the number of rows and columns equal to the number of documents in the corpus. >>> pairwise_similarity <5x5 sparse matrix of type '' with 17 stored elements in Compressed Sparse Row format> You can convert the sparse array to a NumPy array via .toarray() or .A: >>> pairwise_similarity.toarray() array([[1. , 0.17668795, 0.27056873, 0. , 0. ], [0.17668795, 1. , 0.15439436, 0. , 0. ], [0.27056873, 0.15439436, 1. , 0.19635649, 0.16815247], [0. , 0. , 0.19635649, 1. , 0.54499756], [0. , 0. , 0.16815247, 0.54499756, 1. ]]) Let's say we want to find the document most similar to the final document, \"The scikit-learn docs are Orange and Blue\". This document has index 4 in corpus. You can find the index of the most similar document by taking the argmax of that row, but first you'll need to mask the 1's, which represent the similarity of each document to itself. You can do the latter through np.fill_diagonal(), and the former through np.nanargmax(): >>> import numpy as np >>> arr = pairwise_similarity.toarray() >>> np.fill_diagonal(arr, np.nan) >>> input_doc = \"The scikit-learn docs are Orange and Blue\" >>> input_idx = corpus.index(input_doc) >>> input_idx 4 >>> result_idx = np.nanargmax(arr[input_idx]) >>> corpus[result_idx] 'I prefer scikit-learn to Orange' Note: the purpose of using a sparse matrix is to save (a substantial amount of space) for a large corpus & vocabulary. 
Instead of converting to a NumPy array, you could do: >>> n, _ = pairwise_similarity.shape >>> pairwise_similarity[np.arange(n), np.arange(n)] = -1.0 >>> pairwise_similarity[input_idx].argmax() 3"} +{"question_id": 3086973, "score": 289, "creation_date": 1277141773, "tags": ["python", "csv", "dictionary", "data-conversion"], "instruction": "How do I convert this list of dictionaries to a csv file?\n\nI have a list of dictionaries that looks something like this: toCSV = [{'name':'bob','age':25,'weight':200},{'name':'jim','age':31,'weight':180}] What should I do to convert this to a csv file that looks something like this: name,age,weight bob,25,200 jim,31,180", "output": "import csv to_csv = [ {'name': 'bob', 'age': 25, 'weight': 200}, {'name': 'jim', 'age': 31, 'weight': 180}, ] keys = to_csv[0].keys() with open('people.csv', 'w', newline='') as output_file: dict_writer = csv.DictWriter(output_file, keys) dict_writer.writeheader() dict_writer.writerows(to_csv)"} +{"question_id": 58584413, "score": 287, "creation_date": 1572220768, "tags": ["python", "code-formatting", "python-black"], "instruction": "Black formatter - Ignore specific multi-line code\n\nI would like to ignore a specific multi-line code by black python formatter. Particularly, this is used for np.array or matrix construction which turned ugly when formatted. Below is the example. np.array( [ [1, 0, 0, 0], [0, -1, 0, 0], [0, 0, 1, 0], [0, 0, 0, -1], ] ) # Will be formatted to np.array([[1, 0, 0, 0], [0, -1, 0, 0], [0, 0, 1, 0], [0, 0, 0, -1]]) I found this issue in black github, but that only works for inline command, which is not what I have here. Is there anything I can do to achieve this for a multi-line code?", "output": "You can use #fmt: on/off (docs) as explained in the issue linked. Here, it would look like: # fmt: off np.array( [ [1, 0, 0, 0], [0, -1, 0, 0], [0, 0, 1, 0], [0, 0, 0, -1], ] ) # fmt: on # fmt: off disables formatting for all following lines until re-activated with # fmt: on."} +{"question_id": 9744775, "score": 287, "creation_date": 1331933502, "tags": ["python", "datetime", "timestamp"], "instruction": "How to convert integer timestamp into a datetime\n\nI have a data file containing timestamps like \"1331856000000\". Unfortunately, I don't have a lot of documentation for the format, so I'm not sure how the timestamp is formatted. I've tried Python's standard datetime.fromordinal() and datetime.fromtimestamp() and a few others, but nothing matches. I'm pretty sure that particular number corresponds to the current date (e.g. 2012-3-16), but not much more. How do I convert this number to a datetime?", "output": "datetime.datetime.fromtimestamp() is correct, except you are probably having timestamp in miliseconds (like in JavaScript), but fromtimestamp() expects Unix timestamp, in seconds. Do it like that: >>> import datetime >>> your_timestamp = 1331856000000 >>> date = datetime.datetime.fromtimestamp(your_timestamp / 1e3) and the result is: >>> date datetime.datetime(2012, 3, 16, 1, 0) Does it answer your question? EDIT: jfs correctly suggested in a now-deleted comment to use true division by 1e3 (float 1000). The difference is significant, if you would like to get precise results, thus I changed my answer. The difference results from the default behaviour of Python 2.x, which always returns int when dividing (using / operator) int by int (this is called floor division). 
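To illustrate the difference with a made-up millisecond timestamp (the exact value is arbitrary):

```python
# Under Python 2.x division rules:
>>> 1331856000123 / 1000     # int / int -> floor division, the fraction is dropped
1331856000
>>> 1331856000123 / 1e3      # int / float -> true division, keeps the .123 seconds
1331856000.123
```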
By replacing the divisor 1000 (being an int) with the 1e3 divisor (being representation of 1000 as float) or with float(1000) (or 1000. etc.), the division becomes true division. Python 2.x returns float when dividing int by float, float by int, float by float etc. And when there is some fractional part in the timestamp passed to fromtimestamp() method, this method's result also contains information about that fractional part (as the number of microseconds)."} +{"question_id": 21458387, "score": 287, "creation_date": 1391089946, "tags": ["python", "django", "unit-testing", "django-signals"], "instruction": "TransactionManagementError \"You can't execute queries until the end of the 'atomic' block\" while using signals, but only during Unit Testing\n\nI am getting TransactionManagementError when trying to save a Django User model instance and in its post_save signal, I'm saving some models that have the user as the foreign key. The context and error is pretty similar to this question django TransactionManagementError when using signals However, in this case, the error occurs only while unit testing. It works well in manual testing, but unit tests fails. Is there anything that I'm missing? Here are the code snippets: views.py @csrf_exempt def mobileRegister(request): if request.method == 'GET': response = {\"error\": \"GET request not accepted!!\"} return HttpResponse(json.dumps(response), content_type=\"application/json\",status=500) elif request.method == 'POST': postdata = json.loads(request.body) try: # Get POST data which is to be used to save the user username = postdata.get('phone') password = postdata.get('password') email = postdata.get('email',\"\") first_name = postdata.get('first_name',\"\") last_name = postdata.get('last_name',\"\") user = User(username=username, email=email, first_name=first_name, last_name=last_name) user._company = postdata.get('company',None) user._country_code = postdata.get('country_code',\"+91\") user.is_verified=True user._gcm_reg_id = postdata.get('reg_id',None) user._gcm_device_id = postdata.get('device_id',None) # Set Password for the user user.set_password(password) # Save the user user.save() signal.py def create_user_profile(sender, instance, created, **kwargs): if created: company = None companycontact = None try: # Try to make userprofile with company and country code provided user = User.objects.get(id=instance.id) rand_pass = random.randint(1000, 9999) company = Company.objects.get_or_create(name=instance._company,user=user) companycontact = CompanyContact.objects.get_or_create(contact_type=\"Owner\",company=company,contact_number=instance.username) profile = UserProfile.objects.get_or_create(user=instance,phone=instance.username,verification_code=rand_pass,company=company,country_code=instance._country_code) gcmDevice = GCMDevice.objects.create(registration_id=instance._gcm_reg_id,device_id=instance._gcm_reg_id,user=instance) except Exception, e: pass tests.py class AuthTestCase(TestCase): fixtures = ['nextgencatalogs/fixtures.json'] def setUp(self): self.user_data={ \"phone\":\"0000000000\", \"password\":\"123\", \"first_name\":\"Gaurav\", \"last_name\":\"Toshniwal\" } def test_registration_api_get(self): response = self.client.get(\"/mobileRegister/\") self.assertEqual(response.status_code,500) def test_registration_api_post(self): response = self.client.post(path=\"/mobileRegister/\", data=json.dumps(self.user_data), content_type=\"application/json\") self.assertEqual(response.status_code,201) 
self.user_data['username']=self.user_data['phone'] user = User.objects.get(username=self.user_data['username']) # Check if the company was created company = Company.objects.get(user__username=self.user_data['phone']) self.assertIsInstance(company,Company) # Check if the owner's contact is the same as the user's phone number company_contact = CompanyContact.objects.get(company=company,contact_type=\"owner\") self.assertEqual(user.username,company_contact[0].contact_number) Traceback: ====================================================================== ERROR: test_registration_api_post (nextgencatalogs.apps.catalogsapp.tests.AuthTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File \"/Users/gauravtoshniwal1989/Developer/Web/Server/ngc/nextgencatalogs/apps/catalogsapp/tests.py\", line 29, in test_registration_api_post user = User.objects.get(username=self.user_data['username']) File \"/Users/gauravtoshniwal1989/Developer/Web/Server/ngc/ngcvenv/lib/python2.7/site-packages/django/db/models/manager.py\", line 151, in get return self.get_queryset().get(*args, **kwargs) File \"/Users/gauravtoshniwal1989/Developer/Web/Server/ngc/ngcvenv/lib/python2.7/site-packages/django/db/models/query.py\", line 301, in get num = len(clone) File \"/Users/gauravtoshniwal1989/Developer/Web/Server/ngc/ngcvenv/lib/python2.7/site-packages/django/db/models/query.py\", line 77, in __len__ self._fetch_all() File \"/Users/gauravtoshniwal1989/Developer/Web/Server/ngc/ngcvenv/lib/python2.7/site-packages/django/db/models/query.py\", line 854, in _fetch_all self._result_cache = list(self.iterator()) File \"/Users/gauravtoshniwal1989/Developer/Web/Server/ngc/ngcvenv/lib/python2.7/site-packages/django/db/models/query.py\", line 220, in iterator for row in compiler.results_iter(): File \"/Users/gauravtoshniwal1989/Developer/Web/Server/ngc/ngcvenv/lib/python2.7/site-packages/django/db/models/sql/compiler.py\", line 710, in results_iter for rows in self.execute_sql(MULTI): File \"/Users/gauravtoshniwal1989/Developer/Web/Server/ngc/ngcvenv/lib/python2.7/site-packages/django/db/models/sql/compiler.py\", line 781, in execute_sql cursor.execute(sql, params) File \"/Users/gauravtoshniwal1989/Developer/Web/Server/ngc/ngcvenv/lib/python2.7/site-packages/django/db/backends/util.py\", line 47, in execute self.db.validate_no_broken_transaction() File \"/Users/gauravtoshniwal1989/Developer/Web/Server/ngc/ngcvenv/lib/python2.7/site-packages/django/db/backends/__init__.py\", line 365, in validate_no_broken_transaction \"An error occurred in the current transaction. You can't \" TransactionManagementError: An error occurred in the current transaction. You can't execute queries until the end of the 'atomic' block. ----------------------------------------------------------------------", "output": "I ran into this same problem myself. This is caused by a quirk in how transactions are handled in the newer versions of Django coupled with a unittest that intentionally triggers an exception. I had a unittest that checked to make sure a unique column constraint was enforced by purposefully triggering an IntegrityError exception: def test_constraint(self): try: # Duplicates should be prevented. models.Question.objects.create(domain=self.domain, slug='barks') self.fail('Duplicate question allowed.') except IntegrityError: pass do_more_model_stuff() In Django 1.4, this works fine. 
However, in Django 1.5/1.6, each test is wrapped in a transaction, so if an exception occurs, it breaks the transaction until you explicitly roll it back. Therefore, any further ORM operations in that transaction, such as my do_more_model_stuff(), will fail with that django.db.transaction.TransactionManagementError exception. Like caio mentioned in the comments, the solution is to capture your exception with transaction.atomic like: from django.db import transaction def test_constraint(self): try: # Duplicates should be prevented. with transaction.atomic(): models.Question.objects.create(domain=self.domain, slug='barks') self.fail('Duplicate question allowed.') except IntegrityError: pass That will prevent the purposefully-thrown exception from breaking the entire unittest's transaction."} +{"question_id": 3206344, "score": 286, "creation_date": 1278610476, "tags": ["python", "jinja2", "flask"], "instruction": "Passing HTML to template using Flask/Jinja2\n\nI'm building an admin for Flask and SQLAlchemy, and I want to pass the HTML for the different inputs to my view using render_template. The templating framework seems to escape the HTML automatically, so all <\"'> characters are converted to HTML entities. How can I disable that so that the HTML renders correctly?", "output": "To turn off autoescaping when rendering a value, use the |safe filter. {{ something|safe }} Only do this on data you trust, since rendering untrusted data without escaping is a cross-site scripting vulnerability."} +{"question_id": 48647534, "score": 286, "creation_date": 1517934341, "tags": ["python", "pandas", "dataframe"], "instruction": "Find difference between two data frames\n\nI have two data frames df1 and df2, where df2 is a subset of df1. How do I get a new data frame (df3) which is the difference between the two data frames? In other word, a data frame that has all the rows/columns in df1 that are not in df2?", "output": "By using drop_duplicates pd.concat([df1,df2]).drop_duplicates(keep=False) Update : The above method only works for those data frames that don't already have duplicates themselves. For example: df1=pd.DataFrame({'A':[1,2,3,3],'B':[2,3,4,4]}) df2=pd.DataFrame({'A':[1],'B':[2]}) It will output like below , which is wrong Wrong Output : pd.concat([df1, df2]).drop_duplicates(keep=False) Out[655]: A B 1 2 3 Correct Output Out[656]: A B 1 2 3 2 3 4 3 3 4 How to achieve that? Method 1: Using isin with tuple df1[~df1.apply(tuple,1).isin(df2.apply(tuple,1))] Out[657]: A B 1 2 3 2 3 4 3 3 4 Method 2: merge with indicator df1.merge(df2,indicator = True, how='left').loc[lambda x : x['_merge']!='both'] Out[421]: A B _merge 1 2 3 left_only 2 3 4 left_only 3 3 4 left_only"} +{"question_id": 6523791, "score": 286, "creation_date": 1309363872, "tags": ["python", "module", "python-import"], "instruction": "Why is Python running my module when I import it, and how do I stop it?\n\nI have a Python program I'm building that can be run in either of 2 ways: the first is to call python main.py which prompts the user for input in a friendly manner and then runs the user input through the program. The other way is to call python batch.py -file- which will pass over all the friendly input gathering and run an entire file's worth of input through the program in a single go. The problem is that when I run batch.py, it imports some variables/methods/etc from main.py, and when it runs this code: import main at the first line of the program, it immediately errors because it tries to run the code in main.py. 
How can I stop Python from running the code contained in the main module which I'm importing?", "output": "Because this is just how Python works - keywords such as class and def are not declarations. Instead, they are real live statements which are executed. If they were not executed your module would be empty. The idiomatic approach is: # stuff to run always here such as class/def def main(): pass if __name__ == \"__main__\": # stuff only to run when not called via 'import' here main()"} +{"question_id": 3480184, "score": 286, "creation_date": 1281728406, "tags": ["python", "list", "parameter-passing"], "instruction": "Pass a list to a function to act as multiple arguments\n\nIn a function that expects a list of items, how can I pass a Python list item without getting an error? my_list = ['red', 'blue', 'orange'] function_that_needs_strings('red', 'blue', 'orange') # works! function_that_needs_strings(my_list) # breaks! Surely there must be a way to expand the list, and pass the function 'red','blue','orange' on the hoof? I think this is called 'unpacking'.", "output": "function_that_needs_strings(*my_list) # works! You can read all about it here: Unpacking Argument Lists - The Python Tutorial"} +{"question_id": 16891340, "score": 285, "creation_date": 1370242040, "tags": ["python"], "instruction": "Remove a prefix from a string\n\nI am trying to do the following, in a clear pythonic way: def remove_prefix(str, prefix): return str.lstrip(prefix) print(remove_prefix('template.extensions', 'template.')) This gives: xtensions Which is not what I was expecting (extensions). Obviously (stupid me), because I have used lstrip wrongly: lstrip will remove all characters which appear in the passed chars string, not considering that string as a real string, but as \"a set of characters to remove from the beginning of the string\". Is there a standard way to remove a substring from the beginning of a string?", "output": "For Python 3.9+: text.removeprefix(prefix) For older versions, the following provides the same behavior: def remove_prefix(text, prefix): if text.startswith(prefix): return text[len(prefix):] return text"} +{"question_id": 11154946, "score": 285, "creation_date": 1340362924, "tags": ["python", "argparse"], "instruction": "Require either of two arguments using argparse\n\nGiven: import argparse parser = argparse.ArgumentParser() parser.add_argument('--foo') parser.add_argument('--bar') print(parser.parse_args('--foo 1'.split())) How do I make at least one of \"foo, bar\" mandatory: --foo x, --bar y and --foo x --bar y are fine make at most one of \"foo, bar\" mandatory: --foo x or --bar y are fine, --foo x --bar y is not", "output": "I think you are searching for something like mutual exclusion (at least for the second part of your question). This way, only --foo or --bar will be accepted, not both. import argparse parser = argparse.ArgumentParser() group = parser.add_mutually_exclusive_group(required=True) group.add_argument('--foo',action=.....) group.add_argument('--bar',action=.....) args = parser.parse_args() BTW, just found another question referring to the same kind of issue."} +{"question_id": 521532, "score": 285, "creation_date": 1233944753, "tags": ["python", "pretty-print", "pprint"], "instruction": "How do I get Python's pprint to return a string instead of printing?\n\nIn other words, what's the sprintf equivalent for pprint?", "output": "The pprint module has a function named pformat, for just that purpose. 
From the documentation: Return the formatted representation of object as a string. indent, width and depth will be passed to the PrettyPrinter constructor as formatting parameters. Example: >>> import pprint >>> people = [ ... {\"first\": \"Brian\", \"last\": \"Kernighan\"}, ... {\"first\": \"Dennis\", \"last\": \"Richie\"}, ... ] >>> pprint.pformat(people, indent=4) \"[ { 'first': 'Brian', 'last': 'Kernighan'},\\n { 'first': 'Dennis', 'last': 'Richie'}]\""} +{"question_id": 8287628, "score": 284, "creation_date": 1322416223, "tags": ["python", "python-requests", "httprequest", "http-proxy"], "instruction": "Proxies with Python 'Requests' module\n\nAbout the excellent Requests module for Python: I can't seem to find in the documentation what the variable 'proxies' should contain. When I send it a dict with a standard \"IP:PORT\" value it rejected it asking for two values. So, I guess (because this doesn't seem to be covered in the documentation) that the first value is the IP address and the second the port? The documentation mentions this only: proxies \u2013 (optional) Dictionary mapping protocol to the URL of the proxy. So I tried this... What should I be doing? proxy = { ip: port} And should I convert these to some type before putting them in the dict? r = requests.get(url, headers=headers, proxies=proxy)", "output": "The proxies' dict syntax is {\"protocol\": \"scheme://ip:port\", ...}. With it you can specify different (or the same) proxie(s) for requests using the HTTP, HTTPS, and FTP protocols: http_proxy = \"http://10.10.1.10:3128\" https_proxy = \"https://10.10.1.11:1080\" ftp_proxy = \"ftp://10.10.1.10:3128\" proxies = { \"http\" : http_proxy, \"https\" : https_proxy, \"ftp\" : ftp_proxy } r = requests.get(url, headers=headers, proxies=proxies) Deduced from the Requests documentation: Parameters: method \u2013 method for the new Request object. url \u2013 URL for the new Request object. ... proxies \u2013 (optional) Dictionary mapping protocol to the URL of the proxy. ... On Linux, you can also do this via the HTTP_PROXY, HTTPS_PROXY, and FTP_PROXY environment variables: export HTTP_PROXY=10.10.1.10:3128 export HTTPS_PROXY=10.10.1.11:1080 export FTP_PROXY=10.10.1.10:3128 On Windows: set http_proxy=10.10.1.10:3128 set https_proxy=10.10.1.11:1080 set ftp_proxy=10.10.1.10:3128"} +{"question_id": 20762662, "score": 284, "creation_date": 1387895877, "tags": ["python", "coding-style", "python-import", "pep8"], "instruction": "What's the correct way to sort Python `import x` and `from x import y` statements?\n\nThe python style guide suggests to group imports like this: Imports should be grouped in the following order: standard library imports related third party imports local application/library specific imports However, it does not mention anything how the two different ways of imports should be laid out: from foo import bar import foo There are multiple ways to sort them (let's assume all those import belong to the same group): first from..import, then import from g import gg from x import xx import abc import def import x first import, then from..import import abc import def import x from g import gg from x import xx alphabetic order by module name, ignoring the kind of import import abc import def from g import gg import x from xx import xx PEP8 does not mention the preferred order for this and the \"cleanup imports\" features some IDEs have probably just do whatever the developer of that feature preferred. 
I'm looking for another PEP clarifying this or a relevant comment/email from the BDFL (or another Python core developer). Please don't post subjective answers stating your own preference.", "output": "Imports are generally sorted alphabetically and described in various places besides PEP 8. Alphabetically sorted modules are quicker to read and searchable. After all, Python is all about readability. Also, it is easier to verify that something is imported, and avoids duplicate imports. There is nothing available in PEP 8 regarding sorting. So it's all about choosing what you use. According to few references from reputable sites and repositories, also popularity, Alphabetical ordering is the way. for e.g. like this: import httplib import logging import random import StringIO import time import unittest from nova.api import openstack from nova.auth import users from nova.endpoint import cloud OR import a_standard import b_standard import a_third_party import b_third_party from a_soc import f from a_soc import g from b_soc import d Reddit official repository also states that In general PEP-8 import ordering should be used. However, there are a few additions which are that for each imported group the order of imports should be: import . style lines in alphabetical order from . import style in alphabetical order References: https://code.google.com/p/soc/wiki/PythonStyleGuide https://github.com/reddit/reddit/wiki/PythonImportGuidelines http://docs.openstack.org/developer/hacking/ http://developer.plone.org/reference_manuals/external/plone.api/contribute/conventions.html#grouping-and-sorting PS: the isort utility automatically sorts your imports."} +{"question_id": 291945, "score": 283, "creation_date": 1226712093, "tags": ["python", "django", "django-forms"], "instruction": "How do I filter ForeignKey choices in a Django ModelForm?\n\nSay I have the following in my models.py: class Company(models.Model): name = ... class Rate(models.Model): company = models.ForeignKey(Company) name = ... class Client(models.Model): name = ... company = models.ForeignKey(Company) base_rate = models.ForeignKey(Rate) I.e. there are multiple Companies, each having a range of Rates and Clients. Each Client should have a base Rate that is chosen from its parent Company's Rates, not another Company's Rates. When creating a form for adding a Client, I would like to remove the Company choices (as that has already been selected via an \"Add Client\" button on the Company page) and limit the Rate choices to that Company as well. How do I go about this in Django 1.0? 
My current forms.py file is just boilerplate at the moment: from models import * from django.forms import ModelForm class ClientForm(ModelForm): class Meta: model = Client And the views.py is also basic: from django.shortcuts import render_to_response, get_object_or_404 from models import * from forms import * def addclient(request, company_id): the_company = get_object_or_404(Company, id=company_id) if request.POST: form = ClientForm(request.POST) if form.is_valid(): form.save() return HttpResponseRedirect(the_company.get_clients_url()) else: form = ClientForm() return render_to_response('addclient.html', {'form': form, 'the_company':the_company}) In Django 0.96 I was able to hack this in by doing something like the following before rendering the template: manipulator.fields[0].choices = [(r.id,r.name) for r in Rate.objects.filter(company_id=the_company.id)] ForeignKey.limit_choices_to seems promising but I don't know how to pass in the_company.id and I'm not clear if that will work outside the Admin interface anyway.", "output": "ForeignKey is represented by django.forms.ModelChoiceField, which is a ChoiceField whose choices are a model QuerySet. See the reference for ModelChoiceField. So, provide a QuerySet to the field's queryset attribute: form.fields[\"rate\"].queryset = Rate.objects.filter(company_id=the_company.id) This is done explicitly in the view. No hacking around."} +{"question_id": 2953462, "score": 282, "creation_date": 1275427641, "tags": ["python", "ping", "icmp"], "instruction": "Pinging servers in Python\n\nIn Python, is there a way to ping a server through ICMP and return TRUE if the server responds, or FALSE if there is no response?", "output": "This function works in any OS (Unix, Linux, macOS, and Windows) Python 2 and Python 3 EDITS: By @radato os.system was replaced by subprocess.call. This avoids shell injection vulnerability in cases where your hostname string might not be validated. import platform # For getting the operating system name import subprocess # For executing a shell command def ping(host): \"\"\" Returns True if host (str) responds to a ping request. Remember that a host may not respond to a ping (ICMP) request even if the host name is valid. \"\"\" # Option for the number of packets as a function of param = '-n' if platform.system().lower()=='windows' else '-c' # Building the command. Ex: \"ping -c 1 google.com\" command = ['ping', param, '1', host] return subprocess.call(command) == 0 Note that, according to @ikrase on Windows this function will still return True if you get a Destination Host Unreachable error. Explanation The command is ping in both Windows and Unix-like systems. The option -n (Windows) or -c (Unix) controls the number of packets which in this example was set to 1. platform.system() returns the platform name. Ex. 'Darwin' on macOS. subprocess.call() performs a system call. Ex. subprocess.call(['ls','-l'])."} +{"question_id": 20454199, "score": 282, "creation_date": 1386510885, "tags": ["python", "node.js", "centos", "npm"], "instruction": "How can I use a different version of python during NPM install?\n\nI have terminal access to a VPS running CentOS 5.9 and default Python 2.4.3 installed. 
I also installed Python 2.7.3 via these commands (I used make altinstall instead of make install): wget http://www.python.org/ftp/python/2.7.3/Python-2.7.3.tgz tar -xf Python-2.7.3.tgz cd Python-2.7.3 ./configure make make altinstall Then I installed Node.js from source code via these commands: python2.7 ./configure make make install The problem is, when I use npm install and try to install a Node.js package which requires Python 2.4.3 (or later), I get this error: gyp ERR! configure error gyp ERR! stack Error: Python executable \"python\" is v2.4.3, which is not supported by gyp. gyp ERR! stack You can pass the --python switch to point to Python >= v2.5.0 & < 3.0.0. gyp ERR! stack at failPythonVersion (/usr/local/lib/node_modules/npm/node_modules/node-gyp/lib/configure.js:125:14) gyp ERR! stack at /usr/local/lib/node_modules/npm/node_modules/node-gyp/lib/configure.js:114:9 How should I \"pass the --python switch to point to Python >= v2.5.0\"?", "output": "You can use the --python option to npm like so: npm install --python=python2.7 Or set it to be used always: npm config set python python2.7 Npm will in turn pass this option to node-gyp when needed."} +{"question_id": 6920302, "score": 282, "creation_date": 1312330694, "tags": ["python", "python-3.x", "button", "tkinter", "arguments"], "instruction": "How to pass arguments to a Button command in Tkinter?\n\nSuppose I have the following Button made with Tkinter in Python: import Tkinter as Tk win = Tk.Toplevel() frame = Tk.Frame(master=win).grid(row=1, column=1) button = Tk.Button(master=frame, text='press', command=action) The method action is called when I press the button, but what if I wanted to pass some arguments to the method action? I have tried with the following code: button = Tk.Button(master=frame, text='press', command=action(someNumber)) This just invokes the method immediately, and pressing the button does nothing. See Python Argument Binders for standard techniques (not Tkinter-specific) for solving the problem. Working with callbacks in Tkinter (or other GUI frameworks) has some special considerations because the return value from the callback is useless. If you try to create multiple Buttons in a loop, passing each one different arguments based on the loop counter, you may run into problems due to what is called late binding. Please see tkinter creating buttons in for loop passing command arguments for details.", "output": "This can be done using a lambda, like so: button = Tk.Button(master=frame, text='press', command= lambda: action(someNumber)) This is a simple way to bind the argument without an explicit wrapper method or modifying the original action."} +{"question_id": 4020419, "score": 282, "creation_date": 1288062674, "tags": ["python", "closures", "nested-function"], "instruction": "Why aren't python nested functions called closures?\n\nI have seen and used nested functions in Python, and they match the definition of a closure. So why are they called \"nested functions\" instead of \"closures\"? Are nested functions not closures because they are not used by the external world? UPDATE: I was reading about closures and it got me thinking about this concept with respect to Python. I searched and found the article mentioned by someone in a comment below, but I couldn't completely understand the explanation in that article, so that is why I am asking this question.", "output": "A closure occurs when a function has access to a local variable from an enclosing scope that has finished its execution. 
def make_printer(msg): def printer(): print(msg) return printer printer = make_printer('Foo!') printer() When make_printer is called, a new frame is put on the stack with the compiled code for the printer function as a constant and the value of msg as a local. It then creates and returns the function. Because the function printer references the msg variable, it is kept alive after the make_printer function has returned. So, if your nested functions don't access variables that are local to enclosing scopes, do so when they are executed outside of that scope, then they are not closures. Here's an example of a nested function which is not a closure. def make_printer(msg): def printer(msg=msg): print(msg) return printer printer = make_printer(\"Foo!\") printer() #Output: Foo! Here, we are binding the value to the default value of a parameter. This occurs when the function printer is created and so no reference to the value of msg external to printer needs to be maintained after make_printer returns. msg is just a normal local variable of the function printer in this context."} +{"question_id": 28654047, "score": 282, "creation_date": 1424574508, "tags": ["python", "pandas"], "instruction": "Convert columns into rows with Pandas\n\nSo my dataset has some information by location for n dates. The problem is each date is actually a different column header. For example the CSV looks like location name Jan-2010 Feb-2010 March-2010 A \"test\" 12 20 30 B \"foo\" 18 20 25 What I would like is for it to look like location name Date Value A \"test\" Jan-2010 12 A \"test\" Feb-2010 20 A \"test\" March-2010 30 B \"foo\" Jan-2010 18 B \"foo\" Feb-2010 20 B \"foo\" March-2010 25 My problem is I don't know how many dates are in the column (though I know they will always start after name)", "output": "Use .melt: df.melt(id_vars=[\"location\", \"name\"], var_name=\"Date\", value_name=\"Value\") location name Date Value 0 A \"test\" Jan-2010 12 1 B \"foo\" Jan-2010 18 2 A \"test\" Feb-2010 20 3 B \"foo\" Feb-2010 20 4 A \"test\" March-2010 30 5 B \"foo\" March-2010 25 Old(er) versions: <0.20 You can use pd.melt to get most of the way there, and then sort: >>> df location name Jan-2010 Feb-2010 March-2010 0 A test 12 20 30 1 B foo 18 20 25 >>> df2 = pd.melt(df, id_vars=[\"location\", \"name\"], var_name=\"Date\", value_name=\"Value\") >>> df2 location name Date Value 0 A test Jan-2010 12 1 B foo Jan-2010 18 2 A test Feb-2010 20 3 B foo Feb-2010 20 4 A test March-2010 30 5 B foo March-2010 25 >>> df2 = df2.sort([\"location\", \"name\"]) >>> df2 location name Date Value 0 A test Jan-2010 12 2 A test Feb-2010 20 4 A test March-2010 30 1 B foo Jan-2010 18 3 B foo Feb-2010 20 5 B foo March-2010 25 (Might want to throw in a .reset_index(drop=True), just to keep the output clean.) Note: pd.DataFrame.sort has been deprecated in favour of pd.DataFrame.sort_values."} +{"question_id": 49433936, "score": 281, "creation_date": 1521736482, "tags": ["python", "machine-learning", "deep-learning", "neural-network", "pytorch"], "instruction": "How do I initialize weights in PyTorch?\n\nHow do I initialize weights and biases of a network (via e.g. He or Xavier initialization)?", "output": "Single layer To initialize the weights of a single layer, use a function from torch.nn.init. For instance: conv1 = torch.nn.Conv2d(...) torch.nn.init.xavier_uniform(conv1.weight) Alternatively, you can modify the parameters by writing to conv1.weight.data (which is a torch.Tensor). 
Example: conv1.weight.data.fill_(0.01) The same applies for biases: conv1.bias.data.fill_(0.01) nn.Sequential or custom nn.Module Pass an initialization function to torch.nn.Module.apply. It will initialize the weights in the entire nn.Module recursively. apply(fn): Applies fn recursively to every submodule (as returned by .children()) as well as self. Typical use includes initializing the parameters of a model (see also torch-nn-init). Example: def init_weights(m): if isinstance(m, nn.Linear): torch.nn.init.xavier_uniform(m.weight) m.bias.data.fill_(0.01) net = nn.Sequential(nn.Linear(2, 2), nn.Linear(2, 2)) net.apply(init_weights)"} +{"question_id": 28885132, "score": 281, "creation_date": 1425580198, "tags": ["python", "performance", "python-3.x", "python-internals"], "instruction": "Why is 'x' in ('x',) faster than 'x' == 'x'?\n\n>>> timeit.timeit(\"'x' in ('x',)\") 0.04869917374131205 >>> timeit.timeit(\"'x' == 'x'\") 0.06144205736110564 Also works for tuples with multiple elements, both versions seem to grow linearly: >>> timeit.timeit(\"'x' in ('x', 'y')\") 0.04866674801541748 >>> timeit.timeit(\"'x' == 'x' or 'x' == 'y'\") 0.06565782838087131 >>> timeit.timeit(\"'x' in ('y', 'x')\") 0.08975995576448526 >>> timeit.timeit(\"'x' == 'y' or 'x' == 'y'\") 0.12992391047427532 Based on this, I think I should totally start using in everywhere instead of ==!", "output": "Both methods dispatch to is; you can prove this by doing from timeit import Timer min(Timer(\"x == x\", setup=\"x = 'a' * 1000000\").repeat(10, 10000)) # 0.00045456900261342525 min(Timer(\"x == y\", setup=\"x = 'a' * 1000000; y = 'a' * 1000000\").repeat(10, 10000)) # 0.5256857610074803 The first can only be so fast because it checks by identity. To find out why one would take longer than the other, let's trace through execution. They both start in ceval.c, from COMPARE_OP since that is the bytecode involved TARGET(COMPARE_OP) { PyObject *right = POP(); PyObject *left = TOP(); PyObject *res = cmp_outcome(oparg, left, right); Py_DECREF(left); Py_DECREF(right); SET_TOP(res); if (res == NULL) goto error; PREDICT(POP_JUMP_IF_FALSE); PREDICT(POP_JUMP_IF_TRUE); DISPATCH(); } This pops the values from the stack (technically it only pops one) PyObject *right = POP(); PyObject *left = TOP(); and runs the compare: PyObject *res = cmp_outcome(oparg, left, right); cmp_outcome is this: static PyObject * cmp_outcome(int op, PyObject *v, PyObject *w) { int res = 0; switch (op) { case PyCmp_IS: ... case PyCmp_IS_NOT: ... case PyCmp_IN: res = PySequence_Contains(w, v); if (res < 0) return NULL; break; case PyCmp_NOT_IN: ... case PyCmp_EXC_MATCH: ... default: return PyObject_RichCompare(v, w, op); } v = res ? Py_True : Py_False; Py_INCREF(v); return v; } This is where the paths split. The PyCmp_IN branch does int PySequence_Contains(PyObject *seq, PyObject *ob) { Py_ssize_t result; PySequenceMethods *sqm = seq->ob_type->tp_as_sequence; if (sqm != NULL && sqm->sq_contains != NULL) return (*sqm->sq_contains)(seq, ob); result = _PySequence_IterSearch(seq, ob, PY_ITERSEARCH_CONTAINS); return Py_SAFE_DOWNCAST(result, Py_ssize_t, int); } Note that a tuple is defined as static PySequenceMethods tuple_as_sequence = { ... (objobjproc)tuplecontains, /* sq_contains */ }; PyTypeObject PyTuple_Type = { ... &tuple_as_sequence, /* tp_as_sequence */ ... }; So the branch if (sqm != NULL && sqm->sq_contains != NULL) will be taken and *sqm->sq_contains, which is the function (objobjproc)tuplecontains, will be taken. 
This does static int tuplecontains(PyTupleObject *a, PyObject *el) { Py_ssize_t i; int cmp; for (i = 0, cmp = 0 ; cmp == 0 && i < Py_SIZE(a); ++i) cmp = PyObject_RichCompareBool(el, PyTuple_GET_ITEM(a, i), Py_EQ); return cmp; } ...Wait, wasn't that PyObject_RichCompareBool what the other branch took? Nope, that was PyObject_RichCompare. That code path was short so it likely just comes down to the speed of these two. Let's compare. int PyObject_RichCompareBool(PyObject *v, PyObject *w, int op) { PyObject *res; int ok; /* Quick result when objects are the same. Guarantees that identity implies equality. */ if (v == w) { if (op == Py_EQ) return 1; else if (op == Py_NE) return 0; } ... } The code path in PyObject_RichCompareBool pretty much immediately terminates. For PyObject_RichCompare, it does PyObject * PyObject_RichCompare(PyObject *v, PyObject *w, int op) { PyObject *res; assert(Py_LT <= op && op <= Py_GE); if (v == NULL || w == NULL) { ... } if (Py_EnterRecursiveCall(\" in comparison\")) return NULL; res = do_richcompare(v, w, op); Py_LeaveRecursiveCall(); return res; } The Py_EnterRecursiveCall/Py_LeaveRecursiveCall combo are not taken in the previous path, but these are relatively quick macros that'll short-circuit after incrementing and decrementing some globals. do_richcompare does: static PyObject * do_richcompare(PyObject *v, PyObject *w, int op) { richcmpfunc f; PyObject *res; int checked_reverse_op = 0; if (v->ob_type != w->ob_type && ...) { ... } if ((f = v->ob_type->tp_richcompare) != NULL) { res = (*f)(v, w, op); if (res != Py_NotImplemented) return res; ... } ... } This does some quick checks to call v->ob_type->tp_richcompare which is PyTypeObject PyUnicode_Type = { ... PyUnicode_RichCompare, /* tp_richcompare */ ... }; which does PyObject * PyUnicode_RichCompare(PyObject *left, PyObject *right, int op) { int result; PyObject *v; if (!PyUnicode_Check(left) || !PyUnicode_Check(right)) Py_RETURN_NOTIMPLEMENTED; if (PyUnicode_READY(left) == -1 || PyUnicode_READY(right) == -1) return NULL; if (left == right) { switch (op) { case Py_EQ: case Py_LE: case Py_GE: /* a string is equal to itself */ v = Py_True; break; case Py_NE: case Py_LT: case Py_GT: v = Py_False; break; default: ... } } else if (...) { ... } else { ...} Py_INCREF(v); return v; } Namely, this shortcuts on left == right... but only after doing if (!PyUnicode_Check(left) || !PyUnicode_Check(right)) if (PyUnicode_READY(left) == -1 || PyUnicode_READY(right) == -1) All in all the paths then look something like this (manually recursively inlining, unrolling and pruning known branches) POP() // Stack stuff TOP() // // case PyCmp_IN: // Dispatch on operation // sqm != NULL // Dispatch to builtin op sqm->sq_contains != NULL // *sqm->sq_contains // // cmp == 0 // Do comparison in loop i < Py_SIZE(a) // v == w // op == Py_EQ // ++i // cmp == 0 // // res < 0 // Convert to Python-space res ? Py_True : Py_False // Py_INCREF(v) // // Py_DECREF(left) // Stack stuff Py_DECREF(right) // SET_TOP(res) // res == NULL // DISPATCH() // vs POP() // Stack stuff TOP() // // default: // Dispatch on operation // Py_LT <= op // Checking operation op <= Py_GE // v == NULL // w == NULL // Py_EnterRecursiveCall(...) 
// Recursive check // v->ob_type != w->ob_type // More operation checks f = v->ob_type->tp_richcompare // Dispatch to builtin op f != NULL // // !PyUnicode_Check(left) // ...More checks !PyUnicode_Check(right)) // PyUnicode_READY(left) == -1 // PyUnicode_READY(right) == -1 // left == right // Finally, doing comparison case Py_EQ: // Immediately short circuit Py_INCREF(v); // // res != Py_NotImplemented // // Py_LeaveRecursiveCall() // Recursive check // Py_DECREF(left) // Stack stuff Py_DECREF(right) // SET_TOP(res) // res == NULL // DISPATCH() // Now, PyUnicode_Check and PyUnicode_READY are pretty cheap since they only check a couple of fields, but it should be obvious that the top one is a smaller code path, it has fewer function calls, only one switch statement and is just a bit thinner. TL;DR: Both dispatch to if (left_pointer == right_pointer); the difference is just how much work they do to get there. in just does less."} +{"question_id": 21487329, "score": 280, "creation_date": 1391192589, "tags": ["python", "pandas", "dataframe", "matplotlib"], "instruction": "Add x and y labels to a pandas plot\n\nSuppose I have the following code that plots something very simple using pandas: import pandas as pd values = [[1, 2], [2, 5]] df2 = pd.DataFrame(values, columns=['Type A', 'Type B'], index=['Index 1', 'Index 2']) df2.plot(lw=2, colormap='jet', marker='.', markersize=10, title='Video streaming dropout by category') How do I easily set x and y-labels while preserving my ability to use specific colormaps? I noticed that the plot() wrapper for pandas DataFrames doesn't take any parameters specific for that.", "output": "In Pandas version 1.10 you can use parameters xlabel and ylabel in the method plot: df.plot(xlabel='X Label', ylabel='Y Label', title='Plot Title')"} +{"question_id": 20069009, "score": 280, "creation_date": 1384856916, "tags": ["python", "pandas", "group-by", "greatest-n-per-group", "window-functions"], "instruction": "Pandas get topmost n records within each group\n\nSuppose I have pandas DataFrame like this: df = pd.DataFrame({'id':[1,1,1,2,2,2,2,3,4], 'value':[1,2,3,1,2,3,4,1,1]}) which looks like: id value 0 1 1 1 1 2 2 1 3 3 2 1 4 2 2 5 2 3 6 2 4 7 3 1 8 4 1 I want to get a new DataFrame with top 2 records for each id, like this: id value 0 1 1 1 1 2 3 2 1 4 2 2 7 3 1 8 4 1 I can do it with numbering records within group after groupby: dfN = df.groupby('id').apply(lambda x:x['value'].reset_index()).reset_index() which looks like: id level_1 index value 0 1 0 0 1 1 1 1 1 2 2 1 2 2 3 3 2 0 3 1 4 2 1 4 2 5 2 2 5 3 6 2 3 6 4 7 3 0 7 1 8 4 0 8 1 then for the desired output: dfN[dfN['level_1'] <= 1][['id', 'value']] Output: id value 0 1 1 1 1 2 3 2 1 4 2 2 7 3 1 8 4 1 But is there more effective/elegant approach to do this? And also is there more elegant approach to number records within each group (like SQL window function row_number()).", "output": "Did you try df.groupby('id').head(2) Output generated: id value id 1 0 1 1 1 1 2 2 3 2 1 4 2 2 3 7 3 1 4 8 4 1 (Keep in mind that you might need to order/sort before, depending on your data) EDIT: As mentioned by the questioner, use df.groupby('id').head(2).reset_index(drop=True) to remove the MultiIndex and flatten the results: id value 0 1 1 1 1 2 2 2 1 3 2 2 4 3 1 5 4 1"} +{"question_id": 717148, "score": 280, "creation_date": 1238853789, "tags": ["python", "queue", "thread-safety", "python-multithreading", "deque"], "instruction": "queue.Queue vs. 
collections.deque\n\nI need a queue which multiple threads can put stuff into, and multiple threads may read from. Python has at least two queue classes, queue.Queue and collections.deque, with the former seemingly using the latter internally. Both claim to be thread-safe in the documentation. However, the Queue docs also state: collections.deque is an alternative implementation of unbounded queues with fast atomic append() and popleft() operations that do not require locking and also support indexing. Which I guess I don't quite understand: Does this mean deque isn't fully thread-safe after all? If it is, I may not fully understand the difference between the two classes. I can see that Queue adds blocking functionality. On the other hand, it loses some deque features like support for the in operator. Is accessing the internal deque object directly x in Queue().queue thread-safe? Also, why does Queue employ a mutex for its operations when deque is thread-safe already?", "output": "queue.Queue and collections.deque serve different purposes. queue.Queue is intended for allowing different threads to communicate using queued messages/data, whereas collections.deque is simply intended as a data structure. That's why queue.Queue has methods like put_nowait(), get_nowait(), and join(), whereas collections.deque doesn't. queue.Queue isn't intended to be used as a collection, which is why it lacks the likes of the in operator. It boils down to this: if you have multiple threads and you want them to be able to communicate without the need for locks, you're looking for queue.Queue; if you just want a queue or a double-ended queue as a datastructure, use collections.deque. Finally, accessing and manipulating the internal deque of a queue.Queue is playing with fire - you really don't want to be doing that."} +{"question_id": 38217545, "score": 280, "creation_date": 1467786670, "tags": ["python", "pip", "anaconda", "virtualenv", "pyenv"], "instruction": "What is the difference between pyenv, virtualenv, and Anaconda?\n\nI am a Ruby programmer trying to learn Python. I am pretty familiar with pyenv since it is like a copy and paste from RBEnv. Pyenv helps in allowing to have more than one version of Python in a system and also to isolate the Python instance without touching sensitive parts of system. I suppose every Python installation comes with a pip package. What I still don't understand is, there are many good Python libraries out there that suggest to use this virtualenv and Anaconda. I can even find a virtualenv plugin for pyenv. Now I am getting confused with the purpose of these two, pyenv and virtualenv. Worse, inside pyenv, there is a virtualenv plugin. What is the difference between pyenv and virtualenv? Is there any difference in using a pip command inside both pyenv and virtualenv? What does this pyenv virtualenv do?", "output": "It's worth mentioning pip here as well, as conda and pip have similarities and differences that are relevant to this topic. pip: the Python Package Manager. You might think of pip as the Python equivalent of the Ruby gem command pip is not included with Python by default. You may install Python using Homebrew, which will install pip automatically: brew install python The final version of OS X did not include pip by default. 
To add pip to your Mac system's version of Python, you can sudo easy_install pip You can find and publish Python packages using PyPI: The Python Package Index The requirements.txt file is comparable to the ruby gemfile To create a requirements text file, pip freeze > requirements.txt Note, at this point, we have Python installed on our system, and we have created a requirements.txt file that outlines all of the Python packages that have been installed on your system. pyenv: Python Version Manager From the documentation: pyenv lets you easily switch between multiple versions of Python. It's simple, unobtrusive, and follows the Unix tradition of single-purpose tools that do one thing well. This project was forked from RBEnv and ruby-build, and modified for Python. Many folks hesitate to use Python 3. If you need to use different versions of Python, pyenv lets you manage this easily. virtualenv: Python Environment Manager. From the documentation: The basic problem being addressed is one of dependencies and versions, and indirectly permissions. Imagine you have an application that needs version 1 of LibFoo, but another application requires version 2. How can you use both these applications? If you install everything into /usr/lib/python2.7/site-packages (or whatever your platform\u2019s standard location is), it\u2019s easy to end up in a situation where you unintentionally upgrade an application that shouldn\u2019t be upgraded. To create a virtualenv, simply invoke virtualenv ENV, where ENV is is a directory to place the new virtual environment. To initialize the virtualenv, you need to source ENV/bin/activate. To stop using, simply call deactivate. Once you activate the virtualenv, you might install all of a workspace's package requirements by running pip install -r against the project's requirements.txt file. Anaconda: Package Manager + Python Version Manager + Environment Manager + Additional Scientific Libraries. Anaconda is a commercial distribution of Python with the most popular Python libraries. You are not permitted to use Anaconda in an organisation with more than 200 employees. From the documentation: Anaconda 4.2.0 includes an easy installation of Python (2.7.12, 3.4.5, and/or 3.5.2) and updates of over 100 pre-built and tested scientific and analytic Python packages that include NumPy, Pandas, SciPy, Matplotlib, and IPython, with over 620 more packages available via a simple conda install As a web developer, I haven't used Anaconda. It's approximately 3 GB, including all the packages. There is a slimmed down Miniconda version, which seems like it could be a more simple option than using pip + virtualenv, although I don't have experience using it personally. While Conda allows you to install packages, these packages are separate than PyPI packages, so you may still need to use pip additionally depending on the types of packages you need to install. See also: Conda vs pip vs virtualenv (section in documentation from anaconda) The difference between pip and Conda (Stack Overflow) The relationship between virtualenv and pyenv (Stack Overflow)"} +{"question_id": 4912972, "score": 279, "creation_date": 1296991561, "tags": ["python", "subclass", "assert"], "instruction": "How do I check (at runtime) if one class is a subclass of another?\n\nLet's say that I have a class Suit and four subclasses of suit: Heart, Spade, Diamond, Club. class Suit: ... class Heart(Suit): ... class Spade(Suit): ... class Diamond(Suit): ... class Club(Suit): ... 
I have a method which receives a suit as a parameter, which is a class object, not an instance. More precisely, it may receive only one of the four values: Heart, Spade, Diamond, Club. How can I make an assertion which ensures such a thing? Something like: def my_method(suit): assert(suit subclass of Suit) ... I'm using Python 3.", "output": "You can use issubclass(), but the given example has suit passed as an instance. You'll need to pass the instance's class to issubclass() (or you will receive the error TypeError: issubclass() arg 1 must be a class ) assert issubclass(suit.__class__, Suit) or just use isinstance() : assert isinstance(suit, Suit)"} +{"question_id": 2043453, "score": 278, "creation_date": 1263229887, "tags": ["python", "shell", "command-line"], "instruction": "Executing multi-line statements in the one-line command-line\n\nI'm using Python with -c to execute a one-liner loop, i.e.: python -c \"for r in range(10): print 'rob'\" This works fine. However, if I import a module before the for loop, I get a syntax error: python -c \"import sys; for r in range(10): print 'rob'\" File \"\", line 1 import sys; for r in range(10): print 'rob' ^ SyntaxError: invalid syntax How can this be fixed? It's important to me to have this as a one-liner so that I can include it in a Makefile.", "output": "You could do echo -e \"import sys\\nfor r in range(10): print 'rob'\" | python Or without pipes: python -c \"exec(\\\"import sys\\nfor r in range(10): print 'rob'\\\")\" Or (echo \"import sys\" ; echo \"for r in range(10): print 'rob'\") | python Or SilentGhost's answer or Crast's answer."} +{"question_id": 15900338, "score": 278, "creation_date": 1365505976, "tags": ["python", "httprequest", "python-requests", "http-status-codes"], "instruction": "Python 'Requests' POST with param data\n\nThis is the raw request for an API call: POST http://192.168.3.45:8080/api/v2/event/log?sessionKey=b299d17b896417a7b18f46544d40adb734240cc2&format=json HTTP/1.1 Accept-Encoding: gzip,deflate Content-Type: application/json Content-Length: 86 Host: 192.168.3.45:8080 Connection: Keep-Alive User-Agent: Apache-HttpClient/4.1.1 (java 1.5) {\"eventType\":\"AAS_PORTAL_START\",\"data\":{\"uid\":\"hfe3hf45huf33545\",\"aid\":\"1\",\"vid\":\"1\"}} This request returns a success (2xx) response. Now I am trying to post this request using Requests: import requests headers = {'content-type' : 'application/json'} data ={\"eventType\" : \"AAS_PORTAL_START\", \"data\" : {\"uid\": \"hfe3hf45huf33545\", \"aid\": \"1\", \"vid\": \"1\"} } url = (\"http://192.168.3.45:8080/api/v2/event/log?\" \"sessionKey=9ebbd0b25760557393a43064a92bae539d962103&\" \"format=xml&\" \"platformId=1\") requests.post(url, params=data, headers=headers) The response from this request is: Everything looks fine to me, and I am not quite sure what I posting wrong to get a 400 response.", "output": "params is for GET-style URL parameters, and data is for POST-style body information. It is perfectly legal to provide both types of information in a request, and your request does so too, but you encoded the URL parameters into the URL already. Your raw post contains JSON data though. Requests can handle JSON encoding for you, and it'll set the correct Content-Type header too; all you need to do is pass in the Python object to be encoded as JSON into the json keyword argument. 
You could split out the URL parameters as well: params = {'sessionKey': '9ebbd0b25760557393a43064a92bae539d962103', 'format': 'xml', 'platformId': 1} Then post your data with: import requests url = 'http://192.168.3.45:8080/api/v2/event/log' data = {\"eventType\": \"AAS_PORTAL_START\", \"data\": {\"uid\": \"hfe3hf45huf33545\", \"aid\": \"1\", \"vid\": \"1\"}} params = {'sessionKey': '9ebbd0b25760557393a43064a92bae539d962103', 'format': 'xml', 'platformId': 1} requests.post(url, params=params, json=data) The json keyword is new in Requests version 2.4.2; if you still have to use an older version, encode the JSON manually using the json module and post the encoded result as the data key; you will have to explicitly set the Content-Type header in that case: import requests import json headers = {'content-type': 'application/json'} url = 'http://192.168.3.45:8080/api/v2/event/log' data = {\"eventType\": \"AAS_PORTAL_START\", \"data\": {\"uid\": \"hfe3hf45huf33545\", \"aid\": \"1\", \"vid\": \"1\"}} params = {'sessionKey': '9ebbd0b25760557393a43064a92bae539d962103', 'format': 'xml', 'platformId': 1} requests.post(url, params=params, data=json.dumps(data), headers=headers)"} +{"question_id": 38972052, "score": 278, "creation_date": 1471342012, "tags": ["python", "anaconda", "conda"], "instruction": "anaconda update all possible packages?\n\nI tried the conda search --outdated, there are lots of outdated packages, for example the scipy is 0.17.1 but the latest is 0.18.0. However, when I do the conda update --all. It will not update any packages. update 1 conda update --all --alt-hint Fetching package metadata ....... Solving package specifications: .......... # All requested packages already installed. # packages in environment at /home/user/opt/anaconda2: # update 2 I can update those packages separately. I can do conda update scipy. But why I cannot update all of them in one go?", "output": "TL;DR: dependency conflicts: Updating one requires (by its requirements) to downgrade another You are right: conda update --all is actually the way to go1. Conda always tries to upgrade the packages to the newest version in the series (say Python 2.x or 3.x). Dependency conflicts But it is possible that there are dependency conflicts (which prevent a further upgrade). Conda usually warns very explicitly if they occur. e.g. X requires Y <5.0, so Y will never be >= 5.0 That's why you 'cannot' upgrade them all. Resolving Update 1: since a while, mamba has proven to be an extremely powerful drop-in replacement for conda in terms of dependency resolution and (IMH experience) finds solutions to problems where conda fails. A way to invoke it without installing mamba is via the --solver=libmamba flag (requires conda-libmamba-solver), as pointed out by matteo in the comments. To add: maybe it could work but a newer version of X working with Y > 5.0 is not available in conda. It is possible to install with pip, since more packages are available in pip. But be aware that pip also installs packages if dependency conflicts exist and that it usually breaks your conda environment in the sense that you cannot reliably install with conda anymore. If you do that, do it as a last resort and after all packages have been installed with conda. It's rather a hack. A safe way you can try is to add conda-forge as a channel when upgrading (add -c conda-forge as a flag) or any other channel you find that contains your package if you really need this new version. This way conda does also search in this places for available packages. 
Considering your update: You can upgrade them each separately, but doing so will not only include an upgrade but also a downgrade of another package as well. Say, to add to the example above: X > 2.0 requires Y < 5.0, X < 2.0 requires Y > 5.0 So upgrading Y > 5.0 implies downgrading X to < 2.0 and vice versa. (this is a pedagogical example, of course, but it's the same in reality, usually just with more complicated dependencies and sub-dependencies) So you still cannot upgrade them all by doing the upgrades separately; the dependencies are just not satisfiable so earlier or later, an upgrade will downgrade an already upgraded package again. Or break the compatibility of the packages (which you usually don't want!), which is only possible by explicitly invoking an ignore-dependencies and force-command. But that is only to hack your way around issues, definitely not the normal-user case! 1 If you actually want to update the packages of your installation, which you usually don't. The command run in the base environment will update the packages in this, but usually you should work with virtual environments (conda create -n myenv and then conda activate myenv). Executing conda update --all inside such an environment will update the packages inside this environment. However, since the base environment is also an environment, the answer applies to both cases in the same way."} +{"question_id": 2831212, "score": 277, "creation_date": 1273798555, "tags": ["python", "list", "performance", "data-structures", "set"], "instruction": "Python Sets vs Lists\n\nIn Python, which data structure is more efficient/speedy? Assuming that order is not important to me and I would be checking for duplicates anyway, is a Python set slower than a Python list?", "output": "It depends on what you are intending to do with it. Sets are significantly faster when it comes to determining if an object is present in the set (as in x in s), but its elements are not ordered so you cannot access items by index as you would in a list. Sets are also somewhat slower to iterate over in practice. You can use the timeit module to see which is faster for your situation."} +{"question_id": 33282368, "score": 276, "creation_date": 1445521028, "tags": ["python", "matplotlib", "seaborn", "heatmap"], "instruction": "Plotting a 2D heatmap\n\nUsing Matplotlib, I want to plot a 2D heat map. My data is an n-by-n Numpy array, each with a value between 0 and 1. So for the (i, j) element of this array, I want to plot a square at the (i, j) coordinate in my heat map, whose color is proportional to the element's value in the array. How can I do this?", "output": "The imshow() function with parameters interpolation='nearest' and cmap='hot' should do what you want. Please review the interpolation parameter details, and see Interpolations for imshow and Image antialiasing. import matplotlib.pyplot as plt import numpy as np a = np.random.random((16, 16)) plt.imshow(a, cmap='hot', interpolation='nearest') plt.show()"} +{"question_id": 595374, "score": 276, "creation_date": 1235750796, "tags": ["python", "multiplication"], "instruction": "What's the function like sum() but for multiplication? product()?\n\nPython's sum() function returns the sum of numbers in an iterable. sum([3,4,5]) == 3 + 4 + 5 == 12 I'm looking for the function that returns the product instead. somelib.somefunc([3,4,5]) == 3 * 4 * 5 == 60", "output": "In Python 3.8, the prod function was added to the math module. See: math.prod(). 
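A quick usage sketch of the new function (assumes Python 3.8 or newer):

import math

print(math.prod([3, 4, 5]))      # 60 -- the 3 * 4 * 5 product asked about in the question
print(math.prod(range(1, 5)))    # 24 -- same result as the reduce()-based prod() shown below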
Older info: Python 3.7 and prior The function you're looking for would be called prod() or product() but Python doesn't have that function. So, you need to write your own (which is easy). Pronouncement on prod() Yes, that's right. Guido rejected the idea for a built-in prod() function because he thought it was rarely needed. Alternative with reduce() As you suggested, it is not hard to make your own using reduce() and operator.mul(): from functools import reduce # Required in Python 3 import operator def prod(iterable): return reduce(operator.mul, iterable, 1) >>> prod(range(1, 5)) 24 Note, in Python 3, the reduce() function was moved to the functools module. Specific case: Factorials As a side note, the primary motivating use case for prod() is to compute factorials. We already have support for that in the math module: >>> import math >>> math.factorial(10) 3628800 Alternative with logarithms If your data consists of floats, you can compute a product using sum() with exponents and logarithms: >>> from math import log, exp >>> data = [1.2, 1.5, 2.5, 0.9, 14.2, 3.8] >>> exp(sum(map(log, data))) 218.53799999999993 >>> 1.2 * 1.5 * 2.5 * 0.9 * 14.2 * 3.8 218.53799999999998 Note, the use of log() requires that all the inputs are positive."} +{"question_id": 7002429, "score": 275, "creation_date": 1312921334, "tags": ["python", "list", "dictionary", "extract", "key-value"], "instruction": "How to extract all values from a dictionary in Python?\n\nI have a dictionary d = {1:-0.3246, 2:-0.9185, 3:-3985, ...}. How do I extract all of the values of d into a list l?", "output": "For Python v2: If you only need the dictionary keys 1, 2, and 3 use: your_dict.keys(). If you only need the dictionary values -0.3246, -0.9185, and -3985 use: your_dict.values(). If you want both keys and values use: your_dict.items() which returns a list of tuples [(key1, value1), (key2, value2), ...]."} +{"question_id": 674519, "score": 275, "creation_date": 1237831112, "tags": ["python", "list", "dictionary"], "instruction": "How can I convert a dictionary into a list of tuples?\n\nIf I have a dictionary like: {'a': 1, 'b': 2, 'c': 3} How can I convert it to this? [('a', 1), ('b', 2), ('c', 3)] And how can I convert it to this? [(1, 'a'), (2, 'b'), (3, 'c')]", "output": ">>> d = { 'a': 1, 'b': 2, 'c': 3 } >>> list(d.items()) [('a', 1), ('c', 3), ('b', 2)] For Python 3.6 and later, the order of the list is what you would expect. In Python 2, you don't need list."} +{"question_id": 71591971, "score": 274, "creation_date": 1648058522, "tags": ["python", "macos", "terminal", "atom-editor", "macos-monterey"], "instruction": "How can I fix the \"zsh: command not found: python\" error? (macOS Monterey 12.3, Python 3.10, Atom IDE, and atom-python-run 0.9.7)\n\nSince I got the macOS v12.3 (Monterey) update (not sure it's related though), I have been getting this error when I try to run my Python code in the terminal: I am using Python 3.10.3, Atom IDE, and run the code in the terminal via atom-python-run package (which used to work perfectly fine). The settings for the package go like this: The which command in the terminal returns the following (which is odd, because earlier it would return something to just which python): I gather the error occurs because the terminal calls for python instead of python3, but I am super new to any coding and have no idea why it started now and how to fix it. Nothing of these has worked for me: I deleted and then reinstalled the Python interpreter from python.org. 
I tried alias python='python3' (which I saw in one of the threads here). I tried export PATH=\"/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin\" (which I found here). To reset zsh and paths, I deleted all associated hidden files in /local/users/ and ran the terminal once again. I deleted everything and reinstalled Mac OS X and the Python interpreter only to get the same error.", "output": "OK, after a couple of days trying, this is what has worked for me: I reinstalled Monterey (not sure it was essential, but I just figured I had messed with terminal and $PATH too much). I installed python via brew rather than from the official website. It would still return command not found error. I ran echo \"alias python=/usr/bin/python3\" >> ~/.zshrc in terminal to alias python with python3. Relaunch the shell or run source ~/.zshrc Problem solved. As far as I get it, there is no more pre-installed python 2.x in macOS as of 12.3 hence the error. I still find it odd though that atom-python-run would call for python instead of python3 despite the settings."} +{"question_id": 6234405, "score": 274, "creation_date": 1307156002, "tags": ["python", "exception", "logging", "python-logging"], "instruction": "Logging uncaught exceptions in Python\n\nHow do you cause uncaught exceptions to output via the logging module rather than to stderr? I realize the best way to do this would be: try: raise Exception, 'Throwing a boring exception' except Exception, e: logging.exception(e) But my situation is such that it would be really nice if logging.exception(...) were invoked automatically whenever an exception isn't caught.", "output": "As Ned pointed out, sys.excepthook is invoked every time an exception is raised and uncaught. The practical implication of this is that in your code you can override the default behavior of sys.excepthook to do whatever you want (including using logging.exception). As a straw man example: import sys def foo(exctype, value, tb): print('My Error Information') print('Type:', exctype) print('Value:', value) print('Traceback:', tb) Override sys.excepthook: >>> sys.excepthook = foo Commit obvious syntax error (leave out the colon) and get back custom error information: >>> def bar(a, b) My Error Information Type: Value: invalid syntax (, line 1) Traceback: None For more information about sys.excepthook, read the docs."} +{"question_id": 4830535, "score": 274, "creation_date": 1296231575, "tags": ["python", "jinja2"], "instruction": "How do I format a date in Jinja2?\n\nUsing Jinja2, how do I format a date field? I know in Python I can simply do this: print(car.date_of_manufacture.strftime('%Y-%m-%d')) But how do I format the date in Jinja2?", "output": "There are two ways to do it. The direct approach would be to simply call (and print) the strftime() method in your template, for example {{ car.date_of_manufacture.strftime('%Y-%m-%d') }} Another, sightly better approach would be to define your own filter, e.g.: from flask import Flask import babel app = Flask(__name__) @app.template_filter() def format_datetime(value, format='medium'): if format == 'full': format=\"EEEE, d. MMMM y 'at' HH:mm\" elif format == 'medium': format=\"EE dd.MM.y HH:mm\" return babel.dates.format_datetime(value, format) (This filter is based on babel for reasons regarding i18n, but you can use strftime too). The advantage of the filter is, that you can write {{ car.date_of_manufacture|format_datetime }} {{ car.date_of_manufacture|format_datetime('full') }} which looks nicer and is more maintainable. 
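To make the filter idea above concrete without the Flask and babel machinery, here is a small strftime-only sketch registered on a plain jinja2.Environment; the filter name and format strings are only illustrative, not from the original answer:

from datetime import datetime
from jinja2 import Environment

def format_datetime(value, fmt='%Y-%m-%d %H:%M'):
    # plain strftime variant of the babel-based filter shown above
    return value.strftime(fmt)

env = Environment()
env.filters['format_datetime'] = format_datetime  # same registration idea as @app.template_filter() in Flask

tmpl = env.from_string("{{ car_date|format_datetime }} / {{ car_date|format_datetime('%d.%m.%Y') }}")
print(tmpl.render(car_date=datetime(2020, 1, 2, 15, 30)))
# 2020-01-02 15:30 / 02.01.2020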
Another common filter is also the \"timedelta\" filter, which evaluates to something like \"written 8 minutes ago\". You can use babel.dates.format_timedelta for that, and register it as filter similar to the datetime example given here."} +{"question_id": 713138, "score": 274, "creation_date": 1238749449, "tags": ["python", "docstring"], "instruction": "Getting the docstring from a function\n\nI have the following function: def my_func(): \"\"\"My docstring is both funny and informative\"\"\" pass How do I get access to the docstring?", "output": "Interactively, you can display it with: help(my_func) Or from code you can retrieve it with (surround it with print(.) to get a formatted output): my_func.__doc__"} +{"question_id": 5074803, "score": 273, "creation_date": 1298356036, "tags": ["python", "django", "parsing", "url"], "instruction": "Retrieving parameters from a URL\n\nGiven a URL like the following, how can I parse the value of the query parameters? For example, in this case I want the value of some_key . /some_path?some_key=some_value' I am using Django in my environment; is there a method on the request object that could help me? I tried using self.request.get('some_key') but it is not returning the value some_value as I had hoped.", "output": "This is not specific to Django, but for Python in general. For a Django specific answer, see this one from @jball037 Python 2: import urlparse url = 'https://www.example.com/some_path?some_key=some_value' parsed = urlparse.urlparse(url) captured_value = urlparse.parse_qs(parsed.query)['some_key'][0] print captured_value Python 3: from urllib.parse import urlparse from urllib.parse import parse_qs url = 'https://www.example.com/some_path?some_key=some_value' parsed_url = urlparse(url) captured_value = parse_qs(parsed_url.query)['some_key'][0] print(captured_value) parse_qs returns a list. The [0] gets the first item of the list so the output of each script is some_value Here's the 'parse_qs' documentation for Python 3"} +{"question_id": 285061, "score": 273, "creation_date": 1226518387, "tags": ["python", "attributes", "object"], "instruction": "How do you programmatically set an attribute?\n\nSuppose I have a python object x and a string s, how do I set the attribute s on x? So: >>> x = SomeObject() >>> attr = 'myAttr' >>> # magic goes here >>> x.myAttr 'magic' What's the magic? The goal of this, incidentally, is to cache calls to x.__getattr__().", "output": "setattr(x, attr, 'magic') For help on it: >>> help(setattr) Help on built-in function setattr in module __builtin__: setattr(...) setattr(object, name, value) Set a named attribute on an object; setattr(x, 'y', v) is equivalent to ``x.y = v''. However, you should note that you can't do that to a \"pure\" instance of object. But it is likely you have a simple subclass of object where it will work fine. I would strongly urge the O.P. to never make instances of object like that."} +{"question_id": 14295680, "score": 273, "creation_date": 1358010434, "tags": ["python", "importerror", "python-module"], "instruction": "Unable to import a module that is definitely installed\n\nAfter installing Mechanize, I don't seem to be able to import it. I have tried installing from pip, easy_install, and via python setup.py install from the GitHub repository. But however I try it, I have no success at the interactive Python prompt: Python 2.7.3 (default, Aug 1 2012, 05:14:39) [GCC 4.6.3] on linux2 Type \"help\", \"copyright\", \"credits\" or \"license\" for more information. 
>>> import mechanize Traceback (most recent call last): File \"<stdin>\", line 1, in <module> ImportError: No module named mechanize The installations I ran previously reported that they had completed successfully, so I expect the import to work. What could be causing this error? See also: pip installs packages successfully, but executables are not found from the command line for the case where the code can be imported, but executables (or wrapper scripts) included with the package can't be found at the command line. Either way this is fundamentally an environment setup issue.", "output": "In my case, it was a permission problem. The package was somehow installed with root read/write permission only, and another user just could not read/write to it!"} +{"question_id": 20110170, "score": 273, "creation_date": 1384994223, "tags": ["python", "pandas", "dataframe", "flatten", "multi-index"], "instruction": "Turn Pandas Multi-Index into column\n\nI have a dataframe with 2 index levels: value Trial measurement 1 0 13 1 3 2 4 2 0 NaN 1 12 3 0 34 Which I want to turn into this: Trial measurement value 1 0 13 1 1 3 1 2 4 2 0 NaN 2 1 12 3 0 34 How can I best do this? I need this because I want to aggregate the data as instructed here, but I can't select my columns like that if they are in use as indices.", "output": "The reset_index() is a pandas DataFrame method that will transfer index values into the DataFrame as columns. The default setting for the parameter is drop=False (which will keep the index values as columns). All you have to do is call .reset_index() after the name of the DataFrame: df = df.reset_index()"} +{"question_id": 11994325, "score": 272, "creation_date": 1345146205, "tags": ["python", "flask"], "instruction": "How to divide flask app into multiple py files?\n\nMy flask application currently consists of a single test.py file with multiple routes and the main() route defined. Is there some way I could create a test2.py file that contains routes that were not handled in test.py? @app.route('/somepath') def somehandler(): # Handler code here I am concerned that there are too many routes in test.py and would like to make it such that I can run python test.py, which will also pick up the routes in test2.py as if they were part of the same file. What changes do I have to make in test.py and/or include in test2.py to get this to work?", "output": "You can use the usual Python package structure to divide your app into multiple modules, see the Flask docs. However, Flask uses a concept of blueprints for making application components and supporting common patterns within an application or across applications. You can create a sub-component of your app as a Blueprint in a separate file: simple_page = Blueprint('simple_page', __name__, template_folder='templates') @simple_page.route('/<page>') def show(page): # stuff And then use it in the main part: from yourapplication.simple_page import simple_page app = Flask(__name__) app.register_blueprint(simple_page) Blueprints can also bundle specific resources: templates or static files. Please refer to the Flask docs for all the details."} +{"question_id": 31323499, "score": 272, "creation_date": 1436460001, "tags": ["python", "python-2.7", "scikit-learn", "valueerror"], "instruction": "sklearn error ValueError: Input contains NaN, infinity or a value too large for dtype('float64')\n\nI am using sklearn and having a problem with the affinity propagation. I have built an input matrix and I keep getting the following error.
ValueError: Input contains NaN, infinity or a value too large for dtype('float64'). I have run np.isnan(mat.any()) #and gets False np.isfinite(mat.all()) #and gets True I tried using mat[np.isfinite(mat) == True] = 0 to remove the infinite values but this did not work either. What can I do to get rid of the infinite values in my matrix, so that I can use the affinity propagation algorithm? I am using anaconda and python 2.7.9.", "output": "This might happen inside scikit, and it depends on what you're doing. I recommend reading the documentation for the functions you're using. You might be using one which depends e.g. on your matrix being positive definite and not fulfilling that criteria. EDIT: How could I miss that: np.isnan(mat.any()) #and gets False np.isfinite(mat.all()) #and gets True is obviously wrong. Right would be: np.any(np.isnan(mat)) and np.all(np.isfinite(mat)) You want to check whether any of the elements are NaN, and not whether the return value of the any function is a number..."} +{"question_id": 10252010, "score": 271, "creation_date": 1334948421, "tags": ["python", "json", "serialization", "pickle"], "instruction": "Serializing class instance to JSON\n\nI am trying to create a JSON string representation of a class instance and having difficulty. Let's say the class is built like this: class testclass: value1 = \"a\" value2 = \"b\" A call to the json.dumps is made like this: t = testclass() json.dumps(t) It is failing and telling me that the testclass is not JSON serializable. TypeError: <__main__.testclass object at 0x000000000227A400> is not JSON serializable I have also tried using the pickle module : t = testclass() print(pickle.dumps(t, pickle.HIGHEST_PROTOCOL)) And it gives class instance information but not a serialized content of the class instance. b'\\x80\\x03c__main__\\ntestclass\\nq\\x00)\\x81q\\x01}q\\x02b.' What am I doing wrong?", "output": "The basic problem is that the JSON encoder json.dumps() only knows how to serialize a limited set of object types by default, all built-in types. List here: https://docs.python.org/3.3/library/json.html#encoders-and-decoders One good solution would be to make your class inherit from JSONEncoder and then implement the JSONEncoder.default() function, and make that function emit the correct JSON for your class. A simple solution would be to call json.dumps() on the .__dict__ member of that instance. That is a standard Python dict and if your class is simple it will be JSON serializable. class Foo(object): def __init__(self): self.x = 1 self.y = 2 foo = Foo() s = json.dumps(foo) # raises TypeError with \"is not JSON serializable\" s = json.dumps(foo.__dict__) # s set to: {\"x\":1, \"y\":2} The above approach is discussed in this blog posting: Serializing arbitrary Python objects to JSON using _dict_ And, of course, Python offers a built-in function that accesses .__dict__ for you, called vars(). So the above example can also be done as: s = json.dumps(vars(foo)) # s set to: {\"x\":1, \"y\":2}"} +{"question_id": 7274267, "score": 271, "creation_date": 1314899048, "tags": ["python", "datetime"], "instruction": "Print all day-dates between two dates\n\nFor example: from datetime import date d1 = date(2008,8,15) d2 = date(2008,9,15) I'm looking for simple code to print all dates in-between: 2008,8,15 2008,8,16 2008,8,17 ... 
2008,9,14 2008,9,15 Thanks", "output": "I came up with this: from datetime import date, timedelta start_date = date(2008, 8, 15) end_date = date(2008, 9, 15) # perhaps date.today() delta = end_date - start_date # returns timedelta for i in range(delta.days + 1): day = start_date + timedelta(days=i) print(day) The output: 2008-08-15 2008-08-16 ... 2008-09-13 2008-09-14 2008-09-15 Your question asks for dates in-between but I believe you meant including the start and end points, so they are included. Otherwise: # To remove the end date, delete the \"+ 1\" # at the end of the range function: for i in range(delta.days): # To remove the start date, insert a 1 # to the beginning of the range function: for i in range(1, delta.days + 1):"} +{"question_id": 34269772, "score": 271, "creation_date": 1450104204, "tags": ["python", "python-typing", "python-dataclasses", "namedtuple"], "instruction": "Type hints in namedtuple\n\nConsider the following piece of code: from collections import namedtuple point = namedtuple(\"Point\", (\"x:int\", \"y:int\")) The code above is just a way to demonstrate what I am trying to achieve. I would like to make a namedtuple with type hints. Do you know an elegant way to achieve the intended result?", "output": "The preferred syntax for a typed namedtuple since Python 3.6 is using typing.NamedTuple like so: from typing import NamedTuple class Point(NamedTuple): x: int y: int = 1 # Set default value Point(3) # -> Point(x=3, y=1) Starting with Python 3.7, consider using a dataclass: from dataclasses import dataclass @dataclass class Point: x: int y: int = 1 # Set default value Point(3) # -> Point(x=3, y=1)"} +{"question_id": 10040143, "score": 271, "creation_date": 1333694972, "tags": ["python", "django", "django-queryset", "django-filter", "inequality"], "instruction": "\">\", \"<\", \">=\" and \"<=\" don't work with \"filter()\" in Django\n\nWith = below, I could filter persons by age: qs = Person.objects.filter(age = 20) # \u2191 Here But with >, <, >= and <= below, I couldn't filter persons by age: qs = Person.objects.filter(age > 20) # \u2191 Here qs = Person.objects.filter(age < 20) # \u2191 Here qs = Person.objects.filter(age >= 20) # \u2191\u2191 Here qs = Person.objects.filter(age <= 20) # \u2191\u2191 Here Then, I got the error below: NameError: name 'age' is not defined How can I do greater than(>), greater than or equal to(>=), less than(<) and less than or equal to(<=) with filter() in Django?", "output": "Greater than: Person.objects.filter(age__gt=20) Greater than or equal to: Person.objects.filter(age__gte=20) Less than: Person.objects.filter(age__lt=20) Less than or equal to: Person.objects.filter(age__lte=20) You can find them all in [the documentation](https://docs.djangoproject.com/en/stable/ref/models/querysets/)."} +{"question_id": 5292303, "score": 271, "creation_date": 1300049549, "tags": ["python", "comparison", "tuples"], "instruction": "How does tuple comparison work in Python?\n\nI have been reading the Core Python programming book, and the author shows an example like: (4, 5) < (3, 5) # Equals false So, I'm wondering, how/why does it equal false? How does python compare these two tuples? Btw, it's not explained in the book.", "output": "Tuples are compared position by position: the first item of the first tuple is compared to the first item of the second tuple; if they are not equal (i.e.
the first is greater or smaller than the second) then that's the result of the comparison, else the second item is considered, then the third and so on. See Common Sequence Operations: Sequences of the same type also support comparisons. In particular, tuples and lists are compared lexicographically by comparing corresponding elements. This means that to compare equal, every element must compare equal and the two sequences must be of the same type and have the same length. Also Value Comparisons for further details: Lexicographical comparison between built-in collections works as follows: For two collections to compare equal, they must be of the same type, have the same length, and each pair of corresponding elements must compare equal (for example, [1,2] == (1,2) is false because the type is not the same). Collections that support order comparison are ordered the same as their first unequal elements (for example, [1,2,x] <= [1,2,y] has the same value as x <= y). If a corresponding element does not exist, the shorter collection is ordered first (for example, [1,2] < [1,2,3] is true). If not equal, the sequences are ordered the same as their first differing elements. For example, cmp([1,2,x], [1,2,y]) returns the same as cmp(x,y). If the corresponding element does not exist, the shorter sequence is considered smaller (for example, [1,2] < [1,2,3] returns True). Note 1: < and > do not mean \"smaller than\" and \"greater than\" but \"is before\" and \"is after\": so (0, 1) \"is before\" (1, 0). Note 2: tuples must not be considered as vectors in a n-dimensional space, compared according to their length. Note 3: referring to question https://stackoverflow.com/questions/36911617/python-2-tuple-comparison: do not think that a tuple is \"greater\" than another only if any element of the first is greater than the corresponding one in the second. Note 4: as @david Winiecki mentioned in the comments, in case of two tuples of different length, the first one which reaches its end, being the previous part equal, is declared as the lower: (1, 2) < (1, 2, 3), since 1=1, 2=2 and then the first tuple ends"} +{"question_id": 18675907, "score": 270, "creation_date": 1378574417, "tags": ["python", "bash", "anaconda", "conda"], "instruction": "How can I run Conda?\n\nI installed Anaconda and can run Python, so I assume that I installed it correctly. Following this introductory documentation, I am trying to install Python v3.3, so I am copying and pasting the following line into my console: conda create -n py33 python=3.3 anaconda However, that gives me an error: -bash: conda: command not found What do I need to do to run Conda? I am working on a Linux system.", "output": "It turns out that I had not set the path. To do so, I first had to edit .bash_profile (I downloaded it to my local desktop to do that; I do not know how to text edit a file from Linux) Then add this to .bash_profile: PATH=$PATH:$HOME/anaconda/bin"} +{"question_id": 50777849, "score": 270, "creation_date": 1528572957, "tags": ["python", "python-3.x", "pip", "conda"], "instruction": "From conda create requirements.txt for pip3\n\nI usually use conda to manage my environments, but now I am on a project that needs a little more horsepower than my laptop. So I am trying to use my university's workstations which have new Intel Xeons. But I don't have admin rights and the workstation does not have conda so I am forced to work with virtualenv and pip3. How do I generate a requirements.txt from conda that will work with pip3 and venv? 
conda list -e > requirements.txt does not generate a compatible file: = is not a valid operator. Did you mean == ? The conda output is: # This file may be used to create an environment using: # $ conda create --name --file # platform: osx-64 certifi=2016.2.28=py36_0 cycler=0.10.0=py36_0 freetype=2.5.5=2 icu=54.1=0 libpng=1.6.30=1 matplotlib=2.0.2=np113py36_0 mkl=2017.0.3=0 numpy=1.13.1=py36_0 openssl=1.0.2l=0 pip=9.0.1=py36_1 pyparsing=2.2.0=py36_0 pyqt=5.6.0=py36_2 python=3.6.2=0 python-dateutil=2.6.1=py36_0 pytz=2017.2=py36_0 qt=5.6.2=2 readline=6.2=2 scikit-learn=0.19.0=np113py36_0 scipy=0.19.1=np113py36_0 setuptools=36.4.0=py36_1 sip=4.18=py36_0 six=1.10.0=py36_0 sqlite=3.13.0=0 tk=8.5.18=0 wheel=0.29.0=py36_0 xz=5.2.3=0 zlib=1.2.11=0 I thought I would just manually change all = to == but the there are two = in the conda output. Which one to change? Surely there is an easier way? EDIT: pip freeze > requirements.txt gives: certifi==2016.2.28 cycler==0.10.0 matplotlib==2.0.2 matplotlib-venn==0.11.5 numpy==1.13.1 pyparsing==2.2.0 python-dateutil==2.6.1 pytz==2017.2 scikit-learn==0.19.0 scipy==0.19.1 six==1.10.0", "output": "As the comment at the top indicates, the output of conda list -e > requirements.txt can be used to create a conda virtual environment with conda create --name --file requirements.txt but this output isn't in the right format for pip. If you want a file which you can use to create a pip virtual environment (i.e. a requirements.txt in the right format) you can install pip within the conda environment, then use pip to create requirements.txt. conda activate conda install pip pip freeze > requirements.txt Then use the resulting requirements.txt to create a pip virtual environment: python3 -m venv env source env/bin/activate pip install -r requirements.txt When I tested this, the packages weren't identical across the outputs (pip included fewer packages) but it was sufficient to set up a functional environment. For those getting odd path references in requirements.txt, use: pip list --format=freeze > requirements.txt"} +{"question_id": 3475251, "score": 270, "creation_date": 1281689167, "tags": ["python", "string", "list", "split"], "instruction": "Split a string by a delimiter in Python\n\nConsider the following input string: 'MATCHES__STRING' I want to split that string wherever the \"delimiter\" __ occurs. This should output a list of strings: ['MATCHES', 'STRING'] To split on whitespace, see How do I split a string into a list of words?. To extract everything before the first delimiter, see Splitting on first occurrence. To extract everything before the last delimiter, see Partition string in Python and get value of last segment after colon.", "output": "Use the str.split method: >>> \"MATCHES__STRING\".split(\"__\") ['MATCHES', 'STRING']"} +{"question_id": 2272149, "score": 269, "creation_date": 1266316504, "tags": ["python", "rounding"], "instruction": "Round to 5 (or other number) in Python\n\nIs there a built-in function that can round like the following? 10 -> 10 12 -> 10 13 -> 15 14 -> 15 16 -> 15 18 -> 20", "output": "I don't know of a standard function in Python, but this works for me: Python 3 def myround(x, base=5): return base * round(x/base) It is easy to see why the above works. You want to make sure that your number divided by 5 is an integer, correctly rounded. So, we first do exactly that (round(x/5)), and then since we divided by 5, we multiply by 5 as well. I made the function more generic by giving it a base parameter, defaulting to 5. 
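As a quick sanity check (not part of the original answer), the Python 3 version above reproduces the mapping from the question:

def myround(x, base=5):
    return base * round(x / base)

for n in (10, 12, 13, 14, 16, 18):
    print(n, '->', myround(n))
# 10 -> 10, 12 -> 10, 13 -> 15, 14 -> 15, 16 -> 15, 18 -> 20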
Python 2 In Python 2, float(x) would be needed to ensure that / does floating-point division, and a final conversion to int is needed because round() returns a floating-point value in Python 2. def myround(x, base=5): return int(base * round(float(x)/base))"} +{"question_id": 52949531, "score": 269, "creation_date": 1540298979, "tags": ["python", "macos", "pip"], "instruction": "Could not install packages due to an EnvironmentError: [Errno 13]\n\nIn my MacOS Mojave terminal I wanted to install a python package with pip. At the end it says: You are using pip version 10.0.1, however version 18.1 is available. You should consider upgrading via the 'pip install --upgrade pip' command. So I wanted to update pip with the given command but I got an error: Could not install packages due to an EnvironmentError: [Errno 13] Permission denied: '/Library/Python/2.7/site-packages/pip-18.0-py2.7.egg/EGG-INFO/PKG-INFO' Consider using the `--user` option or check the permissions. I don't really understand what to do now. Also I realized it says Python 2.7 in the error message but I have and want to use only python 3.", "output": "If you want to use python3+ to install the packages you need to use pip3 install package_name And to solve the errno 13 you have to add --user at the end pip3 install package_name --user EDIT: For any project in Python it's highly recommended to work on a Virtual environment, which is a tool that helps to keep dependencies required by different projects separate by creating isolated Python virtual environments for them. In order to create one with python3+ you have to use the following command: virtualenv enviroment_name -p python3 And then you work on it just by activating it: source enviroment_name/bin/activate Once the virtual environment is activated, the name of your virtual environment will appear on the left side of the terminal. This will let you know that the virtual environment is currently active. Now you can install dependencies related to the project in this virtual environment by just using pip. pip install package_name"} +{"question_id": 4932438, "score": 269, "creation_date": 1297164431, "tags": ["python", "class"], "instruction": "How can I choose a custom string representation for a class itself (not instances of the class)?\n\nConsider this class: class foo(object): pass The default string representation looks something like this: >>> str(foo) \"\" How can I make this display a custom string? See How to print instances of a class using print()? for the corresponding question about instances of the class. In fact, this question is really a special case of that one - because in Python, classes are themselves also objects belonging to their own class - but it's not directly obvious how to apply the advice, since the default \"class of classes\" is pre-defined.", "output": "Implement __str__() or __repr__() in the class's metaclass. class MC(type): def __repr__(self): return 'Wahaha!' class C(object): __metaclass__ = MC print(C) Use __str__ if you mean a readable stringification, use __repr__ for unambiguous representations. Edit: Python 3 Version class MC(type): def __repr__(self): return 'Wahaha!' class C(object, metaclass=MC): pass print(C)"} +{"question_id": 4432208, "score": 268, "creation_date": 1292265075, "tags": ["python", "syntax", "operators", "modulo"], "instruction": "What is the result of % (modulo operator / percent sign) in Python?\n\nWhat does the % do in a calculation? I can't seem to work out what it does. 
Does it work out a percent of the calculation for example: 4 % 2 is apparently equal to 0. How?", "output": "The % (modulo) operator yields the remainder from the division of the first argument by the second. The numeric arguments are first converted to a common type. A zero right argument raises the ZeroDivisionError exception. The arguments may be floating point numbers, e.g., 3.14%0.7 equals 0.34 (since 3.14 equals 4*0.7 + 0.34.) The modulo operator always yields a result with the same sign as its second operand (or zero); the absolute value of the result is strictly smaller than the absolute value of the second operand [2]. Taken from https://docs.python.org/reference/expressions.html#binary-arithmetic-operations Example 1: 6%2 evaluates to 0 because there's no remainder if 6 is divided by 2 (3 times). Example 2: 7%2 evaluates to 1 because there's a remainder of 1 when 7 is divided by 2 (3 times). So to summarise that, it returns the remainder of a division operation, or 0 if there is no remainder. So 6%2 means find the remainder of 6 divided by 2."} +{"question_id": 8247605, "score": 268, "creation_date": 1322074614, "tags": ["python", "git", "pip"], "instruction": "Configuring so that pip install can work from github\n\nWe'd like to use pip with github to install private packages to our production servers. This question concerns what needs to be in the github repo in order for the install to be successful. Assuming the following command line (which authenticates just fine and tries to install): pip install git+ssh://git@github.com/BlahCo/search/tree/prod_release_branch/ProductName What needs to reside in the ProductName? Is it the contents of what would normally be in the tar file after running setup.py with the sdist option, or is the actual tar.gz file, or something else? I'm asking here because I've tried several variations and can't make it work. Any help appreciated.", "output": "You need the whole python package, with a setup.py file in it. A package named foo would be: foo # the installable package \u251c\u2500\u2500 foo \u2502 \u251c\u2500\u2500 __init__.py \u2502 \u2514\u2500\u2500 bar.py \u2514\u2500\u2500 setup.py And install from github like: $ pip install git+ssh://git@github.com/myuser/foo.git or $ pip install git+https://github.com/myuser/foo.git@v123 or $ pip install git+https://github.com/myuser/foo.git@newbranch More info at https://pip.pypa.io/en/stable/cli/pip_install/"} +{"question_id": 8718885, "score": 268, "creation_date": 1325626178, "tags": ["python", "python-import"], "instruction": "Importing module from string variable using \"__import__\" gives different results than a normal import statement\n\nI'm working on a documentation (personal) for nested matplotlib (MPL) library, which differs from MPL own provided, by interested submodule packages. I'm writing Python script which I hope will automate document generation from future MPL releases. I selected interested submodules/packages and want to list their main classes from which I'll generate list and process it with pydoc. The problem is that I can't find a way to instruct Python to load a submodule from a string. 
Here is an example of what I tried: import matplotlib.text as text x = dir(text) i = __import__('matplotlib.text') y = dir(i) j = __import__('matplotlib') z = dir(j) And here is a 3-way comparison of above lists through pprint: I don't understand what's loaded in y object - it's base matplotlib plus something else, but it lacks information that I wanted and that is main classes from matplotlib.text package. It's the top blue coloured part on screenshot (x list).", "output": "The __import__ function can be a bit hard to understand. If you change i = __import__('matplotlib.text') to i = __import__('matplotlib.text', fromlist=['']) then i will refer to matplotlib.text. In Python 3.1 or later, you can use importlib: import importlib i = importlib.import_module(\"matplotlib.text\") Some notes If you're trying to import something from a sub-folder e.g. ./feature/email.py, the code will look like importlib.import_module(\"feature.email\") Before Python 3.3 you could not import anything if there was no __init__.py in the folder with file you were trying to import (see caveats before deciding if you want to keep the file for backward compatibility e.g. with pytest)."} +{"question_id": 44780357, "score": 268, "creation_date": 1498566635, "tags": ["python", "python-3.x", "newline", "f-string"], "instruction": "How can I use newline '\\n' in an f-string to format a list of strings?\n\nI tried this code: names = ['Adam', 'Bob', 'Cyril'] text = f\"Winners are:\\n{'\\n'.join(names)}\" print(text) However, '\\' cannot be used inside the {...} expression portions of an f-string. How can I make it work? The result should be: Winners are: Adam Bob Cyril See Why isn't it possible to use backslashes inside the braces of f-strings? How can I work around the problem? for some additional discussion of why the limitation exists.", "output": "Python 3.12+ You can use backslashes within f-strings and the existing code from the question works as expected. See https://docs.python.org/3.12/whatsnew/3.12.html#pep-701-syntactic-formalization-of-f-strings. Python < 3.12 You can't. Backslashes cannot appear inside the curly braces {}; doing so results in a SyntaxError: >>> f'{\\}' SyntaxError: f-string expression part cannot include a backslash This is specified in the PEP for f-strings: Backslashes may not appear inside the expression portions of f-strings, [...] One option is assigning '\\n' to a name and then .join on that inside the f-string; that is, without using a literal: names = ['Adam', 'Bob', 'Cyril'] nl = '\\n' text = f\"Winners are:{nl}{nl.join(names)}\" print(text) Results in: Winners are: Adam Bob Cyril Another option, as specified by @wim, is to use chr(10) to get \\n returned and then join there. f\"Winners are:\\n{chr(10).join(names)}\" Yet another, of course, is to '\\n'.join beforehand and then add the name accordingly: n = \"\\n\".join(names) text = f\"Winners are:\\n{n}\" which results in the same output. Note: This is one of the small differences between f-strings and str.format. In the latter, you can always use punctuation granted that a corresponding wacky dict is unpacked that contains those keys: >>> \"{\\\\} {*}\".format(**{\"\\\\\": 'Hello', \"*\": 'World!'}) \"Hello World!\" (Please don't do this.) In the former, punctuation isn't allowed because you can't have identifiers that use them. Aside: I would definitely opt for print or format, as the other answers suggest as an alternative. The options I've given only apply if you must for some reason use f-strings. 
Just because something is new, doesn't mean you should try and do everything with it ;-)"} +{"question_id": 15858192, "score": 268, "creation_date": 1365301254, "tags": ["python", "matplotlib", "plot", "subplot"], "instruction": "How to set xlim and ylim for a subplot\n\nI would like to limit the X and Y axis in matplotlib for a specific subplot. The subplot figure itself doesn't have any axis property. I want for example to change only the limits for the second plot: import matplotlib.pyplot as plt fig=plt.subplot(131) plt.scatter([1,2],[3,4]) fig=plt.subplot(132) plt.scatter([10,20],[30,40]) fig=plt.subplot(133) plt.scatter([15,23],[35,43]) plt.show()", "output": "You should use the object-oriented interface to matplotlib, rather than the state machine interface. Almost all of the plt.* function are thin wrappers that basically do gca().*. plt.subplot returns an axes object. Once you have a reference to the axes object you can plot directly to it, change its limits, etc. import matplotlib.pyplot as plt ax1 = plt.subplot(131) ax1.scatter([1, 2], [3, 4]) ax1.set_xlim([0, 5]) ax1.set_ylim([0, 5]) ax2 = plt.subplot(132) ax2.scatter([1, 2],[3, 4]) ax2.set_xlim([0, 5]) ax2.set_ylim([0, 5]) and so on for as many axes as you want. Or better, wrap it all up in a loop: import matplotlib.pyplot as plt DATA_x = ([1, 2], [2, 3], [3, 4]) DATA_y = DATA_x[::-1] XLIMS = [[0, 10]] * 3 YLIMS = [[0, 10]] * 3 for j, (x, y, xlim, ylim) in enumerate(zip(DATA_x, DATA_y, XLIMS, YLIMS)): ax = plt.subplot(1, 3, j + 1) ax.scatter(x, y) ax.set_xlim(xlim) ax.set_ylim(ylim)"} +{"question_id": 8364674, "score": 267, "creation_date": 1322874806, "tags": ["python", "arrays", "numpy", "count", "boolean"], "instruction": "How to count the number of true elements in a NumPy bool array\n\nI have a NumPy array 'boolarr' of boolean type. I want to count the number of elements whose values are True. Is there a NumPy or Python routine dedicated for this task? Or, do I need to iterate over the elements in my script?", "output": "You have multiple options. Two options are the following. boolarr.sum() numpy.count_nonzero(boolarr) Here's an example: >>> import numpy as np >>> boolarr = np.array([[0, 0, 1], [1, 0, 1], [1, 0, 1]], dtype=np.bool) >>> boolarr array([[False, False, True], [ True, False, True], [ True, False, True]], dtype=bool) >>> boolarr.sum() 5 Of course, that is a bool-specific answer. More generally, you can use numpy.count_nonzero. >>> np.count_nonzero(boolarr) 5"} +{"question_id": 3167154, "score": 266, "creation_date": 1278085316, "tags": ["python"], "instruction": "How to split a dos path into its components in Python\n\nI have a string variable which represents a dos path e.g: var = \"d:\\stuff\\morestuff\\furtherdown\\THEFILE.txt\" I want to split this string into: [ \"d\", \"stuff\", \"morestuff\", \"furtherdown\", \"THEFILE.txt\" ] I have tried using split() and replace() but they either only process the first backslash or they insert hex numbers into the string. I need to convert this string variable into a raw string somehow so that I can parse it. What's the best way to do this? I should also add that the contents of var i.e. the path that I'm trying to parse, is actually the return value of a command line query. It's not path data that I generate myself. Its stored in a file, and the command line tool is not going to escape the backslashes.", "output": "I've been bitten loads of times by people writing their own path fiddling functions and getting it wrong. 
Spaces, slashes, backslashes, colons -- the possibilities for confusion are not endless, but mistakes are easily made anyway. So I'm a stickler for the use of os.path, and recommend it on that basis. (However, the path to virtue is not the one most easily taken, and many people when finding this are tempted to take a slippery path straight to damnation. They won't realise until one day everything falls to pieces, and they -- or, more likely, somebody else -- has to work out why everything has gone wrong, and it turns out somebody made a filename that mixes slashes and backslashes -- and some person suggests that the answer is \"not to do that\". Don't be any of these people. Except for the one who mixed up slashes and backslashes -- you could be them if you like.) You can get the drive and path+file like this: drive, path_and_file = os.path.splitdrive(path) Get the path and the file: path, file = os.path.split(path_and_file) Getting the individual folder names is not especially convenient, but it is the sort of honest middling discomfort that heightens the pleasure of later finding something that actually works well: folders = [] while 1: path, folder = os.path.split(path) if folder != \"\": folders.append(folder) else: if path != \"\": folders.append(path) break folders.reverse() (This pops a \"\\\" at the start of folders if the path was originally absolute. You could lose a bit of code if you didn't want that.)"} +{"question_id": 38411942, "score": 266, "creation_date": 1468677442, "tags": ["python", "anaconda", "conda"], "instruction": "Install a specific (ana)conda package version\n\nI want to install the 'rope' package in my current active environment using conda. Currently, the following 'rope' versions are available: (data_downloader) ~/code/data_downloader $ conda search rope Using Anaconda Cloud api site https://api.anaconda.org Fetching package metadata: .... cached-property 1.2.0 py27_0 defaults 1.2.0 py34_0 defaults 1.2.0 py35_0 defaults 1.3.0 py27_0 defaults 1.3.0 py34_0 defaults 1.3.0 py35_0 defaults rope 0.9.4 py26_0 defaults 0.9.4 py27_0 defaults 0.9.4 py33_0 defaults 0.9.4 py34_0 defaults 0.9.4 py26_1 defaults 0.9.4 py27_1 defaults 0.9.4 py33_1 defaults 0.9.4 py34_1 defaults . 0.9.4 py35_1 defaults I would like to install 1.3.0 py35_0 defaults I've tried all sorts of permutations of conda install which I'm not going to list here because none of them are correct. I am also not sure what the py35_0 is (I'm assuming this is the version of the python against which the package was built?) and I also don't know what 'defaults' means?", "output": "There is no version 1.3.0 for rope. 1.3.0 refers to the package cached-property. The highest available version of rope is 0.9.4. You can install different versions with conda install package=version. But in this case there is only one version of rope so you don't need that. The reason you see the cached-property in this listing is because it contains the string \"rope\": \"cached-p rope erty\" py35_0 means that you need python version 3.5 for this specific version. If you only have python3.4 and the package is only for version 3.5 you cannot install it with conda. I am not quite sure on the defaults either. 
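(This relates to the DOS-path-splitting answer above, not to the conda question.) On Python 3.4+, pathlib can produce the requested list directly; a small sketch that is not part of the original answer:

from pathlib import PureWindowsPath

p = PureWindowsPath(r"d:\stuff\morestuff\furtherdown\THEFILE.txt")
print(p.parts)   # ('d:\\', 'stuff', 'morestuff', 'furtherdown', 'THEFILE.txt')
print(p.drive)   # 'd:'
print(p.name)    # 'THEFILE.txt'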
It should be an indication that this package is inside the default conda channel."} +{"question_id": 40206569, "score": 265, "creation_date": 1477248296, "tags": ["python", "django", "python-3.x"], "instruction": "Django model \"doesn't declare an explicit app_label\"\n\nAfter a dozen hours of troubleshooting, I thought I was finally in business, but then I got: Model class django.contrib.contenttypes.models.ContentType doesn't declare an explicit app_label There is so little info on this on the web, and no solution out there has resolved my issue. I'm using Python 3.4 and Django 1.10. From my settings.py: INSTALLED_APPS = [ 'DeleteNote.apps.DeletenoteConfig', 'LibrarySync.apps.LibrarysyncConfig', 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', ] And my apps.py files look like this: from django.apps import AppConfig class DeletenoteConfig(AppConfig): name = 'DeleteNote' and from django.apps import AppConfig class LibrarysyncConfig(AppConfig): name = 'LibrarySync'", "output": "Are you missing your application name in the settings file? The myAppNameConfig is the default class generated in apps.py by the ./manage.py startapp myAppName command, where myAppName is the name of your app. settings.py INSTALLED_APPS = [ 'myAppName.apps.myAppNameConfig', 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', ] This way, the settings file finds out what you want to call your application. You can change how it looks later in the apps.py file by adding the following code in myAppName/apps.py class myAppNameConfig(AppConfig): name = 'myAppName' verbose_name = 'A Much Better Name'"} +{"question_id": 28986489, "score": 264, "creation_date": 1426076399, "tags": ["python", "replace", "pandas", "dataframe"], "instruction": "How to replace text in a string column of a Pandas dataframe?\n\nI have a column in my dataframe like this: range \"(2,30)\" \"(50,290)\" \"(400,1000)\" ... and I want to replace the , comma with - dash. I'm currently using this method but nothing is changed. org_info_exc['range'].replace(',', '-', inplace=True) Can anybody help?", "output": "Use the vectorised str method replace: df['range'] = df['range'].str.replace(',','-') df range 0 (2-30) 1 (50-290) EDIT: so if we look at what you tried and why it didn't work: df['range'].replace(',','-',inplace=True) from the docs we see this description: str or regex: str: string exactly matching to_replace will be replaced with value So because the str values do not match, no replacement occurs, compare with the following: df = pd.DataFrame({'range':['(2,30)',',']}) df['range'].replace(',','-', inplace=True) df['range'] 0 (2,30) 1 - Name: range, dtype: object here we get an exact match on the second row and the replacement occurs."} +{"question_id": 27263805, "score": 264, "creation_date": 1417581884, "tags": ["python", "pandas", "list"], "instruction": "Pandas column of lists, create a row for each list element\n\nI have a dataframe where some cells contain lists of multiple values. Rather than storing multiple values in a cell, I'd like to expand the dataframe so that each item in the list gets its own row (with the same values in all other columns).
So if I have: import pandas as pd import numpy as np df = pd.DataFrame( {'trial_num': [1, 2, 3, 1, 2, 3], 'subject': [1, 1, 1, 2, 2, 2], 'samples': [list(np.random.randn(3).round(2)) for i in range(6)] } ) df Out[10]: samples subject trial_num 0 [0.57, -0.83, 1.44] 1 1 1 [-0.01, 1.13, 0.36] 1 2 2 [1.18, -1.46, -0.94] 1 3 3 [-0.08, -4.22, -2.05] 2 1 4 [0.72, 0.79, 0.53] 2 2 5 [0.4, -0.32, -0.13] 2 3 How do I convert to long form, e.g.: subject trial_num sample sample_num 0 1 1 0.57 0 1 1 1 -0.83 1 2 1 1 1.44 2 3 1 2 -0.01 0 4 1 2 1.13 1 5 1 2 0.36 2 6 1 3 1.18 0 # etc. The index is not important, it's OK to set existing columns as the index and the final ordering isn't important.", "output": "UPDATE: the solution below was helpful for older Pandas versions, because the DataFrame.explode() wasn\u2019t available. Starting from Pandas 0.25.0 you can simply use DataFrame.explode(). lst_col = 'samples' r = pd.DataFrame({ col:np.repeat(df[col].values, df[lst_col].str.len()) for col in df.columns.drop(lst_col)} ).assign(**{lst_col:np.concatenate(df[lst_col].values)})[df.columns] Result: In [103]: r Out[103]: samples subject trial_num 0 0.10 1 1 1 -0.20 1 1 2 0.05 1 1 3 0.25 1 2 4 1.32 1 2 5 -0.17 1 2 6 0.64 1 3 7 -0.22 1 3 8 -0.71 1 3 9 -0.03 2 1 10 -0.65 2 1 11 0.76 2 1 12 1.77 2 2 13 0.89 2 2 14 0.65 2 2 15 -0.98 2 3 16 0.65 2 3 17 -0.30 2 3 PS here you may find a bit more generic solution UPDATE: some explanations: IMO the easiest way to understand this code is to try to execute it step-by-step: in the following line we are repeating values in one column N times where N - is the length of the corresponding list: In [10]: np.repeat(df['trial_num'].values, df[lst_col].str.len()) Out[10]: array([1, 1, 1, 2, 2, 2, 3, 3, 3, 1, 1, 1, 2, 2, 2, 3, 3, 3], dtype=int64) this can be generalized for all columns, containing scalar values: In [11]: pd.DataFrame({ ...: col:np.repeat(df[col].values, df[lst_col].str.len()) ...: for col in df.columns.drop(lst_col)} ...: ) Out[11]: trial_num subject 0 1 1 1 1 1 2 1 1 3 2 1 4 2 1 5 2 1 6 3 1 .. ... ... 11 1 2 12 2 2 13 2 2 14 2 2 15 3 2 16 3 2 17 3 2 [18 rows x 2 columns] using np.concatenate() we can flatten all values in the list column (samples) and get a 1D vector: In [12]: np.concatenate(df[lst_col].values) Out[12]: array([-1.04, -0.58, -1.32, 0.82, -0.59, -0.34, 0.25, 2.09, 0.12, 0.83, -0.88, 0.68, 0.55, -0.56, 0.65, -0.04, 0.36, -0.31]) putting all this together: In [13]: pd.DataFrame({ ...: col:np.repeat(df[col].values, df[lst_col].str.len()) ...: for col in df.columns.drop(lst_col)} ...: ).assign(**{lst_col:np.concatenate(df[lst_col].values)}) Out[13]: trial_num subject samples 0 1 1 -1.04 1 1 1 -0.58 2 1 1 -1.32 3 2 1 0.82 4 2 1 -0.59 5 2 1 -0.34 6 3 1 0.25 .. ... ... ... 11 1 2 0.68 12 2 2 0.55 13 2 2 -0.56 14 2 2 0.65 15 3 2 -0.04 16 3 2 0.36 17 3 2 -0.31 [18 rows x 3 columns] using pd.DataFrame()[df.columns] will guarantee that we are selecting columns in the original order..."} +{"question_id": 575196, "score": 264, "creation_date": 1235320971, "tags": ["python", "scope"], "instruction": "Why can a function modify some arguments as perceived by the caller, but not others?\n\nI'm trying to understand Python's approach to variable scope. In this example, why is f() able to alter the value of x, as perceived within main(), but not the value of n? 
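(This sketch relates to the pandas list-column answer above, not to the scope question that follows.) With pandas 0.25+ the whole reshape is the DataFrame.explode() one-liner mentioned at the top of that answer, shown here on the question's data; it is an added illustration, not the original answer's code:

import numpy as np
import pandas as pd

df = pd.DataFrame({
    'trial_num': [1, 2, 3, 1, 2, 3],
    'subject': [1, 1, 1, 2, 2, 2],
    'samples': [list(np.random.randn(3).round(2)) for _ in range(6)],
})

long_df = df.explode('samples').reset_index(drop=True)
print(long_df)  # one row per list element, other columns repeated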
def f(n, x): n = 2 x.append(4) print('In f():', n, x) def main(): n = 1 x = [0,1,2,3] print('Before:', n, x) f(n, x) print('After: ', n, x) main() Output: Before: 1 [0, 1, 2, 3] In f(): 2 [0, 1, 2, 3, 4] After: 1 [0, 1, 2, 3, 4] See also: How do I pass a variable by reference? Are Python variables pointers? Or else, what are they?", "output": "Some answers contain the word \"copy\" in the context of a function call. I find it confusing. Python doesn't copy objects you pass during a function call ever. Function parameters are names. When you call a function, Python binds these parameters to whatever objects you pass (via names in a caller scope). Objects can be mutable (like lists) or immutable (like integers and strings in Python). A mutable object you can change. You can't change a name, you just can bind it to another object. Your example is not about scopes or namespaces, it is about naming and binding and mutability of an object in Python. def f(n, x): # these `n`, `x` have nothing to do with `n` and `x` from main() n = 2 # put `n` label on `2` balloon x.append(4) # call `append` method of whatever object `x` is referring to. print('In f():', n, x) x = [] # put `x` label on `[]` ballon # x = [] has no effect on the original list that is passed into the function Here are nice pictures on the difference between variables in other languages and names in Python."} +{"question_id": 10592605, "score": 263, "creation_date": 1337040371, "tags": ["python", "machine-learning", "scikit-learn", "classification"], "instruction": "Save classifier to disk in scikit-learn\n\nHow do I save a trained Naive Bayes classifier to disk and use it to predict data? I have the following sample program from the scikit-learn website: from sklearn import datasets iris = datasets.load_iris() from sklearn.naive_bayes import GaussianNB gnb = GaussianNB() y_pred = gnb.fit(iris.data, iris.target).predict(iris.data) print \"Number of mislabeled points : %d\" % (iris.target != y_pred).sum()", "output": "Classifiers are just objects that can be pickled and dumped like any other. To continue your example: import cPickle # save the classifier with open('my_dumped_classifier.pkl', 'wb') as fid: cPickle.dump(gnb, fid) # load it again with open('my_dumped_classifier.pkl', 'rb') as fid: gnb_loaded = cPickle.load(fid)"} +{"question_id": 2909975, "score": 263, "creation_date": 1274845084, "tags": ["python", "file", "path"], "instruction": "Python list directory, subdirectory, and files\n\nI'm trying to make a script to list all directories, subdirectories, and files in a given directory. I tried this: import sys, os root = \"/home/patate/directory/\" path = os.path.join(root, \"targetdirectory\") for r, d, f in os.walk(path): for file in f: print(os.path.join(root, file)) Unfortunately, it doesn't work properly. I get all the files, but not their complete paths. For example, if the directory struct would be: /home/patate/directory/targetdirectory/123/456/789/file.txt It would print: /home/patate/directory/targetdirectory/file.txt I need the first result.", "output": "Use os.path.join to concatenate the directory and file name: import os for path, subdirs, files in os.walk(root): for name in files: print(os.path.join(path, name)) Note the usage of path and not root in the concatenation, since using root would be incorrect. In Python 3.4, the pathlib module was added for easier path manipulations. 
So the equivalent to os.path.join would be: pathlib.PurePath(path, name) The advantage of pathlib is that you can use a variety of useful methods on paths. If you use the concrete Path variant you can also do actual OS calls through them, like changing into a directory, deleting the path, opening the file it points to and much more."} +{"question_id": 18453566, "score": 263, "creation_date": 1377553538, "tags": ["python"], "instruction": "Get list of values for list of keys\n\nIs there a built-in/quick way to use a list of keys to a dictionary to get a list of corresponding items? For instance I have: >>> mydict = {'one': 1, 'two': 2, 'three': 3} >>> mykeys = ['three', 'one'] How can I use mykeys to get the corresponding values in the dictionary as a list? >>> mydict.WHAT_GOES_HERE(mykeys) [3, 1]", "output": "A list comprehension seems to be a good way to do this: >>> [mydict[x] for x in mykeys] [3, 1]"} +{"question_id": 13905741, "score": 263, "creation_date": 1355694142, "tags": ["python", "python-3.x", "scope", "list-comprehension", "python-internals"], "instruction": "Accessing class variables from a list comprehension in the class definition\n\nHow do you access other class variables from a list comprehension within the class definition? The following works in Python 2 but fails in Python 3: class Foo: x = 5 y = [x for i in range(1)] Python 3.11 gives the error: NameError: name 'x' is not defined Trying Foo.x doesn't work either. Any ideas on how to do this in Python 3? A slightly more complicated motivating example: from collections import namedtuple class StateDatabase: State = namedtuple('State', ['name', 'capital']) db = [State(*args) for args in [ ['Alabama', 'Montgomery'], ['Alaska', 'Juneau'], # ... ]] In this example, apply() would have been a decent workaround, but it is sadly removed from Python 3.", "output": "Class scope and list, set or dictionary comprehensions, as well as generator expressions, do not mix. The TL;DR You cannot access the class scope from functions, list comprehensions or generator expressions enclosed in that scope; they act as if that scope does not exist. In Python 2, list comprehensions were implemented using a shortcut so actually could access the class scope, but in Python 3 they got their own scope (as they should have had all along) and thus your example breaks. Other comprehension types have their own scope regardless of Python version, so a similar example with a set or dict comprehension would break in Python 2. # Same error, in Python 2 or 3 y = {x: x for i in range(1)} The why; or, the official word on this In Python 3, list comprehensions were given a proper scope (local namespace) of their own, to prevent their local variables bleeding over into the surrounding scope (see List comprehension rebinds names even after scope of comprehension. Is this right?). That's great when using such a list comprehension in a module or in a function, but in classes, scoping is a little, uhm, strange. This is documented in pep 227: Names in class scope are not accessible. Names are resolved in the innermost enclosing function scope. If a class definition occurs in a chain of nested scopes, the resolution process skips class definitions. and in the class compound statement documentation: The class\u2019s suite is then executed in a new execution frame (see section Naming and binding), using a newly created local namespace and the original global namespace. (Usually, the suite contains only function definitions.) 
When the class\u2019s suite finishes execution, its execution frame is discarded but its local namespace is saved. [4] A class object is then created using the inheritance list for the base classes and the saved local namespace for the attribute dictionary. Emphasis mine; the execution frame is the temporary scope. Because the scope is repurposed as the attributes on a class object, allowing it to be used as a nonlocal scope as well leads to undefined behaviour; what would happen if a class method referred to x as a nested scope variable, then manipulates Foo.x as well, for example? More importantly, what would that mean for subclasses of Foo? Python has to treat a class scope differently as it is very different from a function scope. Last, but definitely not least, the linked Naming and binding section in the Execution model documentation mentions class scopes explicitly: The scope of names defined in a class block is limited to the class block; it does not extend to the code blocks of methods \u2013 this includes comprehensions and generator expressions since they are implemented using a function scope. This means that the following will fail: class A: a = 42 b = list(a + i for i in range(10)) The (small) exception; or, why one part may still work There's one part of a comprehension or generator expression that executes in the surrounding scope, regardless of Python version. That would be the expression for the outermost iterable. In your example, it's the range(1): y = [x for i in range(1)] # ^^^^^^^^ Thus, using x in that expression would not throw an error: # Runs fine y = [i for i in range(x)] This only applies to the outermost iterable; if a comprehension has multiple for clauses, the iterables for inner for clauses are evaluated in the comprehension's scope: # NameError y = [i for i in range(1) for j in range(x)] # ^^^^^^^^^^^^^^^^^ ----------------- # outer loop inner, nested loop This design decision was made in order to throw an error at genexp creation time instead of iteration time when creating the outermost iterable of a generator expression throws an error, or when the outermost iterable turns out not to be iterable. Comprehensions share this behavior for consistency. Looking under the hood; or, way more detail than you ever wanted You can see this all in action using the dis module. I'm using Python 3.3 in the following examples, because it adds qualified names that neatly identify the code objects we want to inspect. The bytecode produced is otherwise functionally identical to Python 3.2. To create a class, Python essentially takes the whole suite that makes up the class body (so everything indented one level deeper than the class : line), and executes that as if it were a function: >>> import dis >>> def foo(): ... class Foo: ... x = 5 ... y = [x for i in range(1)] ... return Foo ... >>> dis.dis(foo) 2 0 LOAD_BUILD_CLASS 1 LOAD_CONST 1 (\", line 2>) 4 LOAD_CONST 2 ('Foo') 7 MAKE_FUNCTION 0 10 LOAD_CONST 2 ('Foo') 13 CALL_FUNCTION 2 (2 positional, 0 keyword pair) 16 STORE_FAST 0 (Foo) 5 19 LOAD_FAST 0 (Foo) 22 RETURN_VALUE The first LOAD_CONST there loads a code object for the Foo class body, then makes that into a function, and calls it. The result of that call is then used to create the namespace of the class, its __dict__. So far so good. 
The thing to note here is that the bytecode contains a nested code object; in Python, class definitions, functions, comprehensions and generators all are represented as code objects that contain not only bytecode, but also structures that represent local variables, constants, variables taken from globals, and variables taken from the nested scope. The compiled bytecode refers to those structures and the python interpreter knows how to access those given the bytecodes presented. The important thing to remember here is that Python creates these structures at compile time; the class suite is a code object (\", line 2>) that is already compiled. Let's inspect that code object that creates the class body itself; code objects have a co_consts structure: >>> foo.__code__.co_consts (None, \", line 2>, 'Foo') >>> dis.dis(foo.__code__.co_consts[1]) 2 0 LOAD_FAST 0 (__locals__) 3 STORE_LOCALS 4 LOAD_NAME 0 (__name__) 7 STORE_NAME 1 (__module__) 10 LOAD_CONST 0 ('foo..Foo') 13 STORE_NAME 2 (__qualname__) 3 16 LOAD_CONST 1 (5) 19 STORE_NAME 3 (x) 4 22 LOAD_CONST 2 ( at 0x10a385420, file \"\", line 4>) 25 LOAD_CONST 3 ('foo..Foo.') 28 MAKE_FUNCTION 0 31 LOAD_NAME 4 (range) 34 LOAD_CONST 4 (1) 37 CALL_FUNCTION 1 (1 positional, 0 keyword pair) 40 GET_ITER 41 CALL_FUNCTION 1 (1 positional, 0 keyword pair) 44 STORE_NAME 5 (y) 47 LOAD_CONST 5 (None) 50 RETURN_VALUE The above bytecode creates the class body. The function is executed and the resulting locals() namespace, containing x and y is used to create the class (except that it doesn't work because x isn't defined as a global). Note that after storing 5 in x, it loads another code object; that's the list comprehension; it is wrapped in a function object just like the class body was; the created function takes a positional argument, the range(1) iterable to use for its looping code, cast to an iterator. As shown in the bytecode, range(1) is evaluated in the class scope. From this you can see that the only difference between a code object for a function or a generator, and a code object for a comprehension is that the latter is executed immediately when the parent code object is executed; the bytecode simply creates a function on the fly and executes it in a few small steps. Python 2.x uses inline bytecode there instead, here is output from Python 2.7: 2 0 LOAD_NAME 0 (__name__) 3 STORE_NAME 1 (__module__) 3 6 LOAD_CONST 0 (5) 9 STORE_NAME 2 (x) 4 12 BUILD_LIST 0 15 LOAD_NAME 3 (range) 18 LOAD_CONST 1 (1) 21 CALL_FUNCTION 1 24 GET_ITER >> 25 FOR_ITER 12 (to 40) 28 STORE_NAME 4 (i) 31 LOAD_NAME 2 (x) 34 LIST_APPEND 2 37 JUMP_ABSOLUTE 25 >> 40 STORE_NAME 5 (y) 43 LOAD_LOCALS 44 RETURN_VALUE No code object is loaded, instead a FOR_ITER loop is run inline. So in Python 3.x, the list generator was given a proper code object of its own, which means it has its own scope. However, the comprehension was compiled together with the rest of the python source code when the module or script was first loaded by the interpreter, and the compiler does not consider a class suite a valid scope. Any referenced variables in a list comprehension must look in the scope surrounding the class definition, recursively. If the variable wasn't found by the compiler, it marks it as a global. 
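A quick illustration of that 'marked as a global' behaviour (this example is mine, not from the original answer): if a module-level x happens to exist, the comprehension silently picks it up instead of the class attribute.
x = 10            # module-level name
class Foo:
    x = 5         # class attribute; invisible to the comprehension below
    y = [x for i in range(3)]
print(Foo.y)      # [10, 10, 10] -- the comprehension found the module-level x, not Foo.x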
Disassembly of the list comprehension code object shows that x is indeed loaded as a global: >>> foo.__code__.co_consts[1].co_consts ('foo..Foo', 5, at 0x10a385420, file \"\", line 4>, 'foo..Foo.', 1, None) >>> dis.dis(foo.__code__.co_consts[1].co_consts[2]) 4 0 BUILD_LIST 0 3 LOAD_FAST 0 (.0) >> 6 FOR_ITER 12 (to 21) 9 STORE_FAST 1 (i) 12 LOAD_GLOBAL 0 (x) 15 LIST_APPEND 2 18 JUMP_ABSOLUTE 6 >> 21 RETURN_VALUE This chunk of bytecode loads the first argument passed in (the range(1) iterator), and just like the Python 2.x version uses FOR_ITER to loop over it and create its output. Had we defined x in the foo function instead, x would be a cell variable (cells refer to nested scopes): >>> def foo(): ... x = 2 ... class Foo: ... x = 5 ... y = [x for i in range(1)] ... return Foo ... >>> dis.dis(foo.__code__.co_consts[2].co_consts[2]) 5 0 BUILD_LIST 0 3 LOAD_FAST 0 (.0) >> 6 FOR_ITER 12 (to 21) 9 STORE_FAST 1 (i) 12 LOAD_DEREF 0 (x) 15 LIST_APPEND 2 18 JUMP_ABSOLUTE 6 >> 21 RETURN_VALUE The LOAD_DEREF will indirectly load x from the code object cell objects: >>> foo.__code__.co_cellvars # foo function `x` ('x',) >>> foo.__code__.co_consts[2].co_cellvars # Foo class, no cell variables () >>> foo.__code__.co_consts[2].co_consts[2].co_freevars # Refers to `x` in foo ('x',) >>> foo().y [2] The actual referencing looks the value up from the current frame data structures, which were initialized from a function object's .__closure__ attribute. Since the function created for the comprehension code object is discarded again, we do not get to inspect that function's closure. To see a closure in action, we'd have to inspect a nested function instead: >>> def spam(x): ... def eggs(): ... return x ... return eggs ... >>> spam(1).__code__.co_freevars ('x',) >>> spam(1)() 1 >>> spam(1).__closure__ >>> spam(1).__closure__[0].cell_contents 1 >>> spam(5).__closure__[0].cell_contents 5 So, to summarize: List comprehensions get their own code objects in Python 3 (up to Python 3.11), and there is no difference between code objects for functions, generators or comprehensions; comprehension code objects are wrapped in a temporary function object and called immediately. Code objects are created at compile time, and any non-local variables are marked as either global or as free variables, based on the nested scopes of the code. The class body is not considered a scope for looking up those variables. When executing the code, Python has only to look into the globals, or the closure of the currently executing object. Since the compiler didn't include the class body as a scope, the temporary function namespace is not considered. A workaround; or, what to do about it If you were to create an explicit scope for the x variable, like in a function, you can use class-scope variables for a list comprehension: >>> class Foo: ... x = 5 ... def y(x): ... return [x for i in range(1)] ... y = y(x) ... >>> Foo.y [5] The 'temporary' y function can be called directly; we replace it when we do with its return value. Its scope is considered when resolving x: >>> foo.__code__.co_consts[1].co_consts[2] \", line 4> >>> foo.__code__.co_consts[1].co_consts[2].co_cellvars ('x',) Of course, people reading your code will scratch their heads over this a little; you may want to put a big fat comment in there explaining why you are doing this. 
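A related trick sometimes seen in the wild (again an illustrative sketch, not from the original answer) smuggles the class-scope value in through a default argument, because default values are evaluated in the class scope at definition time:
class Foo:
    x = 5
    # x=x binds the class-scope value as the lambda's default; inside the
    # lambda, x is an ordinary local that the comprehension can close over.
    y = (lambda x=x: [x for i in range(1)])()
# Foo.y == [5]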
The best work-around is to just use __init__ to create an instance variable instead: def __init__(self): self.y = [self.x for i in range(1)] and avoid all the head-scratching, and questions to explain yourself. For your own concrete example, I would not even store the namedtuple on the class; either use the output directly (don't store the generated class at all), or use a global: from collections import namedtuple State = namedtuple('State', ['name', 'capital']) class StateDatabase: db = [State(*args) for args in [ ('Alabama', 'Montgomery'), ('Alaska', 'Juneau'), # ... ]] PEP 709; part of Python 3.12, changes some of this all again In Python 3.12, comprehensions have been made a lot more efficient by removing the nested function and inlining the loop, while still maintaining a separate scope. The details of how this was done are outlined in PEP 709 - Inlined comprehensions, but the long and short of it is that instead of creating a new function object and then calling it, with LOAD_CONST, MAKE_FUNCTION and CALL bytecodes, any clashing names used in the loop are first moved to the stack before executing the comprehension bytecode inline. It is important to note that this change only affects performance and interaction with the class scope has not changed. You still can't access names created in a class scope, for the reasons outlined above. Using Python 3.12.0b4 the bytecode for the Foo class now looks like this: # creating `def foo()` and its bytecode elided Disassembly of \", line 2>: 2 0 RESUME 0 2 LOAD_NAME 0 (__name__) 4 STORE_NAME 1 (__module__) 6 LOAD_CONST 0 ('foo..Foo') 8 STORE_NAME 2 (__qualname__) 3 10 LOAD_CONST 1 (5) 12 STORE_NAME 3 (x) 4 14 PUSH_NULL 16 LOAD_NAME 4 (range) 18 LOAD_CONST 2 (1) 20 CALL 1 28 GET_ITER 30 LOAD_FAST_AND_CLEAR 0 (.0) 32 LOAD_FAST_AND_CLEAR 1 (i) 34 LOAD_FAST_AND_CLEAR 2 (x) 36 SWAP 4 38 BUILD_LIST 0 40 SWAP 2 >> 42 FOR_ITER 8 (to 62) 46 STORE_FAST 1 (i) 48 LOAD_GLOBAL 6 (x) 58 LIST_APPEND 2 60 JUMP_BACKWARD 10 (to 42) >> 62 END_FOR 64 SWAP 4 66 STORE_FAST 2 (x) 68 STORE_FAST 1 (i) 70 STORE_FAST 0 (.0) 72 STORE_NAME 5 (y) 74 RETURN_CONST 3 (None) Here, the most important bytecode is the one at offset 34: 34 LOAD_FAST_AND_CLEAR 2 (x) This takes the value for the variable x in the local scope and pushes it on the stack, and then clears the name. If there is no variable x in the current scope, this stores a C NULL value on the stack. The name is now gone from the local scope now until the bytecode at offset 66 is reached: 66 STORE_FAST 2 (x) This restores x to what it was before the list comprehension; if a NULL was stored on the stack to indicate that there was no variable named x, then there still won't be a variable x after this bytecode has been executed. The rest of the bytecode between the LOAD_FAST_AND_CLEAR and STORE_FAST calls is more or less the same it was before, with SWAP bytecodes used to access the iterator for the range(1) object instead of LOAD_FAST (.0) in the function bytecode in earlier Python 3.x versions."} +{"question_id": 53939751, "score": 262, "creation_date": 1545884870, "tags": ["python", "django", "visual-studio-code", "pylint"], "instruction": "Pylint \"unresolved import\" error in Visual Studio Code\n\nI am using the following setup macOS v10.14 (Mojave) Python 3.7.1 Visual Studio Code 1.30 Pylint 2.2.2 Django 2.1.4 I want to use linting to make my life a bit easier in Visual Studio Code. However, for every import I have states \"unresolved import\". Even on default Django imports (i.e. from django.db import models). 
I presume it is because it is not seeing the virtual environment Python files. Everything works just fine, but it's starting to get annoying. The interpreter choices I have are all system versions of Python. It does not seem to see my virtual environment Python at all (it is not in the same directory as my workspace, so that part makes sense). If I set up the python.PythonPath in the settings.json file, it just ignores it and does not list my virtual environment path as an option. I also tried setting it up in my global Python settings, but it also does not show up. Is there a quick fix to get it working?", "output": "In your workspace settings, you can set your Python path like this: { \"python.defaultInterpreterPath\": \"/path/to/your/venv/bin/python\", }"} +{"question_id": 5649407, "score": 262, "creation_date": 1302698638, "tags": ["python", "arrays"], "instruction": "How to convert hexadecimal string to bytes in Python?\n\nI have a long Hex string that represents a series of values of different types. I need to convert this Hex String into bytes or bytearray so that I can extract each value from the raw data. How can I do this? For example, the string \"ab\" should convert to the bytes b\"\\xab\" or equivalent byte array. Longer example: >>> # what to use in place of `convert` here? >>> convert(\"8e71c61de6a2321336184f813379ec6bf4a3fb79e63cd12b\") b'\\x8eq\\xc6\\x1d\\xe6\\xa22\\x136\\x18O\\x813y\\xeck\\xf4\\xa3\\xfby\\xe6<\\xd1+'", "output": "Suppose your hex string is something like >>> hex_string = \"deadbeef\" Convert it to a bytearray (Python 3 and 2.7): >>> bytearray.fromhex(hex_string) bytearray(b'\\xde\\xad\\xbe\\xef') Convert it to a bytes object (Python 3): >>> bytes.fromhex(hex_string) b'\\xde\\xad\\xbe\\xef' Note that bytes is an immutable version of bytearray. Convert it to a string (Python \u2264 2.7): >>> hex_data = hex_string.decode(\"hex\") >>> hex_data \"\\xde\\xad\\xbe\\xef\""} +{"question_id": 35245401, "score": 262, "creation_date": 1454786740, "tags": ["python", "pip", "anaconda", "conda"], "instruction": "Combining conda environment.yml with pip requirements.txt\n\nI work with conda environments and need some pip packages as well, e.g. pre-compiled wheels from ~gohlke. At the moment I have two files: environment.yml for conda with: # run: conda env create --file environment.yml name: test-env dependencies: - python>=3.5 - anaconda and requirements.txt for pip which can be used after activating above conda environment: # run: pip install -i requirements.txt docx gooey http://www.lfd.uci.edu/~gohlke/pythonlibs/bofhrmxk/opencv_python-3.1.0-cp35-none-win_amd64.whl Is there a possibility to combine them in one file (for conda)?", "output": "Pip dependencies can be included in the environment.yml file like this (docs): # run: conda env create --file environment.yml name: test-env dependencies: - python>=3.5 - anaconda - pip - numpy=1.13.3 # pin version for conda - pip: # works for regular pip packages - docx - gooey - matplotlib==2.0.0 # pin version for pip # and for wheels - http://www.lfd.uci.edu/~gohlke/pythonlibs/bofhrmxk/opencv_python-3.1.0-cp35-none-win_amd64.whl It also works for .whl files in the same directory (see Dengar's answer) as well as with common pip packages."} +{"question_id": 10344197, "score": 262, "creation_date": 1335495540, "tags": ["python", "django"], "instruction": "How does Django's nested Meta class work?\n\nI am using Django which allows people to add extra parameters to a class by using class Meta. class FooModel(models.Model): ... 
class Meta: ... The only thing I found in Python's documentation was: class FooMetaClass(type): ... class FooClass: __metaclass__ = FooMetaClass However, I don't think this is the same thing.", "output": "You are asking a question about two different things: Meta inner class in Django models: This is just a class container with some options (metadata) attached to the model. It defines such things as available permissions, associated database table name, whether the model is abstract or not, singular and plural versions of the name etc. Short explanation is here: Django docs: Models: Meta options List of available meta options is here: Django docs: Model Meta options For latest version of Django: Django docs: Model Meta options Metaclass in Python: The best description is here: What are metaclasses in Python?"} +{"question_id": 13714454, "score": 262, "creation_date": 1354667689, "tags": ["python", "matplotlib", "figsize"], "instruction": "Specifying and saving a figure with exact size in pixels\n\nSay I have an image of size 3841 x 7195 pixels. I would like to save the contents of the figure to disk, resulting in an image of the exact size I specify in pixels. No axis, no titles. Just the image. I don't personally care about DPIs, as I only want to specify the size the image takes in the screen in disk in pixels. I have read other threads, and they all seem to do conversions to inches and then specify the dimensions of the figure in inches and adjust dpi's in some way. I would like to avoid dealing with the potential loss of accuracy that could result from pixel-to-inches conversions. I have tried with: w = 7195 h = 3841 fig = plt.figure(frameon=False) fig.set_size_inches(w,h) ax = plt.Axes(fig, [0., 0., 1., 1.]) ax.set_axis_off() fig.add_axes(ax) ax.imshow(im_np, aspect='normal') fig.savefig(some_path, dpi=1) with no luck (Python complains that width and height must each be below 32768 (?)) From everything I have seen, matplotlib requires the figure size to be specified in inches and dpi, but I am only interested in the pixels the figure takes in disk. How can I do this? To clarify: I am looking for a way to do this with matplotlib, and not with other image-saving libraries.", "output": "Matplotlib doesn't work with pixels directly, but rather physical sizes and DPI. If you want to display a figure with a certain pixel size, you need to know the DPI of your monitor. For example this link will detect that for you. If you have an image of 3841x7195 pixels it is unlikely that you monitor will be that large, so you won't be able to show a figure of that size (matplotlib requires the figure to fit in the screen, if you ask for a size too large it will shrink to the screen size). Let's imagine you want an 800x800 pixel image just for an example. Here's how to show an 800x800 pixel image in my monitor (my_dpi=96): plt.figure(figsize=(800/my_dpi, 800/my_dpi), dpi=my_dpi) So you basically just divide the dimensions in pixels by your DPI. If you want to save a figure of a specific size, then it is a different matter. Screen DPIs are not so important anymore (unless you ask for a figure that won't fit in the screen). Using the same example of the 800x800 pixel figure, we can save it in different resolutions using the dpi keyword of savefig. 
To save it in the same resolution as the screen just use the same dpi: plt.savefig('my_fig.png', dpi=my_dpi) To save it as an 8000x8000 pixel image, use a dpi 10 times larger: plt.savefig('my_fig.png', dpi=my_dpi * 10) Note that the setting of the DPI is not supported by all backends. Here, the PNG backend is used, but the pdf and ps backends will implement the size differently. Also, changing the DPI and sizes will also affect things like fontsize. A larger DPI will keep the same relative sizes of fonts and elements, but if you want smaller fonts for a larger figure you need to increase the physical size instead of the DPI. Getting back to your example, if you want to save a image with 3841 x 7195 pixels, you could do the following: plt.figure(figsize=(3.841, 7.195), dpi=100) ( your code ...) plt.savefig('myfig.png', dpi=1000) Note that I used the figure dpi of 100 to fit in most screens, but saved with dpi=1000 to achieve the required resolution. In my system this produces a png with 3840x7190 pixels -- it seems that the DPI saved is always 0.02 pixels/inch smaller than the selected value, which will have a (small) effect on large image sizes. Some more discussion of this here."} +{"question_id": 17351016, "score": 261, "creation_date": 1372359030, "tags": ["python", "windows", "python-3.x", "simplehttpserver"], "instruction": "Set up Python simpleHTTPserver on Windows\n\nI want to set up Python SimpleHTTPServer on Windows XP. I have Python installed on my computer. I am executing the following command: python -m SimpleHTTPServer 8888 But I am getting the error: C:\\Python33\\python.exe: No module named SimpleHTTPServer Is SimpleHTTPServer for Python available on Windows? If yes, what do I do to set up the server?", "output": "From Stack Overflow question What is the Python 3 equivalent of \"python -m SimpleHTTPServer\": SimpleHTTPServer is for python2, so you're getting the error. In python3, The following works: python -m http.server [] Because using Python 3, the module SimpleHTTPServer has been replaced by http.server, at least in Windows."} +{"question_id": 16970982, "score": 261, "creation_date": 1370548296, "tags": ["python", "arrays", "numpy", "unique"], "instruction": "Find unique rows in numpy.array\n\nI need to find unique rows in a numpy.array. For example: >>> a # I have array([[1, 1, 1, 0, 0, 0], [0, 1, 1, 1, 0, 0], [0, 1, 1, 1, 0, 0], [1, 1, 1, 0, 0, 0], [1, 1, 1, 1, 1, 0]]) >>> new_a # I want to get to array([[1, 1, 1, 0, 0, 0], [0, 1, 1, 1, 0, 0], [1, 1, 1, 1, 1, 0]]) I know that i can create a set and loop over the array, but I am looking for an efficient pure numpy solution. I believe that there is a way to set data type to void and then I could just use numpy.unique, but I couldn't figure out how to make it work.", "output": "As of NumPy 1.13, one can simply choose the axis for selection of unique values in any N-dim array. 
To get unique rows, use np.unique as follows: unique_rows = np.unique(original_array, axis=0)"} +{"question_id": 1249388, "score": 260, "creation_date": 1249751581, "tags": ["python", "numbers"], "instruction": "Removing all non-numeric characters from string in Python\n\nHow do we remove all non-numeric characters from a string in Python?", "output": ">>> import re >>> re.sub(\"[^0-9]\", \"\", \"sdkjh987978asd098as0980a98sd\") '987978098098098' >>> # or >>> re.sub(r\"\\D\", \"\", \"sdkjh987978asd098as0980a98sd\") '987978098098098'"} +{"question_id": 12834785, "score": 260, "creation_date": 1349942167, "tags": ["python", "argparse"], "instruction": "Having options in argparse with a dash\n\nI want to have some options in argparse module such as --pm-export however when I try to use it like args.pm-export I get the error that there is not attribute pm. How can I get around this issue? Is it possible to have - in command line options?", "output": "From the argparse docs: For optional argument actions, the value of dest is normally inferred from the option strings. ArgumentParser generates the value of dest by taking the first long option string and stripping away the initial -- string. Any internal - characters will be converted to _ characters to make sure the string is a valid attribute name. So you should be using args.pm_export."} +{"question_id": 2866380, "score": 260, "creation_date": 1274279086, "tags": ["python", "testing", "timeit", "database-tuning"], "instruction": "How can I time a code segment for testing performance with Pythons timeit?\n\nI've a python script which works just as it should, but I need to write the execution time. I've googled that I should use timeit but I can't seem to get it to work. My Python script looks like this: import sys import getopt import timeit import random import os import re import ibm_db import time from string import maketrans myfile = open(\"results_update.txt\", \"a\") for r in range(100): rannumber = random.randint(0, 100) update = \"update TABLE set val = %i where MyCount >= '2010' and MyCount < '2012' and number = '250'\" % rannumber #print rannumber conn = ibm_db.pconnect(\"dsn=myDB\",\"usrname\",\"secretPWD\") for r in range(5): print \"Run %s\\n\" % r ibm_db.execute(query_stmt) query_stmt = ibm_db.prepare(conn, update) myfile.close() ibm_db.close(conn) What I need is the time it takes to execute the query and write it to the file results_update.txt. The purpose is to test an update statement for my database with different indexes and tuning mechanisms.", "output": "You can use time.time() or time.clock() before and after the block you want to time. import time t0 = time.time() code_block t1 = time.time() total = t1-t0 This method is not as exact as timeit (it does not average several runs) but it is straightforward. time.time() (in Windows and Linux) and time.clock() (in Linux) are not precise enough for fast functions because they return the time in an integer number of seconds (you'll get total = 0 for short durations). In this case or if you want to average the time elapsed by several runs, you have to manually call the function multiple times (As I think you already do in you example code and timeit does this automatically when you set its number argument) import time def myfast(): code n = 10000 t0 = time.time() for i in range(n): myfast() t1 = time.time() total_n = t1-t0 In Windows, time.clock() has higher precision because it returns the time in microseconds whereas time.time() returns the time in seconds. 
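As an aside that goes beyond the original answer: on Python 3.3+ the portable high-resolution choice is time.perf_counter() (also exposed as timeit.default_timer), and time.clock() was removed in Python 3.8, so a sketch of the modern equivalent looks like this:
import time
t0 = time.perf_counter()                       # monotonic, high-resolution timer
total = sum(i * i for i in range(100_000))     # the code block being timed
elapsed = time.perf_counter() - t0
print(f'elapsed: {elapsed:.6f} s')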
So choose whichever makes sense for your situation."} +{"question_id": 7908636, "score": 260, "creation_date": 1319661494, "tags": ["python", "pandas", "matplotlib", "seaborn", "mplcursors"], "instruction": "How to add hovering annotations to a plot\n\nI am using matplotlib to make scatter plots. Each point on the scatter plot is associated with a named object. I would like to be able to see the name of an object when I hover my cursor over the point on the scatter plot associated with that object. In particular, it would be nice to be able to quickly see the names of the points that are outliers. The closest thing I have been able to find while searching here is the annotate command, but that appears to create a fixed label on the plot. Unfortunately, with the number of points that I have, the scatter plot would be unreadable if I labeled each point. Does anyone know of a way to create labels that only appear when the cursor hovers in the vicinity of that point?", "output": "Here is a code that uses a scatter and shows an annotation upon hovering over the scatter points. import matplotlib.pyplot as plt import numpy as np; np.random.seed(1) x = np.random.rand(15) y = np.random.rand(15) names = np.array(list(\"ABCDEFGHIJKLMNO\")) c = np.random.randint(1,5,size=15) norm = plt.Normalize(1,4) cmap = plt.cm.RdYlGn fig,ax = plt.subplots() sc = plt.scatter(x,y,c=c, s=100, cmap=cmap, norm=norm) annot = ax.annotate(\"\", xy=(0,0), xytext=(20,20),textcoords=\"offset points\", bbox=dict(boxstyle=\"round\", fc=\"w\"), arrowprops=dict(arrowstyle=\"->\")) annot.set_visible(False) def update_annot(ind): pos = sc.get_offsets()[ind[\"ind\"][0]] annot.xy = pos text = \"{}, {}\".format(\" \".join(list(map(str,ind[\"ind\"]))), \" \".join([names[n] for n in ind[\"ind\"]])) annot.set_text(text) annot.get_bbox_patch().set_facecolor(cmap(norm(c[ind[\"ind\"][0]]))) annot.get_bbox_patch().set_alpha(0.4) def hover(event): vis = annot.get_visible() if event.inaxes == ax: cont, ind = sc.contains(event) if cont: update_annot(ind) annot.set_visible(True) fig.canvas.draw_idle() else: if vis: annot.set_visible(False) fig.canvas.draw_idle() fig.canvas.mpl_connect(\"motion_notify_event\", hover) plt.show() Because people also want to use this solution for a line plot instead of a scatter, the following would be the same solution for plot (which works slightly differently). 
import matplotlib.pyplot as plt import numpy as np; np.random.seed(1) x = np.sort(np.random.rand(15)) y = np.sort(np.random.rand(15)) names = np.array(list(\"ABCDEFGHIJKLMNO\")) norm = plt.Normalize(1,4) cmap = plt.cm.RdYlGn fig,ax = plt.subplots() line, = plt.plot(x,y, marker=\"o\") annot = ax.annotate(\"\", xy=(0,0), xytext=(-20,20),textcoords=\"offset points\", bbox=dict(boxstyle=\"round\", fc=\"w\"), arrowprops=dict(arrowstyle=\"->\")) annot.set_visible(False) def update_annot(ind): x,y = line.get_data() annot.xy = (x[ind[\"ind\"][0]], y[ind[\"ind\"][0]]) text = \"{}, {}\".format(\" \".join(list(map(str,ind[\"ind\"]))), \" \".join([names[n] for n in ind[\"ind\"]])) annot.set_text(text) annot.get_bbox_patch().set_alpha(0.4) def hover(event): vis = annot.get_visible() if event.inaxes == ax: cont, ind = line.contains(event) if cont: update_annot(ind) annot.set_visible(True) fig.canvas.draw_idle() else: if vis: annot.set_visible(False) fig.canvas.draw_idle() fig.canvas.mpl_connect(\"motion_notify_event\", hover) plt.show() In case someone is looking for a solution for lines in twin axes, refer to How to make labels appear when hovering over a point in multiple axis? In case someone is looking for a solution for bar plots, please refer to e.g. this answer."} +{"question_id": 32400867, "score": 259, "creation_date": 1441377864, "tags": ["python", "csv", "pandas", "request"], "instruction": "Pandas read_csv from url\n\nI'm trying to read a csv-file from given URL, using Python 3.x: import pandas as pd import requests url = \"https://github.com/cs109/2014_data/blob/master/countries.csv\" s = requests.get(url).content c = pd.read_csv(s) I have the following error \"Expected file path name or file-like object, got type\" How can I fix this? I'm using Python 3.4", "output": "Update: From pandas 0.19.2 you can now just pass read_csv() the url directly, although that will fail if it requires authentication. For older pandas versions, or if you need authentication, or for any other HTTP-fault-tolerant reason: Use pandas.read_csv with a file-like object as the first argument. If you want to read the csv from a string, you can use io.StringIO. For the URL https://github.com/cs109/2014_data/blob/master/countries.csv, you get html response, not raw csv; you should use the url given by the Raw link in the github page for getting raw csv response , which is https://raw.githubusercontent.com/cs109/2014_data/master/countries.csv Example: import pandas as pd import io import requests url = \"https://raw.githubusercontent.com/cs109/2014_data/master/countries.csv\" s = requests.get(url).content c = pd.read_csv(io.StringIO(s.decode('utf-8'))) Note: in Python 2.x, the string-buffer object was StringIO.StringIO"} +{"question_id": 54106071, "score": 259, "creation_date": 1547023246, "tags": ["python", "visual-studio-code", "virtualenv"], "instruction": "How can I set up a virtual environment for Python in Visual Studio Code?\n\nIn my project folder I created a venv folder: python -m venv venv When I run command select python interpreter in Visual Studio Code, my venv folder is not shown. I went one level up like suggested here, but Visual Studio Code doesn't see my virtual interpreter. What did I miss?", "output": "I have been using Visual Studio Code for a while now and found an another way to show virtual environments in Visual Studio Code. Go to the parent folder in which venv is there through a command prompt. Type code . and Enter. [It is working on both Windows and Linux for me.] 
That should also show the virtual environments present in that folder. Original Answer I almost run into same problem every time I am working on Visual Studio Code using venv. I follow the below steps: Go to menu File \u2192 Preferences \u2192 Settings. Click on Workspace settings. Under Files:Association, in the JSON: Schemas section, you will find Edit in settings.json. Click on that. Update \"python.defaultInterpreterPath\": \"Your_venv_path/bin/python\" under workspace settings. (For Windows): Update \"python.defaultInterpreterPath\": \"Your_venv_path\\Scripts\\python.exe\" under workspace settings. Restart Visual Studio Code in case if it still doesn't show your venv. Note: Use python.pythonPath instead of python.defaultInterpreterPath for older versions."} +{"question_id": 20449427, "score": 259, "creation_date": 1386472095, "tags": ["python", "python-3.x", "python-2.7", "input", "int"], "instruction": "How can I read inputs as numbers?\n\nWhy are x and y strings instead of ints in the below code? (Note: in Python 2.x use raw_input(). In Python 3.x use input(). raw_input() was renamed to input() in Python 3.x) play = True while play: x = input(\"Enter a number: \") y = input(\"Enter a number: \") print(x + y) print(x - y) print(x * y) print(x / y) print(x % y) if input(\"Play again? \") == \"no\": play = False", "output": "Since Python 3, input returns a string which you have to explicitly convert to int, like this x = int(input(\"Enter a number: \")) y = int(input(\"Enter a number: \")) You can accept numbers of any base like this >>> data = int(input(\"Enter a number: \"), 8) Enter a number: 777 >>> data 511 >>> data = int(input(\"Enter a number: \"), 16) Enter a number: FFFF >>> data 65535 >>> data = int(input(\"Enter a number: \"), 2) Enter a number: 10101010101 >>> data 1365 The second parameter tells it the base of the number and then internally it understands and converts it. If the entered data is wrong it will throw a ValueError. >>> data = int(input(\"Enter a number: \"), 2) Enter a number: 1234 Traceback (most recent call last): File \"\", line 1, in ValueError: invalid literal for int() with base 2: '1234' For values that can have a fractional component, the type would be float rather than int: x = float(input(\"Enter a number:\")) Differences between Python 2 and 3 Summary Python 2's input function evaluated the received data, converting it to an integer implicitly (read the next section to understand the implication), but Python 3's input function does not do that anymore. Python 2's equivalent of Python 3's input is the raw_input function. Python 2.x There were two functions to get user input, called input and raw_input. The difference between them is, raw_input doesn't evaluate the data and returns as it is, in string form. But, input will evaluate whatever you entered and the result of evaluation will be returned. For example, >>> import sys >>> sys.version '2.7.6 (default, Mar 22 2014, 22:59:56) \\n[GCC 4.8.2]' >>> data = input(\"Enter a number: \") Enter a number: 5 + 17 >>> data, type(data) (22, ) The data 5 + 17 is evaluated and the result is 22. When it evaluates the expression 5 + 17, it detects that you are adding two numbers and so the result will also be of the same int type. So, the type conversion is done for free, and 22 is returned as the result of the input and stored in the data variable. You can think of input as the raw_input composed with an eval call. 
>>> data = eval(raw_input(\"Enter a number: \")) Enter a number: 5 + 17 >>> data, type(data) (22, ) Note: You should be careful when you are using input in Python 2.x. I explained why one should be careful when using it, in this answer. But, raw_input doesn't evaluate the input and returns as it is, as a string. >>> import sys >>> sys.version '2.7.6 (default, Mar 22 2014, 22:59:56) \\n[GCC 4.8.2]' >>> data = raw_input(\"Enter a number: \") Enter a number: 5 + 17 >>> data, type(data) ('5 + 17', ) Python 3.x Python 3.x's input and Python 2.x's raw_input are similar and raw_input is not available in Python 3.x. >>> import sys >>> sys.version '3.4.0 (default, Apr 11 2014, 13:05:11) \\n[GCC 4.8.2]' >>> data = input(\"Enter a number: \") Enter a number: 5 + 17 >>> data, type(data) ('5 + 17', )"} +{"question_id": 41171791, "score": 259, "creation_date": 1481829670, "tags": ["python", "python-3.x", "subprocess"], "instruction": "How to suppress or capture the output of subprocess.run()?\n\nFrom the examples in docs on subprocess.run() it seems like there shouldn't be any output from subprocess.run([\"ls\", \"-l\"]) # doesn't capture output However, when I try it in a python shell the listing gets printed. I wonder if this is the default behaviour and how to suppress the output of run().", "output": "Suppressing Here is how to suppress output, in order of decreasing levels of cleanliness. They assume you are on Python 3. You can redirect to the special subprocess.DEVNULL target. import subprocess # To redirect stdout (only): subprocess.run( ['ls', '-l'], stdout = subprocess.DEVNULL ) # to redirect stderr to /dev/null as well: subprocess.run( ['ls', '-l'], stdout = subprocess.DEVNULL, stderr = subprocess.DEVNULL ) # Alternatively, you can merge stderr and stdout streams and redirect # the one stream to /dev/null subprocess.run( ['ls', '-l'], stdout = subprocess.DEVNULL, stderr = subprocess.STDOUT ) If you want a fully manual method, can redirect to /dev/null by opening the file handle yourself. Everything else would be identical to method #1. import os import subprocess with open(os.devnull, 'w') as devnull: subprocess.run( ['ls', '-l'], stdout = devnull ) Capturing Here is how to capture output (to use later or parse), in order of decreasing levels of cleanliness. They assume you are on Python 3. NOTE: The below examples use universal_newlines=True (Python <= 3.6). This causes the STDOUT and STDERR to be captured as str instead of bytes. Omit universal_newlines=True to get bytes data Python >= 3.7 accepts text=True as a short form for universal_newlines=True If you simply want to capture both STDOUT and STDERR independently, AND you are on Python >= 3.7, use capture_output=True. import subprocess result = subprocess.run( ['ls', '-l'], capture_output = True, # Python >= 3.7 only text = True # Python >= 3.7 only ) print(result.stdout) print(result.stderr) You can use subprocess.PIPE to capture STDOUT and STDERR independently. This works on any version of Python that supports subprocess.run. import subprocess result = subprocess.run( ['ls', '-l'], stdout = subprocess.PIPE, universal_newlines = True # Python >= 3.7 also accepts \"text=True\" ) print(result.stdout) # To also capture stderr... 
result = subprocess.run( ['ls', '-l'], stdout = subprocess.PIPE, stderr = subprocess.PIPE, universal_newlines = True # Python >= 3.7 also accepts \"text=True\" ) print(result.stdout) print(result.stderr) # To mix stdout and stderr into a single string result = subprocess.run( ['ls', '-l'], stdout = subprocess.PIPE, stderr = subprocess.STDOUT, universal_newlines = True # Python >= 3.7 also accepts \"text=True\" ) print(result.stdout)"} +{"question_id": 5319922, "score": 258, "creation_date": 1300237817, "tags": ["python", "string"], "instruction": "Check if a word is in a string in Python\n\nI'm working with Python, and I'm trying to find out if you can tell if a word is in a string. I have found some information about identifying if the word is in the string - using .find, but is there a way to do an if statement. I would like to have something like the following: if string.find(word): print(\"success\")", "output": "What is wrong with: if word in mystring: print('success')"} +{"question_id": 27517425, "score": 258, "creation_date": 1418783263, "tags": ["python", "pandas"], "instruction": "Whether to use apply vs transform on a group object, to subtract two columns and get mean\n\nConsider the following dataframe: columns = ['A', 'B', 'C', 'D'] records = [ ['foo', 'one', 0.162003, 0.087469], ['bar', 'one', -1.156319, -1.5262719999999999], ['foo', 'two', 0.833892, -1.666304], ['bar', 'three', -2.026673, -0.32205700000000004], ['foo', 'two', 0.41145200000000004, -0.9543709999999999], ['bar', 'two', 0.765878, -0.095968], ['foo', 'one', -0.65489, 0.678091], ['foo', 'three', -1.789842, -1.130922] ] df = pd.DataFrame.from_records(records, columns=columns) \"\"\" A B C D 0 foo one 0.162003 0.087469 1 bar one -1.156319 -1.526272 2 foo two 0.833892 -1.666304 3 bar three -2.026673 -0.322057 4 foo two 0.411452 -0.954371 5 bar two 0.765878 -0.095968 6 foo one -0.654890 0.678091 7 foo three -1.789842 -1.130922 \"\"\" The following commands work: df.groupby('A').apply(lambda x: (x['C'] - x['D'])) df.groupby('A').apply(lambda x: (x['C'] - x['D']).mean()) but none of the following work: df.groupby('A').transform(lambda x: (x['C'] - x['D'])) # KeyError or ValueError: could not broadcast input array from shape (5) into shape (5,3) df.groupby('A').transform(lambda x: (x['C'] - x['D']).mean()) # KeyError or TypeError: cannot concatenate a non-NDFrame object Why? The example on the documentation seems to suggest that calling transform on a group allows one to do row-wise operation processing: # Note that the following suggests row-wise operation (x.mean is the column mean) zscore = lambda x: (x - x.mean()) / x.std() transformed = ts.groupby(key).transform(zscore) In other words, I thought that transform is essentially a specific type of apply (the one that does not aggregate). Where am I wrong? For reference, below is the construction of the original dataframe above: df = pd.DataFrame({'A' : ['foo', 'bar', 'foo', 'bar', 'foo', 'bar', 'foo', 'foo'], 'B' : ['one', 'one', 'two', 'three', 'two', 'two', 'one', 'three'], 'C' : randn(8), 'D' : randn(8)})", "output": "Two major differences between apply and transform There are two major differences between the transform and apply groupby methods. Input: apply implicitly passes all the columns for each group as a DataFrame to the custom function. while transform passes each column for each group individually as a Series to the custom function. Output: The custom function passed to apply can return a scalar, or a Series or DataFrame (or numpy array or even list). 
The custom function passed to transform must return a sequence (a one dimensional Series, array or list) the same length as the group. So, transform works on just one Series at a time and apply works on the entire DataFrame at once. Inspecting the custom function It can help quite a bit to inspect the input to your custom function passed to apply or transform. Examples Let's create some sample data and inspect the groups so that you can see what I am talking about: import pandas as pd import numpy as np df = pd.DataFrame({'State':['Texas', 'Texas', 'Florida', 'Florida'], 'a':[4,5,1,3], 'b':[6,10,3,11]}) State a b 0 Texas 4 6 1 Texas 5 10 2 Florida 1 3 3 Florida 3 11 Let's create a simple custom function that prints out the type of the implicitly passed object and then raises an exception so that execution can be stopped. def inspect(x): print(type(x)) raise Now let's pass this function to both the groupby apply and transform methods to see what object is passed to it: df.groupby('State').apply(inspect) RuntimeError As you can see, a DataFrame is passed into the inspect function. You might be wondering why the type, DataFrame, got printed out twice. Pandas runs the first group twice. It does this to determine if there is a fast way to complete the computation or not. This is a minor detail that you shouldn't worry about. Now, let's do the same thing with transform df.groupby('State').transform(inspect) RuntimeError It is passed a Series - a totally different Pandas object. So, transform is only allowed to work with a single Series at a time. It is impossible for it to act on two columns at the same time. So, if we try and subtract column a from b inside of our custom function we would get an error with transform. See below: def subtract_two(x): return x['a'] - x['b'] df.groupby('State').transform(subtract_two) KeyError: ('a', 'occurred at index a') We get a KeyError as pandas is attempting to find the Series index a which does not exist. You can complete this operation with apply as it has the entire DataFrame: df.groupby('State').apply(subtract_two) State Florida 2 -2 3 -8 Texas 0 -2 1 -5 dtype: int64 The output is a Series and a little confusing as the original index is kept, but we have access to all columns. Displaying the passed pandas object It can help even more to display the entire pandas object within the custom function, so you can see exactly what you are operating with. You can use print statements by I like to use the display function from the IPython.display module so that the DataFrames get nicely outputted in HTML in a jupyter notebook: from IPython.display import display def subtract_two(x): display(x) return x['a'] - x['b'] Screenshot: Transform must return a single dimensional sequence the same size as the group The other difference is that transform must return a single dimensional sequence the same size as the group. In this particular instance, each group has two rows, so transform must return a sequence of two rows. If it does not then an error is raised: def return_three(x): return np.array([1, 2, 3]) df.groupby('State').transform(return_three) ValueError: transform must return a scalar value for each group The error message is not really descriptive of the problem. You must return a sequence the same length as the group. 
So, a function like this would work: def rand_group_len(x): return np.random.rand(len(x)) df.groupby('State').transform(rand_group_len) a b 0 0.962070 0.151440 1 0.440956 0.782176 2 0.642218 0.483257 3 0.056047 0.238208 Returning a single scalar object also works for transform If you return just a single scalar from your custom function, then transform will use it for each of the rows in the group: def group_sum(x): return x.sum() df.groupby('State').transform(group_sum) a b 0 9 16 1 9 16 2 4 14 3 4 14"} +{"question_id": 9764298, "score": 258, "creation_date": 1332124510, "tags": ["python", "list", "sorting"], "instruction": "Given parallel lists, how can I sort one while permuting (rearranging) the other in the same way?\n\nSuppose I have: list1 = [3, 2, 4, 1, 1] list2 = ['three', 'two', 'four', 'one', 'one2'] Calling list1.sort() will sort it, resulting in [1, 1, 2, 3, 4]. However, can I get list2 to be rearranged in sync with that, to get a result like this? list1 = [1, 1, 2, 3, 4] list2 = ['one', 'one2', 'two', 'three', 'four'] Sometimes, people phrase the problem differently: given two lists, they would like to use one to determine the sort order for the other - i.e., sort list2 in the order described by the corresponding values in list1. The trick is that this is equivalent to sorting the \"key\" values (list1), and then rearranging list2 in the same way. In other words, exactly what is described here. Some answers for the other question, though, discard the \"sorted keys\" afterwards. See also: How can I sort a list, according to where its elements appear in another list? - this is another common way that people want to sort one list \"based on\" another. Before attempting to close duplicate questions, take special care to check exactly what the OP wants. A key clue: do the lists need to be the same length?", "output": "One classic approach to this problem is to use the \"decorate, sort, undecorate\" idiom, which is especially simple using python's built-in zip function: >>> list1 = [3,2,4,1, 1] >>> list2 = ['three', 'two', 'four', 'one', 'one2'] >>> list1, list2 = zip(*sorted(zip(list1, list2))) >>> list1 (1, 1, 2, 3, 4) >>> list2 ('one', 'one2', 'two', 'three', 'four') These of course are no longer lists, but that's easily remedied, if it matters: >>> list1, list2 = (list(t) for t in zip(*sorted(zip(list1, list2)))) >>> list1 [1, 1, 2, 3, 4] >>> list2 ['one', 'one2', 'two', 'three', 'four'] It's worth noting that the above may sacrifice speed for terseness; the in-place version, which takes up 3 lines, is a tad faster on my machine for small lists: >>> %timeit zip(*sorted(zip(list1, list2))) 100000 loops, best of 3: 3.3 us per loop >>> %timeit tups = zip(list1, list2); tups.sort(); zip(*tups) 100000 loops, best of 3: 2.84 us per loop On the other hand, for larger lists, the one-line version could be faster: >>> %timeit zip(*sorted(zip(list1, list2))) 100 loops, best of 3: 8.09 ms per loop >>> %timeit tups = zip(list1, list2); tups.sort(); zip(*tups) 100 loops, best of 3: 8.51 ms per loop As Quantum7 points out, JSF's suggestion is a bit faster still, but it will probably only ever be a little bit faster, because Python uses the very same DSU idiom internally for all key-based sorts. It's just happening a little closer to the bare metal. (This shows just how well optimized the zip routines are!) I think the zip-based approach is more flexible and is a little more readable, so I prefer it. 
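For completeness, here is a sketch (mine, not from the original answer) of the index-based alternative mentioned further down: sort the positions by their value in list1, then reorder both lists with them. The zip-based one-liner above stays the idiomatic choice, though, and one caveat about it follows.
order = sorted(range(len(list1)), key=list1.__getitem__)   # stable sort of indices
list1_sorted = [list1[i] for i in order]                   # [1, 1, 2, 3, 4]
list2_sorted = [list2[i] for i in order]                   # ['one', 'one2', 'two', 'three', 'four']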
Note that when elements of list1 are equal, this approach will end up comparing elements of list2. If elements of list2 don't support comparison, or don't produce a boolean when compared (for example, if list2 is a list of NumPy arrays), this will fail, and if elements of list2 are very expensive to compare, it might be better to avoid comparison anyway. In that case, you can sort indices as suggested in jfs's answer, or you can give the sort a key function that avoids comparing elements of list2: result1, result2 = zip(*sorted(zip(list1, list2), key=lambda x: x[0])) Also, the use of zip(*...) as a transpose fails when the input is empty. If your inputs might be empty, you will have to handle that case separately."} +{"question_id": 46092104, "score": 258, "creation_date": 1504774749, "tags": ["python", "subclass", "python-typing"], "instruction": "Subclass in type hinting\n\nI want to allow type hinting using Python 3 to accept sub classes of a certain class. E.g.: class A: pass class B(A): pass class C(A): pass def process_any_subclass_type_of_A(cls: A): if cls == B: # do something elif cls == C: # do something else Now when typing the following code: process_any_subclass_type_of_A(B) I get an PyCharm IDE hint Expected type A, got Type[B] instead. How can I change type hinting here to accept any subtypes of A? According to PEP 484 (\"Expressions whose type is a subtype of a specific argument type are also accepted for that argument.\"), I understand that my solution (cls: A) should work?", "output": "When you specify cls: A, you're saying that cls expects an instance of type A. For python 3.5.2 through 3.8, the type hint to specify cls as a class object for the type A (or its subtypes) uses typing.Type. from typing import Type def process_any_subclass_type_of_A(cls: Type[A]): pass From The type of class objects: Sometimes you want to talk about class objects that inherit from a given class. This can be spelled as Type[C] where C is a class. In other words, when C is the name of a class, using C to annotate an argument declares that the argument is an instance of C (or of a subclass of C), but using Type[C] as an argument annotation declares that the argument is a class object deriving from C (or C itself). From python 3.9 onwards, it is recommended to use the builtin type instead. def process_any_subclass_type_of_A(cls: type[A]): pass"} +{"question_id": 5188792, "score": 258, "creation_date": 1299203244, "tags": ["python", "string"], "instruction": "How to check a string for specific characters?\n\nHow can I check if a string has several specific characters in it using Python 2? For example, given the following string: The criminals stole $1,000,000 in jewels. How do I detect if it has dollar signs ($), commas (,), and numbers?", "output": "Assuming your string is s: '$' in s # found '$' not in s # not found # original answer given, but less Pythonic than the above... s.find('$')==-1 # not found s.find('$')!=-1 # found And so on for other characters. ... or pattern = re.compile(r'[\\d\\$,]') if pattern.findall(s): print('Found') else: print('Not found') ... or chars = set('0123456789$,') if any((c in chars) for c in s): print('Found') else: print('Not Found')"} +{"question_id": 10825926, "score": 258, "creation_date": 1338423069, "tags": ["python", "python-3.x", "rounding"], "instruction": "Why does Python 3 round half to even?\n\nI was just re-reading What\u2019s New In Python 3.0 and it states: The round() function rounding strategy and return type have changed. 
Exact halfway cases are now rounded to the nearest even result instead of away from zero. (For example, round(2.5) now returns 2 rather than 3.) and the documentation for round(): For the built-in types supporting round(), values are rounded to the closest multiple of 10 to the power minus n; if two multiples are equally close, rounding is done toward the even choice So, in Python 2 (for example, v2.7.3) I get the expected: round(2.5) 3.0 round(3.5) 4.0 However, now under Python 3 (for example v3.2.3): round(2.5) 2 round(3.5) 4 This seems counter-intuitive and contrary to what I understand about rounding (and bound to trip up people). English isn't my native language but until I read this I thought I knew what rounding meant :-/ I am sure at the time Python 3 was introduced there must have been some discussion of this, but I was unable to find a good reason in my search. Does anyone have insight into why this was changed to this? Are there any other mainstream programming languages (e.g., C, C++, Java, Perl, ..) that do this sort of (to me inconsistent) rounding? What am I missing here? UPDATE: @Li-aungYip's comment re \"Banker's rounding\" gave me the right search term/keywords to search for and I found this SO question: Why does .NET use banker's rounding as default?, so I will be reading that carefully.", "output": "Python 3's way (called \"round half to even\" or \"banker's rounding\") is considered the standard rounding method these days, though some language implementations aren't on the bus yet. The simple \"always round 0.5 up\" technique results in a slight bias toward the higher number. With large numbers of calculations, this can be significant. The Python 3 approach eliminates this issue. There is more than one method of rounding in common use. IEEE 754, the international standard for floating-point math, defines five different rounding methods (the one used by Python 3 is the default). And there are others. This behavior is not as widely known as it ought to be. AppleScript was, if I remember correctly, an early adopter of this rounding method. The round command in AppleScript offers several options, but round-toward-even is the default as it is in IEEE 754. Apparently the engineer who implemented the round command got so fed up with all the requests to \"make it work like I learned in school\" that he implemented just that: round 2.5 rounding as taught in school is a valid AppleScript command. :-)"} +{"question_id": 41920124, "score": 257, "creation_date": 1485687526, "tags": ["python", "multiprocessing", "progress-bar", "tqdm"], "instruction": "Multiprocessing : use tqdm to display a progress bar\n\nTo make my code more \"pythonic\" and faster, I use multiprocessing and a map function to send it a) the function and b) the range of iterations. The implanted solution (i.e., calling tqdm directly on the range tqdm.tqdm(range(0, 30))) does not work with multiprocessing (as formulated in the code below). The progress bar is displayed from 0 to 100% (when python reads the code?) but it does not indicate the actual progress of the map function. How can one display a progress bar that indicates at which step the 'map' function is ? from multiprocessing import Pool import tqdm import time def _foo(my_number): square = my_number * my_number time.sleep(1) return square if __name__ == '__main__': p = Pool(2) r = p.map(_foo, tqdm.tqdm(range(0, 30))) p.close() p.join() Any help or suggestions are welcome...", "output": "Solution found. Be careful! 
Due to multiprocessing, the estimation time (iteration per loop, total time, etc.) could be unstable, but the progress bar works perfectly. Note: Context manager for Pool is only available in Python 3.3+. import time from multiprocessing import Pool from random import randint from tqdm import tqdm def _foo(my_number): square = my_number * my_number time.sleep(randint(1, 2) / 2) return square if __name__ == \"__main__\": max_ = 30 with Pool(processes=2) as p, tqdm(total=max_) as pbar: for result in p.imap(_foo, range(0, max_)): pbar.update() pbar.refresh() # do something with `result`"} +{"question_id": 33128325, "score": 257, "creation_date": 1444833369, "tags": ["python", "python-3.x", "python-asyncio"], "instruction": "How to set class attribute with await in __init__\n\nHow can I define a class with await in the constructor or class body? For example what I want: import asyncio # some code class Foo(object): async def __init__(self, settings): self.settings = settings self.pool = await create_pool(dsn) foo = Foo(settings) # it raises: # TypeError: __init__() should return None, not 'coroutine' or example with class body attribute: class Foo(object): self.pool = await create_pool(dsn) # Sure it raises syntax Error def __init__(self, settings): self.settings = settings foo = Foo(settings) My solution (But I would like to see a more elegant way) class Foo(object): def __init__(self, settings): self.settings = settings async def init(self): self.pool = await create_pool(dsn) foo = Foo(settings) await foo.init()", "output": "Most magic methods aren't designed to work with async def/await - in general, you should only be using await inside the dedicated asynchronous magic methods - __aiter__, __anext__, __aenter__, and __aexit__. Using it inside other magic methods either won't work at all, as is the case with __init__ (unless you use some tricks described in other answers here), or will force you to always use whatever triggers the magic method call in an asynchronous context. Existing asyncio libraries tend to deal with this in one of two ways: First, I've seen the factory pattern used (asyncio-redis, for example): import asyncio dsn = \"...\" class Foo(object): @classmethod async def create(cls, settings): self = cls() self.settings = settings self.pool = await create_pool(dsn) return self async def main(settings): settings = \"...\" foo = await Foo.create(settings) Other libraries use a top-level coroutine function that creates the object, rather than a factory method: import asyncio dsn = \"...\" async def create_foo(settings): foo = Foo(settings) await foo._init() return foo class Foo(object): def __init__(self, settings): self.settings = settings async def _init(self): self.pool = await create_pool(dsn) async def main(): settings = \"...\" foo = await create_foo(settings) The create_pool function from aiopg that you want to call in __init__ is actually using this exact pattern. This at least addresses the __init__ issue. I haven't seen class variables that make asynchronous calls in the wild that I can recall, so I don't know that any well-established patterns have emerged."} +{"question_id": 7816363, "score": 257, "creation_date": 1318996506, "tags": ["python"], "instruction": "if A vs if A is not None:\n\nCan I use: if A: instead of if A is not None: The latter seems so verbose. Is there a difference?", "output": "The statement if A: will call A.__bool__() (see Special method names documentation), which was called __nonzero__ in Python 2, and use the return value of that function. 
Here's the summary: object.__bool__(self) Called to implement truth value testing and the built-in operation bool(); should return False or True. When this method is not defined, __len__() is called, if it is defined, and the object is considered true if its result is nonzero. If a class defines neither __len__() nor __bool__(), all its instances are considered true. On the other hand, if A is not None: compares only the reference A with None to see whether it is the same or not."} +{"question_id": 65348890, "score": 256, "creation_date": 1608242137, "tags": ["python", "python-3.x", "windows-10"], "instruction": "Python was not found; run without arguments to install from the Microsoft Store, or disable this shortcut from Settings\n\nI was trying to download a GUI, but the terminal kept giving me this error: Python was not found; run without arguments to install from the Microsoft Store, or disable this shortcut from Settings > Manage App Execution Aliases. I'm trying to install it using this command: python -m pip install --upgrade pip setuptools virtualenv", "output": "You need to download Python from https://python.org. During the installation, be sure to check the option that adds Python to PATH."} +{"question_id": 1092531, "score": 256, "creation_date": 1246975251, "tags": ["python", "events", "event-handling", "dispatcher"], "instruction": "Which Python packages offer a stand-alone event system?\n\nI am aware of pydispatcher, but there must be other event-related packages around for Python. Which libraries are available? I'm not interested in event managers that are part of large frameworks, I'd rather use a small bare-bones solution that I can easily extend.", "output": "PyPI packages As of February 2025, these are the event-related packages available on PyPI, ordered by most recent release date. psygnal 0.12.0: February 2024 pymitter 1.0.0: January 2025 blinker 1.9.0: November 2024 pluggy 1.5.0: April 2024 Louie 2.0.1: July 2023 Events 0.5: July 2023 zope.event 5.0: June 2023 PyDispatcher 2.0.7: February 2023 python-dispatch 0.2.2: Jun 2023 RxPy3 1.0.1: June 2020 PyPubSub 4.0.3: Jan 2019 pyeventdispatcher 0.2.3a0: 2018 buslane 0.0.5: 2018 PyPyDispatcher 2.1.2: 2017 axel 0.0.7: 2016 dispatcher 1.0: 2012 py-notify 0.3.1: 2008 There's more That's a lot of libraries to choose from, using very different terminology (events, signals, handlers, method dispatch, hooks, ...). I'm trying to keep an overview of the above packages, plus the techniques mentioned in the answers here. First, some terminology... Observer pattern The most basic style of event system is the 'bag of handler methods', which is a simple implementation of the Observer pattern. Basically, the handler methods (callables) are stored in an array and are each called when the event 'fires'. Publish-Subscribe The disadvantage of Observer event systems is that you can only register the handlers on the actual Event object (or handlers list). So at registration time the event already needs to exist. That's why the second style of event systems exists: the publish-subscribe pattern. Here, the handlers don't register on an event object (or handler list), but on a central dispatcher. Also the notifiers only talk to the dispatcher. What to listen for, or what to publish is determined by 'signal', which is nothing more than a name (string). Mediator pattern Might be of interest as well: the Mediator pattern. Hooks A 'hook' system is usually used in the context of application plugins.
The application contains fixed integration points (hooks), and each plugin may connect to that hook and perform certain actions. Other 'events' Note: threading.Event is not an 'event system' in the above sense. It's a thread synchronization system where one thread waits until another thread 'signals' the Event object. Network messaging libraries often use the term 'events' too; sometimes these are similar in concept; sometimes not. They can of course traverse thread-, process- and computer boundaries. See e.g. pyzmq, pymq, Twisted, Tornado, gevent, eventlet. Weak references In Python, holding a reference to a method or object ensures that it won't get deleted by the garbage collector. This can be desirable, but it can also lead to memory leaks: the linked handlers are never cleaned up. Some event systems use weak references instead of regular ones to solve this. Some words about the various libraries Observer-style event systems: psygnal has a very clean interface with connect() and emit() methods. zope.event shows the bare bones of how this works (see Lennart's answer). Note: this example does not even support handler arguments. LongPoke's 'callable list' implementation shows that such an event system can be implemented very minimalistically by subclassing list. Felk's variation EventHook also ensures the signatures of callees and callers. spassig's EventHook (Michael Foord's Event Pattern) is a straightforward implementation. Josip's Valued Lessons Event class is basically the same, but uses a set instead of a list to store the bag, and implements __call__ which are both reasonable additions. PyNotify is similar in concept and also provides additional concepts of variables and conditions ('variable changed event'). Homepage is not functional. axel is basically a bag-of-handlers with more features related to threading, error handling, ... python-dispatch requires the event source classes to derive from pydispatch.Dispatcher. buslane is class-based, supports single- or multiple handlers and facilitates extensive type hints. Pithikos' Observer/Event is a lightweight design. Publish-subscribe libraries: blinker has some nifty features such as automatic disconnection and filtering based on sender. PyPubSub is a stable package, and promises \"advanced features that facilitate debugging and maintaining topics and messages\". pymitter is a Python port of Node.js EventEmitter2 and offers namespaces, wildcards and TTL. PyDispatcher seems to emphasize flexibility with regards to many-to-many publication etc. Supports weak references. louie is a reworked PyDispatcher and should work \"in a wide variety of contexts\". pypydispatcher is based on (you guessed it...) PyDispatcher and also works in PyPy. django.dispatch is a rewritten PyDispatcher \"with a more limited interface, but higher performance\". pyeventdispatcher is based on PHP's Symfony framework's event-dispatcher. dispatcher was extracted from django.dispatch but is getting fairly old. Cristian Garcia's EventManager is a really short implementation. Others: pluggy contains a hook system which is used by pytest plugins. RxPy3 implements the Observable pattern and allows merging events, retry etc. Qt's Signals and Slots are available from PyQt or PySide2. They work as callbacks when used in the same thread, or as events (using an event loop) between two different threads.
Signals and Slots have the limitation that they only work in objects of classes that derive from QObject."} +{"question_id": 14037540, "score": 256, "creation_date": 1356507827, "tags": ["python", "file", "csv", "file-io", "python-2.7"], "instruction": "Writing a Python list of lists to a csv file\n\nI have a long list of lists of the following form --- a = [[1.2,'abc',3],[1.2,'werew',4],........,[1.4,'qew',2]] i.e. the values in the list are of different types -- float,int, strings.How do I write it into a csv file so that my output csv file looks like 1.2,abc,3 1.2,werew,4 . . . 1.4,qew,2", "output": "Python's built-in csv module can handle this easily: import csv with open('out.csv', 'w', newline='') as f: writer = csv.writer(f) writer.writerows(a) This assumes your list is defined as a, as it is in your question. You can tweak the exact format of the output CSV via the various optional parameters to csv.writer()."} +{"question_id": 19587118, "score": 256, "creation_date": 1382696259, "tags": ["python", "directory"], "instruction": "Iterating through directories with Python\n\nI need to iterate through the subdirectories of a given directory and search for files. If I get a file I have to open it and change the content and replace it with my own lines. I tried this: import os rootdir ='C:/Users/sid/Desktop/test' for subdir, dirs, files in os.walk(rootdir): for file in files: f=open(file,'r') lines=f.readlines() f.close() f=open(file,'w') for line in lines: newline = \"No you are not\" f.write(newline) f.close() but I am getting an error. What am I doing wrong?", "output": "The actual walk through the directories works as you have coded it. If you replace the contents of the inner loop with a simple print statement you can see that each file is found: import os rootdir = 'C:/Users/sid/Desktop/test' for subdir, dirs, files in os.walk(rootdir): for file in files: print(os.path.join(subdir, file))"} +{"question_id": 3929278, "score": 256, "creation_date": 1287018013, "tags": ["python"], "instruction": "What does |= (ior) do in Python?\n\nGoogle won't let me search |= so I'm having trouble finding relevant documentation. Anybody know?", "output": "|= performs an in-place+ operation between pairs of objects. In particular, between: sets: a union operation dicts: an update operation counters: a union (of multisets) operation numbers: a bitwise OR, binary operation In most cases, it is related to the | operator. See examples below. Sets For example, the union of two sets assigned to s1 and s2 share the following equivalent expressions: >>> s1 = s1 | s2 # 1 >>> s1 |= s2 # 2 >>> s1.__ior__(s2) # 3 where the final value of s1 is equivalent either by: an assigned OR operation an in-place OR operation an in-place OR operation via special method++ Example Here we apply OR (|) and the in-place OR (|=) to sets: >>> s1 = {\"a\", \"b\", \"c\"} >>> s2 = {\"d\", \"e\", \"f\"} >>> # OR, | >>> s1 | s2 {'a', 'b', 'c', 'd', 'e', 'f'} >>> s1 # `s1` is unchanged {'a', 'b', 'c'} >>> # In-place OR, |= >>> s1 |= s2 >>> s1 # `s1` is reassigned {'a', 'b', 'c', 'd', 'e', 'f'} Dictionaries In Python 3.9+, new merge (|) and update (|=) operators are proposed between dictionaries. Note: these are not the same as set operators mentioned above. 
Given operations between two dicts assigned to d1 and d2: >>> d1 = d1 | d2 # 1 >>> d1 |= d2 # 2 where d1 is equivalent via: an assigned merge-right operation an in-place merge-right (update) operation; equivalent to d1.update(d2) Example Here we apply merge (|) and update (|=) to dicts: >>> d1 = {\"a\": 0, \"b\": 1, \"c\": 2} >>> d2 = {\"c\": 20, \"d\": 30} >>> # Merge, | >>> d1 | d2 {\"a\": 0, \"b\": 1, \"c\": 20, \"d\": 30} >>> d1 {\"a\": 0, \"b\": 1, \"c\": 2} >>> # Update, |= >>> d1 |= d2 >>> d1 {\"a\": 0, \"b\": 1, \"c\": 20, \"d\": 30} Counters The collections.Counter is related to a mathematical datastructure called a multiset (mset). It is basically a dict of (object, multiplicity) key-value pairs. Given operations between two counters assigned to c1 and c2: >>> c1 = c1 | c2 # 1 >>> c1 |= c2 # 2 where c1 is equivalent via: an assigned union operation an in-place union operation A union of multisets contains the maximum multiplicities per entry. Note, this does not behave the same way as between two sets or between two regular dicts. Example Here we apply union (|) and the in-place union (|=) to Counters: import collections as ct >>> c1 = ct.Counter({2: 2, 3: 3}) >>> c2 = ct.Counter({1: 1, 3: 5}) >>> # Union, | >>> c1 | c2 Counter({2: 2, 3: 5, 1: 1}) >>> c1 Counter({2: 2, 3: 3}) >>> # In-place Union, |= >>> c1 |= c2 >>> c1 Counter({2: 2, 3: 5, 1: 1}) Numbers Lastly, you can do binary math. Given operations between two numbers assigned to n1 and n2: >>> n1 = n1 | n2 # 1 >>> n1 |= n2 # 2 where n1 is equivalent via: an assigned bitwise OR operation an in-place bitwise OR operation Example Here we apply bitwise OR (|) and the in-place bitwise OR (|=) to numbers: >>> n1 = 0 >>> n2 = 1 >>> # Bitwise OR, | >>> n1 | n2 1 >>> n1 0 >>> # In-place Bitwise OR, |= >>> n1 |= n2 >>> n1 1 Review This section briefly reviews some bitwise math. In the simplest case, the bitwise OR operation compares two binary bits. It will always return 1 except when both bits are 0. >>> assert 1 == (1 | 1) == (1 | 0) == (0 | 1) >>> assert 0 == (0 | 0) We now extend this idea beyond binary numbers. Given any two integral numbers (lacking fractional components), we apply the bitwise OR and get an integral result: >>> a = 10 >>> b = 16 >>> a | b 26 How? In general, the bitwise operations follow some \"rules\": internally compare binary equivalents apply the operation return the result as the given type Let's apply these rules to our regular integers above. (1) Compare binary equivalents, seen here as strings (0b denotes binary): >>> bin(a) '0b1010' >>> bin(b) '0b10000' (2) Apply a bitwise OR operation to each column (0 when both are 0, else 1): 01010 10000 ----- 11010 (3) Return the result in the given type, e.g. base 10, decimal: >>> int(0b11010) 26 The internal binary comparison means we can apply the latter to integers in any base, e.g. hex and octal: >>> a = 10 # 10, dec >>> b = 0b10000 # 16, bin >>> c = 0xa # 10, hex >>> d = 0o20 # 16, oct >>> a | b 26 >>> c | d 26 See Also An example of overloading the __ior__() method to iterate iterables in a MutableSet abstract base class R. Hettinger's OrderedSet recipe (see lines 3 and 10 respectively) A thread on Python-ideas on why to use |= to update a set A section B.8 of Dive in Python 3 on special methods of Python operators In-place binary operators fallback to regular methods, see cpython source code (eval.c and abstract.c). Thanks @asottile. 
A post on how python handles displaying prepended zeros in bitwise computations +The in-place bitwise OR operator cannot be applied to literals; assign objects to names. ++Special methods return the same operations as their corresponding operators."} +{"question_id": 9777783, "score": 255, "creation_date": 1332190455, "tags": ["python", "numpy", "number-formatting", "scientific-notation"], "instruction": "Suppress Scientific Notation in Numpy When Creating Array From Nested List\n\nI have a nested Python list that looks like the following: my_list = [[3.74, 5162, 13683628846.64, 12783387559.86, 1.81], [9.55, 116, 189688622.37, 260332262.0, 1.97], [2.2, 768, 6004865.13, 5759960.98, 1.21], [3.74, 4062, 3263822121.39, 3066869087.9, 1.93], [1.91, 474, 44555062.72, 44555062.72, 0.41], [5.8, 5006, 8254968918.1, 7446788272.74, 3.25], [4.5, 7887, 30078971595.46, 27814989471.31, 2.18], [7.03, 116, 66252511.46, 81109291.0, 1.56], [6.52, 116, 47674230.76, 57686991.0, 1.43], [1.85, 623, 3002631.96, 2899484.08, 0.64], [13.76, 1227, 1737874137.5, 1446511574.32, 4.32], [13.76, 1227, 1737874137.5, 1446511574.32, 4.32]] I then import Numpy, and set print options to (suppress=True). When I create an array: my_array = numpy.array(my_list) I can't for the life of me suppress scientific notation: [[ 3.74000000e+00 5.16200000e+03 1.36836288e+10 1.27833876e+10 1.81000000e+00] [ 9.55000000e+00 1.16000000e+02 1.89688622e+08 2.60332262e+08 1.97000000e+00] [ 2.20000000e+00 7.68000000e+02 6.00486513e+06 5.75996098e+06 1.21000000e+00] [ 3.74000000e+00 4.06200000e+03 3.26382212e+09 3.06686909e+09 1.93000000e+00] [ 1.91000000e+00 4.74000000e+02 4.45550627e+07 4.45550627e+07 4.10000000e-01] [ 5.80000000e+00 5.00600000e+03 8.25496892e+09 7.44678827e+09 3.25000000e+00] [ 4.50000000e+00 7.88700000e+03 3.00789716e+10 2.78149895e+10 2.18000000e+00] [ 7.03000000e+00 1.16000000e+02 6.62525115e+07 8.11092910e+07 1.56000000e+00] [ 6.52000000e+00 1.16000000e+02 4.76742308e+07 5.76869910e+07 1.43000000e+00] [ 1.85000000e+00 6.23000000e+02 3.00263196e+06 2.89948408e+06 6.40000000e-01] [ 1.37600000e+01 1.22700000e+03 1.73787414e+09 1.44651157e+09 4.32000000e+00] [ 1.37600000e+01 1.22700000e+03 1.73787414e+09 1.44651157e+09 4.32000000e+00]] If I create a simple numpy array directly: new_array = numpy.array([1.5, 4.65, 7.845]) I have no problem and it prints as follows: [ 1.5 4.65 7.845] Does anyone know what my problem is?", "output": "This is what you need: np.set_printoptions(suppress=True) Here is the documentation which says suppress: bool, optional If True, always print floating point numbers using fixed point notation, in which case numbers equal to zero in the current precision will print as zero. If False, then scientific notation is used when absolute value of the smallest number is < 1e-4 or the ratio of the maximum absolute value to the minimum is > 1e3. The default is False. In the original question, the difference between the array created \"directly\" and the original \"big\" array is that the big array contains very large numbers (e.g. 1.44651157e+09), so NumPy chooses the scientific notation for it, unless it's suppressed."} +{"question_id": 5644836, "score": 255, "creation_date": 1302672899, "tags": ["python", "exception", "warnings"], "instruction": "In Python, how does one catch warnings as if they were exceptions?\n\nA third-party library (written in C) that I use in my python code is issuing warnings. I want to be able to use the try except syntax to properly handle these warnings. 
Is there a way to do this?", "output": "To handle warnings as errors simply use this: import warnings warnings.filterwarnings(\"error\") After this you will be able to catch warnings the same as errors, e.g. this will work: try: some_heavy_calculations() except RuntimeWarning: breakpoint() You can also reset the behaviour of warnings by running: warnings.resetwarnings() P.S. Added this answer because the best answer in comments contains a misspelling: filterwarnigns instead of filterwarnings."} +{"question_id": 930995, "score": 255, "creation_date": 1243732023, "tags": ["python", "unit-testing"], "instruction": "assertEquals vs. assertEqual in python\n\nIs there a difference between assertEquals and assertEqual in the python unittest.TestCase? And if there is not, why are there two functions? Only for convenience?", "output": "Actually, in Python 2.6, both assertEqual and assertEquals are convenience aliases to failUnlessEqual. The source declares them thus: # Synonyms for assertion methods assertEqual = assertEquals = failUnlessEqual In Python 3, to your point, failUnlessEqual is explicitly deprecated. assertEquals carries this comment :-) # Synonyms for assertion methods # The plurals are undocumented. Keep them that way to discourage use. # Do not add more. Do not remove. # Going through a deprecation cycle on these would annoy many people. So, the upshot appears to be that you should use whatever you like for Python 2.x, but tend toward assertEqual for Python 3."} +{"question_id": 14313510, "score": 254, "creation_date": 1358139552, "tags": ["python", "numpy", "time-series", "moving-average", "rolling-computation"], "instruction": "How can I calculate a rolling / moving average using Python + NumPy / SciPy?\n\nThere doesn\u2019t seem to be any function in NumPy or SciPy that simply calculates the moving average, leading to convoluted solutions. My question is twofold: What's the easiest way to (correctly) implement a moving average with NumPy? Since this seems nontrivial and error-prone, is there a good reason not to have the batteries included in this case?", "output": "A simple way to achieve this is by using np.convolve. The idea behind this is to leverage the way the discrete convolution is computed and use it to return a rolling mean. This can be done by convolving with a sequence of np.ones of a length equal to the sliding window length we want. In order to do so we could define the following function: def moving_average(x, w): return np.convolve(x, np.ones(w), 'valid') / w This function will be taking the convolution of the sequence x and a sequence of ones of length w. Note that the chosen mode is valid so that the convolution product is only given for points where the sequences overlap completely. Some examples: x = np.array([5,3,8,10,2,1,5,1,0,2]) For a moving average with a window of length 2 we would have: moving_average(x, 2) # array([4. , 5.5, 9. , 6. , 1.5, 3. , 3. , 0.5, 1. ]) And for a window of length 4: moving_average(x, 4) # array([6.5 , 5.75, 5.25, 4.5 , 2.25, 1.75, 2. ]) How does convolve work? Let\u2019s have a more in-depth look at the way the discrete convolution is being computed.
The following function aims to replicate the way np.convolve is computing the output values: def mov_avg(x, w): for m in range(len(x)-(w-1)): yield sum(np.ones(w) * x[m:m+w]) / w Which, for the same example above would also yield: list(mov_avg(x, 2)) # [4.0, 5.5, 9.0, 6.0, 1.5, 3.0, 3.0, 0.5, 1.0] So what is being done at each step is to take the inner product between the array of ones and the current window. In this case the multiplication by np.ones(w) is superfluous given that we are directly taking the sum of the sequence. Below is an example of how the first outputs are computed so that it is a little clearer. Lets suppose we want a window of w=4: [1,1,1,1] [5,3,8,10,2,1,5,1,0,2] = (1*5 + 1*3 + 1*8 + 1*10) / w = 6.5 And the following output would be computed as: [1,1,1,1] [5,3,8,10,2,1,5,1,0,2] = (1*3 + 1*8 + 1*10 + 1*2) / w = 5.75 And so on, returning a moving average of the sequence once all overlaps have been performed."} +{"question_id": 20109391, "score": 254, "creation_date": 1384990299, "tags": ["python", "pandas"], "instruction": "How to make good reproducible pandas examples\n\nHaving spent a decent amount of time watching both the r and pandas tags on SO, the impression that I get is that pandas questions are less likely to contain reproducible data. This is something that the R community has been pretty good about encouraging, and thanks to guides like this, newcomers are able to get some help on putting together these examples. People who are able to read these guides and come back with reproducible data will often have much better luck getting answers to their questions. How can we create good reproducible examples for pandas questions? Simple dataframes can be put together, e.g.: import pandas as pd df = pd.DataFrame({'user': ['Bob', 'Jane', 'Alice'], 'income': [40000, 50000, 42000]}) But many example datasets need more complicated structure, e.g.: datetime indices or data Multiple categorical variables (is there an equivalent to R's expand.grid() function, which produces all possible combinations of some given variables?) MultiIndex data For datasets that are hard to mock up using a few lines of code, is there an equivalent to R's dput() that allows you to generate copy-pasteable code to regenerate your datastructure?", "output": "Note: Most of the ideas here are pretty generic for Stack Overflow, indeed questions in general. See Minimal, Reproducible Example or Short, Self Contained, Correct Example. Disclaimer: Writing a good question is hard. The Good: Do include a small example DataFrame, either as runnable code: In [1]: df = pd.DataFrame([[1, 2], [1, 3], [4, 6]], columns=['A', 'B']) or make it \"copy and pasteable\" using pd.read_clipboard(sep=r'\\s\\s+'). In [2]: df Out[2]: A B 0 1 2 1 1 3 2 4 6 Test it yourself to make sure it works and reproduces the issue. You can format the text for Stack Overflow by highlighting and using Ctrl+K (or prepend four spaces to each line), or place three backticks (```) above and below your code with your code unindented. I really do mean small. The vast majority of example DataFrames could be fewer than 6 rows and 6 columns,[citation needed] and I bet I can do it in 5x3. Can you reproduce the error with df = df.head()[relevant_columns]? If not, fiddle around to see if you can make up a small DataFrame which exhibits the issue you are facing. 
But every rule has an exception, the obvious one being for performance issues (in which case definitely use %timeit and possibly %prun to profile your code), where you should generate: df = pd.DataFrame(np.random.randn(100000000, 10)) Consider using np.random.seed so we have the exact same frame. Having said that, \"make this code fast for me\" is not strictly on topic for the site. For getting runnable code, df.to_dict is often useful, with the different orient options for different cases. In the example above, I could have grabbed the columns and values from df.to_dict('split'). Write out the outcome you desire (similarly to above) In [3]: iwantthis Out[3]: A B 0 1 5 1 4 6 Explain where the numbers come from: The 5 is the sum of the B column for the rows where A is 1. Do show the code you've tried: In [4]: df.groupby('A').sum() Out[4]: B A 1 5 4 6 But say what's incorrect: The A column is in the index rather than a column. Do show you've done some research (search the documentation, search Stack Overflow), and give a summary: The docstring for sum simply states \"Compute sum of group values\" The groupby documentation doesn't give any examples for this. Aside: the answer here is to use df.groupby('A', as_index=False).sum(). If it's relevant that you have Timestamp columns, e.g. you're resampling or something, then be explicit and apply pd.to_datetime to them for good measure. df['date'] = pd.to_datetime(df['date']) # this column ought to be date. Sometimes this is the issue itself: they were strings. The Bad: Don't include a MultiIndex, which we can't copy and paste (see above). This is kind of a grievance with Pandas' default display, but nonetheless annoying: In [11]: df Out[11]: C A B 1 2 3 2 6 The correct way is to include an ordinary DataFrame with a set_index call: In [12]: df = pd.DataFrame([[1, 2, 3], [1, 2, 6]], columns=['A', 'B', 'C']) In [13]: df = df.set_index(['A', 'B']) In [14]: df Out[14]: C A B 1 2 3 2 6 Do provide insight to what it is when giving the outcome you want: B A 1 1 5 0 Be specific about how you got the numbers (what are they)... double check they're correct. If your code throws an error, do include the entire stack trace. This can be edited out later if it's too noisy. Show the line number and the corresponding line of your code which it's raising against. Pandas 2.0 introduced a number of changes, and Pandas 1.0 before that, so if you're getting unexpected output, include the version: pd.__version__ On that note, you might also want to include the version of Python, your OS, and any other libraries. You could use pd.show_versions() or the session_info package (which shows loaded libraries and Jupyter/IPython environment). The Ugly: Don't link to a CSV file we don't have access to (and ideally don't link to an external source at all). df = pd.read_csv('my_secret_file.csv') # ideally with lots of parsing options Most data is proprietary, we get that. Make up similar data and see if you can reproduce the problem (something small). Don't explain the situation vaguely in words, like you have a DataFrame which is \"large\", mention some of the column names in passing (be sure not to mention their dtypes). Try and go into lots of detail about something which is completely meaningless without seeing the actual context. Presumably no one is even going to read to the end of this paragraph. Essays are bad; it's easier with small examples. Don't include 10+ (100+??) lines of data munging before getting to your actual question. 
Don't include long strings, large numbers, floats with lots of decimals, or other lengthy scalars if you can avoid it. Please, we see enough of this in our day jobs. We want to help, but not like this.... Cut the intro, and just show the relevant data (or a small, simplified version of it) in the step which is causing you trouble."} +{"question_id": 16926130, "score": 254, "creation_date": 1370374879, "tags": ["python", "binary", "formatting", "bitwise-operators"], "instruction": "Convert to binary and keep leading zeros\n\nI'm trying to convert an integer to binary using the bin() function in Python. However, it always removes the leading zeros, which I actually need, such that the result is always 8-bit: Example: bin(1) -> 0b1 # What I would like: bin(1) -> 0b00000001 Is there a way of doing this?", "output": "Use the format() function: >>> format(14, '#010b') '0b00001110' The format() function simply formats the input following the Format Specification mini language. The # makes the format include the 0b prefix, and the 010 size formats the output to fit in 10 characters width, with 0 padding; 2 characters for the 0b prefix, the other 8 for the binary digits. This is the most compact and direct option. If you are putting the result in a larger string, use a formatted string literal (3.6+) or use str.format() and put the second argument for the format() function after the colon of the placeholder {:..}: >>> value = 14 >>> f'The produced output, in binary, is: {value:#010b}' 'The produced output, in binary, is: 0b00001110' >>> 'The produced output, in binary, is: {:#010b}'.format(value) 'The produced output, in binary, is: 0b00001110' As it happens, even for just formatting a single value (so without putting the result in a larger string), using a formatted string literal is faster than using format(): >>> import timeit >>> timeit.timeit(\"f_(v, '#010b')\", \"v = 14; f_ = format\") # use a local for performance 0.40298633499332936 >>> timeit.timeit(\"f'{v:#010b}'\", \"v = 14\") 0.2850222919951193 But I'd use that only if performance in a tight loop matters, as format(...) communicates the intent better. If you did not want the 0b prefix, simply drop the # and adjust the length of the field: >>> format(14, '08b') '00001110'"} +{"question_id": 12850345, "score": 253, "creation_date": 1349999617, "tags": ["python", "pandas"], "instruction": "How do I combine two dataframes?\n\nI have an initial dataframe D. I extract two data frames from it like this: A = D[D.label == k] B = D[D.label != k] I want to combine A and B into one DataFrame. The order of the data is not important. However, when we sample A and B from D, they retain their indexes from D.", "output": "Updated in 2025 df_merged = pandas.concat([df1, df2], ignore_index=True, sort=False) **OLD ANSWER** DEPRECATED: DataFrame.append and Series.append were deprecated in v1.4.0. Use append: df_merged = df1.append(df2, ignore_index=True) And to keep their indexes, set ignore_index=False."} +{"question_id": 2769061, "score": 253, "creation_date": 1273008066, "tags": ["python"], "instruction": "How to erase the file contents of a text file in Python?\n\nI have a text file which I want to erase in Python.
How do I do that?", "output": "In Python: open('file.txt', 'w').close() Or alternatively, if you have already an opened file: f = open('file.txt', 'r+') f.truncate(0) # need '0' when using r+"} +{"question_id": 48915810, "score": 253, "creation_date": 1519248657, "tags": ["python", "memory", "pytorch", "contiguous"], "instruction": "What does .contiguous() do in PyTorch?\n\nWhat does x.contiguous() do for a tensor x?", "output": "There are a few operations on Tensors in PyTorch that do not change the contents of a tensor, but change the way the data is organized. These operations include: narrow(), view(), expand() and transpose() For example: when you call transpose(), PyTorch doesn't generate a new tensor with a new layout, it just modifies meta information in the Tensor object so that the offset and stride describe the desired new shape. In this example, the transposed tensor and original tensor share the same memory: x = torch.randn(3,2) y = torch.transpose(x, 0, 1) x[0, 0] = 42 print(y[0,0]) # prints 42 This is where the concept of contiguous comes in. In the example above, x is contiguous but y is not because its memory layout is different to that of a tensor of same shape made from scratch. Note that the word \"contiguous\" is a bit misleading because it's not that the content of the tensor is spread out around disconnected blocks of memory. Here bytes are still allocated in one block of memory but the order of the elements is different! When you call contiguous(), it actually makes a copy of the tensor such that the order of its elements in memory is the same as if it had been created from scratch with the same data. Normally you don't need to worry about this. You're generally safe to assume everything will work, and wait until you get a RuntimeError: input is not contiguous where PyTorch expects a contiguous tensor to add a call to contiguous()."} +{"question_id": 26180528, "score": 253, "creation_date": 1412345281, "tags": ["python", "dictionary", "tuples", "namedtuple"], "instruction": "Convert a namedtuple into a dictionary\n\nI have a named tuple class in python class Town(collections.namedtuple('Town', [ 'name', 'population', 'coordinates', 'population', 'capital', 'state_bird'])): # ... I'd like to convert Town instances into dictionaries. I don't want it to be rigidly tied to the names or number of the fields in a Town. Is there a way to write it such that I could add more fields, or pass an entirely different named tuple in and get a dictionary. I can not alter the original class definition as its in someone else's code. So I need to take an instance of a Town and convert it to a dictionary.", "output": "TL;DR: there's a method _asdict provided for this. Here is a demonstration of the usage: >>> from collections import namedtuple >>> fields = ['name', 'population', 'coordinates', 'capital', 'state_bird'] >>> Town = namedtuple('Town', fields) >>> funkytown = Town('funky', 300, 'somewhere', 'lipps', 'chicken') >>> funkytown._asdict() {'name': 'funky', 'population': 300, 'coordinates': 'somewhere', 'capital': 'lipps', 'state_bird': 'chicken'} This is a documented method of namedtuples, i.e. unlike the usual convention in python the leading underscore on the method name isn't there to discourage use. Along with the other methods added to namedtuples, _make, _replace, _source, _fields, it has the underscore only to try and prevent conflicts with possible field names. 
Note: For some 2.7.5 < python version < 3.5.0 code out in the wild, you might see this version: >>> vars(funkytown) OrderedDict([('name', 'funky'), ('population', 300), ('coordinates', 'somewhere'), ('capital', 'lipps'), ('state_bird', 'chicken')]) For a while the documentation had mentioned that _asdict was obsolete (see here), and suggested to use the built-in method vars. That advice is now outdated; in order to fix a bug related to subclassing, the __dict__ property which was present on namedtuples has again been removed by this commit."} +{"question_id": 5324647, "score": 252, "creation_date": 1300275239, "tags": ["python", "image", "image-processing", "python-imaging-library"], "instruction": "How to merge a transparent png image with another image using PIL\n\nI have a transparent png image foo.png and I've opened another image with: im = Image.open(\"foo2.png\") Now what I need is to merge foo.png with foo2.png. (foo.png contains some text and I want to print that text on foo2.png)", "output": "from PIL import Image background = Image.open(\"test1.png\") foreground = Image.open(\"test2.png\") background.paste(foreground, (0, 0), foreground) background.show() First parameter to .paste() is the image to paste. Second are coordinates, and the secret sauce is the third parameter. It indicates a mask that will be used to paste the image. If you pass a image with transparency, then the alpha channel is used as mask. Check the docs."} +{"question_id": 11094718, "score": 252, "creation_date": 1340082731, "tags": ["python", "ubuntu", "eventlet"], "instruction": "Error message \"error: command 'gcc' failed with exit status 1\" while installing eventlet\n\nI wanted to install Eventlet on my system in order to have \"Herd\" for software deployment, but the terminal is showing a GCC error (as root): cd ~ easy_install -U eventlet Output: Searching for eventlet Reading http://pypi.python.org/simple/eventlet/ Reading http://wiki.secondlife.com/wiki/Eventlet Reading http://eventlet.net Best match: eventlet 0.9.16 Processing eventlet-0.9.16-py2.7.egg eventlet 0.9.16 is already the active version in easy-install.pth Using /usr/local/lib/python2.7/dist-packages/eventlet-0.9.16-py2.7.egg Processing dependencies for eventlet Searching for greenlet>=0.3 Reading http://pypi.python.org/simple/greenlet/ Reading https://github.com/python-greenlet/greenlet Reading http://bitbucket.org/ambroff/greenlet Best match: greenlet 0.3.4 Downloading http://pypi.python.org/packages/source/g/greenlet/greenlet- 0.3.4.zip#md5=530a69acebbb0d66eb5abd83523d8272 Processing greenlet-0.3.4.zip Writing /tmp/easy_install-_aeHYm/greenlet-0.3.4/setup.cfg Running greenlet-0.3.4/setup.py -q bdist_egg --dist-dir /tmp/easy_install-_aeHYm/greenlet-0.3.4/egg-dist-tmp-t9_gbW In file included from greenlet.c:5:0: greenlet.h:8:20: fatal error: Python.h: No such file or directory compilation terminated. error: Setup script exited with error: command 'gcc' failed with exit status 1` Why can't Python.h be found?", "output": "Your install is failing because you don't have the Python development headers installed. First, update the packages with sudo apt update. 
You can do this through APT on Ubuntu/Debian with: sudo apt-get install python-dev For Python 3, use: sudo apt-get install python3-dev For eventlet you might also need the libevent libraries installed so if you get an error talking about that you can install libevent with: sudo apt-get install libevent-dev"} +{"question_id": 42212810, "score": 252, "creation_date": 1487017194, "tags": ["python", "jupyter-notebook", "tqdm"], "instruction": "tqdm in Jupyter Notebook prints new progress bars repeatedly\n\nI am using tqdm to print progress in a script I'm running in a Jupyter notebook. I am printing all messages to the console via tqdm.write(). However, this still gives me a skewed output like so: That is, each time a new line has to be printed, a new progress bar is printed on the next line. This does not happen when I run the script via terminal. How can I solve this?", "output": "Try using tqdm.notebook.tqdm instead of tqdm, as outlined here. This could be as simple as changing your import to: from tqdm.notebook import tqdm EDIT: After testing, it seems that tqdm actually works fine in 'text mode' in Jupyter notebook. It's hard to tell because you haven't provided a minimal example, but it looks like your problem is caused by a print statement in each iteration. The print statement is outputting a number (~0.89) in between each status bar update, which is messing up the output. Try removing the print statement."} +{"question_id": 33159518, "score": 251, "creation_date": 1444947714, "tags": ["python", "ipython", "jupyter-notebook", "jupyter"], "instruction": "collapse cell in jupyter notebook\n\nI am using ipython Jupyter notebook. Let's say I defined a function that occupies a lot of space on my screen. Is there a way to collapse the cell? I want the function to remain executed and callable, yet I want to hide / collapse the cell in order to better visualize the notebook. How can I do this?", "output": "UPDATE: The newer jupyter-lab is a more modern and feature-rich interface which supports cell folding by default. See @intsco's answer below UPDATE 2 Since jupyter-lab now also supports extensions, you can extend the built-in cell-folding functionality with the Collapsible_Headings extension. Original answer: The jupyter contrib nbextensions Python package contains a code-folding extension that can be enabled within the notebook. Follow the link (Github) for documentation. To install using command line: pip install jupyter_contrib_nbextensions jupyter contrib nbextension install --user To make life easier in managing them, I'd also recommend the jupyter nbextensions configurator package. This provides an extra tab in your Notebook interface from where you can easily (de)activate all installed extensions. Installation: pip install jupyter_nbextensions_configurator jupyter nbextensions_configurator enable --user"} +{"question_id": 1435415, "score": 251, "creation_date": 1253134564, "tags": ["python", "debugging", "memory-management", "memory-leaks"], "instruction": "Python memory leaks\n\nI have a long-running script which, if let to run long enough, will consume all the memory on my system. Without going into details about the script, I have two questions: Are there any \"Best Practices\" to follow, which will help prevent leaks from occurring? What techniques are there to debug memory leaks in Python?", "output": "Have a look at this article: Tracing python memory leaks Also, note that the garbage collection module actually can have debug flags set. Look at the set_debug function. 
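A minimal sketch of what using those debug flags can look like (my example, assuming only the standard gc module; not from the original answer):
import gc

gc.set_debug(gc.DEBUG_LEAK)   # DEBUG_COLLECTABLE | DEBUG_UNCOLLECTABLE | DEBUG_SAVEALL
# ... exercise the code you suspect of leaking ...
gc.collect()
print(len(gc.garbage))        # unreachable objects the collector kept around for inspection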
Additionally, look at this code by Gnibbler for determining the types of objects that have been created after a call."} +{"question_id": 4548684, "score": 250, "creation_date": 1293563326, "tags": ["python", "datetime", "time"], "instruction": "How to get the seconds since epoch from the time + date output of gmtime()?\n\nHow do you do reverse gmtime(), where you put the time + date and get the number of seconds? I have strings like 'Jul 9, 2009 @ 20:02:58 UTC', and I want to get back the number of seconds between the epoch and July 9, 2009. I have tried time.strftime but I don't know how to use it properly, or if it is the correct command to use.", "output": "If you got here because a search engine told you this is how to get the Unix timestamp, stop reading this answer. Scroll up one. If you want to reverse time.gmtime(), you want calendar.timegm(). >>> calendar.timegm(time.gmtime()) 1293581619.0 You can turn your string into a time tuple with time.strptime(), which returns a time tuple that you can pass to calendar.timegm(): >>> import calendar >>> import time >>> calendar.timegm(time.strptime('Jul 9, 2009 @ 20:02:58 UTC', '%b %d, %Y @ %H:%M:%S UTC')) 1247169778 More information about calendar module here"} +{"question_id": 1403674, "score": 250, "creation_date": 1252564349, "tags": ["list", "python"], "instruction": "Make a new list containing every Nth item in the original list\n\nSay we have a list of integers from 0 to 1000: [0, 1, 2, 3, ..., 997, 998, 999] How do I get a new list containing the first and every subsequent 10th item? [0, 10, 20, 30, ..., 990] I can do this using a for loop, but is there a neater way, perhaps even in one line of code?", "output": ">>> xs = list(range(165)) >>> xs[0::10] [0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120, 130, 140, 150, 160] Note that this is around 100 times faster than looping and checking a modulus for each element: $ python -m timeit -s \"xs = list(range(1000))\" \"[x for i, x in enumerate(xs) if i % 10 == 0]\" 500 loops, best of 5: 476 usec per loop $ python -m timeit -s \"xs = list(range(1000))\" \"xs[0::10]\" 100000 loops, best of 5: 3.32 usec per loop"} +{"question_id": 28311655, "score": 250, "creation_date": 1423011437, "tags": ["python", "pandas"], "instruction": "Ignoring NaNs with str.contains\n\nI want to find rows that contain a string, like so: DF[DF.col.str.contains(\"foo\")] However, this fails because some elements are NaN: ValueError: cannot index with vector containing NA / NaN values So I resort to the obfuscated DF[DF.col.notnull()][DF.col.dropna().str.contains(\"foo\")] Is there a better way?", "output": "There's a flag for that: In [11]: df = pd.DataFrame([[\"foo1\"], [\"foo2\"], [\"bar\"], [np.nan]], columns=['a']) In [12]: df.a.str.contains(\"foo\") Out[12]: 0 True 1 True 2 False 3 NaN Name: a, dtype: object In [13]: df.a.str.contains(\"foo\", na=False) Out[13]: 0 True 1 True 2 False 3 False Name: a, dtype: bool See the str.replace docs: na : default NaN, fill value for missing values. So you can do the following: In [21]: df.loc[df.a.str.contains(\"foo\", na=False)] Out[21]: a 0 foo1 1 foo2"} +{"question_id": 9110593, "score": 249, "creation_date": 1328178007, "tags": ["python", "asynchronous", "python-requests", "httprequest"], "instruction": "Asynchronous requests with Python's Requests\n\nI tried the sample provided within the documentation of the Requests library for Python. With async.map(rs), I get the response codes, but I want to get the content of each page requested. 
This, for example, does not work: out = async.map(rs) print out[0].content", "output": "Note The below answer is not applicable to Requests v0.13.0 or later. The asynchronous functionality was moved to GRequests after this question was written. However, you could just replace requests with grequests below and it should work. I've left this answer as is to reflect the original question which was about using requests < v0.13.0. To do multiple tasks with async.map asynchronously you have to: Define a function for what you want to do with each object (your task) Add that function as an event hook in your request Call async.map on a list of all the requests / actions Example: from requests import async # If using requests > v0.13.0, use # from grequests import async urls = [ 'http://python-requests.org', 'http://httpbin.org', 'http://python-guide.org', 'http://kennethreitz.com' ] # A simple task to do to each response object def do_something(response): print response.url # A list to hold our things to do via async async_list = [] for u in urls: # The \"hooks = {...\" part is where you define what you want to do # # Note the lack of parentheses following do_something, this is # because the response will be used as the first argument automatically action_item = async.get(u, hooks = {'response' : do_something}) # Add the task to our list of things to do via async async_list.append(action_item) # Do our list of things to do via async async.map(async_list)"} +{"question_id": 3949310, "score": 249, "creation_date": 1287239940, "tags": ["python", "data-structures", "set", "cpython"], "instruction": "How is set() implemented?\n\nI've seen people say that set objects in python have O(1) membership-checking. How are they implemented internally to allow this? What sort of data structure does it use? What other implications does that implementation have? Every answer here was really enlightening, but I can only accept one, so I'll go with the closest answer to my original question. Thanks all for the info!", "output": "According to this thread: Indeed, CPython's sets are implemented as something like dictionaries with dummy values (the keys being the members of the set), with some optimization(s) that exploit this lack of values So basically a set uses a hashtable as its underlying data structure. This explains the O(1) membership checking, since looking up an item in a hashtable is an O(1) operation, on average. If you are so inclined you can even browse the CPython source code for set which, according to Achim Domma, was originally mostly a cut-and-paste from the dict implementation. Note: Nowadays, set and dict's implementations have diverged significantly, so the precise behaviors (e.g. arbitrary order vs. insertion order) and performance in various use cases differs; they're still implemented in terms of hashtables, so average case lookup and insertion remains O(1), but set is no longer just \"dict, but with dummy/omitted values\"."} +{"question_id": 56227419, "score": 249, "creation_date": 1558382420, "tags": ["python", "math", "hash", "floating-point", "pi"], "instruction": "Why does Python's hash of infinity have the digits of \u03c0?\n\nThe hash of infinity in Python has digits matching pi: >>> inf = float('inf') >>> hash(inf) 314159 >>> int(math.pi*1e5) 314159 Is that just a coincidence or is it intentional?", "output": "_PyHASH_INF is defined as a constant equal to 314159. I can't find any discussion about this, or comments giving a reason. I think it was chosen more or less arbitrarily. 
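For what it's worth, the constant is also visible at runtime through sys.hash_info (a quick check, not part of the original answer):
>>> import sys
>>> sys.hash_info.inf
314159
>>> hash(float('inf'))
314159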
I imagine that as long as they don't use the same meaningful value for other hashes, it shouldn't matter."} +{"question_id": 904746, "score": 248, "creation_date": 1243202209, "tags": ["python", "replace"], "instruction": "How to remove all characters after a specific character in python?\n\nI have a string. How do I remove all text after a certain character? (In this case ...) The text after will ... change so I that's why I want to remove all characters after a certain one.", "output": "Split on your separator at most once, and take the first piece: sep = '...' stripped = text.split(sep, 1)[0] You didn't say what should happen if the separator isn't present. Both this and Alex's solution will return the entire string in that case."} +{"question_id": 12905999, "score": 248, "creation_date": 1350348127, "tags": ["python", "dictionary"], "instruction": "How to create key or append an element to key?\n\nI have an empty dictionary. Name: dict_x It is to have keys of which values are lists. From a separate iteration, I obtain a key (ex: key_123), and an item (a tuple) to place in the list of dict_x's value key_123. If this key already exists, I want to append this item. If this key does not exist, I want to create it with an empty list and then append to it or just create it with a tuple in it. In future when again this key comes up, since it exists, I want the value to be appended again. My code consists of this: Get key and value. See if NOT key exists in dict_x. and if not create it: dict_x[key] == [] Afterwards: dict_x[key].append(value) Is this the way to do it? Shall I try to use try/except blocks?", "output": "Use dict.setdefault(): dict.setdefault(key,[]).append(value) help(dict.setdefault): setdefault(...) D.setdefault(k[,d]) -> D.get(k,d), also set D[k]=d if k not in D"} +{"question_id": 2058925, "score": 248, "creation_date": 1263404467, "tags": ["python", "formatting", "code-formatting"], "instruction": "How can I break up this long line in Python?\n\nHow would you go about formatting a long line such as this? I'd like to get it to no more than 80 characters wide: logger.info(\"Skipping {0} because its thumbnail was already in our system as {1}.\".format(line[indexes['url']], video.title)) Is this my best option? url = \"Skipping {0} because its thumbnail was already in our system as {1}.\" logger.info(url.format(line[indexes['url']], video.title))", "output": "That's a start. It's not a bad practice to define your longer strings outside of the code that uses them. It's a way to separate data and behavior. Your first option is to join string literals together implicitly by making them adjacent to one another: (\"This is the first line of my text, \" \"which will be joined to a second.\") Or with line ending continuations, which is a little more fragile, as this works: \"This is the first line of my text, \" \\ \"which will be joined to a second.\" But this doesn't: \"This is the first line of my text, \" \\ \"which will be joined to a second.\" See the difference? No? Well you won't when it's your code either. (There's a space after \\ in the second example.) The downside to implicit joining is that it only works with string literals, not with strings taken from variables, so things can get a little more hairy when you refactor. Also, you can only interpolate formatting on the combined string as a whole. 
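To make that concrete (a made-up snippet, not from the original answer), implicit joining stops working as soon as one of the pieces is a variable rather than a literal:
second = "which will be joined to a second."
# ("This is the first line of my text, " second)   # SyntaxError: only adjacent string literals are joined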
Alternatively, you can join explicitly using the concatenation operator (+): (\"This is the first line of my text, \" + \"which will be joined to a second.\") Explicit is better than implicit, as the zen of python says, but this creates three strings instead of one, and uses twice as much memory: there are the two you have written, plus one which is the two of them joined together, so you have to know when to ignore the zen. The upside is you can apply formatting to any of the substrings separately on each line, or to the whole lot from outside the parentheses. Finally, you can use triple-quoted strings: \"\"\"This is the first line of my text which will be joined to a second.\"\"\" This is often my favorite, though its behavior is slightly different as the newline and any leading whitespace on subsequent lines will show up in your final string. You can eliminate the newline with an escaping backslash. \"\"\"This is the first line of my text \\ which will be joined to a second.\"\"\" This has the same problem as the same technique above, in that correct code only differs from incorrect code by invisible whitespace. Which one is \"best\" depends on your particular situation, but the answer is not simply aesthetic, but one of subtly different behaviors."} +{"question_id": 8408046, "score": 248, "creation_date": 1323211864, "tags": ["python", "django"], "instruction": "How to change the name of a Django app?\n\nI have changed the name of an app in Django by renaming its folder, imports and all its references (templates/indexes). But now I get this error when I try to run python manage.py runserver Error: Could not import settings 'nameofmynewapp.settings' (Is it on sys.path?): No module named settings How can I debug and solve this error? Any clues?", "output": "Follow these steps to change an app's name in Django: Rename the folder which is in your project root Change any references to your app in their dependencies, i.e. the app's views.py, urls.py , manage.py , and settings.py files. Edit the database table django_content_type with the following command: UPDATE django_content_type SET app_label='' WHERE app_label='' Also, if you have models, you will have to rename the model tables. For postgres, use ALTER TABLE _modelName RENAME TO _modelName. For mysql too, I think it is the same (as mentioned by @null_radix). (For Django >= 1.7) Update the django_migrations table to avoid having your previous migrations re-run: UPDATE django_migrations SET app='' WHERE app=''. Note: there is some debate (in comments) if this step is required for Django 1.8+; If someone knows for sure please update here. If your models.py 's Meta Class has app_name listed, make sure to rename that too (mentioned by @will). If you've namespaced your static or templates folders inside your app, you'll also need to rename those. For example, rename old_app/static/old_app to new_app/static/new_app. For renaming django models, you'll need to change django_content_type.name entry in DB. For postgreSQL, use UPDATE django_content_type SET name='' where name='' AND app_label='' Update 16Jul2021: Also, the __pycache__/ folder inside the app must be removed, otherwise you get EOFError: marshal data too short when trying to run the server. Mentioned by @Serhii Kushchenko Update the apps.py file in the app you're renaming to use the new name. 
Change the \"Config\" class name to \"Config\" and within this class change name = '' to name = '' Meta point (If using virtualenv): Worth noting, if you are renaming the directory that contains your virtualenv, there will likely be several files in your env that contain an absolute path and will also need to be updated. If you are getting errors such as ImportError: No module named ... this might be the culprit. (thanks to @danyamachine for providing this). Other references: you might also want to refer to the below links for a more complete picture: Renaming an app with Django and South How do I migrate a model out of one django app and into a new one? How to change the name of a Django app? Backwards migration with Django South Easiest way to rename a model using Django/South? Python code (thanks to A.Raouf) to automate the above steps (Untested code. You have been warned!) Python code (thanks to rafaponieman) to automate the above steps (Untested code. You have been warned!)"} +{"question_id": 64261546, "score": 248, "creation_date": 1602156410, "tags": ["python", "visual-c++", "pip"], "instruction": "How to solve \"error: Microsoft Visual C++ 14.0 or greater is required\" when installing Python packages?\n\nI'm trying to install a package on Python, but Python is throwing an error on installing packages. I'm getting an error every time I tried to install pip install google-search-api. Here is the error how can I successfully install it? error: Microsoft Visual C++ 14.0 or greater is required. Get it with \"Microsoft C++ Build Tools\": https://visualstudio.microsoft.com/visual-cpp-build-tools/ I already updated that and have the latest version of 14.27 but the problem is throwing the same error.", "output": "Go to this link and download Microsoft C++ Build Tools: https://visualstudio.microsoft.com/visual-cpp-build-tools/ Open the installer, then follow the steps. You might have something like this, just download it or resume. If updating above doesn't work then you need to configure or make some updates here. You can make some updates here too by clicking \"Modify\". Check that and download what you need there or you might find that you just need to update Microsoft Visual C++ as stated on the error, but I also suggest updating everything there because you might still need it on your future programs. I think those with the C++ as I've done that before and had a similar problem just like that when installing a python package for creating WorldCloud visualization. UPDATE: December 28, 2020 You can also follow these steps here: Select: Workloads \u2192 Desktop development with C++ Then for Individual Components, select only: Windows 10 SDK C++ x64/x86 build tools You can also achieve the same automatically using the following command: vs_buildtools.exe --norestart --passive --downloadThenInstall --includeRecommended --add Microsoft.VisualStudio.Workload.NativeDesktop --add Microsoft.VisualStudio.Workload.VCTools --add Microsoft.VisualStudio.Workload.MSBuildTools Reference: https://www.scivision.dev/python-windows-visual-c-14-required"} +{"question_id": 9792664, "score": 247, "creation_date": 1332267579, "tags": ["python", "set"], "instruction": "Converting a list to a set changes element order\n\nRecently I noticed that when I am converting a list to set the order of elements is changed and is sorted by character. 
Consider this example: x=[1,2,20,6,210] print(x) # [1, 2, 20, 6, 210] # the order is same as initial order set(x) # set([1, 2, 20, 210, 6]) # in the set(x) output order is sorted My questions are - Why is this happening? How can I do set operations (especially set difference) without losing the initial order?", "output": "A set is an unordered data structure, so it does not preserve the insertion order. CPython's set enumeration order depends on last bits of the element's hash and whether there have been key collisions during insertion. This depends on your requirements. If you have an normal list, and want to remove some set of elements while preserving the order of the list, you can do this with a list comprehension: >>> a = [1, 2, 20, 6, 210] >>> b = set([6, 20, 1]) >>> [x for x in a if x not in b] [2, 210] If you need a data structure that supports both fast membership tests and preservation of insertion order, you can use the keys of a Python dictionary, which starting from Python 3.7 is guaranteed to preserve the insertion order: >>> a = dict.fromkeys([1, 2, 20, 6, 210]) >>> b = dict.fromkeys([6, 20, 1]) >>> dict.fromkeys(x for x in a if x not in b) {2: None, 210: None} b doesn't really need to be ordered here \u2013 you could use a set as well. Note that a.keys() - b.keys() returns the set difference as a set, so it won't preserve the insertion order. In older versions of Python, you can use collections.OrderedDict instead: >>> a = collections.OrderedDict.fromkeys([1, 2, 20, 6, 210]) >>> b = collections.OrderedDict.fromkeys([6, 20, 1]) >>> collections.OrderedDict.fromkeys(x for x in a if x not in b) OrderedDict([(2, None), (210, None)])"} +{"question_id": 1275735, "score": 246, "creation_date": 1250216698, "tags": ["python", "django", "django-templates"], "instruction": "How can I access a dictionary element in a Django template?\n\nI would like to print out the number of votes that each choice got. I have this code in a template: {% for choice in choices %} {{choice.choice}} - {{votes[choice.id]}}
{% endfor %} votes is just a dictionary while choices is a model object. It raises an exception with this message: \"Could not parse the remainder\"", "output": "To echo / extend upon Jeff's comment, I think you should simply aim for a property in your Choice class that calculates the number of votes associated with that object: class Choice(models.Model): text = models.CharField(max_length=200) def calculateVotes(self): return Vote.objects.filter(choice=self).count() votes = property(calculateVotes) And then in your template, you can do: {% for choice in choices %} {{choice.choice}} - {{choice.votes}}
{% endfor %} The template tag, is, in my humble opinion, a bit overkill for this solution, but it's not a terrible solution either. The goal of templates in Django is to insulate you from code in your templates and vice versa. I'd try the above method and see what SQL the ORM generates as I'm not sure off the top of my head if it will precache the properties and just create a subselect for the property or if it will iteratively / on-demand run the query to calculate vote count. But if it generates atrocious queries, you could always populate the property in your view with data you've collected yourself."} +{"question_id": 4265988, "score": 246, "creation_date": 1290596211, "tags": ["python", "module", "random"], "instruction": "Generate random numbers with a given (numerical) distribution\n\nI have a file with some probabilities for different values e.g.: 1 0.1 2 0.05 3 0.05 4 0.2 5 0.4 6 0.2 I would like to generate random numbers using this distribution. Does an existing module that handles this exist? It's fairly simple to code on your own (build the cumulative density function, generate a random value [0,1] and pick the corresponding value) but it seems like this should be a common problem and probably someone has created a function/module for it. I need this because I want to generate a list of birthdays (which do not follow any distribution in the standard random module).", "output": "scipy.stats.rv_discrete might be what you want. You can supply your probabilities via the values parameter. You can then use the rvs() method of the distribution object to generate random numbers. As pointed out by Eugene Pakhomov in the comments, you can also pass a p keyword parameter to numpy.random.choice(), e.g. numpy.random.choice(numpy.arange(1, 7), p=[0.1, 0.05, 0.05, 0.2, 0.4, 0.2]) If you are using Python 3.6 or above, you can use random.choices() from the standard library \u2013 see the answer by Mark Dickinson."} +{"question_id": 77364550, "score": 246, "creation_date": 1698301344, "tags": ["python", "python-3.x", "numpy", "pip"], "instruction": "AttributeError: module 'pkgutil' has no attribute 'ImpImporter'. Did you mean: 'zipimporter'?\n\nEarlier I installed some packages like Matplotlib, NumPy, pip (version 23.3.1), wheel (version 0.41.2), etc., and did some programming with those. I used the command C:\\Users\\UserName>pip list to find the list of packages that I have installed, and I am using Python 3.12.0 (by employing code C:\\Users\\UserName>py -V). I need to use pyspedas to analyse some data. I am following the instruction that that I received from site to install the package, with a variation (I am not sure whether it matters or not: I am using py, instead of python). The commands that I use, in the order, are: py -m venv pyspedas .\\pyspedas\\Scripts\\activate pip install pyspedas After the last step, I am getting the following output: Collecting pyspedas Using cached pyspedas-1.4.47-py3-none-any.whl.metadata (14 kB) Collecting numpy>=1.19.5 (from pyspedas) Using cached numpy-1.26.1-cp312-cp312-win_amd64.whl.metadata (61 kB) Collecting requests (from pyspedas) Using cached requests-2.31.0-py3-none-any.whl.metadata (4.6 kB) Collecting geopack>=1.0.10 (from pyspedas) Using cached geopack-1.0.10-py3-none-any.whl (114 kB) Collecting cdflib<1.0.0 (from pyspedas) Using cached cdflib-0.4.9-py3-none-any.whl (72 kB) Collecting cdasws>=1.7.24 (from pyspedas) Using cached cdasws-1.7.43.tar.gz (21 kB) Installing build dependencies ... done Getting requirements to build wheel ... 
done Preparing metadata (pyproject.toml) ... done Collecting netCDF4>=1.6.2 (from pyspedas) Using cached netCDF4-1.6.5-cp312-cp312-win_amd64.whl.metadata (1.8 kB) Collecting pywavelets (from pyspedas) Using cached PyWavelets-1.4.1.tar.gz (4.6 MB) Installing build dependencies ... done Getting requirements to build wheel ... error error: subprocess-exited-with-error \u00d7 Getting requirements to build wheel did not run successfully. \u2502 exit code: 1 \u2570\u2500> [33 lines of output] Traceback (most recent call last): File \"C:\\Users\\UserName\\pyspedas\\Lib\\site-packages\\pip\\_vendor\\pyproject_hooks\\_in_process\\_in_process.py\", line 353, in main() File \"C:\\Users\\UserName\\pyspedas\\Lib\\site-packages\\pip\\_vendor\\pyproject_hooks\\_in_process\\_in_process.py\", line 335, in main json_out['return_val'] = hook(**hook_input['kwargs']) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File \"C:\\Users\\UserName\\pyspedas\\Lib\\site-packages\\pip\\_vendor\\pyproject_hooks\\_in_process\\_in_process.py\", line 112, in get_requires_for_build_wheel backend = _build_backend() ^^^^^^^^^^^^^^^^ File \"C:\\Users\\UserName\\pyspedas\\Lib\\site-packages\\pip\\_vendor\\pyproject_hooks\\_in_process\\_in_process.py\", line 77, in _build_backend obj = import_module(mod_path) ^^^^^^^^^^^^^^^^^^^^^^^ File \"C:\\Users\\UserName\\AppData\\Local\\Programs\\Python\\Python312\\Lib\\importlib\\__init__.py\", line 90, in import_module return _bootstrap._gcd_import(name[level:], package, level) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File \"\", line 1381, in _gcd_import File \"\", line 1354, in _find_and_load File \"\", line 1304, in _find_and_load_unlocked File \"\", line 488, in _call_with_frames_removed File \"\", line 1381, in _gcd_import File \"\", line 1354, in _find_and_load File \"\", line 1325, in _find_and_load_unlocked File \"\", line 929, in _load_unlocked File \"\", line 994, in exec_module File \"\", line 488, in _call_with_frames_removed File \"C:\\Users\\UserName\\AppData\\Local\\Temp\\pip-build-env-_lgbq70y\\overlay\\Lib\\site-packages\\setuptools\\__init__.py\", line 16, in import setuptools.version File \"C:\\Users\\UserName\\AppData\\Local\\Temp\\pip-build-env-_lgbq70y\\overlay\\Lib\\site-packages\\setuptools\\version.py\", line 1, in import pkg_resources File \"C:\\Users\\UserName\\AppData\\Local\\Temp\\pip-build-env-_lgbq70y\\overlay\\Lib\\site-packages\\pkg_resources\\__init__.py\", line 2191, in register_finder(pkgutil.ImpImporter, find_on_path) ^^^^^^^^^^^^^^^^^^^ AttributeError: module 'pkgutil' has no attribute 'ImpImporter'. Did you mean: 'zipimporter'? [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: subprocess-exited-with-error \u00d7 Getting requirements to build wheel did not run successfully. \u2502 exit code: 1 \u2570\u2500> See above for output. note: This error originates from a subprocess, and is likely not a problem with pip. After little bit of googling, I came to know that this issues was reported at multiple places, but none for this package. I did install wheel in the new environment as mentioned in the answer here, but the problem still persists. Instead of setting up a virtual environment, I simply executed the command py -m pip install pyspedas. But I am still getting the error. What I could gather is that the program has an issue with Collecting pywavelets (from pyspedas) Using cached PyWavelets-1.4.1.tar.gz (4.6 MB) Installing build dependencies ... 
done I am using IDLE in Windows 11.", "output": "Due to the removal of the long-deprecated pkgutil.ImpImporter class, the pip command may not work for Python 3.12. You just have to manually install pip for Python 3.12 python -m ensurepip --upgrade python -m pip install --upgrade setuptools python -m pip install In your virtual environment: pip install --upgrade setuptools Python comes with an ensurepip, which can install pip in a Python environment. https://pip.pypa.io/en/stable/installation/ On Linux/macOS terminal: python -m ensurepip --upgrade On Windows: py -m ensurepip --upgrade also, make sure to upgrade pip: py -m pip install --upgrade pip To install numpy on Python 3.12, you must use numpy version 1.26.4 pip install numpy==1.26.4 https://github.com/numpy/numpy/issues/23808#issuecomment-1722440746 for Ubuntu sudo apt install python3.12-dev or python3.12 -m pip install --upgrade setuptools"} +{"question_id": 28436769, "score": 246, "creation_date": 1423585865, "tags": ["python", "anaconda", "conda"], "instruction": "How to change default Anaconda python environment\n\nI've installed Anaconda and created two extra environments: py3k (which holds Python 3.3) and py34 (which holds Python 3.4). Besides those, I have a default environment named 'root' which the Anaconda installer created by default and which holds Python 2.7. This last one is the default, whenever I launch 'ipython' from the terminal it gives me version 2.7. In order to work with Python 3.4, I need to issue the commands (in the shell) source activate py34 ipython which change the default environment to Python 3.4. This works fine, but it's annoying since most of the time I work on Python 3.4, instead of Python 2.7 (which I hold for teaching purposes, it's a rather long story). Anyway, I'll like to know how to change the default environment to Python 3.4, bearing in mind that I don't want to reinstall everything from scratch.", "output": "If you just want to temporarily change to another environment, use source activate environment-name ETA: This may be deprecated. I believe the current correct command is: source conda activate environment-name (you can create environment-name with conda create) To change permanently, there is no method except creating a startup script that runs the above code. Typically it's best to just create new environments. However, if you really want to change the Python version in the default environment, you can do so as follows: First, make sure you have the latest version of conda by running conda update conda Then run conda install python=3.5 This will attempt to update all your packages in your root environment to Python 3 versions. If it is not possible (e.g., because some package is not built for Python 3.5), it will give you an error message indicating which package(s) caused the issue. If you installed packages with pip, you'll have to reinstall them."} +{"question_id": 16134281, "score": 246, "creation_date": 1366566487, "tags": ["python", "unit-testing", "python-unittest", "python-mock"], "instruction": "Python Mocking a function from an imported module\n\nI want to understand how to @patch a function from an imported module. This is where I am so far. 
app/mocking.py: from app.my_module import get_user_name def test_method(): return get_user_name() if __name__ == \"__main__\": print \"Starting Program...\" test_method() app/my_module/__init__.py: def get_user_name(): return \"Unmocked User\" test/mock-test.py: import unittest from app.mocking import test_method def mock_get_user(): return \"Mocked This Silly\" @patch('app.my_module.get_user_name') class MockingTestTestCase(unittest.TestCase): def test_mock_stubs(self, mock_method): mock_method.return_value = 'Mocked This Silly') ret = test_method() self.assertEqual(ret, 'Mocked This Silly') if __name__ == '__main__': unittest.main() This does not work as I would expect. The \"patched\" module simply returns the unmocked value of get_user_name. How do I mock methods from other packages that I am importing into a namespace under test?", "output": "When you are using the patch decorator from the unittest.mock package you are patching it in the namespace that is under test (in this case app.mocking.get_user_name), not the namespace the function is imported from (in this case app.my_module.get_user_name). To do what you describe with @patch try something like the below: from mock import patch from app.mocking import test_method class MockingTestTestCase(unittest.TestCase): @patch('app.mocking.get_user_name') def test_mock_stubs(self, test_patch): test_patch.return_value = 'Mocked This Silly' ret = test_method() self.assertEqual(ret, 'Mocked This Silly') The standard library documentation includes a useful section describing this."} +{"question_id": 55142951, "score": 245, "creation_date": 1552483399, "tags": ["python", "tensorflow", "keras", "tensorflow2.0"], "instruction": "TensorFlow 2.0 - AttributeError: module 'tensorflow' has no attribute 'Session'\n\nWhen I am executing the command sess = tf.Session() in a TensorFlow 2.0 environment, I am getting an error message as below: Traceback (most recent call last): File \"\", line 1, in AttributeError: module 'tensorflow' has no attribute 'Session' System information: OS platform and distribution: Windows 10 Python version: 3.7.1 TensorFlow version: 2.0.0-alpha0 (installed with pip) Steps to reproduce: Installation pip install --upgrade pip pip install tensorflow==2.0.0-alpha0 pip install keras pip install numpy==1.16.2 Execution Execute command: import tensorflow as tf Execute command: sess = tf.Session()", "output": "According to TF 1:1 Symbols Map, in TensorFlow 2.0, you should use tf.compat.v1.Session() instead of tf.Session(). https://docs.google.com/spreadsheets/d/1FLFJLzg7WNP6JHODX5q8BDgptKafq_slHpnHVbJIteQ/edit#gid=0 To get TensorFlow 1.x-like behaviour in TensorFlow 2.0, one can run import tensorflow.compat.v1 as tf tf.disable_v2_behavior() but then one cannot benefit of many improvements made in TensorFlow 2.0. For more details, please refer to the migration guide."} +{"question_id": 19548011, "score": 244, "creation_date": 1382548052, "tags": ["python", "xcode", "macos", "scrapy", "lxml"], "instruction": "Cannot install Lxml on Mac OS X 10.9\n\nI want to install Lxml so I can then install Scrapy. When I updated my Mac today it wouldn't let me reinstall lxml, I get the following error: In file included from src/lxml/lxml.etree.c:314: /private/tmp/pip_build_root/lxml/src/lxml/includes/etree_defs.h:9:10: fatal error: 'libxml/xmlversion.h' file not found #include \"libxml/xmlversion.h\" ^ 1 error generated. 
error: command 'cc' failed with exit status 1 I have tried using brew to install libxml2 and libxslt, both installed fine but I still cannot install lxml. Last time I was installing I needed to enable the developer tools on Xcode but since it's updated to Xcode 5 it doesn't give me that option anymore. Does anyone know what I need to do?", "output": "You should install or upgrade the command line tool for Xcode. Try this in a terminal: xcode-select --install If Xcode Command Line Tools are already installed, but you're still not able to install lxml, then reset xcode-select by following the command: sudo xcode-select --reset This will reset the path to the Xcode Command Line Tools directory and may resolve the issue. After you've installed or updated Xcode Command Line Tools, Hopefully, this will resolve the issue and allow you to install the \"lxml\" package."} +{"question_id": 15974730, "score": 244, "creation_date": 1365778925, "tags": ["python", "url", "flask"], "instruction": "How do I get the different parts of a Flask request's url?\n\nI want to detect if the request came from the localhost:5000 or foo.herokuapp.com host and what path was requested. How do I get this information about a Flask request?", "output": "You can examine the url through several Request fields: Imagine your application is listening on the following application root: http://www.example.com/myapplication And a user requests the following URI: http://www.example.com/myapplication/foo/page.html?x=y In this case the values of the above mentioned attributes would be the following: path /foo/page.html full_path /foo/page.html?x=y script_root /myapplication base_url http://www.example.com/myapplication/foo/page.html url http://www.example.com/myapplication/foo/page.html?x=y url_root http://www.example.com/myapplication/ You can easily extract the host part with the appropriate splits. An example of using this: from flask import request @app.route('/') def index(): return request.base_url"} +{"question_id": 35760943, "score": 244, "creation_date": 1456965120, "tags": ["python", "django", "cors", "django-rest-framework", "middleware"], "instruction": "How can I enable CORS on Django REST Framework\n\nHow can I enable CORS on my Django REST Framework? the reference doesn't help much, it says that I can do by a middleware, but how can I do that?", "output": "The link you referenced in your question recommends using django-cors-headers, whose documentation says to install the library python -m pip install django-cors-headers and then add it to your installed apps: INSTALLED_APPS = ( ... 'corsheaders', ... ) You will also need to add a middleware class to listen in on responses: MIDDLEWARE = [ ..., 'corsheaders.middleware.CorsMiddleware', 'django.middleware.common.CommonMiddleware', ..., ] and specify domains for CORS, e.g.: CORS_ALLOWED_ORIGINS = [ 'http://localhost:3030', ] Please browse the configuration section of its documentation, paying particular attention to the various CORS_ORIGIN_ settings. 
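For example, a slightly fuller configuration might look like this sketch (illustrative values only; confirm the exact setting names against the django-cors-headers docs for your version):
CORS_ALLOWED_ORIGINS = ['https://example.com']
CORS_ALLOW_CREDENTIALS = True     # allow cookies/authorization headers cross-origin
CORS_URLS_REGEX = r'^/api/.*$'    # only attach CORS headers to API routes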
You'll need to set some of those based on your needs."} +{"question_id": 925024, "score": 244, "creation_date": 1243586712, "tags": ["python", "matplotlib"], "instruction": "How can I remove the top and right axis?\n\nInstead of the default \"boxed\" axis style I want to have only the left and bottom axis, i.e.: +------+ | | | | | | ---> | | | | +------+ +------- This should be easy, but I can't find the necessary options in the docs.", "output": "This is the suggested Matplotlib 3 solution from the official website HERE: import numpy as np import matplotlib.pyplot as plt x = np.linspace(0, 2*np.pi, 100) y = np.sin(x) ax = plt.subplot(111) ax.plot(x, y) # Hide the right and top spines ax.spines[['right', 'top']].set_visible(False) plt.show()"} +{"question_id": 51025893, "score": 244, "creation_date": 1529936960, "tags": ["python", "flask"], "instruction": "Flask at first run: Do not use the development server in a production environment\n\nI installed the Flask plugin in PyCharm Community Edition and I just have this simple code in my flask app: from flask import Flask app = Flask(__name__) @app.route('/') def index(): return '
Hello!
' if __name__ == \"__main__\": app.run(debug=True) And I get this message: WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead * Restarting with stat * Debugger is active! * Debugger PIN: 123-456-789 * Running on http://127.0.0.1:5000/ Why am I getting this error when I run Flask? A previous version of the message read \"Do not use the development server in a production environment.\"", "output": "As of Flask 2.2, the development server always shows this warning, it is not possible to disable it. The development server is not intended for use in production. It is not designed to be particularly efficient, stable, or secure. Use a production WSGI server instead. See the deployment docs from Flask for more information. That warning is just a warning though, it's not an error preventing your app from running. If your app isn't working, there's something else wrong with your code. That warning applies to the development server, not Flask itself. The Flask framework is appropriate for any type of application and deployment."} +{"question_id": 11552320, "score": 244, "creation_date": 1342657697, "tags": ["python", "sleep"], "instruction": "Correct way to pause a Python program\n\nI've been using the input function as a way to pause my scripts: print(\"something\") wait = input(\"Press Enter to continue.\") print(\"something\") Is there a formal way to do this?", "output": "It seems fine to me (or raw_input() in Python 2.X). Alternatively, you could use time.sleep() if you want to pause for a certain number of seconds. import time print(\"something\") time.sleep(5.5) # Pause 5.5 seconds print(\"something\")"} +{"question_id": 7374748, "score": 244, "creation_date": 1315689334, "tags": ["python", "properties", "attributes"], "instruction": "What's the difference between a Python \"property\" and \"attribute\"?\n\nI am generally confused about the difference between a \"property\" and an \"attribute\", and I can't find a great resource to concisely detail the differences.", "output": "Properties are a special kind of attribute. Basically, when Python encounters the following code: spam = SomeObject() print(spam.eggs) it looks up eggs in SomeObject1, and then examines eggs to see if it has a __get__, __set__, or __delete__ method -- if it does, it's a property, and Python will call the __get__ method (since we were doing lookup) and return whatever that method returns. If it is not a property, then eggs is looked up in spam, and whatever is found there will be returned. More information about Python's data model and descriptors. 1 Many thanks to Robert Seimer for the correction on the lookup sequence."} +{"question_id": 366422, "score": 244, "creation_date": 1229253834, "tags": ["python", "default-parameters"], "instruction": "How can I avoid issues caused by Python's early-bound default parameters (e.g. mutable default arguments \"remembering\" old data)?\n\nSometimes it seems natural to have a default parameter which is an empty list. However, Python produces unexpected behavior in these situations. For example, consider this function: def my_func(working_list=[]): working_list.append(\"a\") print(working_list) The first time it is called, the default will work, but calls after that will update the existing list (with one \"a\" each call) and print the updated version. 
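For example, an interactive session (illustrative) looks like this:
>>> my_func()
['a']
>>> my_func()
['a', 'a']
>>> my_func()
['a', 'a', 'a']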
How can I fix the function so that, if it is called repeatedly without an explicit argument, a new empty list is used each time?", "output": "def my_func(working_list=None): if working_list is None: working_list = [] # alternative: # working_list = [] if working_list is None else working_list working_list.append(\"a\") print(working_list) The docs say you should use None as the default and explicitly test for it in the body of the function. Aside x is None is the comparison recommended by PEP 8: Comparisons to singletons like None should always be done with is or is not, never the equality operators. Also, beware of writing if x when you really mean if x is not None [...] See also What is the difference between \"is None\" and \"== None\""} +{"question_id": 54432583, "score": 244, "creation_date": 1548815669, "tags": ["python", "pandas", "performance", "apply"], "instruction": "When should I (not) want to use pandas apply() in my code?\n\nI have seen many answers posted to questions on Stack Overflow involving the use of the Pandas method apply. I have also seen users commenting under them saying that \"apply is slow, and should be avoided\". I have read many articles on the topic of performance that explain apply is slow. I have also seen a disclaimer in the docs about how apply is simply a convenience function for passing UDFs (can't seem to find that now). So, the general consensus is that apply should be avoided if possible. However, this raises the following questions: If apply is so bad, then why is it in the API? How and when should I make my code apply-free? Are there ever any situations where apply is good (better than other possible solutions)?", "output": "apply, the Convenience Function you Never Needed We start by addressing the questions in the OP, one by one. \"If apply is so bad, then why is it in the API?\" DataFrame.apply and Series.apply are convenience functions defined on DataFrame and Series object respectively. apply accepts any user defined function that applies a transformation/aggregation on a DataFrame. apply is effectively a silver bullet that does whatever any existing pandas function cannot do. Some of the things apply can do: Run any user-defined function on a DataFrame or Series Apply a function either row-wise (axis=1) or column-wise (axis=0) on a DataFrame Perform index alignment while applying the function Perform aggregation with user-defined functions (however, we usually prefer agg or transform in these cases) Perform element-wise transformations Broadcast aggregated results to original rows (see the result_type argument). Accept positional/keyword arguments to pass to the user-defined functions. ...Among others. For more information, see Row or Column-wise Function Application in the documentation. So, with all these features, why is apply bad? It is because apply is slow. Pandas makes no assumptions about the nature of your function, and so iteratively applies your function to each row/column as necessary. Additionally, handling all of the situations above means apply incurs some major overhead at each iteration. Further, apply consumes a lot more memory, which is a challenge for memory bounded applications. There are very few situations where apply is appropriate to use (more on that below). If you're not sure whether you should be using apply, you probably shouldn't. pandas 2.2 update: apply now supports engine='numba' More info in the release notes as well as GH54666 Choose between the python (default) engine or the numba engine in apply. 
The numba engine will attempt to JIT compile the passed function, which may result in speedups for large DataFrames. It also supports the following engine_kwargs : nopython (compile the function in nopython mode) nogil (release the GIL inside the JIT compiled function) parallel (try to apply the function in parallel over the DataFrame) Note: Due to limitations within numba/how pandas interfaces with numba, you should only use this if raw=True Let's address the next question. \"How and when should I make my code apply-free?\" To rephrase, here are some common situations where you will want to get rid of any calls to apply. Numeric Data If you're working with numeric data, there is likely already a vectorized cython function that does exactly what you're trying to do (if not, please either ask a question on Stack Overflow or open a feature request on GitHub). Contrast the performance of apply for a simple addition operation. df = pd.DataFrame({\"A\": [9, 4, 2, 1], \"B\": [12, 7, 5, 4]}) df A B 0 9 12 1 4 7 2 2 5 3 1 4 df.apply(np.sum) A 16 B 28 dtype: int64 df.sum() A 16 B 28 dtype: int64 Performance wise, there's no comparison, the cythonized equivalent is much faster. There's no need for a graph, because the difference is obvious even for toy data. %timeit df.apply(np.sum) %timeit df.sum() 2.22 ms \u00b1 41.2 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 100 loops each) 471 \u00b5s \u00b1 8.16 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 1000 loops each) Even if you enable passing raw arrays with the raw argument, it's still twice as slow. %timeit df.apply(np.sum, raw=True) 840 \u00b5s \u00b1 691 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 100 loops each) Another example: df.apply(lambda x: x.max() - x.min()) A 8 B 8 dtype: int64 df.max() - df.min() A 8 B 8 dtype: int64 %timeit df.apply(lambda x: x.max() - x.min()) %timeit df.max() - df.min() 2.43 ms \u00b1 450 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 100 loops each) 1.23 ms \u00b1 14.7 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 1000 loops each) In general, seek out vectorized alternatives if possible. String/Regex Pandas provides \"vectorized\" string functions in most situations, but there are rare cases where those functions do not... \"apply\", so to speak. A common problem is to check whether a value in a column is present in another column of the same row. df = pd.DataFrame({ 'Name': ['mickey', 'donald', 'minnie'], 'Title': ['wonderland', \"welcome to donald's castle\", 'Minnie mouse clubhouse'], 'Value': [20, 10, 86]}) df Name Value Title 0 mickey 20 wonderland 1 donald 10 welcome to donald's castle 2 minnie 86 Minnie mouse clubhouse This should return the row second and third row, since \"donald\" and \"minnie\" are present in their respective \"Title\" columns. Using apply, this would be done using df.apply(lambda x: x['Name'].lower() in x['Title'].lower(), axis=1) 0 False 1 True 2 True dtype: bool df[df.apply(lambda x: x['Name'].lower() in x['Title'].lower(), axis=1)] Name Title Value 1 donald welcome to donald's castle 10 2 minnie Minnie mouse clubhouse 86 However, a better solution exists using list comprehensions. df[[y.lower() in x.lower() for x, y in zip(df['Title'], df['Name'])]] Name Title Value 1 donald welcome to donald's castle 10 2 minnie Minnie mouse clubhouse 86 %timeit df[df.apply(lambda x: x['Name'].lower() in x['Title'].lower(), axis=1)] %timeit df[[y.lower() in x.lower() for x, y in zip(df['Title'], df['Name'])]] 2.85 ms \u00b1 38.4 \u00b5s per loop (mean \u00b1 std. dev. 
of 7 runs, 100 loops each) 788 \u00b5s \u00b1 16.4 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 1000 loops each) The thing to note here is that iterative routines happen to be faster than apply, because of the lower overhead. If you need to handle NaNs and invalid dtypes, you can build on this using a custom function you can then call with arguments inside the list comprehension. For more information on when list comprehensions should be considered a good option, see my writeup: Are for-loops in pandas really bad? When should I care?. Note Date and datetime operations also have vectorized versions. So, for example, you should prefer pd.to_datetime(df['date']), over, say, df['date'].apply(pd.to_datetime). Read more at the docs. A Common Pitfall: Exploding Columns of Lists s = pd.Series([[1, 2]] * 3) s 0 [1, 2] 1 [1, 2] 2 [1, 2] dtype: object People are tempted to use apply(pd.Series). This is horrible in terms of performance. s.apply(pd.Series) 0 1 0 1 2 1 1 2 2 1 2 A better option is to listify the column and pass it to pd.DataFrame. pd.DataFrame(s.tolist()) 0 1 0 1 2 1 1 2 2 1 2 %timeit s.apply(pd.Series) %timeit pd.DataFrame(s.tolist()) 2.65 ms \u00b1 294 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 100 loops each) 816 \u00b5s \u00b1 40.5 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 1000 loops each) Lastly, \"Are there any situations where apply is good?\" Apply is a convenience function, so there are situations where the overhead is negligible enough to forgive. It really depends on how many times the function is called. Functions that are Vectorized for Series, but not DataFrames What if you want to apply a string operation on multiple columns? What if you want to convert multiple columns to datetime? These functions are vectorized for Series only, so they must be applied over each column that you want to convert/operate on. df = pd.DataFrame( pd.date_range('2018-12-31','2019-01-31', freq='2D').date.astype(str).reshape(-1, 2), columns=['date1', 'date2']) df date1 date2 0 2018-12-31 2019-01-02 1 2019-01-04 2019-01-06 2 2019-01-08 2019-01-10 3 2019-01-12 2019-01-14 4 2019-01-16 2019-01-18 5 2019-01-20 2019-01-22 6 2019-01-24 2019-01-26 7 2019-01-28 2019-01-30 df.dtypes date1 object date2 object dtype: object This is an admissible case for apply: df.apply(pd.to_datetime, errors='coerce').dtypes date1 datetime64[ns] date2 datetime64[ns] dtype: object Note that it would also make sense to stack, or just use an explicit loop. All these options are slightly faster than using apply, but the difference is small enough to forgive. %timeit df.apply(pd.to_datetime, errors='coerce') %timeit pd.to_datetime(df.stack(), errors='coerce').unstack() %timeit pd.concat([pd.to_datetime(df[c], errors='coerce') for c in df], axis=1) %timeit for c in df.columns: df[c] = pd.to_datetime(df[c], errors='coerce') 5.49 ms \u00b1 247 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 100 loops each) 3.94 ms \u00b1 48.1 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 100 loops each) 3.16 ms \u00b1 216 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 100 loops each) 2.41 ms \u00b1 1.71 ms per loop (mean \u00b1 std. dev. of 7 runs, 1 loop each) You can make a similar case for other operations such as string operations, or conversion to category. u = df.apply(lambda x: x.str.contains(...)) v = df.apply(lambda x: x.astype(category)) v/s u = pd.concat([df[c].str.contains(...) for c in df], axis=1) v = df.copy() for c in df: v[c] = df[c].astype(category) And so on... 
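To make that last comparison concrete, here is a small runnable sketch (the fruit/color frame, the 'an' substring and the choice of 'category' are made-up placeholders):
import pandas as pd

df = pd.DataFrame({'fruit': ['apple', 'banana', 'apple'],
                   'color': ['red', 'yellow', 'green']})

# apply-based: a single call, but per-column apply overhead
u = df.apply(lambda x: x.str.contains('an'))
v = df.apply(lambda x: x.astype('category'))

# explicit per-column versions: same results, slightly less overhead
u2 = pd.concat([df[c].str.contains('an') for c in df], axis=1)
v2 = df.copy()
for c in df:
    v2[c] = df[c].astype('category')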
Converting Series to str: astype versus apply This seems like an idiosyncrasy of the API. Using apply to convert integers in a Series to string is comparable (and sometimes faster) than using astype. The graph was plotted using the perfplot library. import perfplot perfplot.show( setup=lambda n: pd.Series(np.random.randint(0, n, n)), kernels=[ lambda s: s.astype(str), lambda s: s.apply(str) ], labels=['astype', 'apply'], n_range=[2**k for k in range(1, 20)], xlabel='N', logx=True, logy=True, equality_check=lambda x, y: (x == y).all()) With floats, I see the astype is consistently as fast as, or slightly faster than apply. So this has to do with the fact that the data in the test is integer type. GroupBy operations with chained transformations GroupBy.apply has not been discussed until now, but GroupBy.apply is also an iterative convenience function to handle anything that the existing GroupBy functions do not. One common requirement is to perform a GroupBy and then two prime operations such as a \"lagged cumsum\": df = pd.DataFrame({\"A\": list('aabcccddee'), \"B\": [12, 7, 5, 4, 5, 4, 3, 2, 1, 10]}) df A B 0 a 12 1 a 7 2 b 5 3 c 4 4 c 5 5 c 4 6 d 3 7 d 2 8 e 1 9 e 10 You'd need two successive groupby calls here: df.groupby('A').B.cumsum().groupby(df.A).shift() 0 NaN 1 12.0 2 NaN 3 NaN 4 4.0 5 9.0 6 NaN 7 3.0 8 NaN 9 1.0 Name: B, dtype: float64 Using apply, you can shorten this to a a single call. df.groupby('A').B.apply(lambda x: x.cumsum().shift()) 0 NaN 1 12.0 2 NaN 3 NaN 4 4.0 5 9.0 6 NaN 7 3.0 8 NaN 9 1.0 Name: B, dtype: float64 It is very hard to quantify the performance because it depends on the data. But in general, apply is an acceptable solution if the goal is to reduce a groupby call (because groupby is also quite expensive). Other Caveats Aside from the caveats mentioned above, it is also worth mentioning that apply operates on the first row (or column) twice. This is done to determine whether the function has any side effects. If not, apply may be able to use a fast-path for evaluating the result, else it falls back to a slow implementation. df = pd.DataFrame({ 'A': [1, 2], 'B': ['x', 'y'] }) def func(x): print(x['A']) return x df.apply(func, axis=1) # 1 # 1 # 2 A B 0 1 x 1 2 y This behaviour is also seen in GroupBy.apply on pandas versions <0.25 (it was fixed for 0.25, see here for more information.)"} +{"question_id": 11548005, "score": 244, "creation_date": 1342636202, "tags": ["python", "numpy", "int", "pandas", "type-conversion"], "instruction": "NumPy or Pandas: Keeping array type as integer while having a NaN value\n\nIs there a preferred way to keep the data type of a numpy array fixed as int (or int64 or whatever), while still having an element inside listed as numpy.NaN? In particular, I am converting an in-house data structure to a Pandas DataFrame. In our structure, we have integer-type columns that still have NaN's (but the dtype of the column is int). It seems to recast everything as a float if we make this a DataFrame, but we'd really like to be int. Thoughts? Things tried: I tried using the from_records() function under pandas.DataFrame, with coerce_float=False and this did not help. I also tried using NumPy masked arrays, with NaN fill_value, which also did not work. All of these caused the column data type to become a float.", "output": "This capability has been added to pandas beginning with version 0.24. 
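A minimal sketch of using it (assuming pandas >= 0.24; the values are made up):
>>> import pandas as pd
>>> s = pd.Series([1, 2, None], dtype='Int64')   # missing value is kept, dtype stays integer
>>> s.dtype
Int64Dtype()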
At this point, it requires the use of extension dtype 'Int64' (capitalized), rather than the default dtype 'int64' (lowercase)."} +{"question_id": 9724906, "score": 243, "creation_date": 1331832120, "tags": ["python", "date", "time"], "instruction": "python date of the previous month\n\nI am trying to get the date of the previous month with python. Here is what i've tried: str( time.strftime('%Y') ) + str( int(time.strftime('%m'))-1 ) However, this way is bad for 2 reasons: First it returns 20122 for the February of 2012 (instead of 201202) and secondly it will return 0 instead of 12 on January. I have solved this trouble in bash with echo $(date -d\"3 month ago\" \"+%G%m%d\") I think that if bash has a built-in way for this purpose, then python, much more equipped, should provide something better than forcing writing one's own script to achieve this goal. Of course i could do something like: if int(time.strftime('%m')) == 1: return '12' else: if int(time.strftime('%m')) < 10: return '0'+str(time.strftime('%m')-1) else: return str(time.strftime('%m') -1) I have not tested this code and i don't want to use it anyway (unless I can't find any other way:/) Thanks for your help!", "output": "datetime and the datetime.timedelta classes are your friend. find today. use that to find the first day of this month. use timedelta to backup a single day, to the last day of the previous month. print the YYYYMM string you're looking for. Like this: import datetime today = datetime.date.today() first = today.replace(day=1) last_month = first - datetime.timedelta(days=1) print(last_month.strftime(\"%Y%m\")) 201202 is printed."} +{"question_id": 38559755, "score": 243, "creation_date": 1469421038, "tags": ["python", "gpu", "tensorflow"], "instruction": "How to get current available GPUs in tensorflow?\n\nI have a plan to use distributed TensorFlow, and I saw TensorFlow can use GPUs for training and testing. In a cluster environment, each machine could have 0 or 1 or more GPUs, and I want to run my TensorFlow graph into GPUs on as many machines as possible. I found that when running tf.Session() TensorFlow gives information about GPU in the log messages like below: I tensorflow/core/common_runtime/gpu/gpu_init.cc:126] DMA: 0 I tensorflow/core/common_runtime/gpu/gpu_init.cc:136] 0: Y I tensorflow/core/common_runtime/gpu/gpu_device.cc:838] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1080, pci bus id: 0000:01:00.0) My question is how do I get information about current available GPU from TensorFlow? I can get loaded GPU information from the log, but I want to do it in a more sophisticated, programmatic way. I also could restrict GPUs intentionally using the CUDA_VISIBLE_DEVICES environment variable, so I don't want to know a way of getting GPU information from OS kernel. In short, I want a function like tf.get_available_gpus() that will return ['/gpu:0', '/gpu:1'] if there are two GPUs available in the machine. 
How can I implement this?", "output": "Since TensorFlow 2.1, you can use tf.config.list_physical_devices('GPU'): import tensorflow as tf gpus = tf.config.list_physical_devices('GPU') for gpu in gpus: print(\"Name:\", gpu.name, \" Type:\", gpu.device_type) If you have two GPUs installed, it outputs this: Name: /physical_device:GPU:0 Type: GPU Name: /physical_device:GPU:1 Type: GPU In TF 2.0, you must add experimental: gpus = tf.config.experimental.list_physical_devices('GPU') See: Guide pages Current API"} +{"question_id": 10919569, "score": 243, "creation_date": 1339005546, "tags": ["python", "pip"], "instruction": "Install a module using pip for specific python version\n\nOn Ubuntu 10.04 by default Python 2.6 is installed, then I have installed Python 2.7. How can I use pip install to install packages for Python 2.7. For example: pip install beautifulsoup4 by default installs BeautifulSoup for Python 2.6 When I do: import bs4 in Python 2.6 it works, but in Python 2.7 it says: No module named bs4", "output": "Use a version of pip installed against the Python instance you want to install new packages to. In many distributions, there may be separate python2.6-pip and python2.7-pip packages, invoked with binary names such as pip-2.6 and pip-2.7. If pip is not packaged in your distribution for the desired target, you might look for a setuptools or easyinstall package, or use virtualenv (which will always include pip in a generated environment). pip's website includes installation instructions, if you can't find anything within your distribution. Update: In the modern (2020s) world, python -m pip is the right way to do this, where python can be replaced with whichever interpreter you choose."} +{"question_id": 2265357, "score": 243, "creation_date": 1266230965, "tags": ["python", "date", "datetime", "strptime", "strftime"], "instruction": "Parse date string and change format\n\nI have a date string with the format 'Mon Feb 15 2010'. I want to change the format to '15/02/2010'. How can I do this?", "output": "The datetime module could help you with that: datetime.datetime.strptime(input_date_string, input_format).strftime(output_format) For the specific example, you could do: >>> from datetime import datetime >>> datetime.strptime('Mon Feb 15 2010', '%a %b %d %Y').strftime('%d/%m/%Y') '15/02/2010' Learn more about different formats here."} +{"question_id": 14029245, "score": 242, "creation_date": 1356426553, "tags": ["python", "if-statement", "syntax"], "instruction": "Putting an if-elif-else statement on one line?\n\nIs there an easier way of writing an if-elif-else statement so it fits on one line? For example, if expression1: statement1 elif expression2: statement2 else: statement3 Or a real-world example: if i > 100: x = 2 elif i < 100: x = 1 else: x = 0 I just feel if the example above could be written the following way, it could look like more concise. x = 2 if i>100 elif i<100 1 else 0 # [WRONG] I have read the link below, but it doesn't address my question. Does Python have a ternary conditional operator? (the question is about condensing an if-else statement to one line)", "output": "No, it's not possible (at least not with arbitrary statements), nor is it desirable. Fitting everything on one line would most likely violate PEP-8 where it is mandated that lines should not exceed 80 characters in length. It's also against the Zen of Python: \"Readability counts\". (Type import this at the Python prompt to read the whole thing). 
You can use a ternary expression in Python, but only for expressions, not for statements: >>> a = \"Hello\" if foo() else \"Goodbye\" Edit: Your revised question now shows that the three statements are identical except for the value being assigned. In that case, a chained ternary operator does work, but I still think that it's less readable: >>> i = 100 >>> x = 2 if i>100 else 1 if i<100 else 0 >>> x 0 >>> i = 101 >>> x = 2 if i>100 else 1 if i<100 else 0 >>> x 2 >>> i = 99 >>> x = 2 if i>100 else 1 if i<100 else 0 >>> x 1"} +{"question_id": 7204805, "score": 242, "creation_date": 1314362670, "tags": ["python", "dictionary", "merge", "array-merge"], "instruction": "Deep merge dictionaries of dictionaries in Python\n\nI need to merge multiple dictionaries, here's what I have for instance: dict1 = {1:{\"a\":{\"A\"}}, 2:{\"b\":{\"B\"}}} dict2 = {2:{\"c\":{\"C\"}}, 3:{\"d\":{\"D\"}}} With A B C and D being leaves of the tree, like {\"info1\":\"value\", \"info2\":\"value2\"} There is an unknown level(depth) of dictionaries, it could be {2:{\"c\":{\"z\":{\"y\":{C}}}}} In my case it represents a directory/files structure with nodes being docs and leaves being files. I want to merge them to obtain: dict3 = {1:{\"a\":{\"A\"}}, 2:{\"b\":{\"B\"},\"c\":{\"C\"}}, 3:{\"d\":{\"D\"}}} I'm not sure how I could do that easily with Python.", "output": "This is actually quite tricky - particularly if you want a useful error message when things are inconsistent, while correctly accepting duplicate but consistent entries (something no other answer here does..) Assuming you don't have huge numbers of entries, a recursive function is easiest: def merge(a: dict, b: dict, path=[]): for key in b: if key in a: if isinstance(a[key], dict) and isinstance(b[key], dict): merge(a[key], b[key], path + [str(key)]) elif a[key] != b[key]: raise Exception('Conflict at ' + '.'.join(path + [str(key)])) else: a[key] = b[key] return a # works print(merge({1:{\"a\":\"A\"},2:{\"b\":\"B\"}}, {2:{\"c\":\"C\"},3:{\"d\":\"D\"}})) # has conflict merge({1:{\"a\":\"A\"},2:{\"b\":\"B\"}}, {1:{\"a\":\"A\"},2:{\"b\":\"C\"}}) note that this mutates a - the contents of b are added to a (which is also returned). If you want to keep a you could call it like merge(dict(a), b). agf pointed out (below) that you may have more than two dicts, in which case you can use: from functools import reduce reduce(merge, [dict1, dict2, dict3...]) where everything will be added to dict1. Note: I edited my initial answer to mutate the first argument; that makes the \"reduce\" easier to explain"} +{"question_id": 3346230, "score": 242, "creation_date": 1280251842, "tags": ["python", "string"], "instruction": "Wrap long lines in Python\n\nHow do I wrap long lines in Python without sacrificing indentation? For example: def fun(): print '{0} Here is a really long sentence with {1}'.format(3, 5) Suppose this goes over the 79 character recommended limit. The way I read it, here is how to indent it: def fun(): print '{0} Here is a really long \\ sentence with {1}'.format(3, 5) However, with this approach, the indentation of the continued line matches the indentation of the fun(). This looks kinda ugly. If someone was to go through my code, it would look bad to have uneven indentation because of this print statement. How do I indent lines like this effectively without sacrificing code readability?", "output": "def fun(): print(('{0} Here is a really long ' 'sentence with {1}').format(3, 5)) Adjacent string literals are concatenated at compile time, just as in C. 2.4.2. 
String literal concatenation is a good place to start for more information."} +{"question_id": 2158347, "score": 241, "creation_date": 1264716507, "tags": ["python", "datetime", "string-formatting"], "instruction": "How do I turn a python datetime into a string, with readable format date?\n\nt = e['updated_parsed'] dt = datetime.datetime(t[0],t[1],t[2],t[3],t[4],t[5],t[6] print dt >>>2010-01-28 08:39:49.000003 How do I turn that into a string?: \"January 28, 2010\"", "output": "The datetime class has a method strftime. The Python docs documents the different formats it accepts: strftime() and strptime() Behavior For this specific example, it would look something like: my_datetime.strftime(\"%B %d, %Y\")"} +{"question_id": 51433378, "score": 241, "creation_date": 1532045400, "tags": ["python", "machine-learning", "deep-learning", "pytorch"], "instruction": "What does model.train() do in PyTorch?\n\nDoes it call forward() in nn.Module? I thought when we call the model, forward method is being used. Why do we need to specify train()?", "output": "model.train() tells your model that you are training the model. This helps inform layers such as Dropout and BatchNorm, which are designed to behave differently during training and evaluation. For instance, in training mode, BatchNorm updates a moving average on each new batch; whereas, for evaluation mode, these updates are frozen. More details: model.train() sets the mode to train (see source code). You can call either model.eval() or model.train(mode=False) to tell that you are testing. It is somewhat intuitive to expect train function to train model but it does not do that. It just sets the mode."} +{"question_id": 41286569, "score": 241, "creation_date": 1482420558, "tags": ["python", "pandas", "dataframe", "sum"], "instruction": "Get total of Pandas column\n\nI have a Pandas data frame, as shown below, with multiple columns and would like to get the total of column, MyColumn. X MyColumn Y Z 0 A 84 13.0 69.0 1 B 76 77.0 127.0 2 C 28 69.0 16.0 3 D 28 28.0 31.0 4 E 19 20.0 85.0 5 F 84 193.0 70.0 Expected Output I'd have expected the output to be the total of this column: 319. 
Or alternatively, I would like df to be edited with a new row entitled TOTAL containing the total: X MyColumn Y Z 0 A 84 13.0 69.0 1 B 76 77.0 127.0 2 C 28 69.0 16.0 3 D 28 28.0 31.0 4 E 19 20.0 85.0 5 F 84 193.0 70.0 TOTAL 319 I have attempted to get the sum of the column using groupby and .sum(): Total = df.groupby['MyColumn'].sum() This causes the following error: TypeError: 'instancemethod' object has no attribute '__getitem__'", "output": "You should use sum: Total = df['MyColumn'].sum() print(Total) 319 Then you use loc with Series, in that case the index should be set as the same as the specific column you need to sum: df.loc['Total'] = pd.Series(df['MyColumn'].sum(), index=['MyColumn']) print(df) X MyColumn Y Z 0 A 84.0 13.0 69.0 1 B 76.0 77.0 127.0 2 C 28.0 69.0 16.0 3 D 28.0 28.0 31.0 4 E 19.0 20.0 85.0 5 F 84.0 193.0 70.0 Total NaN 319.0 NaN NaN because if you pass scalar, the values of all rows will be filled: df.loc['Total'] = df['MyColumn'].sum() print(df) X MyColumn Y Z 0 A 84 13.0 69.0 1 B 76 77.0 127.0 2 C 28 69.0 16.0 3 D 28 28.0 31.0 4 E 19 20.0 85.0 5 F 84 193.0 70.0 Total 319 319 319.0 319.0 Two other solutions are with at, and ix see the applications below: df.at['Total', 'MyColumn'] = df['MyColumn'].sum() print(df) X MyColumn Y Z 0 A 84.0 13.0 69.0 1 B 76.0 77.0 127.0 2 C 28.0 69.0 16.0 3 D 28.0 28.0 31.0 4 E 19.0 20.0 85.0 5 F 84.0 193.0 70.0 Total NaN 319.0 NaN NaN df.ix['Total', 'MyColumn'] = df['MyColumn'].sum() print(df) X MyColumn Y Z 0 A 84.0 13.0 69.0 1 B 76.0 77.0 127.0 2 C 28.0 69.0 16.0 3 D 28.0 28.0 31.0 4 E 19.0 20.0 85.0 5 F 84.0 193.0 70.0 Total NaN 319.0 NaN NaN Note: Since Pandas v0.20, ix has been deprecated. Use loc or iloc instead."} +{"question_id": 40353079, "score": 241, "creation_date": 1477965018, "tags": ["python", "pandas", "dataframe"], "instruction": "How to check dtype for all columns in a Pandas dataframe?\n\nIt seems that dtype only works for Series, right? Is there a function to display data types of all columns at once?", "output": "The singular form dtype is used to check the data type for a single column while the plural form dtypes is for data frame which returns data types for all columns. Essentially: For a single column: dataframe[column].dtype For all columns: dataframe.dtypes Example: import pandas as pd df = pd.DataFrame({ 'A': [1, 2, 3], 'B': [True, False, False], 'C': ['a', 'b', 'c']}) df['A'].dtype # dtype('int64') df['B'].dtype # dtype('bool') df['C'].dtype # dtype('O') df.dtypes # A int64 # B bool # C object # dtype: object"} +{"question_id": 5008828, "score": 241, "creation_date": 1297799884, "tags": ["python", "reflection"], "instruction": "Convert a python 'type' object to a string\n\nI'm wondering how to convert a python 'type' object into a string using python's reflective capabilities. For example, I'd like to print the type of an object print(\"My type is \" + type(some_object)) # (which obviously doesn't work like this)", "output": "print(type(some_object).__name__) If that doesn't suit you, use this: print(some_instance.__class__.__name__) Example: class A: pass print(type(A())) # prints print(A().__class__.__name__) # prints A Also, it seems there are differences with type() when using new-style classes vs old-style (that is, inheritance from object). 
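A sketch of that difference under Python 2 semantics (class names are illustrative):
class Old: pass             # old-style (Python 2 only)
class New(object): pass     # new-style

type(Old()).__name__        # 'instance'
type(New()).__name__        # 'New'
Old().__class__.__name__    # 'Old' -- __class__ gives the class name for both styles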
For a new-style class, type(someObject).__name__ returns the name, and for old-style classes it returns instance."} +{"question_id": 8703496, "score": 240, "creation_date": 1325524630, "tags": ["python", "hashmap"], "instruction": "Hash Map in Python\n\nI want to implement a HashMap in Python. I want to ask a user for an input. depending on his input I am retrieving some information from the HashMap. If the user enters a key of the HashMap, I would like to retrieve the corresponding value. How do I implement this functionality in Python? HashMap streetno=new HashMap(); streetno.put(\"1\", \"Sachin Tendulkar\"); streetno.put(\"2\", \"Dravid\"); streetno.put(\"3\",\"Sehwag\"); streetno.put(\"4\",\"Laxman\"); streetno.put(\"5\",\"Kohli\")", "output": "Python dictionary is a built-in type that supports key-value pairs. It's the nearest builtin data structure relative to Java's HashMap. You can declare a dict with key-value pairs set to values: streetno = { \"1\": \"Sachin Tendulkar\", \"2\": \"Dravid\", \"3\": \"Sehwag\", \"4\": \"Laxman\", \"5\": \"Kohli\" } You can also set a key-value mapping after creation: streetno = {} streetno[\"1\"] = \"Sachin Tendulkar\" print(streetno[\"1\"]) # => \"Sachin Tendulkar\" Another way to create a dictionary is with the dict() builtin function, but this only works when your keys are valid identifiers: streetno = dict(one=\"Sachin Tendulkar\", two=\"Dravid\") print(streetno[\"one\"]) # => \"Sachin Tendulkar\""} +{"question_id": 7507825, "score": 240, "creation_date": 1316646697, "tags": ["python", "logging", "python-logging"], "instruction": "Where is a complete example of logging.config.dictConfig?\n\nHow do I use dictConfig? How should I specify its input config dictionary?", "output": "How about here! The corresponding documentation reference is configuration-dictionary-schema. LOGGING_CONFIG = { 'version': 1, 'disable_existing_loggers': True, 'formatters': { 'standard': { 'format': '%(asctime)s [%(levelname)s] %(name)s: %(message)s' }, }, 'handlers': { 'default': { 'level': 'INFO', 'formatter': 'standard', 'class': 'logging.StreamHandler', 'stream': 'ext://sys.stdout', # Default is stderr }, }, 'loggers': { '': { # root logger 'handlers': ['default'], 'level': 'WARNING', 'propagate': False }, 'my.packg': { 'handlers': ['default'], 'level': 'INFO', 'propagate': False }, '__main__': { # if __name__ == '__main__' 'handlers': ['default'], 'level': 'DEBUG', 'propagate': False }, } } Usage: import logging.config # Run once at startup: logging.config.dictConfig(LOGGING_CONFIG) # Include in each module: log = logging.getLogger(__name__) log.debug(\"Logging is configured.\") In case you see too many logs from third-party packages, be sure to run this config using logging.config.dictConfig(LOGGING_CONFIG) before the third-party packages are imported. To add additional custom info to each log message using a logging filter, consider this answer."} +{"question_id": 15285626, "score": 240, "creation_date": 1362709330, "tags": ["python", "django", "django-orm"], "instruction": "Django self-referential foreign key\n\nI'm kind of new to webapps and database stuff in general so this might be a dumb question. I want to make a model (\"CategoryModel\") with a field that points to the primary id of another instance of the model (its parent). class CategoryModel(models.Model): parent = models.ForeignKey(CategoryModel) How do I do this? Thanks!", "output": "You can pass in the name of a model as a string to ForeignKey and it will do the right thing. 
So: parent = models.ForeignKey(\"CategoryModel\") Or you can use the string \"self\" parent = models.ForeignKey(\"self\")"} +{"question_id": 18966564, "score": 240, "creation_date": 1379961973, "tags": ["python", "pip"], "instruction": "Pip freeze vs. pip list\n\nWhy does pip list generate a more comprehensive list than pip freeze? $ pip list feedparser (5.1.3) pip (1.4.1) setuptools (1.1.5) wsgiref (0.1.2) $ pip freeze feedparser==5.1.3 wsgiref==0.1.2 Pip's documentation states: freeze Output installed packages in requirements format. list List installed packages. What is a \"requirements format\"?", "output": "One may generate a requirements.txt via: pip freeze > requirements.txt A user can use this requirements.txt file to install all the dependencies. For instance: pip install -r requirements.txt The packages need to be in a specific format for pip to understand, such as: # requirements.txt feedparser==5.1.3 wsgiref==0.1.2 django==1.4.2 ... That is the \"requirements format\". Here, django==1.4.2 implies install django version 1.4.2 (even though the latest is 1.6.x). If you do not specify ==1.4.2, the latest version available would be installed. You can read more in \"Virtualenv and pip Basics\", and the official \"Requirements File Format\" documentation."} +{"question_id": 13795758, "score": 240, "creation_date": 1355118980, "tags": ["python", "python-3.x"], "instruction": "What is sys.maxint in Python 3?\n\nI've been trying to find out how to represent a maximum integer, and I've read to use \"sys.maxint\". However, in Python 3 when I call it I get: AttributeError: module 'object' has no attribute 'maxint'", "output": "The sys.maxint constant was removed, since there is no longer a limit to the value of integers. However, sys.maxsize can be used as an integer larger than any practical list or string index. It conforms to the implementation\u2019s \u201cnatural\u201d integer size and is typically the same as sys.maxint in previous releases on the same platform (assuming the same build options). https://docs.python.org/3/whatsnew/3.0.html#integers"} +{"question_id": 39857289, "score": 240, "creation_date": 1475597998, "tags": ["python", "anaconda", "conda"], "instruction": "Should conda, or conda-forge be used for Python environments?\n\nConda and conda-forge are both Python package managers. What is the appropriate choice when a package exists in both repositories? Django, for example, can be installed with either, but the difference between the two is several dependencies (conda-forge has many more). There is no explanation for these differences, not even a simple README. Which one should be used? Conda or conda-forge? Does it matter?", "output": "The short answer is that, in my experience generally, it doesn't matter which you use, with one exception. If you work for a company with more than 200 employees then the default conda channel is not free as of 2020. The long answer: So conda-forge is an additional channel from which packages may be installed. In this sense, it is not any more special than the default channel, or any of the other hundreds (thousands?) of channels that people have posted packages to. You can add your own channel if you sign up at https://anaconda.org and upload your own Conda packages. Here we need to make the distinction, which I think you're not clear about from your phrasing in the question, between conda, the cross-platform package manager, and conda-forge, the package channel. Anaconda Inc. 
(formerly Continuum IO), the main developers of the conda software, also maintain a separate channel of packages, which is the default when you type conda install packagename without changing any options. There are three ways to change the options for channels. The first two are done every time you install a package and the last one is persistent. The first one is to specify a channel every time you install a package: conda install -c some-channel packagename Of course, the package has to exist on that channel. This way will install packagename and all its dependencies from some-channel. Alternately, you can specify: conda install some-channel::packagename The package still has to exist on some-channel, but now, only packagename will be pulled from some-channel. Any other packages that are needed to satisfy dependencies will be searched for from your default list of channels. To see your channel configuration, you can write: conda config --show channels You can control the order that channels are searched with conda config. You can write: conda config --add channels some-channel to add the channel some-channel to the top of the channels configuration list. This gives some-channel the highest priority. Priority determines (in part) which channel is selected when more than one channel has a particular package. To add the channel to the end of the list and give it the lowest priority, type conda config --append channels some-channel If you would like to remove the channel that you added, you can do so by writing conda config --remove channels some-channel See conda config -h for more options. With all of that said, there are five main reasons to use the conda-forge channel instead of the defaults channel maintained by Anaconda: Packages on conda-forge may be more up-to-date than those on the defaults channel There are packages on the conda-forge channel that aren't available from defaults You would prefer to use a dependency such as openblas (from conda-forge) instead of mkl (from defaults). If you are installing a package that requires a compiled library (e.g., a C extension or a wrapper around a C library), it may reduce the chance of incompatibilities if you install all of the packages in an environment from a single channel due to binary compatibility of the base C library (but this advice may be out of date/change in the future). For reference, see the Conda Forge post on mixing channels. conda-forge is free to use even in large companies, while the default conda channel is not. See here."} +{"question_id": 10897339, "score": 239, "creation_date": 1338899419, "tags": ["python", "list"], "instruction": "Fetch first 10 results from a list in Python\n\nIs there a way we can fetch first 10 results from a list. Something like this maybe? list = [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20] list.fetch(10)", "output": "list[:10] will give you the first 10 elements of this list using slicing. However, note, it's best not to use list as a variable identifier as it's already used by Python: list() To find out more about this type of operation, you might find this tutorial on lists helpful and this link: Understanding slicing"} +{"question_id": 2703599, "score": 239, "creation_date": 1272093590, "tags": ["python", "dictionary", "data-structures", "immutability"], "instruction": "What would a \"frozen dict\" be?\n\nA frozen set is a frozenset. A frozen list could be a tuple. What would a frozen dict be? An immutable, hashable dict. 
I guess it could be something like collections.namedtuple, but that is more like a frozen-keys dict (a half-frozen dict). Isn't it? A \"frozendict\" should be a frozen dictionary, it should have keys, values, get, etc., and support in, for, etc. update : * there it is : https://www.python.org/dev/peps/pep-0603", "output": "Python doesn't have a builtin frozendict type. It turns out this wouldn't be useful too often (though it would still probably be useful more often than frozenset is). The most common reason to want such a type is when memoizing function calls for functions with unknown arguments. The most common solution to store a hashable equivalent of a dict (where the values are hashable) is something like tuple(sorted(kwargs.items())). This depends on the sorting not being a bit insane. Python cannot positively promise sorting will result in something reasonable here. (But it can't promise much else, so don't sweat it too much.) You could easily enough make some sort of wrapper that works much like a dict. It might look something like (In Python 3.10 and later, replace collections.Mapping with collections.abc.Mapping): import collections class FrozenDict(collections.Mapping): \"\"\"Don't forget the docstrings!!\"\"\" def __init__(self, *args, **kwargs): self._d = dict(*args, **kwargs) self._hash = None def __iter__(self): return iter(self._d) def __len__(self): return len(self._d) def __getitem__(self, key): return self._d[key] def __hash__(self): # It would have been simpler and maybe more obvious to # use hash(tuple(sorted(self._d.iteritems()))) from this discussion # so far, but this solution is O(n). I don't know what kind of # n we are going to run into, but sometimes it's hard to resist the # urge to optimize when it will gain improved algorithmic performance. if self._hash is None: hash_ = 0 for pair in self.items(): hash_ ^= hash(pair) self._hash = hash_ return self._hash This should work great: >>> x = FrozenDict(a=1, b=2) >>> y = FrozenDict(a=1, b=2) >>> x is y False >>> x == y True >>> x == {'a': 1, 'b': 2} True >>> d = {x: 'foo'} >>> d[x] 'foo'"} +{"question_id": 11784329, "score": 239, "creation_date": 1343935162, "tags": ["python", "numpy", "sys"], "instruction": "Python memory usage of numpy arrays\n\nI'm using python to analyse some large files and I'm running into memory issues, so I've been using sys.getsizeof() to try and keep track of the usage, but it's behaviour with numpy arrays is bizarre. Here's an example involving a map of albedos that I'm having to open: >>> import numpy as np >>> import struct >>> from sys import getsizeof >>> f = open('Albedo_map.assoc', 'rb') >>> getsizeof(f) 144 >>> albedo = struct.unpack('%df' % (7200*3600), f.read(7200*3600*4)) >>> getsizeof(albedo) 207360056 >>> albedo = np.array(albedo).reshape(3600,7200) >>> getsizeof(albedo) 80 Well the data's still there, but the size of the object, a 3600x7200 pixel map, has gone from ~200 Mb to 80 bytes. I'd like to hope that my memory issues are over and just convert everything to numpy arrays, but I feel that this behaviour, if true, would in some way violate some law of information theory or thermodynamics, or something, so I'm inclined to believe that getsizeof() doesn't work with numpy arrays. 
Any ideas?", "output": "You can use array.nbytes for numpy arrays, for example: import numpy as np from sys import getsizeof a = [0] * 1024 b = np.array(a) print(getsizeof(a)) print(b.nbytes) Output: 8264 8192"} +{"question_id": 3431676, "score": 239, "creation_date": 1281207873, "tags": ["python", "lambda", "closures"], "instruction": "Creating functions (or lambdas) in a loop (or comprehension)\n\nI'm trying to create functions inside of a loop: functions = [] for i in range(3): def f(): return i functions.append(f) Alternatively, with lambda: functions = [] for i in range(3): functions.append(lambda: i) The problem is that all functions end up being the same. Instead of returning 0, 1, and 2, all three functions return 2: print([f() for f in functions]) Expected output: [0, 1, 2] Actual output: [2, 2, 2] Why is this happening, and what should I do to get 3 different functions that output 0, 1, and 2 respectively?", "output": "You're running into a problem with late binding -- each function looks up i as late as possible (thus, when called after the end of the loop, i will be set to 2). Easily fixed by forcing early binding: change def f(): to def f(k=i): like this: def f(k=i): return k Default values (the i in k=i is a default value for argument name k) are looked up at def time, not at call time, so essentially they're a way to do early binding. If you're worried about f getting an extra argument (and thus potentially being called erroneously), there's a more sophisticated way which involves using a closure as a \"function factory\": def make_f(k): def f(): return k return f and in your loop use f = make_f(i) instead of the def statement. for i in range(3): f = make_f(i) functions.append(f)"} +{"question_id": 17990845, "score": 238, "creation_date": 1375351108, "tags": ["python", "matplotlib", "axis", "aspect-ratio"], "instruction": "How do I equalize the scales of the x-axis and y-axis?\n\nHow do I create a plot where the scales of x-axis and y-axis are the same? This equal ratio should be maintained even if I change the window size. Currently, my graph scales together with the window size. I tried: plt.xlim(-3, 3) plt.ylim(-3, 3) plt.axis('equal')", "output": "Use Axes.set_aspect in the following manner: from matplotlib import pyplot as plt plt.plot(range(5)) plt.xlim(-3, 3) plt.ylim(-3, 3) ax = plt.gca() ax.set_aspect('equal', adjustable='box') plt.draw()"} +{"question_id": 16870663, "score": 237, "creation_date": 1370074780, "tags": ["python", "date"], "instruction": "How do I validate a date string format in python?\n\nI have a python method which accepts a date input as a string. How do I add a validation to make sure the date string being passed to the method is in the ffg. format: 'YYYY-MM-DD' if it's not, method should raise some sort of error", "output": ">>> import datetime >>> def validate(date_text): try: datetime.date.fromisoformat(date_text) except ValueError: raise ValueError(\"Incorrect data format, should be YYYY-MM-DD\") >>> validate('2003-12-23') >>> validate('2003-12-32') Traceback (most recent call last): File \"\", line 1, in validate('2003-12-32') File \"\", line 5, in validate raise ValueError(\"Incorrect data format, should be YYYY-MM-DD\") ValueError: Incorrect data format, should be YYYY-MM-DD Note that datetime.date.fromisoformat() obviously works only when date is in ISO format. 
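For example, reusing the validate function defined above, a real date written in a non-ISO layout is still rejected:

>>> validate('12/23/2003')
Traceback (most recent call last):
  ...
ValueError: Incorrect data format, should be YYYY-MM-DD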
If you need to check date in some other format, use datetime.datetime.strptime()."} +{"question_id": 16888888, "score": 237, "creation_date": 1370222408, "tags": ["python", "pandas", "ipython", "jupyter-notebook", "dataframe"], "instruction": "How to read a .xlsx file using the pandas Library in iPython?\n\nI want to read a .xlsx file using the Pandas Library of python and port the data to a postgreSQL table. All I could do up until now is: import pandas as pd data = pd.ExcelFile(\"*File Name*\") Now I know that the step got executed successfully, but I want to know how i can parse the excel file that has been read so that I can understand how the data in the excel maps to the data in the variable data. I learnt that data is a Dataframe object if I'm not wrong. So How do i parse this dataframe object to extract each line row by row.", "output": "I usually create a dictionary containing a DataFrame for every sheet: xl_file = pd.ExcelFile(file_name) dfs = {sheet_name: xl_file.parse(sheet_name) for sheet_name in xl_file.sheet_names} Update: In pandas version 0.21.0+ you will get this behavior more cleanly by passing sheet_name=None to read_excel: dfs = pd.read_excel(file_name, sheet_name=None) In 0.20 and prior, this was sheetname rather than sheet_name (this is now deprecated in favor of the above): dfs = pd.read_excel(file_name, sheetname=None)"} +{"question_id": 15376509, "score": 237, "creation_date": 1363145084, "tags": ["python", "operators"], "instruction": "When is \"i += x\" different from \"i = i + x\" in Python?\n\nI was told that += can have different effects than the standard notation of i = i +. Is there a case in which i += 1 would be different from i = i + 1?", "output": "This depends entirely on the object i. += calls the __iadd__ method (if it exists -- falling back on __add__ if it doesn't exist) whereas + calls the __add__ method1 or the __radd__ method in a few cases2. From an API perspective, __iadd__ is supposed to be used for modifying mutable objects in place (returning the object which was mutated) whereas __add__ should return a new instance of something. For immutable objects, both methods return a new instance, but __iadd__ will put the new instance in the current namespace with the same name that the old instance had. This is why i = 1 i += 1 seems to increment i. In reality, you get a new integer and assign it \"on top of\" i -- losing one reference to the old integer. In this case, i += 1 is exactly the same as i = i + 1. But, with most mutable objects, it's a different story: As a concrete example: a = [1, 2, 3] b = a b += [1, 2, 3] print(a) # [1, 2, 3, 1, 2, 3] print(b) # [1, 2, 3, 1, 2, 3] compared to: a = [1, 2, 3] b = a b = b + [1, 2, 3] print(a) # [1, 2, 3] print(b) # [1, 2, 3, 1, 2, 3] notice how in the first example, since b and a reference the same object, when I use += on b, it actually changes b (and a sees that change too -- After all, it's referencing the same list). In the second case however, when I do b = b + [1, 2, 3], this takes the list that b is referencing and concatenates it with a new list [1, 2, 3]. It then stores the concatenated list in the current namespace as b -- With no regard for what b was the line before. 1In the expression x + y, if x.__add__ isn't implemented or if x.__add__(y) returns NotImplemented and x and y have different types, then x + y tries to call y.__radd__(x). 
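A tiny sketch of that fallback (both classes are invented for illustration):

class Foo(object): pass

class Bar(object):
    def __radd__(self, other):
        return 'Bar.__radd__ handled it'

Foo() + Bar()   # 'Bar.__radd__ handled it', since Foo defines no __add__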
So, in the case where you have foo_instance += bar_instance if Foo doesn't implement __add__ or __iadd__ then the result here is the same as foo_instance = bar_instance.__radd__(bar_instance, foo_instance) 2In the expression foo_instance + bar_instance, bar_instance.__radd__ will be tried before foo_instance.__add__ if the type of bar_instance is a subclass of the type of foo_instance (e.g. issubclass(Bar, Foo)). The rationale for this is that Bar is in some sense a \"higher-level\" object than Foo so Bar should get the option of overriding Foo's behavior."} +{"question_id": 4859292, "score": 236, "creation_date": 1296538625, "tags": ["python", "random", "dictionary", "key"], "instruction": "How can I get a random key-value pair from a dictionary?\n\nIn Python, given a dictionary like { 'VENEZUELA': 'CARACAS', 'CANADA': 'OTTAWA' } How can I choose a random item (key-value pair)? What if I only need the key, or only the value - can it be optimized?", "output": "Make a list of the dictionary's items, and choose randomly from that in the usual way: import random d = {'VENEZUELA':'CARACAS', 'CANADA':'OTTAWA'} country, capital = random.choice(list(d.items())) Similarly, if only a value is needed, choose directly from the values: capital = random.choice(list(d.values()))"} +{"question_id": 7828867, "score": 236, "creation_date": 1319062382, "tags": ["python", "algorithm", "list", "comparison"], "instruction": "How to efficiently compare two unordered lists (not sets)?\n\na = [1, 2, 3, 1, 2, 3] b = [3, 2, 1, 3, 2, 1] a & b should be considered equal, because they have exactly the same elements, only in different order. The thing is, my actual lists will consist of objects (my class instances), not integers.", "output": "O(n): The Counter() method is best (if your objects are hashable): def compare(s, t): return Counter(s) == Counter(t) O(n log n): The sorted() method is next best (if your objects are orderable): def compare(s, t): return sorted(s) == sorted(t) O(n * n): If the objects are neither hashable, nor orderable, you can use equality: def compare(s, t): t = list(t) # make a mutable copy try: for elem in s: t.remove(elem) except ValueError: return False return not t"} +{"question_id": 8230315, "score": 236, "creation_date": 1321979881, "tags": ["python", "json", "serialization", "set"], "instruction": "How to JSON serialize sets?\n\nI have a Python set that contains objects with __hash__ and __eq__ methods in order to make certain no duplicates are included in the collection. I need to json encode this result set, but passing even an empty set to the json.dumps method raises a TypeError. File \"/usr/lib/python2.7/json/encoder.py\", line 201, in encode chunks = self.iterencode(o, _one_shot=True) File \"/usr/lib/python2.7/json/encoder.py\", line 264, in iterencode return _iterencode(o, 0) File \"/usr/lib/python2.7/json/encoder.py\", line 178, in default raise TypeError(repr(o) + \" is not JSON serializable\") TypeError: set([]) is not JSON serializable I know I can create an extension to the json.JSONEncoder class that has a custom default method, but I'm not even sure where to begin in converting over the set. Should I create a dictionary out of the set values within the default method, and then return the encoding on that? Ideally, I'd like to make the default method able to handle all the datatypes that the original encoder chokes on (I'm using Mongo as a data source so dates seem to raise this error too) Any hint in the right direction would be appreciated. EDIT: Thanks for the answer! 
Perhaps I should have been more precise. I utilized (and upvoted) the answers here to get around the limitations of the set being translated, but there are internal keys that are an issue as well. The objects in the set are complex objects that translate to __dict__, but they themselves can also contain values for their properties that could be ineligible for the basic types in the json encoder. There's a lot of different types coming into this set, and the hash basically calculates a unique id for the entity, but in the true spirit of NoSQL there's no telling exactly what the child object contains. One object might contain a date value for starts, whereas another may have some other schema that includes no keys containing \"non-primitive\" objects. That is why the only solution I could think of was to extend the JSONEncoder to replace the default method to turn on different cases - but I'm not sure how to go about this and the documentation is ambiguous. In nested objects, does the value returned from default go by key, or is it just a generic include/discard that looks at the whole object? How does that method accommodate nested values? I've looked through previous questions and can't seem to find the best approach to case-specific encoding (which unfortunately seems like what I'm going to need to do here).", "output": "JSON notation has only a handful of native datatypes (objects, arrays, strings, numbers, booleans, and null), so anything serialized in JSON needs to be expressed as one of these types. As shown in the json module docs, this conversion can be done automatically by a JSONEncoder and JSONDecoder, but then you would be giving up some other structure you might need (if you convert sets to a list, then you lose the ability to recover regular lists; if you convert sets to a dictionary using dict.fromkeys(s) then you lose the ability to recover dictionaries). A more sophisticated solution is to build-out a custom type that can coexist with other native JSON types. This lets you store nested structures that include lists, sets, dicts, decimals, datetime objects, etc.: from json import dumps, loads, JSONEncoder, JSONDecoder import pickle class PythonObjectEncoder(JSONEncoder): def default(self, obj): try: return {'_python_object': pickle.dumps(obj).decode('latin-1')} except pickle.PickleError: return super().default(obj) def as_python_object(dct): if '_python_object' in dct: return pickle.loads(dct['_python_object'].encode('latin-1')) return dct Here is a sample session showing that it can handle lists, dicts, and sets: >>> data = [1,2,3, set(['knights', 'who', 'say', 'ni']), {'key':'value'}, Decimal('3.14')] >>> j = dumps(data, cls=PythonObjectEncoder) >>> loads(j, object_hook=as_python_object) [1, 2, 3, set(['knights', 'say', 'who', 'ni']), {'key': 'value'}, Decimal('3.14')] Alternatively, it may be useful to use a more general purpose serialization technique such as YAML, Twisted Jelly, or Python's pickle module. These each support a much greater range of datatypes."} +{"question_id": 5584586, "score": 235, "creation_date": 1302194656, "tags": ["python", "modulo", "integer-division"], "instruction": "Find the division remainder of a number\n\nHow could I go about finding the division remainder of a number in Python? For example: If the number is 26 and divided number is 7, then the division remainder is 5. (since 7+7+7=21 and 26-21=5.) 
For simple divisibility testing, see How do you check whether a number is divisible by another number?.", "output": "you are looking for the modulo operator: a % b for example: >>> 26 % 7 5 Of course, maybe they wanted you to implement it yourself, which wouldn't be too difficult either."} +{"question_id": 30228069, "score": 235, "creation_date": 1431568529, "tags": ["python", "matplotlib", "bar-chart", "plot-annotations"], "instruction": "How to display the value on horizontal bars\n\nI generated a bar plot, how can I display the value of the bar on each bar? Current plot: What I am trying to get: My code: import os import numpy as np import matplotlib.pyplot as plt x = [u'INFO', u'CUISINE', u'TYPE_OF_PLACE', u'DRINK', u'PLACE', u'MEAL_TIME', u'DISH', u'NEIGHBOURHOOD'] y = [160, 167, 137, 18, 120, 36, 155, 130] fig, ax = plt.subplots() width = 0.75 # the width of the bars ind = np.arange(len(y)) # the x locations for the groups ax.barh(ind, y, width, color=\"blue\") ax.set_yticks(ind+width/2) ax.set_yticklabels(x, minor=False) plt.title('title') plt.xlabel('x') plt.ylabel('y') #plt.show() plt.savefig(os.path.join('test.png'), dpi=300, format='png', bbox_inches='tight') # use format='svg' or 'pdf' for vectorial pictures", "output": "New in matplotlib 3.4.0 There is now a built-in Axes.bar_label helper method to auto-label bars: fig, ax = plt.subplots() bars = ax.barh(indexes, values) ax.bar_label(bars) Note that for grouped/stacked bar plots, there will multiple bar containers, which can all be accessed via ax.containers: for bars in ax.containers: ax.bar_label(bars) More details: How to add thousands separators (commas) to labels How to apply f-strings to labels How to add spacing to labels"} +{"question_id": 26595, "score": 235, "creation_date": 1219688836, "tags": ["python"], "instruction": "Is there any difference between \"foo is None\" and \"foo == None\"?\n\nIs there any difference between: if foo is None: pass and if foo == None: pass The convention that I've seen in most Python code (and the code I myself write) is the former, but I recently came across code which uses the latter. None is an instance (and the only instance, IIRC) of NoneType, so it shouldn't matter, right? Are there any circumstances in which it might?", "output": "is always returns True if it compares the same object instance, whereas == is ultimately determined by the __eq__() method. i.e. >>> class Foo: def __eq__(self, other): return True >>> f = Foo() >>> f == None True >>> f is None False"} +{"question_id": 35325042, "score": 235, "creation_date": 1455135424, "tags": ["python", "logging", "python-logging"], "instruction": "Python Logging - Disable logging from imported modules\n\nI'm using the Python logging module, and would like to disable log messages printed by the third party modules that I import. For example, I'm using something like the following: logger = logging.getLogger() logger.setLevel(level=logging.DEBUG) fh = logging.StreamHandler() fh_formatter = logging.Formatter('%(asctime)s %(levelname)s %(lineno)d:%(filename)s(%(process)d) - %(message)s') fh.setFormatter(fh_formatter) logger.addHandler(fh) This prints out my debug messages when I do a logger.debug(\"my message!\"), but it also prints out the debug messages from any module I import (such as requests, and a number of other things). I'd like to see only the log messages from modules I'm interested in. Is it possible to make the logging module do this? 
Ideally, I'd like to be able tell the logger to print messages from \"ModuleX, ModuleY\" and ignore all others. I looked at the following, but I don't want to have to disable/enable logging before every call to an imported function: logging - how to ignore imported module logs?", "output": "The problem is that calling getLogger without arguments returns the root logger so when you set the level to logging.DEBUG you are also setting the level for other modules that use that logger. You can solve this by simply not using the root logger. To do this just pass a name as argument, for example the name of your module: logger = logging.getLogger('my_module_name') # as before this will create a new logger and thus it wont inadvertently change logging level for other modules. Obviously you have to use logger.debug instead of logging.debug since the latter is a convenience function that calls the debug method of the root logger. This is mentioned in the Advanced Logging Tutorial. It also allows you to know which module triggered the log message in a simple way."} +{"question_id": 311775, "score": 235, "creation_date": 1227387401, "tags": ["python", "list", "dictionary", "initialization"], "instruction": "Create a list with initial capacity in Python\n\nCode like this often happens: l = [] while foo: # baz l.append(bar) # qux This is really slow if you're about to append thousands of elements to your list, as the list will have to be constantly resized to fit the new elements. In Java, you can create an ArrayList with an initial capacity. If you have some idea how big your list will be, this will be a lot more efficient. I understand that code like this can often be refactored into a list comprehension. If the for/while loop is very complicated, though, this is unfeasible. Is there an equivalent for us Python programmers?", "output": "Warning: This answer is contested. See comments. def doAppend( size=10000 ): result = [] for i in range(size): message= \"some unique object %d\" % ( i, ) result.append(message) return result def doAllocate( size=10000 ): result=size*[None] for i in range(size): message= \"some unique object %d\" % ( i, ) result[i]= message return result Results. (evaluate each function 144 times and average the duration) simple append 0.0102 pre-allocate 0.0098 Conclusion. It barely matters. Premature optimization is the root of all evil."} +{"question_id": 41328451, "score": 234, "creation_date": 1482740851, "tags": ["python", "ssl", "pip"], "instruction": "\"ssl module in Python is not available\" when installing package with pip3\n\nI've install Python 3.4 and Python 3.6 on my local machine successfully, but am unable to install packages with pip3. When I execute pip3 install , I get the following SSL related error: pip is configured with locations that require TLS/SSL, however the ssl module in Python is not available. Collecting Could not fetch URL https://pypi.python.org/simple//: There was a problem confirming the ssl certificate: Can't connect to HTTPS URL because the SSL module is not available. 
- skipping Could not find a version that satisfies the requirement (from versions: ) No matching distribution found for How can I fix my Python3.x install so that I can install packages with pip install ?", "output": "Step by step guide to install Python 3.6 and pip3 in Ubuntu Install the necessary packages for Python and ssl: $ sudo apt-get install build-essential libffi-dev libreadline-gplv2-dev libncursesw5-dev libssl-dev libsqlite3-dev tk-dev libgdbm-dev libc6-dev libbz2-dev Download and unzip \"Python-3.6.8.tar.xz\" from https://www.python.org/ftp/python/ into your home directory. Open terminal in that directory and run: $ ./configure Build and install: $ make && sudo make install Install packages with: $ pip3 install package_name Disclaimer: The above commands are not tested in Ubuntu 20.04 LTS."} +{"question_id": 121396, "score": 234, "creation_date": 1222180500, "tags": ["python", "object", "memory-address", "repr"], "instruction": "Accessing Object Memory Address\n\nWhen you call the object.__repr__() method in Python you get something like this back: <__main__.Test object at 0x2aba1c0cf890> Is there any way to get a hold of the memory address if you overload __repr__(), other then calling super(Class, obj).__repr__() and regexing it out?", "output": "The Python manual has this to say about id(): Return the \"identity'' of an object. This is an integer (or long integer) which is guaranteed to be unique and constant for this object during its lifetime. Two objects with non-overlapping lifetimes may have the same id() value. (CPython implementation detail: This is the address of the object in memory.) So in CPython, this will be the address of the object. No such guarantee for any other Python interpreter, though. Note that if you're writing a C extension, you have full access to the internals of the Python interpreter, including access to the addresses of objects directly."} +{"question_id": 14852821, "score": 234, "creation_date": 1360755970, "tags": ["python", "matplotlib", "axis-labels"], "instruction": "Aligning rotated xticklabels with their respective xticks\n\nCheck the x axis of the figure below. How can I move the labels a bit to the left so that they align with their respective ticks? I'm rotating the labels using: ax.set_xticks(xlabels_positions) ax.set_xticklabels(xlabels, rotation=45) But, as you can see, the rotation is centered on the middle of the text labels. Which makes it look like they are shifted to the right. I've tried using this instead: ax.set_xticklabels(xlabels, rotation=45, rotation_mode=\"anchor\") ... but it doesn't do what I wished for. And \"anchor\" seems to be the only value allowed for the rotation_mode parameter.", "output": "You can set the horizontal alignment of ticklabels, see the example below. If you imagine a rectangular box around the rotated label, which side of the rectangle do you want to be aligned with the tickpoint? Given your description, you want: ha='right' n=5 x = np.arange(n) y = np.sin(np.linspace(-3,3,n)) xlabels = ['Ticklabel %i' % i for i in range(n)] fig, axs = plt.subplots(1,3, figsize=(12,3)) ha = ['right', 'center', 'left'] for n, ax in enumerate(axs): ax.plot(x,y, 'o-') ax.set_title(ha[n]) ax.set_xticks(x) ax.set_xticklabels(xlabels, rotation=40, ha=ha[n])"} +{"question_id": 28035119, "score": 234, "creation_date": 1421709031, "tags": ["python", "django", "git"], "instruction": "Should I be adding the Django migration files in the .gitignore file?\n\nShould I be adding the Django migration files in the .gitignore file? 
I've recently been getting a lot of git issues due to migration conflicts and was wondering if I should be marking migration files as ignore. If so, how would I go about adding all of the migrations that I have in my apps, and adding them to the .gitignore file?", "output": "Quoting from the Django migrations documentation: The migration files for each app live in a \u201cmigrations\u201d directory inside of that app, and are designed to be committed to, and distributed as part of, its codebase. You should be making them once on your development machine and then running the same migrations on your colleagues\u2019 machines, your staging machines, and eventually your production machines. If you follow this process, you shouldn't be getting any merge conflicts in the migration files. When merging version control branches, you still may encounter a situation where you have multiple migrations based on the same parent migration, e.g. if two different developers introduced a migration concurrently. One way of resolving this situation is to introduce a merge_migration. Often this can be done automatically with the command ./manage.py makemigrations --merge which will introduce a new migration that depends on all current head migrations. Of course this only works when there is no conflict between the head migrations, in which case you will have to resolve the problem manually. Given that some people here suggested that you shouldn't commit your migrations to version control, I'd like to expand on the reasons why you actually should do so. First, you need a record of the migrations applied to your production systems. If you deploy changes to production and want to migrate the database, you need a description of the current state. You can create a separate backup of the migrations applied to each production database, but this seems unnecessarily cumbersome. Second, migrations often contain custom, handwritten code. It's not always possible to automatically generate them with ./manage.py makemigrations. Third, migrations should be included in code review. They are significant changes to your production system, and there are lots of things that can go wrong with them. So in short, if you care about your production data, please check your migrations into version control."} +{"question_id": 39383557, "score": 233, "creation_date": 1473314622, "tags": ["python", "apache-spark", "pyspark", "apache-spark-sql"], "instruction": "Show distinct column values in pyspark dataframe\n\nWith pyspark dataframe, how do you do the equivalent of Pandas df['col'].unique(). I want to list out all the unique values in a pyspark dataframe column. Not the SQL type way (registertemplate then SQL query for distinct values). Also I don't need groupby then countDistinct, instead I want to check distinct VALUES in that column.", "output": "Let's assume we're working with the following representation of data (two columns, k and v, where k contains three entries, two unique: +---+---+ | k| v| +---+---+ |foo| 1| |bar| 2| |foo| 3| +---+---+ With a Pandas dataframe: import pandas as pd p_df = pd.DataFrame([(\"foo\", 1), (\"bar\", 2), (\"foo\", 3)], columns=(\"k\", \"v\")) p_df['k'].unique() This returns an ndarray, i.e. array(['foo', 'bar'], dtype=object) You asked for a \"pyspark dataframe alternative for pandas df['col'].unique()\". Now, given the following Spark dataframe: s_df = sqlContext.createDataFrame([(\"foo\", 1), (\"bar\", 2), (\"foo\", 3)], ('k', 'v')) If you want the same result from Spark, i.e. 
an ndarray, use toPandas(): s_df.toPandas()['k'].unique() Alternatively, if you don't need an ndarray specifically and just want a list of the unique values of column k: s_df.select('k').distinct().rdd.map(lambda r: r[0]).collect() Finally, you can also use a list comprehension as follows: [i for i in s_df.select('k').distinct().collect()]"} +{"question_id": 5036700, "score": 233, "creation_date": 1297991640, "tags": ["python", "variables"], "instruction": "How can you dynamically create variables?\n\nI want to create variables dynamically in Python. Does anyone have any creative means of doing this?", "output": "Unless there is an overwhelming need to create a mess of variable names, I would just use a dictionary, where you can dynamically create the key names and associate a value to each. a = {} k = 0 while k < 10: # dynamically create key key = ... # calculate value value = ... a[key] = value k += 1 There are also some interesting data structures in the collections module that might be applicable."} +{"question_id": 111945, "score": 232, "creation_date": 1222027865, "tags": ["python", "http", "put"], "instruction": "Is there any way to do HTTP PUT request in Python?\n\nI need to upload some data to a server using HTTP PUT method in Python. From my brief reading of the urllib2 docs, it only does HTTP POST. Is there any way to do an HTTP PUT in Python?", "output": "I've used a variety of python HTTP libs in the past, and I've settled on requests as my favourite. Existing libs had pretty useable interfaces, but code can end up being a few lines too long for simple operations. A basic PUT in requests looks like: payload = {'username': 'bob', 'email': 'bob@bob.com'} >>> r = requests.put(\"http://somedomain.org/endpoint\", data=payload) You can then check the response status code with: r.status_code or the response with: r.content Requests has a lot synactic sugar and shortcuts that'll make your life easier."} +{"question_id": 14351048, "score": 232, "creation_date": 1358307995, "tags": ["python", "django", "django-views", "django-urls"], "instruction": "Django optional URL parameters\n\nI have a Django URL like this: url( r'^project_config/(?P\\w+)/(?P\\w+)/$', 'tool.views.ProjectConfig', name='project_config' ), views.py: def ProjectConfig(request, product, project_id=None, template_name='project.html'): ... # do stuff The problem is that I want the project_id parameter to be optional. I want /project_config/ and /project_config/12345abdce/ to be equally valid URL patterns, so that if project_id is passed, then I can use it. As it stands at the moment, I get a 404 when I access the URL without the project_id parameter.", "output": "Updated 2023 This answer is outdated but still gets activity. See @j-i-l's answer below for Django > 2 and reference to current docs. Original 2013 Answer There are several approaches. One is to use a non-capturing group in the regex: (?:/(?P[a-zA-Z]+)/)? Making a Regex Django URL Token Optional Another, easier to follow way is to have multiple rules that matches your needs, all pointing to the same view. 
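(A side note for modern Django, 2.0 and later, where path() is the usual way to declare routes and patterns() no longer exists: the same multiple-rules idea might be sketched like this, with views.foo taken from the answer and the default str converter standing in for the original regex groups:

from django.urls import path

urlpatterns = [
    path('project_config/', views.foo),
    path('project_config/<product>/', views.foo),
    path('project_config/<product>/<project_id>/', views.foo),
]

The original pre-2.0 version from the answer follows.)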
urlpatterns = patterns('', url(r'^project_config/$', views.foo), url(r'^project_config/(?P<product>\\w+)/$', views.foo), url(r'^project_config/(?P<product>\\w+)/(?P<project_id>\\w+)/$', views.foo), ) Keep in mind that in your view you'll also need to set a default for the optional URL parameter, or you'll get an error: def foo(request, optional_parameter=''): # Your code goes here"} +{"question_id": 2266646, "score": 231, "creation_date": 1266245286, "tags": ["python", "logging", "console", "stdout", "python-logging"], "instruction": "How to disable logging on the standard error stream?\n\nHow to disable logging on the standard error stream in Python? This does not work: import logging logger = logging.getLogger() logger.removeHandler(sys.stderr) logger.warning('foobar') # emits 'foobar' on sys.stderr", "output": "I found a solution for this: logger = logging.getLogger('my-logger') logger.propagate = False # now if you use logger it will not log to console. This will prevent logging from being sent to the upper logger that includes the console logging."} +{"question_id": 2104782, "score": 231, "creation_date": 1264020010, "tags": ["python"], "instruction": "Returning the product of a list\n\nIs there a more concise, efficient or simply pythonic way to do the following? def product(lst): p = 1 for i in lst: p *= i return p After some tests I have found out that this is marginally faster than using operator.mul: from operator import mul # from functools import reduce # python3 compatibility def with_lambda(lst): reduce(lambda x, y: x * y, lst) def without_lambda(lst): reduce(mul, lst) def forloop(lst): r = 1 for x in lst: r *= x return r import timeit a = range(50) b = range(1,50)#no zero t = timeit.Timer(\"with_lambda(a)\", \"from __main__ import with_lambda,a\") print(\"with lambda:\", t.timeit()) t = timeit.Timer(\"without_lambda(a)\", \"from __main__ import without_lambda,a\") print(\"without lambda:\", t.timeit()) t = timeit.Timer(\"forloop(a)\", \"from __main__ import forloop,a\") print(\"for loop:\", t.timeit()) t = timeit.Timer(\"with_lambda(b)\", \"from __main__ import with_lambda,b\") print(\"with lambda (no 0):\", t.timeit()) t = timeit.Timer(\"without_lambda(b)\", \"from __main__ import without_lambda,b\") print(\"without lambda (no 0):\", t.timeit()) t = timeit.Timer(\"forloop(b)\", \"from __main__ import forloop,b\") print(\"for loop (no 0):\", t.timeit()) gives me ('with lambda:', 17.755449056625366) ('without lambda:', 8.2084708213806152) ('for loop:', 7.4836349487304688) ('with lambda (no 0):', 22.570688009262085) ('without lambda (no 0):', 12.472226858139038) ('for loop (no 0):', 11.04065990447998)", "output": "Without using lambda: from operator import mul # from functools import reduce # python3 compatibility reduce(mul, list, 1) it is better and faster. 
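As an aside that postdates the timings below: on Python 3.8+ the standard library also provides math.prod, which is arguably the most direct spelling today:

from math import prod

prod([1, 2, 3, 4])   # 24
prod([])             # 1 (the default start value)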
With python 2.7.5 from operator import mul import numpy as np import numexpr as ne # from functools import reduce # python3 compatibility a = range(1, 101) %timeit reduce(lambda x, y: x * y, a) # (1) %timeit reduce(mul, a) # (2) %timeit np.prod(a) # (3) %timeit ne.evaluate(\"prod(a)\") # (4) In the following configuration: a = range(1, 101) # A a = np.array(a) # B a = np.arange(1, 1e4, dtype=int) #C a = np.arange(1, 1e5, dtype=float) #D Results with python 2.7.5 | 1 | 2 | 3 | 4 | -------+-----------+-----------+-----------+-----------+ A 20.8 \u00b5s 13.3 \u00b5s 22.6 \u00b5s 39.6 \u00b5s B 106 \u00b5s 95.3 \u00b5s 5.92 \u00b5s 26.1 \u00b5s C 4.34 ms 3.51 ms 16.7 \u00b5s 38.9 \u00b5s D 46.6 ms 38.5 ms 180 \u00b5s 216 \u00b5s Result: np.prod is the fastest one, if you use np.array as data structure (18x for small array, 250x for large array) with python 3.3.2: | 1 | 2 | 3 | 4 | -------+-----------+-----------+-----------+-----------+ A 23.6 \u00b5s 12.3 \u00b5s 68.6 \u00b5s 84.9 \u00b5s B 133 \u00b5s 107 \u00b5s 7.42 \u00b5s 27.5 \u00b5s C 4.79 ms 3.74 ms 18.6 \u00b5s 40.9 \u00b5s D 48.4 ms 36.8 ms 187 \u00b5s 214 \u00b5s Is python 3 slower?"} +{"question_id": 9752958, "score": 231, "creation_date": 1332011933, "tags": ["python", "function", "return", "return-value"], "instruction": "How can I return two values from a function in Python?\n\nI would like to return two values from a function in two separate variables. For example: def select_choice(): loop = 1 row = 0 while loop == 1: print('''Choose from the following options?: 1. Row 1 2. Row 2 3. Row 3''') row = int(input(\"Which row would you like to move the card from?: \")) if row == 1: i = 2 card = list_a[-1] elif row == 2: i = 1 card = list_b[-1] elif row == 3: i = 0 card = list_c[-1] return i return card And I want to be able to use these values separately. When I tried to use return i, card, it returns a tuple and this is not what I want.", "output": "You cannot return two values, but you can return a tuple or a list and unpack it after the call: def select_choice(): ... return i, card # or [i, card] my_i, my_card = select_choice() On line return i, card i, card means creating a tuple. You can also use parenthesis like return (i, card), but tuples are created by comma, so parens are not mandatory. But you can use parens to make your code more readable or to split the tuple over multiple lines. The same applies to line my_i, my_card = select_choice(). If you want to return more than two values, consider using a named tuple. It will allow the caller of the function to access fields of the returned value by name, which is more readable. You can still access items of the tuple by index. For example in Schema.loads method Marshmallow framework returns a UnmarshalResult which is a namedtuple. So you can do: data, errors = MySchema.loads(request.json()) if errors: ... or result = MySchema.loads(request.json()) if result.errors: ... else: # use `result.data` In other cases you may want to return a dict from your function: def select_choice(): ... return {'i': i, 'card': card, 'other_field': other_field, ...} But consider returning an instance of a utility class (or a Pydantic/dataclass model instance), which wraps your data: class ChoiceData(pydantic.BaseModel): i: int card: str other_field: typing.Any def select_choice(): ... 
return ChoiceData(i=i, card=card, other_field=other_field) choice_data = select_choice() print(choice_data.i, choice_data.card)"} +{"question_id": 1365265, "score": 231, "creation_date": 1251850034, "tags": ["python", "sockets", "ipc", "port"], "instruction": "On localhost, how do I pick a free port number?\n\nI'm trying to play with inter-process communication and since I could not figure out how to use named pipes under Windows I thought I'll use network sockets. Everything happens locally. The server is able to launch slaves in a separate process and listens on some port. The slaves do their work and submit the result to the master. How do I figure out which port is available? I assume I cannot listen on port 80 or 21? I'm using Python, if that cuts the choices down.", "output": "Do not bind to a specific port. Instead, bind to port 0: import socket sock = socket.socket() sock.bind(('', 0)) sock.getsockname()[1] The OS will then pick an available port for you. You can get the port that was chosen using sock.getsockname()[1], and pass it on to the slaves so that they can connect back. sock is the socket that you created, returned by socket.socket."} +{"question_id": 8294618, "score": 231, "creation_date": 1322476892, "tags": ["python"], "instruction": "Define a lambda expression that raises an Exception\n\nHow can I write a lambda expression that's equivalent to: def x(): raise Exception() The following is not allowed: y = lambda : raise Exception()", "output": "There is more than one way to skin a Python: y = lambda: (_ for _ in ()).throw(Exception('foobar')) Lambdas don't accept statements. Since raise ex is a statement, you could write a general purpose raiser: def raise_(ex): raise ex y = lambda: raise_(Exception('foobar')) But if your goal is to avoid a def, this obviously doesn't cut it. It does, however allow you to conditionally raise exceptions, e.g.: y = lambda x: 2*x if x < 10 else raise_(Exception('foobar')) Alternatively you can raise an exception without defining a named function. All you need is a strong stomach (and 2.x for the given code): type(lambda:0)(type((lambda:0).func_code)( 1,1,1,67,'|\\0\\0\\202\\1\\0',(),(),('x',),'','',1,''),{} )(Exception()) And a python3 strong stomach solution: type(lambda: 0)(type((lambda: 0).__code__)( 1,0,1,1,67,b'|\\0\\202\\1\\0',(),(),('x',),'','',1,b''),{} )(Exception()) Thanks @WarrenSpencer for pointing out a very simple answer if you don't care which exception is raised: y = lambda: 1/0."} +{"question_id": 50110800, "score": 231, "creation_date": 1525143152, "tags": ["python", "pathlib"], "instruction": "Python pathlib make directories if they don\u2019t exist\n\nIf I wanted to specify a path to save files to and make directories that don\u2019t exist in that path, is it possible to do this using the pathlib library in one line of code?", "output": "Yes, that is Path.mkdir: pathlib.Path('/tmp/sub1/sub2').mkdir(parents=True, exist_ok=True) From the docs: If parents is true, any missing parents of this path are created as needed; they are created with the default permissions without taking mode into account (mimicking the POSIX mkdir -p command). If parents is false (the default), a missing parent raises FileNotFoundError. If exist_ok is false (the default), FileExistsError is raised if the target directory already exists. 
If exist_ok is true, FileExistsError exceptions will be ignored (same behavior as the POSIX mkdir -p command), but only if the last path component is not an existing non-directory file."} +{"question_id": 17194301, "score": 231, "creation_date": 1371653090, "tags": ["python", "pip", "requirements.txt"], "instruction": "Is there any way to show the dependency trees for pip packages?\n\nI have a project with multiple package dependencies, the main requirements being listed in requirements.txt. When I call pip freeze it prints the currently installed packages as plain list. I would prefer to also get their dependency relationships, something like this: Flask==0.9 Jinja2==2.7 Werkzeug==0.8.3 Jinja2==2.7 Werkzeug==0.8.3 Flask-Admin==1.0.6 Flask==0.9 Jinja2==2.7 Werkzeug==0.8.3 The goal is to detect the dependencies of each specific package: Werkzeug==0.8.3 Flask==0.9 Flask-Admin==1.0.6 And insert these into my current requirements.txt. For example, for this input: Flask==0.9 Flask-Admin==1.0.6 Werkzeug==0.8.3 I would like to get: Flask==0.9 Jinja2==2.7 Flask-Admin==1.0.6 Werkzeug==0.8.3 Is there any way show the dependencies of installed pip packages?", "output": "You should take a look at pipdeptree: $ pip install pipdeptree $ pipdeptree -fl Warning!!! Cyclic dependencies found: ------------------------------------------------------------------------ xlwt==0.7.5 ruamel.ext.rtf==0.1.1 xlrd==0.9.3 openpyxl==2.0.4 - jdcal==1.0 pymongo==2.7.1 reportlab==3.1.8 - Pillow==2.5.1 - pip - setuptools It doesn't generate a requirements.txt file as you indicated directly. However the source (255 lines of python code) should be relatively easy to modify to your needs, or alternatively you can (as @MERose indicated is in the pipdeptree 0.3 README) out use: pipdeptree --freeze --warn silence | grep -P '^[\\w0-9\\-=.]+' > requirements.txt The 0.5 version of pipdeptree also allows JSON output with the --json option, that is more easily machine parseble, at the expense of being less readable."} +{"question_id": 12718187, "score": 231, "creation_date": 1349305928, "tags": ["python", "decorator", "static-methods", "python-3.9"], "instruction": "Python version <= 3.9: Calling class staticmethod within the class body?\n\nWhen I attempt to use a static method from within the body of the class, and define the static method using the built-in staticmethod function as a decorator, like this: class Klass(object): @staticmethod # use as decorator def _stat_func(): return 42 _ANS = _stat_func() # call the staticmethod def method(self): ret = Klass._stat_func() + Klass._ANS return ret I get the following error: Traceback (most recent call last): File \"call_staticmethod.py\", line 1, in <module> class Klass(object): File \"call_staticmethod.py\", line 7, in Klass _ANS = _stat_func() TypeError: 'staticmethod' object is not callable I understand why this is happening (descriptor binding), and can work around it by manually converting _stat_func() into a staticmethod after its last use, like so: class Klass(object): def _stat_func(): return 42 _ANS = _stat_func() # use the non-staticmethod version _stat_func = staticmethod(_stat_func) # convert function to a static method def method(self): ret = Klass._stat_func() + Klass._ANS return ret So my question is: Are there cleaner or more \"Pythonic\" ways to accomplish this?", "output": "update for python version >= 3.10: staticmethod functions can be called from within class scope just fine (for more info see: python issue tracker, or \"what's new\", or here) for python version <= 
3.9 continue reading staticmethod objects apparently have a __func__ attribute storing the original raw function (makes sense that they had to). So this will work: class Klass(object): @staticmethod # use as decorator def stat_func(): return 42 _ANS = stat_func.__func__() # call the staticmethod def method(self): ret = Klass.stat_func() return ret As an aside, though I suspected that a staticmethod object had some sort of attribute storing the original function, I had no idea of the specifics. In the spirit of teaching someone to fish rather than giving them a fish, this is what I did to investigate and find that out (a C&P from my Python session): >>> class Foo(object): ... @staticmethod ... def foo(): ... return 3 ... global z ... z = foo >>> z <staticmethod object at 0x0000000002E40558> >>> Foo.foo <function foo at 0x0000000002E3CBA8> >>> dir(z) ['__class__', '__delattr__', '__doc__', '__format__', '__func__', '__get__', '__getattribute__', '__hash__', '__init__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__'] >>> z.__func__ <function foo at 0x0000000002E3CBA8> Similar sorts of digging in an interactive session (dir is very helpful) can often solve these sorts of question very quickly."} +{"question_id": 44664040, "score": 231, "creation_date": 1497997951, "tags": ["python", "python-typing"], "instruction": "Type hints with user defined classes\n\nCouldn't seem to find a definitive answer. I want to do a type hint for a function and the type being some custom class that I have defined, called it CustomClass(). And then let's say in some function, call it FuncA(arg), I have one argument named arg. Would the correct way to type hint FuncA be: def FuncA(arg: CustomClass): Or would it be: from typing import Type def FuncA(Arg:Type[CustomClass]):", "output": "The former is correct, if arg accepts an instance of CustomClass: def FuncA(arg: CustomClass): # ^ instance of CustomClass In case you want the class CustomClass itself (or a subtype), then you should write: from typing import Type # you have to import Type def FuncA(arg: Type[CustomClass]): # ^ CustomClass (class object) itself Like it is written in the documentation about Typing: class typing.Type(Generic[CT_co]) A variable annotated with C may accept a value of type C. In contrast, a variable annotated with Type[C] may accept values that are classes themselves - specifically, it will accept the class object of C. The documentation includes an example with the int class: a = 3 # Has type 'int' b = int # Has type 'Type[int]' c = type(a) # Also has type 'Type[int]' Update 2024: Type is now deprecated in favour of type"} +{"question_id": 2217488, "score": 230, "creation_date": 1265562602, "tags": ["python"], "instruction": "Age from birthdate in python\n\nHow can I find an age in python from today's date and a persons birthdate? The birthdate is a from a DateField in a Django model.", "output": "That can be done much simpler considering that int(True) is 1 and int(False) is 0, and tuples comparison goes from left to right: from datetime import date def calculate_age(born): today = date.today() return today.year - born.year - ((today.month, today.day) < (born.month, born.day))"} +{"question_id": 3605680, "score": 230, "creation_date": 1283222303, "tags": ["python", "xml"], "instruction": "Creating a simple XML file using python\n\nWhat are my options if I want to create a simple XML file in python? 
(library wise) The xml I want looks like: <root> <doc> <field1 name=\"blah\">some value1</field1> <field2 name=\"asdfasd\">some vlaue2</field2> </doc> </root>", "output": "These days, the most popular (and very simple) option is the ElementTree API, which has been included in the standard library since Python 2.5. The available options for that are: ElementTree (Basic, pure-Python implementation of ElementTree. Part of the standard library since 2.5) cElementTree (Optimized C implementation of ElementTree. Also offered in the standard library since 2.5. Deprecated and folded into the regular ElementTree as an automatic thing as of 3.3.) LXML (Based on libxml2. Offers a rich superset of the ElementTree API as well XPath, CSS Selectors, and more) Here's an example of how to generate your example document using the in-stdlib cElementTree: import xml.etree.cElementTree as ET root = ET.Element(\"root\") doc = ET.SubElement(root, \"doc\") ET.SubElement(doc, \"field1\", name=\"blah\").text = \"some value1\" ET.SubElement(doc, \"field2\", name=\"asdfasd\").text = \"some vlaue2\" tree = ET.ElementTree(root) tree.write(\"filename.xml\") I've tested it and it works, but I'm assuming whitespace isn't significant. If you need \"prettyprint\" indentation, let me know and I'll look up how to do that. (It may be an LXML-specific option. I don't use the stdlib implementation much) For further reading, here are some useful links: API docs for the implementation in the Python standard library Introductory Tutorial (From the original author's site) LXML etree tutorial. (With example code for loading the best available option from all major ElementTree implementations) As a final note, either cElementTree or LXML should be fast enough for all your needs (both are optimized C code), but in the event you're in a situation where you need to squeeze out every last bit of performance, the benchmarks on the LXML site indicate that: LXML clearly wins for serializing (generating) XML As a side-effect of implementing proper parent traversal, LXML is a bit slower than cElementTree for parsing."} +{"question_id": 8447947, "score": 230, "creation_date": 1323445507, "tags": ["python", "python-nonlocal"], "instruction": "Is it possible to modify a variable in python that is in an outer (enclosing), but not global, scope?\n\nConsider this example: def A(): b = 1 def B(): # I can access 'b' from here. print(b) # But can i modify 'b' here? B() A() For the code in the B function, the variable b is in a non-global, enclosing (outer) scope. How can I modify b from within B? I get an UnboundLocalError if I try it directly, and using global does not fix the problem since b is not global. Python implements lexical, not dynamic scope - like almost all modern languages. The techniques here will not allow access to the caller's variables - unless the caller also happens to be an enclosing function - because the caller is not in scope. For more on this problem, see How can I access variables from the caller, even if it isn't an enclosing scope (i.e., implement dynamic scoping)?.", "output": "On Python 3, use the nonlocal keyword: The nonlocal statement causes the listed identifiers to refer to previously bound variables in the nearest enclosing scope excluding globals. This is important because the default behavior for binding is to search the local namespace first. The statement allows encapsulated code to rebind variables outside of the local scope besides the global (module) scope. 
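For contrast, here is a minimal sketch (names invented for illustration) of the failure mode the question describes, i.e. rebinding the enclosing variable without the keyword:

    def outer():
        b = 1
        def inner():
            b = b + 1   # the assignment makes b local to inner(), so the read fails
        inner()

    outer()   # raises UnboundLocalError

Declaring the name nonlocal inside the inner function, as in the example that follows, removes the error and makes the rebinding visible in the enclosing scope.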
def foo(): a = 1 def bar(): nonlocal a a = 2 bar() print(a) # Output: 2 On Python 2, use a mutable object (like a list, or dict) and mutate the value instead of reassigning a variable: def foo(): a = [] def bar(): a.append(1) bar() bar() print a foo() Outputs: [1, 1]"} +{"question_id": 553303, "score": 229, "creation_date": 1234791004, "tags": ["python", "datetime", "random"], "instruction": "How to generate a random date between two other dates?\n\nHow would I generate a random date that has to be between two other given dates? The function's signature should be something like this: random_date(\"1/1/2008 1:30 PM\", \"1/1/2009 4:50 AM\", 0.34) ^ ^ ^ date generated has date generated has a random number to be after this to be before this and would return a date such as: 2/4/2008 7:20 PM", "output": "Convert both strings to timestamps (in your chosen resolution, e.g. milliseconds, seconds, hours, days, whatever), subtract the earlier from the later, multiply your random number (assuming it is distributed in the range [0, 1]) with that difference, and add again to the earlier one. Convert the timestamp back to date string and you have a random time in that range. Python example (output is almost in the format you specified, other than 0 padding - blame the American time format conventions): import random import time def str_time_prop(start, end, time_format, prop): \"\"\"Get a time at a proportion of a range of two formatted times. start and end should be strings specifying times formatted in the given format (strftime-style), giving an interval [start, end]. prop specifies how a proportion of the interval to be taken after start. The returned time will be in the specified format. \"\"\" stime = time.mktime(time.strptime(start, time_format)) etime = time.mktime(time.strptime(end, time_format)) ptime = stime + prop * (etime - stime) return time.strftime(time_format, time.localtime(ptime)) def random_date(start, end, prop): return str_time_prop(start, end, '%m/%d/%Y %I:%M %p', prop) print(random_date(\"1/1/2008 1:30 PM\", \"1/1/2009 4:50 AM\", random.random()))"} +{"question_id": 5846167, "score": 229, "creation_date": 1304219452, "tags": ["python", "macos"], "instruction": "How to change default Python version?\n\nI have installed Python 3.2 on my Mac. After I run /Applications/Python 3.2/Update Shell Profile.command, it's confusing that when I type python -V in Terminal, it says Python 2.6.1 which is not what I expected. How can I change the default Python version?", "output": "[updated for 2021] (Regardless if you are on Mac, Linux, or Windows:) If you are confused about how to start the latest version of python, on most platforms it is the case that python3 leaves your python2 installation intact (due to the above compatibility reasons); thus you can start python3 with the python3 command. Historically... The naming convention is that generally, most scripts will call python2 or python3 explicitly. This happened due to a need for backwards compatibility. Even though technically python doesn't even guarantee backwards compatibility between minor versions, Python3 really breaks backwards compatibility. At the time, programs invoking 'python' were expecting python2 (which was the main version at the time). Extremely old systems may have programs and scripts which expect python=python2, and changing this would break those programs and scripts. At the time this answer was written, OP should not have changed this due to maintaining compatibility for old scripts. Circa year 2021... 
Nowadays, many years after the python2->python3 transition, most software explicitly refers to python2 or python3 (at least on Linux). For example, they might call #!/usr/bin/env python2 or #!/usr/bin/env python3. This has for example (python-is-python3-package) freed up the python command to be settable to a user default, but it really depends on the operating system. The prescription for how distributions should handle the python command was written up in 2011 as PEP 394 -- The \"python\" Command on Unix-Like Systems. It was last updated in June 2019. Regardless of whether you are writing a library or your program, you should specify the version of python (2 or 3, or finer-grained under specific circumstances) you can use in the shebang line, or since you're on OS X, in your IDE with which you are developing your app, so it doesn't mess up the rest of the system (this is what python venvs are for... download and search how to up set up a python3 venv on Mac if you're on a really really old version of OS X). Shell alias: You could, however, make a custom alias in your shell. The way you do so depends on the shell, but perhaps you could do alias py=python3, and put it in your shell startup file (.bashrc, .zshrc, etc). This will only work on your local computer (as it should), and is somewhat unnecessary compared to just typing it out (unless you invoke the command constantly). Confused users should not try to create aliases or virtual environments or similar that make python execute python3; this is poor form.This is acceptable nowadays, but PEP 394 suggests encouraging users to use a virtualenv instead. Different 3.* versions, or 2.* versions: In the extremely unlikely case that if someone comes to this question with two python3 versions e.g. 3.1 vs 3.2, and you are confused that you have somehow installed two versions of python, this is possibly because you have done manual and/or manual installations. You can use your OS's standard package/program install/uninstall/management facilities to help track things down, and perhaps (unless you are doing dev work that surprisingly is impacted by the few backwards-incompatible changes between minor versions) delete the old version (or do make uninstall if you did a manual installation). If you require two versions, then reconfigure your $PATH variable so the 'default' version you want is in front; or if you are using most Linux distros, the command you are looking for is sudo update-alternatives. Make sure any programs you run which need access to the older versions may be properly invoked by their calling environment or shell (by setting up the var PATH in that environment). A bit about $PATH sidenote: To elaborate a bit on PATH: the usual ways that programs are selected is via the PATH (echo $PATH on Linux and Mac) environment variable. You can always run a program with the full path e.g. /usr/bin/\ud83d\udd33 some args, or cd /usr/bin then ./\ud83d\udd33 some args (replace blank with the 'echo' program I mentioned above for example), but otherwise typing \ud83d\udd33 some args has no meaning without PATH env variable which declares the directories we implicitly may search-then-execute files from (if /usr/bin was not in PATH, then it would say \ud83d\udd33: command not found). The first matching command in the first directory is the one which is executed (the which command on Linux and Mac will tell you which sub-path this is). Usually it is (e.g. 
on Linux, but similar on Mac) something like /usr/bin/python which is a symlink to other symlinks to the final version somewhere, e.g.: % echo $PATH /usr/sbin:/usr/local/bin:/usr/sbin:usr/local/bin:/usr/bin:/bin % which python /usr/bin/python % which python2 /usr/bin/python2 % ls -l /usr/bin/python lrwxrwxrwx 1 root root 7 Mar 4 2019 /usr/bin/python -> python2* % ls -l /usr/bin/python2 lrwxrwxrwx 1 root root 9 Mar 4 2019 /usr/bin/python2 -> python2.7* % ls -l /usr/bin/python2.7 -rwxr-xr-x 1 root root 3689352 Oct 10 2019 /usr/bin/python2.7* % which python3 /usr/bin/python3 % ls -l /usr/bin/python3 lrwxrwxrwx 1 root root 9 Mar 26 2019 /usr/bin/python3 -> python3.7* % ls -l /usr/bin/python3.7 -rwxr-xr-x 2 root root 4877888 Apr 2 2019 /usr/bin/python3.7* % ls -l /usr/bin/python* lrwxrwxrwx 1 root root 7 Mar 4 2019 /usr/bin/python -> python2* lrwxrwxrwx 1 root root 9 Mar 4 2019 /usr/bin/python2 -> python2.7* -rwxr-xr-x 1 root root 3689352 Oct 10 2019 /usr/bin/python2.7* lrwxrwxrwx 1 root root 9 Mar 26 2019 /usr/bin/python3 -> python3.7* -rwxr-xr-x 2 root root 4877888 Apr 2 2019 /usr/bin/python3.7* lrwxrwxrwx 1 root root 33 Apr 2 2019 /usr/bin/python3.7-config -> x86_64-linux-gnu-python3.7-config* -rwxr-xr-x 2 root root 4877888 Apr 2 2019 /usr/bin/python3.7m* lrwxrwxrwx 1 root root 34 Apr 2 2019 /usr/bin/python3.7m-config -> x86_64-linux-gnu-python3.7m-config* lrwxrwxrwx 1 root root 16 Mar 26 2019 /usr/bin/python3-config -> python3.7-config* lrwxrwxrwx 1 root root 10 Mar 26 2019 /usr/bin/python3m -> python3.7m* lrwxrwxrwx 1 root root 17 Mar 26 2019 /usr/bin/python3m-config -> python3.7m-config* sidenote2: (In the rarer case a python program invokes a sub-program with the subprocess module, to specify which program to run, one can modify the paths of subprocesses with sys.path from the sys module or the PYTHONPATH environment variable set on the parent, or specifying the full path... but since the path is inherited by child processes this is not remotely likely an issue.)"} +{"question_id": 26649716, "score": 229, "creation_date": 1414662550, "tags": ["python", "ipython", "python-imaging-library", "jupyter-notebook"], "instruction": "How to show PIL Image in ipython notebook\n\nThis is my code from PIL import Image pil_im = Image.open('data/empire.jpg') I would like to do some image manipulation on it, and then show it on screen. I am having problem with showing PIL Image in python notebook. I have tried: print pil_im And just pil_im But both just give me: <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=569x800 at 0x10ECA0710>", "output": "Updated 2021/11/17 When using PIL/Pillow, Jupyter Notebooks now have a display built-in that will show the image directly, with no extra fuss. display(pil_im) Jupyter will also show the image if it is simply the last line in a cell (this has changed since the original post). Thanks to answers from @Dean and @Prabhat for pointing this out. Other Methods From File You can also use IPython's display module to load the image. You can read more from the doc. 
from IPython.display import Image pil_img = Image(filename='data/empire.jpg') display(pil_img) From PIL.Image Object As OP's requirement is to use PIL, if you want to show inline image, you can use matplotlib.pyplot.imshow with numpy.asarray like this too: from matplotlib.pyplot import imshow import numpy as np from PIL import Image %matplotlib inline pil_im = Image.open('data/empire.jpg', 'r') imshow(np.asarray(pil_im)) If you only require a preview rather than an inline, you may just use show like this: pil_im = Image.open('data/empire.jpg', 'r') pil_im.show()"} +{"question_id": 1735025, "score": 229, "creation_date": 1258221158, "tags": ["python", "arrays", "numpy", "scipy", "convenience-methods"], "instruction": "How to normalize a NumPy array to within a certain range?\n\nAfter doing some processing on an audio or image array, it needs to be normalized within a range before it can be written back to a file. This can be done like so: # Normalize audio channels to between -1.0 and +1.0 audio[:,0] = audio[:,0]/abs(audio[:,0]).max() audio[:,1] = audio[:,1]/abs(audio[:,1]).max() # Normalize image to between 0 and 255 image = image/(image.max()/255.0) Is there a less verbose, convenience function way to do this? matplotlib.colors.Normalize() doesn't seem to be related.", "output": "# Normalize audio channels to between -1.0 and +1.0 audio /= np.max(np.abs(audio),axis=0) # Normalize image to between 0 and 255 image *= (255.0/image.max()) Using /= and *= allows you to eliminate an intermediate temporary array, thus saving some memory. Multiplication is less expensive than division, so image *= 255.0/image.max() # Uses 1 division and image.size multiplications is marginally faster than image /= image.max()/255.0 # Uses 1+image.size divisions Since we are using basic numpy methods here, I think this is about as efficient a solution in numpy as can be. In-place operations do not change the dtype of the container array. Since the desired normalized values are floats, the audio and image arrays need to have floating-point point dtype before the in-place operations are performed. If they are not already of floating-point dtype, you'll need to convert them using astype. For example, image = image.astype('float64')"} +{"question_id": 30306099, "score": 228, "creation_date": 1431959867, "tags": ["python", "pip", "setuptools", "python-packaging"], "instruction": "\"pip install --editable ./\" vs \"python setup.py develop\"\n\nIs there any significant difference between pip install -e /path/to/mypackage and the setuptools variant? python /path/to/mypackage/setup.py develop", "output": "Try to avoid calling setup.py directly, it will not properly tell pip that you've installed your package. With pip install -e: For local projects, the \u201cSomeProject.egg-info\u201d directory is created relative to the project path. This is one advantage over just using setup.py develop, which creates the \u201cegg-info\u201d directly relative the current working directory. More: docs Also read the setuptools' docs."} +{"question_id": 8234445, "score": 227, "creation_date": 1321999383, "tags": ["python", "alignment", "string-formatting"], "instruction": "Format output string, right alignment\n\nI am processing a text file containing coordinates x, y, z 1 128 1298039 123388 0 2 .... every line is delimited into 3 items using words = line.split() After processing data I need to write coordinates back in another txt file so as items in each column are aligned right (as well as the input file). 
Every line is composed of the coordinates line_new = words[0] + ' ' + words[1] + ' ' words[2]. Is there any manipulator like std::setw() etc. in C++ allowing to set the width and alignment?", "output": "Try this approach using the newer str.format syntax. This uses a width of 12, space padded, right aligned. line_new = '{:>12} {:>12} {:>12}'.format(words[0], words[1], words[2]) Example with the interpreter: >>> line = \"123 456 789\" >>> words = line.split(' ') >>> line_new = '{:>12} {:>12} {:>12}'.format(words[0], words[1], words[2]) >>> print(line_new) 123 456 789 And here's how to do it using the old % syntax (useful for older versions of Python that don't support str.format): line_new = '%12s %12s %12s' % (words[0], words[1], words[2])"} +{"question_id": 1271320, "score": 227, "creation_date": 1250161848, "tags": ["python", "generator", "yield"], "instruction": "Resetting generator object in Python\n\nI have a generator object returned by multiple yield. Preparation to call this generator is rather time-consuming operation. That is why I want to reuse the generator several times. y = FunctionWithYield() for x in y: print(x) #here must be something to reset 'y' for x in y: print(x) Of course, I'm taking in mind copying content into simple list. Is there a way to reset my generator? See also: How to look ahead one element (peek) in a Python generator?", "output": "Another option is to use the itertools.tee() function to create a second version of your generator: import itertools y = FunctionWithYield() y, y_backup = itertools.tee(y) for x in y: print(x) for x in y_backup: print(x) This could be beneficial from memory usage point of view if the original iteration might not process all the items."} +{"question_id": 41546883, "score": 227, "creation_date": 1483960893, "tags": ["python", "environment-variables", "python-dotenv"], "instruction": "What is the use of python-dotenv?\n\nNeed an example and please explain me the purpose of python-dotenv. I am kind of confused with the documentation.", "output": "From the Github page: Reads the key,value pair from .env and adds them to environment variable. It is great of managing app settings during development and in production using 12-factor principles. Assuming you have created the .env file along-side your settings module. . \u251c\u2500\u2500 .env \u2514\u2500\u2500 settings.py Add the following code to your settings.py: # settings.py import os from os.path import join, dirname from dotenv import load_dotenv dotenv_path = join(dirname(__file__), '.env') load_dotenv(dotenv_path) SECRET_KEY = os.environ.get(\"SECRET_KEY\") DATABASE_PASSWORD = os.environ.get(\"DATABASE_PASSWORD\") .env is a simple text file with each environment variable listed one per line, in the format of KEY=\"Value\". The lines starting with # are ignored. SOME_VAR=someval # I am a comment and that is OK FOO=\"BAR\""} +{"question_id": 14117415, "score": 227, "creation_date": 1357105641, "tags": ["python", "argparse"], "instruction": "How can I constrain a value parsed with argparse (for example, restrict an integer to positive values)?\n\nI have this code so far: import argparse parser = argparse.ArgumentParser() parser.add_argument(\"-g\", \"--games\", type=int, default=162, help=\"The number of games to simulate\") args = parser.parse_args() It does not make sense to supply a negative value for the number of games, but type=int allows any integer. 
For example, if I run python simulate_many.py -g -2, args.games will be set to -2 and the program will continue as if nothing is wrong. I realize that I could just explicit check the value of args.games after parsing arguments. But can I make argparse itself check this condition? How? I would prefer it to work that way so that the automatic usage message can explain the requirement to the user. Ideally, the output would look something like: python simulate_many.py -g -2 usage: simulate_many.py [-h] [-g GAMES] [-d] [-l LEAGUE] simulate_many.py: error: argument -g/--games: invalid positive int value: '-2' just as it currently handles arguments that can't be converted to integer: python simulate_many.py -g a usage: simulate_many.py [-h] [-g GAMES] [-d] [-l LEAGUE] simulate_many.py: error: argument -g/--games: invalid int value: 'a'", "output": "This should be possible utilizing type. You'll still need to define an actual method that decides this for you: def check_positive(value): ivalue = int(value) if ivalue <= 0: raise argparse.ArgumentTypeError(\"%s is an invalid positive int value\" % value) return ivalue parser = argparse.ArgumentParser(...) parser.add_argument('foo', type=check_positive) This is basically just an adapted example from the perfect_square function in the docs on argparse."} +{"question_id": 57507832, "score": 227, "creation_date": 1565862515, "tags": ["python", "numpy"], "instruction": "Unable to allocate array with shape and data type\n\nI'm facing an issue with allocating huge arrays in numpy on Ubuntu 18 while not facing the same issue on MacOS. I am trying to allocate memory for a numpy array with shape (156816, 36, 53806) with np.zeros((156816, 36, 53806), dtype='uint8') and while I'm getting an error on Ubuntu OS >>> import numpy as np >>> np.zeros((156816, 36, 53806), dtype='uint8') Traceback (most recent call last): File \"<stdin>\", line 1, in <module> numpy.core._exceptions.MemoryError: Unable to allocate array with shape (156816, 36, 53806) and data type uint8 I'm not getting it on MacOS: >>> import numpy as np >>> np.zeros((156816, 36, 53806), dtype='uint8') array([[[0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], ..., [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0]], [[0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], ..., [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0]], [[0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], ..., [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0]], ..., [[0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], ..., [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0]], [[0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], ..., [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0]], [[0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], ..., [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0]]], dtype=uint8) I've read somewhere that np.zeros shouldn't be really allocating the whole memory needed for the array, but only for the non-zero elements. Even though the Ubuntu machine has 64gb of memory, while my MacBook Pro has only 16gb. versions: Ubuntu os -> ubuntu mate 18 python -> 3.6.8 numpy -> 1.17.0 mac os -> 10.14.6 python -> 3.6.4 numpy -> 1.17.0 PS: also failed on Google Colab", "output": "This is likely due to your system's overcommit handling mode. 
In the default mode, 0, Heuristic overcommit handling. Obvious overcommits of address space are refused. Used for a typical system. It ensures a seriously wild allocation fails while allowing overcommit to reduce swap usage. The root is allowed to allocate slightly more memory in this mode. This is the default. The exact heuristic used is not well explained here, but this is discussed more on Linux over commit heuristic and on this page. You can check your current overcommit mode by running $ cat /proc/sys/vm/overcommit_memory 0 In this case, you're allocating >>> 156816 * 36 * 53806 / 1024.0**3 282.8939827680588 ~282 GB and the kernel is saying well obviously there's no way I'm going to be able to commit that many physical pages to this, and it refuses the allocation. If (as root) you run: $ echo 1 > /proc/sys/vm/overcommit_memory This will enable the \"always overcommit\" mode, and you'll find that indeed the system will allow you to make the allocation no matter how large it is (within 64-bit memory addressing at least). I tested this myself on a machine with 32 GB of RAM. With overcommit mode 0 I also got a MemoryError, but after changing it back to 1 it works: >>> import numpy as np >>> a = np.zeros((156816, 36, 53806), dtype='uint8') >>> a.nbytes 303755101056 You can then go ahead and write to any location within the array, and the system will only allocate physical pages when you explicitly write to that page. So you can use this, with care, for sparse arrays."} +{"question_id": 12646326, "score": 227, "creation_date": 1348861205, "tags": ["python", "class"], "instruction": "Calling a class function inside of __init__\n\nI'm writing some code that takes a filename, opens the file, and parses out some data. I'd like to do this in a class. The following code works: class MyClass(): def __init__(self, filename): self.filename = filename self.stat1 = None self.stat2 = None self.stat3 = None self.stat4 = None self.stat5 = None def parse_file(): #do some parsing self.stat1 = result_from_parse1 self.stat2 = result_from_parse2 self.stat3 = result_from_parse3 self.stat4 = result_from_parse4 self.stat5 = result_from_parse5 parse_file() But it involves me putting all of the parsing machinery in the scope of the __init__ function for my class. That looks fine now for this simplified code, but the function parse_file has quite a few levels of indention as well. I'd prefer to define the function parse_file() as a class function like below: class MyClass(): def __init__(self, filename): self.filename = filename self.stat1 = None self.stat2 = None self.stat3 = None self.stat4 = None self.stat5 = None parse_file() def parse_file(): #do some parsing self.stat1 = result_from_parse1 self.stat2 = result_from_parse2 self.stat3 = result_from_parse3 self.stat4 = result_from_parse4 self.stat5 = result_from_parse5 Of course this code doesn't work because the function parse_file() is not within the scope of the __init__ function. Is there a way to call a class function from within __init__ of that class? Or am I thinking about this the wrong way?", "output": "Call the function in this way: self.parse_file() You also need to define your parse_file() function like this: def parse_file(self): The parse_file method has to be bound to an object upon calling it (because it's not a static method). 
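Concretely, a minimal sketch of the corrected class from the question (the parsing body is just a placeholder) could look like:

    class MyClass:
        def __init__(self, filename):
            self.filename = filename
            self.stat1 = None
            self.stat2 = None
            self.parse_file()          # call the method on the instance being built

        def parse_file(self):          # note the explicit self parameter
            # do some parsing of self.filename here
            self.stat1 = 'result_from_parse1'
            self.stat2 = 'result_from_parse2'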
This is done by calling the function on an instance of the object, in your case the instance is self."} +{"question_id": 489861, "score": 226, "creation_date": 1233184803, "tags": ["python", "file-locking"], "instruction": "Locking a file in Python\n\nI need to lock a file for writing in Python. It will be accessed from multiple Python processes at once. I have found some solutions online, but most fail for my purposes as they are often only Unix based or Windows based.", "output": "Update as of June 2024 Nowadays there seem to be a number of robust, cross-platform, actively-maintained solutions to this. A few of the most cited in other answers and comments are: filelock Portalocker oslo.concurrency (if you need more general multi-process synchronization utilities) Original Answer Alright, so I ended up going with the code I wrote here, on my website link is dead, view on archive.org (also available on GitHub). I can use it in the following fashion: from filelock import FileLock with FileLock(\"myfile.txt.lock\"): # work with the file as it is now locked print(\"Lock acquired.\")"} +{"question_id": 1239035, "score": 225, "creation_date": 1249566935, "tags": ["python", "asynchronous", "python-asyncio", "coroutine"], "instruction": "Asynchronous method call in Python?\n\nI was wondering if there's any library for asynchronous method calls in Python. It would be great if you could do something like @async def longComputation(): <code> token = longComputation() token.registerCallback(callback_function) # alternative, polling while not token.finished(): doSomethingElse() if token.finished(): result = token.result() Or to call a non-async routine asynchronously def longComputation() <code> token = asynccall(longComputation()) It would be great to have a more refined strategy as native in the language core. Was this considered?", "output": "You can use the multiprocessing module added in Python 2.6. You can use pools of processes and then get results asynchronously with: apply_async(func[, args[, kwds[, callback]]]) E.g.: import time from multiprocessing import Pool def postprocess(result): print(\"finished: %s\" % result) def f(x): return x*x if __name__ == '__main__': pool = Pool(processes=1) # Start a worker processes. result = pool.apply_async(f, [10], callback=postprocess) # Evaluate \"f(10)\" asynchronously calling callback when finished. print(\"waiting...\") time.sleep(1) This is only one alternative. This module provides lots of facilities to achieve what you want. Also it will be really easy to make a decorator from this."} +{"question_id": 19130942, "score": 225, "creation_date": 1380694275, "tags": ["python", "python-3.x", "django", "django-forms", "django-validation"], "instruction": "What's the best way to store a phone number in Django models?\n\nI am storing a phone number in model like this: phone_number = models.CharField(max_length=12) The user would enter a phone number and I would use the phone number for SMS authentication. This application would be used globally. So I would also need a country code. Is CharField a good way to store a phone number? And, how do I validate the phone number?", "output": "You might actually look into the internationally standardized format E.164, recommended by Twilio for example (who have a service and an API for sending SMS or phone-calls via REST requests). This is likely to be the most universal way to store phone numbers, in particular if you have international numbers work with. 
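As a related sketch — this assumes the standalone phonenumbers package (the Python port of Google's libphonenumber, a separate install from the Django field shown below) — normalizing raw input to E.164 before storing it could look roughly like:

    import phonenumbers

    def to_e164(raw, default_region='GB'):
        # default_region only matters when the input has no explicit +countrycode
        parsed = phonenumbers.parse(raw, default_region)
        if not phonenumbers.is_valid_number(parsed):
            raise ValueError('invalid phone number: %r' % raw)
        return phonenumbers.format_number(parsed, phonenumbers.PhoneNumberFormat.E164)

    # e.g. to_e164('020 8366 1177') should give '+442083661177'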
Phone by PhoneNumberField You can use the phonenumber_field library. It is a port of Google's libphonenumber library, which powers Android's phone number handling. See django-phonenumber-field. In the model: from phonenumber_field.modelfields import PhoneNumberField class Client(models.Model, Importable): phone = PhoneNumberField(null=False, blank=False, unique=True) In the form: from phonenumber_field.formfields import PhoneNumberField class ClientForm(forms.Form): phone = PhoneNumberField() Get the phone as a string from an object field: client.phone.as_e164 Normalize the phone string (for tests and other staff): from phonenumber_field.phonenumber import PhoneNumber phone = PhoneNumber.from_string(phone_number=raw_phone, region='RU').as_e164 Phone by regexp One note for your model: E.164 numbers have a maximum character length of 15. To validate, you can employ some combination of formatting and then attempting to contact the number immediately to verify. I believe I used something like the following in my django project: class ReceiverForm(forms.ModelForm): phone_number = forms.RegexField(regex=r'^\\+?1?\\d{9,15}$', error_messages = (\"Phone number must be entered in the format: '+999999999'. Up to 15 digits is allowed.\")) As per jpotter6, you can do something like the following in your models as well: File models.py: from django.core.validators import RegexValidator class PhoneModel(models.Model): ... phone_regex = RegexValidator(regex=r'^\\+?1?\\d{9,15}$', message=\"Phone number must be entered in the format: '+999999999'. Up to 15 digits allowed.\") phone_number = models.CharField(validators=[phone_regex], max_length=17, blank=True) # Validators should be a list"} +{"question_id": 25748683, "score": 225, "creation_date": 1410276963, "tags": ["python", "pandas", "dataframe", "sum"], "instruction": "Pandas: sum DataFrame rows for given columns\n\nI have the following DataFrame: In [1]: df = pd.DataFrame({'a': [1, 2, 3], 'b': [2, 3, 4], 'c': ['dd', 'ee', 'ff'], 'd': [5, 9, 1]}) df Out [1]: a b c d 0 1 2 dd 5 1 2 3 ee 9 2 3 4 ff 1 I would like to add a column 'e' which is the sum of columns 'a', 'b' and 'd'. Going across forums, I thought something like this would work: df['e'] = df[['a', 'b', 'd']].map(sum) But it didn't. I would like to know the appropriate operation with the list of columns ['a', 'b', 'd'] and df as inputs.", "output": "You can just sum and set axis=1 to sum the rows, which will ignore non-numeric columns; from pandas 2.0+ you also need to specify numeric_only=True. In [91]: df = pd.DataFrame({'a': [1,2,3], 'b': [2,3,4], 'c':['dd','ee','ff'], 'd':[5,9,1]}) df['e'] = df.sum(axis=1, numeric_only=True) df Out[91]: a b c d e 0 1 2 dd 5 8 1 2 3 ee 9 14 2 3 4 ff 1 8 If you want to just sum specific columns then you can create a list of the columns and remove the ones you are not interested in: In [98]: col_list= list(df) col_list.remove('d') col_list Out[98]: ['a', 'b', 'c'] In [99]: df['e'] = df[col_list].sum(axis=1) df Out[99]: a b c d e 0 1 2 dd 5 3 1 2 3 ee 9 5 2 3 4 ff 1 7 sum docs"} +{"question_id": 5262702, "score": 225, "creation_date": 1299774598, "tags": ["python", "argparse"], "instruction": "argparse module How to add option without any argument?\n\nI have created a script using argparse. The script needs to take a configuration file name as an option, and user can specify whether they need to proceed totally the script or only simulate it. The args to be passed: ./script -f config_file -s or ./script -f config_file. 
It's ok for the -f config_file part, but It keeps asking me for arguments for the -s which is optionnal and should not be followed by any. I have tried this: parser = argparse.ArgumentParser() parser.add_argument('-f', '--file') #parser.add_argument('-s', '--simulate', nargs = '0') args = parser.parse_args() if args.file: config_file = args.file if args.set_in_prod: simulate = True else: pass With the following errors: File \"/usr/local/lib/python2.6/dist-packages/argparse.py\", line 2169, in _get_nargs_pattern nargs_pattern = '(-*%s-*)' % '-*'.join('A' * nargs) TypeError: can't multiply sequence by non-int of type 'str' And same errror with '' instead of 0.", "output": "As @Felix Kling suggested, to create an option that needs no value, use action='store_true', 'store_false' or 'store_const'. See documentation. >>> from argparse import ArgumentParser >>> p = ArgumentParser() >>> _ = p.add_argument('-f', '--foo', action='store_true') >>> args = p.parse_args() >>> args.foo False >>> args = p.parse_args(['-f']) >>> args.foo True"} +{"question_id": 6416538, "score": 225, "creation_date": 1308598948, "tags": ["python", "generator"], "instruction": "How to check if an object is a generator object in Python?\n\nIn Python, how do I check if an object is a generator object? Trying >>> type(myobject, generator) gives the error Traceback (most recent call last): File \"<stdin>\", line 1, in <module> NameError: name 'generator' is not defined (I know I can check if the object has a __next__ method for it to be a generator, but I want some way using which I can determine the type of any object, not just generators.)", "output": "You can use GeneratorType from types: >>> import types >>> types.GeneratorType <class 'generator'> >>> gen = (i for i in range(10)) >>> isinstance(gen, types.GeneratorType) True"} +{"question_id": 14734533, "score": 225, "creation_date": 1360169754, "tags": ["python", "pandas", "dataframe", "group-by"], "instruction": "How to access subdataframes of pandas groupby by key\n\nHow do I access the corresponding groupby dataframe in a groupby object by the key? With the following groupby: rand = np.random.RandomState(1) df = pd.DataFrame({'A': ['foo', 'bar'] * 3, 'B': rand.randn(6), 'C': rand.randint(0, 20, 6)}) gb = df.groupby(['A']) I can iterate through it to get the keys and groups: In [11]: for k, gp in gb: print 'key=' + str(k) print gp key=bar A B C 1 bar -0.611756 18 3 bar -1.072969 10 5 bar -2.301539 18 key=foo A B C 0 foo 1.624345 5 2 foo -0.528172 11 4 foo 0.865408 14 I would like to be able to access a group by its key: In [12]: gb['foo'] Out[12]: A B C 0 foo 1.624345 5 2 foo -0.528172 11 4 foo 0.865408 14 But when I try doing that with gb[('foo',)] I get this weird pandas.core.groupby.DataFrameGroupBy object thing which doesn't seem to have any methods that correspond to the DataFrame I want. The best I could think of is: In [13]: def gb_df_key(gb, key, orig_df): ix = gb.indices[key] return orig_df.ix[ix] gb_df_key(gb, 'foo', df) Out[13]: A B C 0 foo 1.624345 5 2 foo -0.528172 11 4 foo 0.865408 14 but this is kind of nasty, considering how nice pandas usually is at these things. What's the built-in way of doing this?", "output": "You can use the get_group method: In [21]: gb.get_group('foo') Out[21]: A B C 0 foo 1.624345 5 2 foo -0.528172 11 4 foo 0.865408 14 Note: This doesn't require creating an intermediary dictionary / copy of every subdataframe for every group, so will be much more memory-efficient than creating the naive dictionary with dict(iter(gb)). 
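For instance, the bookkeeping the groupby object already carries can be inspected directly (a quick sketch on the df above):

    gb.groups    # dict-like: group key -> row labels, e.g. 'foo' -> [0, 2, 4]
    gb.indices   # group key -> integer positions (what the workaround in the question used)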
This is because it uses data-structures already available in the groupby object. You can select different columns using the groupby slicing: In [22]: gb[[\"A\", \"B\"]].get_group(\"foo\") Out[22]: A B 0 foo 1.624345 2 foo -0.528172 4 foo 0.865408 In [23]: gb[\"C\"].get_group(\"foo\") Out[23]: 0 5 2 11 4 14 Name: C, dtype: int64"} +{"question_id": 6822725, "score": 225, "creation_date": 1311630118, "tags": ["python", "algorithm"], "instruction": "Rolling or sliding window iterator?\n\nI need a rolling window (aka sliding window) iterable over a sequence/iterator/generator. (Default Python iteration could be considered a special case, where the window length is 1.) I'm currently using the following code. How can I do it more elegantly and/or efficiently? def rolling_window(seq, window_size): it = iter(seq) win = [it.next() for cnt in xrange(window_size)] # First window yield win for e in it: # Subsequent windows win[:-1] = win[1:] win[-1] = e yield win if __name__==\"__main__\": for w in rolling_window(xrange(6), 3): print w \"\"\"Example output: [0, 1, 2] [1, 2, 3] [2, 3, 4] [3, 4, 5] \"\"\" For the specific case of window_size == 2 (i.e., iterating over adjacent, overlapping pairs in a sequence), see also How can I iterate over overlapping (current, next) pairs of values from a list?.", "output": "There's one in an old version of the Python docs with itertools examples: from itertools import islice def window(seq, n=2): \"Returns a sliding window (of width n) over data from the iterable\" \" s -> (s0,s1,...s[n-1]), (s1,s2,...,sn), ... \" it = iter(seq) result = tuple(islice(it, n)) if len(result) == n: yield result for elem in it: result = result[1:] + (elem,) yield result The one from the docs is a little more succinct and uses itertools to greater effect I imagine. If your iterator is a simple list/tuple a simple way to slide through it with a specified window size would be: seq = [0, 1, 2, 3, 4, 5] window_size = 3 for i in range(len(seq) - window_size + 1): print(seq[i: i + window_size]) Output: [0, 1, 2] [1, 2, 3] [2, 3, 4] [3, 4, 5]"} +{"question_id": 36117583, "score": 224, "creation_date": 1458497504, "tags": ["python", "if-statement"], "instruction": "Pythonic way to avoid \"if x: return x\" statements\n\nI have a method that calls 4 other methods in sequence to check for specific conditions, and returns immediately (not checking the following ones) whenever one returns something Truthy. def check_all_conditions(): x = check_size() if x: return x x = check_color() if x: return x x = check_tone() if x: return x x = check_flavor() if x: return x return None This seems like a lot of baggage code. Instead of each 2-line if statement, I'd rather do something like: x and return x But that is invalid Python. Am I missing a simple, elegant solution here? Incidentally, in this situation, those four check methods may be expensive, so I do not want to call them multiple times.", "output": "You could use a loop: conditions = (check_size, check_color, check_tone, check_flavor) for condition in conditions: if result := condition(): return result This has the added advantage that you can now make the number of conditions variable. Note that the above example uses an assignment expression (aka the walrus expression) to integrate the asignment and result test; this requires Python 3.8 or newer. 
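On interpreters older than 3.8 (no walrus operator), a minimal equivalent sketch of the same loop is:

    def check_all_conditions():
        conditions = (check_size, check_color, check_tone, check_flavor)
        for condition in conditions:
            result = condition()
            if result:
                return result
        return None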
You could use map() + filter() to get the first such matching value, and, as of Python 3.11, operator.call(): try: # python 3.11 from operator import call except ImportError: # older versions def call(callable): return callable() conditions = (check_size, check_color, check_tone, check_flavor) return next(filter(None, map(call, conditions)), None) but if this is more readable is debatable. Another option is to use a generator expression: conditions = (check_size, check_color, check_tone, check_flavor) checks = (condition() for condition in conditions) return next((check for check in checks if check), None)"} +{"question_id": 7824101, "score": 224, "creation_date": 1319039293, "tags": ["python", "flask", "httpresponse"], "instruction": "Return HTTP status code 201 in flask\n\nWe're using Flask for one of our API's and I was just wondering if anyone knew how to return a HTTP response 201? For errors such as 404 we can call: from flask import abort abort(404) But for 201 I get LookupError: no exception for 201 Do I need to create my own exception like this in the docs?", "output": "You can read about it here. return render_template('page.html'), 201"} +{"question_id": 41034866, "score": 224, "creation_date": 1481185500, "tags": ["python", "jupyter-notebook", "jupyter"], "instruction": "How can I fix \"'jupyter' is not recognized as an internal or external command, operable program or batch file\" when running Jupyter on Windows?\n\nI have installed Jupyter on Windows 10, Python 3.x via pip install jupyter The installation works fine, even though I did restart the terminal. But trying to run jupyter notebook gives the following error 'jupyter' is not recognized as an internal or external command, operable program or batch file. How and where do I find the executable for Jupyter?", "output": "If you are absolutely sure that your Python library path is in your system variables (and you can find that path when you pip install Jupyter, you just have to read a bit) and you still experience \"command not found or recognized\" errors in Windows, you can try: python -m notebook For my Windows at least (Windows 10 Pro), having the python -m is the only way I can run my Python packages from command line without running into some sort of error: Fatal error in launcher: Unable to create process using ' \" or Errno 'THIS_PROGRAM' not found"} +{"question_id": 212358, "score": 224, "creation_date": 1224253397, "tags": ["python", "binary-search", "bisection"], "instruction": "Binary search (bisection) in Python\n\nIs there a library function that performs binary search on a list/tuple and return the position of the item if found and 'False' (-1, None, etc.) if not? I found the functions bisect_left/right in the bisect module, but they still return a position even if the item is not in the list. That's perfectly fine for their intended usage, but I just want to know if an item is in the list or not (don't want to insert anything). I thought of using bisect_left and then checking if the item at that position is equal to what I'm searching, but that seems cumbersome (and I also need to do bounds checking if the number can be larger than the largest number in my list). If there is a nicer method I'd like to know about it. Edit To clarify what I need this for: I'm aware that a dictionary would be very well suited for this, but I'm trying to keep the memory consumption as low as possible. My intended usage would be a sort of double-way look-up table. 
I have in the table a list of values and I need to be able to access the values based on their index. And also I want to be able to find the index of a particular value or None if the value is not in the list. Using a dictionary for this would be the fastest way, but would (approximately) double the memory requirements. I was asking this question thinking that I may have overlooked something in the Python libraries. It seems I'll have to write my own code, as Moe suggested.", "output": "bisect_left finds the first position p at which an element could be inserted in a given sorted range while maintaining the sorted order. That will be the position of x if x exists in the range. If p is the past-the-end position, x wasn't found. Otherwise, we can test to see if x is there to see if x was found. from bisect import bisect_left def binary_search(a, x, lo=0, hi=None): if hi is None: hi = len(a) pos = bisect_left(a, x, lo, hi) # find insertion position return pos if pos != hi and a[pos] == x else -1 # don't walk off the end"} +{"question_id": 53218931, "score": 224, "creation_date": 1541729962, "tags": ["python", "pandas", "dataframe", "pandas-explode"], "instruction": "How to unnest (explode) a column in a pandas DataFrame, into multiple rows\n\nI have the following DataFrame where one of the columns is an object (list type cell): df = pd.DataFrame({'A': [1, 2], 'B': [[1, 2], [1, 2]]}) Output: A B 0 1 [1, 2] 1 2 [1, 2] My expected output is: A B 0 1 1 1 1 2 3 2 1 4 2 2 What should I do to achieve this? Related question: Pandas column of lists, create a row for each list element - Good question and answer but only handle one column with list. (In my answer the self-def function will work for multiple columns, also the accepted answer is use the most time consuming apply, which is not recommended, check more info When should I (not) want to use pandas apply() in my code?)", "output": "Use DataFrame.explode(): df.explode('B') A B 0 1 1 1 1 2 0 2 1 1 2 2 This was added in pandas 0.25, and support for multiple columns was added in pandas 1.3.0. [Original answer follows, including legacy solutions] I know object dtype makes the data hard to convert with pandas functions. When I received the data like this, the first thing that came to mind was to \"flatten\" or unnest the columns. I am using pandas and Python functions for this type of question. If you are worried about the speed of the below solutions, check out user3483203's answer, since it's using numpy and most of the time numpy is faster. I recommend Cython or numba if speed matters. Method 0: .explode() [pandas >= 0.25] [Solution moved above. The below note no longer applies since NaNs are now handled as expected, but it is kept for reference.] Given a dataframe with an empty list or a NaN in the column. An empty list will not cause an issue, but a NaN will need to be filled with a list df = pd.DataFrame({ 'A': [1, 2, 3, 4], 'B': [[1, 2], [1, 2], [], np.nan]}) df.B = df.B.fillna({i: [] for i in df.index}) # replace NaN with [] df.explode('B') A B 0 1 1 0 1 2 1 2 1 1 2 2 2 3 NaN 3 4 NaN Method 1: apply + pd.Series (easy to understand but in terms of performance not recommended.) 
df.set_index('A').B.apply(pd.Series).stack().reset_index(level=0).rename(columns={0:'B'}) Out[463]: A B 0 1 1 1 1 2 0 2 1 1 2 2 Method 2 Using repeat with DataFrame constructor, re-create your dataframe (good at performance, not good at multiple columns) df = pd.DataFrame({ 'A': df.A.repeat(df.B.str.len()), 'B': np.concatenate(df.B.values)}) df Out[465]: A B 0 1 1 0 1 2 1 2 1 1 2 2 Method 2.1 for example besides A we have A.1 .....A.n. If we still use the method (Method 2) above it is hard for us to re-create the columns one by one. Solution: join or merge with the index after 'unnest' the single columns s = pd.DataFrame({ 'B': np.concatenate(df.B.values)}, index=df.index.repeat(df.B.str.len())) s.join(df.drop('B', 1), how='left') Out[477]: B A 0 1 1 0 2 1 1 1 2 1 2 2 If you need the column order exactly the same as before, add reindex at the end. s.join(df.drop('B', 1), how='left').reindex(columns=df.columns) Method 3 recreate the list pd.DataFrame([[x] + [z] for x, y in df.values for z in y], columns=df.columns) Out[488]: A B 0 1 1 1 1 2 2 2 1 3 2 2 If more than two columns, use s = pd.DataFrame([[x] + [z] for x, y in zip(df.index, df.B) for z in y]) s.merge(df, left_on=0, right_index=True) Out[491]: 0 1 A B 0 0 1 1 [1, 2] 1 0 2 1 [1, 2] 2 1 1 2 [1, 2] 3 1 2 2 [1, 2] Method 4: using reindex or loc df.reindex(df.index.repeat(df.B.str.len())).assign(B=np.concatenate(df.B.values)) Out[554]: A B 0 1 1 0 1 2 1 2 1 1 2 2 #df.loc[df.index.repeat(df.B.str.len())].assign(B=np.concatenate(df.B.values)) Method 5 when the list only contains unique values: df = pd.DataFrame({'A': [1, 2], 'B': [[1, 2], [3, 4]]}) from collections import ChainMap d = dict(ChainMap(*map(dict.fromkeys, df['B'], df['A']))) pd.DataFrame(list(d.items()), columns=df.columns[::-1]) Out[574]: B A 0 1 1 1 2 1 2 3 2 3 4 2 Method 6: using numpy for high performance newvalues = np.dstack(( np.repeat(df.A.values, list(map(len, df.B.values))), np.concatenate(df.B.values))) pd.DataFrame(data=newvalues[0], columns=df.columns) A B 0 1 1 1 1 2 2 2 1 3 2 2 Method 7: Pure Python solution just for fun using base function itertools cycle and chain from itertools import cycle, chain l = df.values.tolist() l1 = [ list(zip([x[0]], cycle(x[1])) if len([x[0]]) > len(x[1]) else list(zip(cycle([x[0]]), x[1]))) for x in l] pd.DataFrame(list(chain.from_iterable(l1)), columns=df.columns) A B 0 1 1 1 1 2 2 2 1 3 2 2 Generalizing to multiple columns df = pd.DataFrame({ 'A': [1, 2], 'B': [[1, 2], [3, 4]], 'C': [[1, 2], [3, 4]]}) df Out[592]: A B C 0 1 [1, 2] [1, 2] 1 2 [3, 4] [3, 4] Self-def function: def unnesting(df, explode): idx = df.index.repeat(df[explode[0]].str.len()) df1 = pd.concat( [pd.DataFrame({x: np.concatenate(df[x].values)}) for x in explode], axis=1) df1.index = idx return df1.join(df.drop(explode, 1), how='left') unnesting(df, ['B', 'C']) Out[609]: B C A 0 1 1 1 0 2 2 1 1 3 3 2 1 4 4 2 Column-wise Unnesting All above methods are talking about vertical unnesting and explode. 
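One more vertical note before moving to the horizontal case: on pandas 1.3.0 or newer (as mentioned at the top of this answer), several list columns of equal per-row length can be exploded in a single call, a quick sketch being:

    df.explode(['B', 'C'])   # raises ValueError if B and C have mismatched element counts per row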
If you need to expand the list horizontally, try pd.DataFrame constructor df.join(pd.DataFrame(df.B.tolist(), index=df.index).add_prefix('B_')) Out[33]: A B C B_0 B_1 0 1 [1, 2] [1, 2] 1 2 1 2 [3, 4] [3, 4] 3 4 Updated function def unnesting(df, explode, axis): if axis == 1: idx = df.index.repeat(df[explode[0]].str.len()) df1 = pd.concat( [pd.DataFrame({x: np.concatenate(df[x].values)}) for x in explode], axis=1) df1.index = idx return df1.join(df.drop(explode, 1), how='left') else: df1 = pd.concat( [pd.DataFrame(df[x].tolist(), index=df.index).add_prefix(x) for x in explode], axis=1) return df1.join(df.drop(explode, 1), how='left') Test output unnesting(df, ['B', 'C'], axis=0) Out[36]: B0 B1 C0 C1 A 0 1 2 1 2 1 1 3 4 3 4 2 Update 2021-02-17 with original explode function def unnesting(df, explode, axis): if axis == 1: df1 = pd.concat( [df[x].explode() for x in explode], axis=1) return df1.join(df.drop(explode, 1), how='left') else: df1 = pd.concat( [pd.DataFrame(df[x].tolist(), index=df.index).add_prefix(x) for x in explode], axis=1) return df1.join(df.drop(explode, 1), how='left')"} +{"question_id": 26977076, "score": 224, "creation_date": 1416241272, "tags": ["python", "pandas", "dataframe", "unique"], "instruction": "pandas unique values multiple columns\n\ndf = pd.DataFrame({'Col1': ['Bob', 'Joe', 'Bill', 'Mary', 'Joe'], 'Col2': ['Joe', 'Steve', 'Bob', 'Bob', 'Steve'], 'Col3': np.random.random(5)}) What is the best way to return the unique values of 'Col1' and 'Col2'? The desired output is 'Bob', 'Joe', 'Bill', 'Mary', 'Steve'", "output": "pd.unique returns the unique values from an input array, or DataFrame column or index. The input to this function needs to be one-dimensional, so multiple columns will need to be combined. The simplest way is to select the columns you want and then view the values in a flattened NumPy array. The whole operation looks like this: >>> pd.unique(df[['Col1', 'Col2']].values.ravel('K')) array(['Bob', 'Joe', 'Bill', 'Mary', 'Steve'], dtype=object) Note that ravel() is an array method that returns a view (if possible) of a multidimensional array. The argument 'K' tells the method to flatten the array in the order the elements are stored in the memory (pandas typically stores underlying arrays in Fortran-contiguous order; columns before rows). This can be significantly faster than using the method's default 'C' order. An alternative way is to select the columns and pass them to np.unique: >>> np.unique(df[['Col1', 'Col2']].values) array(['Bill', 'Bob', 'Joe', 'Mary', 'Steve'], dtype=object) There is no need to use ravel() here as the method handles multidimensional arrays. Even so, this is likely to be slower than pd.unique as it uses a sort-based algorithm rather than a hashtable to identify unique values. 
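If all you need is membership testing and neither order nor dtype matters, a plain Python set union is another lightweight sketch:

    unique_names = set(df['Col1']).union(df['Col2'])
    # {'Bill', 'Bob', 'Joe', 'Mary', 'Steve'} in some arbitrary order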
The difference in speed is significant for larger DataFrames (especially if there are only a handful of unique values): >>> df1 = pd.concat([df]*100000, ignore_index=True) # DataFrame with 500000 rows >>> %timeit np.unique(df1[['Col1', 'Col2']].values) 1 loop, best of 3: 1.12 s per loop >>> %timeit pd.unique(df1[['Col1', 'Col2']].values.ravel('K')) 10 loops, best of 3: 38.9 ms per loop >>> %timeit pd.unique(df1[['Col1', 'Col2']].values.ravel()) # ravel using C order 10 loops, best of 3: 49.9 ms per loop"} +{"question_id": 768634, "score": 224, "creation_date": 1240239115, "tags": ["python", "compiler-construction", "abstract-syntax-tree"], "instruction": "Parse a .py file, read the AST, modify it, then write back the modified source code\n\nI want to programmatically edit python source code. Basically I want to read a .py file, generate the AST, and then write back the modified python source code (i.e. another .py file). There are ways to parse/compile python source code using standard python modules, such as ast or compiler. However, I don't think any of them support ways to modify the source code (e.g. delete this function declaration) and then write back the modifying python source code. UPDATE: The reason I want to do this is I'd like to write a Mutation testing library for python, mostly by deleting statements / expressions, rerunning tests and seeing what breaks.", "output": "Pythoscope does this to the test cases it automatically generates as does the 2to3 tool for python 2.6 (it converts python 2.x source into python 3.x source). Both these tools uses the lib2to3 library which is an implementation of the python parser/compiler machinery that can preserve comments in source when it's round tripped from source -> AST -> source. The rope project may meet your needs if you want to do more refactoring like transforms. The ast module is your other option, and there's an older example of how to \"unparse\" syntax trees back into code (using the parser module). But the ast module is more useful when doing an AST transform on code that is then transformed into a code object. The redbaron project also may be a good fit (ht Xavier Combelle)"} +{"question_id": 12524994, "score": 223, "creation_date": 1348206881, "tags": ["python", "encryption", "padding", "pycrypto", "initialization-vector"], "instruction": "Encrypt and decrypt using PyCrypto AES-256\n\nI'm trying to build two functions using PyCrypto that accept two parameters: the message and the key, and then encrypt/decrypt the message. I found several links on the web to help me out, but each one of them has flaws: This one at codekoala uses os.urandom, which is discouraged by PyCrypto. Moreover, the key I give to the function is not guaranteed to have the exact length expected. What can I do to make that happen? Also, there are several modes, which one is recommended? I don't know what to use :/ Finally, what exactly is the IV? Can I provide a different IV for encrypting and decrypting, or will this return in a different result?", "output": "Here is my implementation, and it works for me with some fixes. 
It enhances the alignment of the key and secret phrase with 32 bytes and IV to 16 bytes: import base64 import hashlib from Crypto import Random from Crypto.Cipher import AES class AESCipher(object): def __init__(self, key): self.bs = AES.block_size self.key = hashlib.sha256(key.encode()).digest() def encrypt(self, raw): raw = self._pad(raw) iv = Random.new().read(AES.block_size) cipher = AES.new(self.key, AES.MODE_CBC, iv) return base64.b64encode(iv + cipher.encrypt(raw.encode())) def decrypt(self, enc): enc = base64.b64decode(enc) iv = enc[:AES.block_size] cipher = AES.new(self.key, AES.MODE_CBC, iv) return AESCipher._unpad(cipher.decrypt(enc[AES.block_size:])).decode('utf-8') def _pad(self, s): return s + (self.bs - len(s) % self.bs) * chr(self.bs - len(s) % self.bs) @staticmethod def _unpad(s): return s[:-ord(s[len(s)-1:])]"} +{"question_id": 39577984, "score": 223, "creation_date": 1474302999, "tags": ["python", "python-3.x", "pip", "ubuntu-16.04"], "instruction": "What is \"pkg-resources==0.0.0\" in output of pip freeze command\n\nWhen I run pip freeze I see (among other expected packages) pkg-resources==0.0.0. I have seen a few posts mentioning this package (including this one), but none explaining what it is, or why it is included in the output of pip freeze. The main reason I am wondering is out of curiosity, but also, it seems to break things in some cases when trying to install packages with a requirements.txt file generated with pip freeze that includes the pkg-resources==0.0.0 line (for example when Travis CI tries to install dependencies through pip and finds this line). What is pkg-resources, and is it OK to remove this line from requirements.txt? Update: I have found that this line only seems to exist in the output of pip freeze when I am in a virtualenv. I am still not sure what it is or what it does, but I will investigate further knowing that it is likely related to virtualenv.", "output": "According to https://github.com/pypa/pip/issues/4022, this is a bug resulting from Ubuntu providing incorrect metadata to pip. So, no there does not seem to be a good reason for this behaviour. I filed a follow-up bug with Ubuntu. https://bugs.launchpad.net/ubuntu/+source/python-pip/+bug/1635463 To backup the previous answer, it should be safe to remove that line from your requirements.txt. Here is an example Make file stanza that safely freezes your package list (drop in your Makefile and run with make freeze): freeze: pip freeze | grep -v \"pkg-resources\" > requirements.txt edit 2022 July 06: I have been informed that the package name differs depending on the system in use (pkg-resources vs pkg_resources). Please see the comments attached to this answer for differences in usage between different versions of Debian/Ubuntu. As pkg-resources is the historically correct package name at the time this was posted (almost 6 years ago) for the system in question, it will remain unchanged in this answer."} +{"question_id": 38154040, "score": 222, "creation_date": 1467408230, "tags": ["python", "csv", "amazon-s3", "dataframe", "boto3"], "instruction": "Save Dataframe to csv directly to s3 Python\n\nI have a pandas DataFrame that I want to upload to a new CSV file. The problem is that I don't want to save the file locally before transferring it to s3. Is there any method like to_csv for writing the dataframe to s3 directly? I am using boto3. 
Here is what I have so far: import boto3 s3 = boto3.client('s3', aws_access_key_id='key', aws_secret_access_key='secret_key') read_file = s3.get_object(Bucket, Key) df = pd.read_csv(read_file['Body']) # Make alterations to DataFrame # Then export DataFrame to CSV through direct transfer to s3", "output": "You can use: from io import StringIO # python3; python2: BytesIO import boto3 bucket = 'my_bucket_name' # already created on S3 csv_buffer = StringIO() df.to_csv(csv_buffer) s3_resource = boto3.resource('s3') s3_resource.Object(bucket, 'df.csv').put(Body=csv_buffer.getvalue()) In addition, pandas now also handles reading and writing remote files via fsspec, if installed. In other words, in this case df.to_csv('s3://bucket/folder/path/file.csv') also works."} +{"question_id": 6076270, "score": 222, "creation_date": 1305916740, "tags": ["python"], "instruction": "Lambda function in list comprehensions\n\nWhy is the output of the following two list comprehensions different, even though f and the lambda function are the same? f = lambda x: x*x [f(x) for x in range(10)] and [lambda x: x*x for x in range(10)] Mind you, both type(f) and type(lambda x: x*x) return the same type.", "output": "The first one creates a single lambda function and calls it ten times. The second one doesn't call the function. It creates 10 different lambda functions. It puts all of those in a list. To make it equivalent to the first you need: [(lambda x: x*x)(x) for x in range(10)] Or better yet: [x*x for x in range(10)]"} +{"question_id": 7030831, "score": 222, "creation_date": 1313086535, "tags": ["python", "boolean"], "instruction": "How do I get the opposite (negation) of a Boolean in Python?\n\nFor the following sample: def fuctionName(int, bool): if int in range(...): if bool == True: return False else: return True Is there any way to skip the second if-statement? Just to tell the computer to return the opposite of the boolean bool?", "output": "To negate a boolean, you can use the not operator: not bool Or in your case, the if/return blocks can be replaced by: return not bool Be sure to note the operator precedence rules, and the negated is and in operators: a is not b and a not in b."} +{"question_id": 53004311, "score": 222, "creation_date": 1540541606, "tags": ["python", "anaconda", "jupyter-notebook", "jupyter-lab"], "instruction": "How to add conda environment to jupyter lab\n\nI'm using Jupyter Lab and I'm having trouble adding a conda environment. The idea is to launch Jupyter Lab from my base environment, and then to be able to choose my other conda envs as kernels. I installed the package nb_conda_kernels which is supposed to do just that, but it's not working as I want. Indeed, let's assume I create a new Conda Environment, then I launch jupyter lab from base, I can't see the new environment as an available kernel. I have found a \"fix\", which works every time but is not convenient at all. If I install Jupyter Notebook in my new environment, then launch a jupyter notebook from this new environment, close it, go back to base environment, and then launch Jupyter Lab from base environment, my new environment is available as a kernel in Jupyter Lab. If you know how to make it work without this \"fix\", I would be very grateful.", "output": "A solution using nb_conda_kernels.
First, install it in your base environment : (base)$ conda install -c conda-forge nb_conda_kernels Then in order to get a kernel for the conda_env cenv : $ conda activate cenv (cenv)$ conda install ipykernel (cenv)$ conda deactivate You will get a new kernel named Python [conda env:cenv] in your next run of jupyter lab / jupyter notebook Note : If you have installed nb_conda_kernels, and want to create a new conda environment and have it accessible right away then conda create -n new_env_name ipykernel will do the job."} +{"question_id": 14692690, "score": 222, "creation_date": 1360001075, "tags": ["python", "dictionary"], "instruction": "Access nested dictionary items via a list of keys?\n\nI have a complex dictionary structure which I would like to access via a list of keys to address the correct item. dataDict = { \"a\":{ \"r\": 1, \"s\": 2, \"t\": 3 }, \"b\":{ \"u\": 1, \"v\": { \"x\": 1, \"y\": 2, \"z\": 3 }, \"w\": 3 } } maplist = [\"a\", \"r\"] or maplist = [\"b\", \"v\", \"y\"] I have made the following code which works but I'm sure there is a better and more efficient way to do this if anyone has an idea. # Get a given data from a dictionary with position provided as a list def getFromDict(dataDict, mapList): for k in mapList: dataDict = dataDict[k] return dataDict # Set a given data in a dictionary with position provided as a list def setInDict(dataDict, mapList, value): for k in mapList[:-1]: dataDict = dataDict[k] dataDict[mapList[-1]] = value", "output": "Use reduce() to traverse the dictionary: from functools import reduce # forward compatibility for Python 3 import operator def getFromDict(dataDict, mapList): return reduce(operator.getitem, mapList, dataDict) and reuse getFromDict to find the location to store the value for setInDict(): def setInDict(dataDict, mapList, value): getFromDict(dataDict, mapList[:-1])[mapList[-1]] = value All but the last element in mapList is needed to find the 'parent' dictionary to add the value to, then use the last element to set the value to the right key. Demo: >>> getFromDict(dataDict, [\"a\", \"r\"]) 1 >>> getFromDict(dataDict, [\"b\", \"v\", \"y\"]) 2 >>> setInDict(dataDict, [\"b\", \"v\", \"w\"], 4) >>> import pprint >>> pprint.pprint(dataDict) {'a': {'r': 1, 's': 2, 't': 3}, 'b': {'u': 1, 'v': {'w': 4, 'x': 1, 'y': 2, 'z': 3}, 'w': 3}} Note that the Python PEP8 style guide prescribes snake_case names for functions. The above works equally well for lists or a mix of dictionaries and lists, so the names should really be get_by_path() and set_by_path(): from functools import reduce # forward compatibility for Python 3 import operator def get_by_path(root, items): \"\"\"Access a nested object in root by item sequence.\"\"\" return reduce(operator.getitem, items, root) def set_by_path(root, items, value): \"\"\"Set a value in a nested object in root by item sequence.\"\"\" get_by_path(root, items[:-1])[items[-1]] = value And for completion's sake, a function to delete a key: def del_by_path(root, items): \"\"\"Delete a key-value in a nested object in root by item sequence.\"\"\" del get_by_path(root, items[:-1])[items[-1]]"} +{"question_id": 13730468, "score": 222, "creation_date": 1354733953, "tags": ["python", "numpy"], "instruction": "From ND to 1D arrays\n\nSay I have an array a: a = np.array([[1,2,3], [4,5,6]]) array([[1, 2, 3], [4, 5, 6]]) I would like to convert it to a 1D array (i.e. 
a column vector): b = np.reshape(a, (1,np.product(a.shape))) but this returns array([[1, 2, 3, 4, 5, 6]]) which is not the same as: array([1, 2, 3, 4, 5, 6]) I can take the first element of this array to manually convert it to a 1D array: b = np.reshape(a, (1,np.product(a.shape)))[0] but this requires me to know how many dimensions the original array has (and concatenate [0]'s when working with higher dimensions) Is there a dimensions-independent way of getting a column/row vector from an arbitrary ndarray?", "output": "Use np.ravel (for a 1D view) or np.ndarray.flatten (for a 1D copy) or np.ndarray.flat (for an 1D iterator): In [12]: a = np.array([[1,2,3], [4,5,6]]) In [13]: b = a.ravel() In [14]: b Out[14]: array([1, 2, 3, 4, 5, 6]) Note that ravel() returns a view of a when possible. So modifying b also modifies a. ravel() returns a view when the 1D elements are contiguous in memory, but would return a copy if, for example, a were made from slicing another array using a non-unit step size (e.g. a = x[::2]). If you want a copy rather than a view, use In [15]: c = a.flatten() If you just want an iterator, use np.ndarray.flat: In [20]: d = a.flat In [21]: d Out[21]: <numpy.flatiter object at 0x8ec2068> In [22]: list(d) Out[22]: [1, 2, 3, 4, 5, 6]"} +{"question_id": 1767513, "score": 222, "creation_date": 1258675772, "tags": ["python"], "instruction": "How to read first N lines of a file?\n\nWe have a large raw data file that we would like to trim to a specified size. How would I go about getting the first N lines of a text file in python? Will the OS being used have any effect on the implementation?", "output": "Python 3: with open(path_to_file) as input_file: head = [next(input_file) for _ in range(lines_number)] print(head) Python 2: with open(path_to_file) as input_file: head = [next(input_file) for _ in xrange(lines_number)] print head Here's another way (both Python 2 & 3): from itertools import islice with open(path_to_file) as input_file: head = list(islice(input_file, lines_number)) print(head)"} +{"question_id": 30405867, "score": 222, "creation_date": 1432328763, "tags": ["python", "ssl", "python-requests"], "instruction": "How can I get Python Requests to trust a self-signed SSL certificate?\n\nConsider: import requests data = {'foo':'bar'} url = 'https://example.com/bar' r = requests.post(url, data=data) If the URL uses a self-signed certificate, this fails with: requests.exceptions.SSLError: [Errno 1] _ssl.c:507: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed I know that I can pass False to the verify parameter, like this: r = requests.post(url, data=data, verify=False) However, I would like to point Requests to a copy of the public key on disk and tell it to trust that certificate.", "output": "Try: r = requests.post(url, data=data, verify='/path/to/public_key.pem')"} +{"question_id": 2546207, "score": 222, "creation_date": 1269961051, "tags": ["python", "django", "sqlalchemy"], "instruction": "Does SQLAlchemy have an equivalent of Django's get_or_create?\n\nI want to get an object from the database if it already exists (based on provided parameters) or create it if it does not. Django's get_or_create (or source) does this. Is there an equivalent shortcut in SQLAlchemy? 
I'm currently writing it out explicitly like this: def get_or_create_instrument(session, serial_number): instrument = session.query(Instrument).filter_by(serial_number=serial_number).first() if instrument: return instrument else: instrument = Instrument(serial_number) session.add(instrument) return instrument", "output": "That's basically the way to do it, there is no shortcut readily available AFAIK. You could generalize it of course: from sqlalchemy.sql.expression import ClauseElement def get_or_create(session, model, defaults=None, **kwargs): instance = session.query(model).filter_by(**kwargs).one_or_none() if instance: return instance, False else: params = {k: v for k, v in kwargs.items() if not isinstance(v, ClauseElement)} params.update(defaults or {}) instance = model(**params) try: session.add(instance) session.commit() except Exception: # The actual exception depends on the specific database so we catch all exceptions. This is similar to the official documentation: https://docs.sqlalchemy.org/en/latest/orm/session_transaction.html session.rollback() instance = session.query(model).filter_by(**kwargs).one() return instance, False else: return instance, True 2020 update (Python 3.9+ ONLY) Here is a cleaner version with Python 3.9's new dict union operator (|=): def get_or_create(session, model, defaults=None, **kwargs): instance = session.query(model).filter_by(**kwargs).one_or_none() if instance: return instance, False else: kwargs |= defaults or {} instance = model(**kwargs) try: session.add(instance) session.commit() except Exception: # The actual exception depends on the specific database so we catch all exceptions. This is similar to the official documentation: https://docs.sqlalchemy.org/en/latest/orm/session_transaction.html session.rollback() instance = session.query(model).filter_by(**kwargs).one() return instance, False else: return instance, True Note: Similar to the Django version, this will catch duplicate key constraints and similar errors. If your get or create is not guaranteed to return a single result, it can still result in race conditions. To alleviate some of that issue you would need to add another one_or_none() style fetch right after the session.commit(). This still is no 100% guarantee against race conditions unless you also use a with_for_update() or serializable transaction mode."} +{"question_id": 48083405, "score": 222, "creation_date": 1515005310, "tags": ["python", "pandas", "parquet", "feather", "pyarrow"], "instruction": "What are the differences between feather and parquet?\n\nBoth are columnar (disk-)storage formats for use in data analysis systems. Both are integrated within Apache Arrow (pyarrow package for python) and are designed to correspond with Arrow as a columnar in-memory analytics layer. How do both formats differ? Should you always prefer feather when working with pandas when possible? What are the use cases where feather is more suitable than parquet and the other way round? Appendix I found some hints here https://github.com/wesm/feather/issues/188, but given the young age of this project, it's possibly a bit out of date.
Not a serious speed test because I'm just dumping and loading a whole Dataframe but to give you some impression if you never heard of the formats before: # IPython import numpy as np import pandas as pd import pyarrow as pa import pyarrow.feather as feather import pyarrow.parquet as pq import fastparquet as fp df = pd.DataFrame({'one': [-1, np.nan, 2.5], 'two': ['foo', 'bar', 'baz'], 'three': [True, False, True]}) print(\"pandas df to disk ####################################################\") print('example_feather:') %timeit feather.write_feather(df, 'example_feather') # 2.62 ms \u00b1 35.8 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 100 loops each) print('example_parquet:') %timeit pq.write_table(pa.Table.from_pandas(df), 'example.parquet') # 3.19 ms \u00b1 51 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 100 loops each) print() print(\"for comparison:\") print('example_pickle:') %timeit df.to_pickle('example_pickle') # 2.75 ms \u00b1 18.8 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 100 loops each) print('example_fp_parquet:') %timeit fp.write('example_fp_parquet', df) # 7.06 ms \u00b1 205 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 1 loop each) print('example_hdf:') %timeit df.to_hdf('example_hdf', 'key_to_store', mode='w', table=True) # 24.6 ms \u00b1 4.45 ms per loop (mean \u00b1 std. dev. of 7 runs, 100 loops each) print() print(\"pandas df from disk ##################################################\") print('example_feather:') %timeit feather.read_feather('example_feather') # 969 \u00b5s \u00b1 1.8 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 1000 loops each) print('example_parquet:') %timeit pq.read_table('example.parquet').to_pandas() # 1.9 ms \u00b1 5.5 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 1000 loops each) print(\"for comparison:\") print('example_pickle:') %timeit pd.read_pickle('example_pickle') # 1.07 ms \u00b1 6.21 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 1000 loops each) print('example_fp_parquet:') %timeit fp.ParquetFile('example_fp_parquet').to_pandas() # 4.53 ms \u00b1 260 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 1 loop each) print('example_hdf:') %timeit pd.read_hdf('example_hdf') # 10 ms \u00b1 43.4 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 100 loops each) # pandas version: 0.22.0 # fastparquet version: 0.1.3 # numpy version: 1.13.3 # pandas version: 0.22.0 # pyarrow version: 0.8.0 # sys.version: 3.6.3 # example Dataframe taken from https://arrow.apache.org/docs/python/parquet.html", "output": "Parquet format is designed for long-term storage, where Arrow is more intended for short term or ephemeral storage (Arrow may be more suitable for long-term storage after the 1.0.0 release happens, since the binary format will be stable then) Parquet is more expensive to write than Feather as it features more layers of encoding and compression. Feather is unmodified raw columnar Arrow memory. We will probably add simple compression to Feather in the future. Due to dictionary encoding, RLE encoding, and data page compression, Parquet files will often be much smaller than Feather files Parquet is a standard storage format for analytics that's supported by many different systems: Spark, Hive, Impala, various AWS services, in future by BigQuery, etc. So if you are doing analytics, Parquet is a good option as a reference storage format for query by multiple systems The benchmarks you showed are going to be very noisy since the data you read and wrote is very small. 
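To make the comparison more meaningful, it helps to benchmark on a much larger synthetic frame first; the row count and column layout below are arbitrary choices and this assumes pandas with pyarrow installed, so treat it as a rough sketch rather than a definitive benchmark:

import numpy as np
import pandas as pd

# Build a frame with a few million rows of mixed dtypes (sizes are arbitrary).
n = 5_000_000
big = pd.DataFrame({
    'one': np.random.randn(n),                             # floats
    'two': np.random.choice(['foo', 'bar', 'baz'], size=n),# low-cardinality strings
    'three': np.random.rand(n) > 0.5,                      # booleans
})

# pandas writes both formats directly when pyarrow is available.
big.to_feather('big.feather')
big.to_parquet('big.parquet')

Re-running the %timeit calls above on a frame like this, and comparing the resulting file sizes, gives a much clearer picture of the encoding and compression trade-offs described in the answer.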
You should try compressing at least 100MB or upwards 1GB of data to get some more informative benchmarks, see e.g. http://wesmckinney.com/blog/python-parquet-multithreading/"} +{"question_id": 320929, "score": 221, "creation_date": 1227710613, "tags": ["python", "formatting", "currency"], "instruction": "Currency formatting in Python\n\nI am looking to format a number like 188518982.18 to \u00a3188,518,982.18 using Python. How can I do this?", "output": "See the locale module. This does currency (and date) formatting. >>> import locale >>> locale.setlocale( locale.LC_ALL, '' ) 'English_United States.1252' >>> locale.currency( 188518982.18 ) '$188518982.18' >>> locale.currency( 188518982.18, grouping=True ) '$188,518,982.18'"} +{"question_id": 4938429, "score": 221, "creation_date": 1297199079, "tags": ["python"], "instruction": "How do we determine the number of days for a given month in python\n\nI need to calculate the number of days for a given month in python. If a user inputs Feb 2011 the program should be able to tell me that Feb 2011 has 28 days. Could anyone tell me which library I should use to determine the length of a given month?", "output": "You should use calendar.monthrange: >>> from calendar import monthrange >>> monthrange(2011, 2) (1, 28) Just to be clear, monthrange supports leap years as well: >>> from calendar import monthrange >>> monthrange(2012, 2) (2, 29) As @mikhail-pyrev mentions in a comment: First number is the weekday of the first day of the month, the second number is the number of days in said month."} +{"question_id": 15514593, "score": 221, "creation_date": 1363749019, "tags": ["python", "jupyter-notebook", "ipython", "jupyter", "importerror"], "instruction": "\"ImportError: No module named\" when trying to run Python script\n\nI'm trying to run a script that launches, amongst other things, a python script. I get a ImportError: No module named ..., however, if I launch ipython and import the same module in the same way through the interpreter, the module is accepted. What's going on, and how can I fix it? I've tried to understand how python uses PYTHONPATH but I'm thoroughly confused. Any help would greatly appreciated.", "output": "This issue arises due to the ways in which the command line IPython interpreter uses your current path vs. the way a separate process does (be it an IPython notebook, external process, etc). IPython will look for modules to import that are not only found in your sys.path, but also on your current working directory. When starting an interpreter from the command line, the current directory you're operating in is the same one you started ipython in. If you run import os os.getcwd() you'll see this is true. However, let's say you're using an ipython notebook, run os.getcwd() and your current working directory is instead the folder in which you told the notebook to operate from in your ipython_notebook_config.py file (typically using the c.NotebookManager.notebook_dir setting). The solution is to provide the python interpreter with the path-to-your-module. The simplest solution is to append that path to your sys.path list. In your notebook, first try: import sys sys.path.append('my/path/to/module/folder') import module_of_interest If that doesn't work, you've got a different problem on your hands unrelated to path-to-import and you should provide more info about your problem. 
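A quick way to see why the two environments disagree is to print what each interpreter is actually using; this is just a diagnostic sketch with no project-specific names assumed:

import os
import sys

print(sys.executable)  # which Python binary is running
print(os.getcwd())     # the current working directory
print(sys.path)        # the directories searched for imports

Running this in both the IPython prompt and the failing notebook or process, then comparing the output, usually reveals the folder that is missing from sys.path.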
The better (and more permanent) way to solve this is to set your PYTHONPATH, which provides the interpreter with additional directories look in for python packages/modules. Editing or setting the PYTHONPATH as a global var is os dependent, and is discussed in detail here for Unix or Windows."} +{"question_id": 22190403, "score": 220, "creation_date": 1394001412, "tags": ["python", "python-requests", "python-3.4", "aiohttp"], "instruction": "How could I use requests in asyncio?\n\nI want to do parallel http request tasks in asyncio, but I find that python-requests would block the event loop of asyncio. I've found aiohttp but it couldn't provide the service of http request using a http proxy. So I want to know if there's a way to do asynchronous http requests with the help of asyncio.", "output": "To use requests (or any other blocking libraries) with asyncio, you can use BaseEventLoop.run_in_executor to run a function in another thread and yield from it to get the result. For example: import asyncio import requests @asyncio.coroutine def main(): loop = asyncio.get_event_loop() future1 = loop.run_in_executor(None, requests.get, 'http://www.google.com') future2 = loop.run_in_executor(None, requests.get, 'http://www.google.co.uk') response1 = yield from future1 response2 = yield from future2 print(response1.text) print(response2.text) asyncio.run(main()) This will get both responses in parallel. With python 3.5 you can use the new await/async syntax: import asyncio import requests async def main(): loop = asyncio.get_event_loop() response1 = await loop.run_in_executor(None, requests.get, 'http://www.google.com') response2 = await loop.run_in_executor(None, requests.get, 'http://www.google.co.uk') print(response1.text) print(response2.text) asyncio.run(main()) See PEP0492 for more."} +{"question_id": 3426108, "score": 220, "creation_date": 1281115046, "tags": ["python", "sorting"], "instruction": "How to sort a list of strings numerically\n\nI know that this sounds trivial, but I did not realize that the sort() function of Python was weird. I have a list of \"numbers\" that are actually in string form, so I first convert them to ints, then attempt a sort. list1=[\"1\",\"10\",\"3\",\"22\",\"23\",\"4\",\"2\",\"200\"] for item in list1: item=int(item) list1.sort() print list1 Gives me: ['1', '10', '2', '200', '22', '23', '3', '4'] I want ['1','2','3','4','10','22','23','200'] I've looked around for some of the algorithms associated with sorting numeric sets, but the ones I found all involved sorting alphanumeric sets. I know this is probably a no-brainer problem, but Google and my textbook don't offer anything more or less useful than the .sort() function.", "output": "You haven't actually converted your strings to ints. Or rather, you did, but then you didn't do anything with the results. What you want is: list1 = [\"1\",\"10\",\"3\",\"22\",\"23\",\"4\",\"2\",\"200\"] list1 = [int(x) for x in list1] list1.sort() If for some reason you need to keep strings instead of ints (usually a bad idea, but maybe you need to preserve leading zeros or something), you can use a key function. sort takes a named parameter, key, which is a function that is called on each element before it is compared. 
The key function's return values are compared instead of comparing the list elements directly: list1 = [\"1\",\"10\",\"3\",\"22\",\"23\",\"4\",\"2\",\"200\"] # call int(x) on each element before comparing it list1.sort(key=int) # or if you want to do it all in the same line list1 = sorted([int(x) for x in list1])"} +{"question_id": 200020, "score": 220, "creation_date": 1223958709, "tags": ["python"], "instruction": "Get last result in interactive Python shell\n\nIn many symbolic math systems, such as Matlab or Mathematica, you can use a variable like Ans or % to retrieve the last computed value. Is there a similar facility in the Python shell?", "output": "Underscore. >>> 5+5 10 >>> _ 10 >>> _ + 5 15 >>> _ 15"} +{"question_id": 696047, "score": 220, "creation_date": 1238389474, "tags": ["python", "exception", "polymorphism"], "instruction": "Re-raise exception with a different type and message, preserving existing information\n\nI'm writing a module and want to have a unified exception hierarchy for the exceptions that it can raise (e.g. inheriting from a FooError abstract class for all the foo module's specific exceptions). This allows users of the module to catch those particular exceptions and handle them distinctly, if needed. But many of the exceptions raised from the module are raised because of some other exception; e.g. failing at some task because of an OSError on a file. What I need is to \u201cwrap\u201d the exception caught such that it has a different type and message, so that information is available further up the propagation hierarchy by whatever catches the exception. But I don't want to lose the existing type, message, and stack trace; that's all useful information for someone trying to debug the problem. A top-level exception handler is no good, since I'm trying to decorate the exception before it makes its way further up the propagation stack, and the top-level handler is too late. This is partly solved by deriving my module foo's specific exception types from the existing type (e.g. class FooPermissionError(OSError, FooError)), but that doesn't make it any easier to wrap the existing exception instance in a new type, nor modify the message. Python's PEP 3134 \u201cException Chaining and Embedded Tracebacks\u201d discusses a change accepted in Python 3.0 for \u201cchaining\u201d exception objects, to indicate that a new exception was raised during the handling of an existing exception. What I'm trying to do is related: I need it also working in earlier Python versions, and I need it not for chaining, but only for polymorphism. What is the right way to do this?", "output": "Python 3 introduced exception chaining (as described in PEP 3134). This allows, when raising an exception, to cite an existing exception as the \u201ccause\u201d: try: frobnicate() except KeyError as exc: raise ValueError(\"Bad grape\") from exc The caught exception (exc, a KeyError) thereby becomes part of (is the \u201ccause of\u201d) the new exception, a ValueError. The \u201ccause\u201d is available to whatever code catches the new exception. By using this feature, the __cause__ attribute is set. The built-in exception handler also knows how to report the exception's \u201ccause\u201d and \u201ccontext\u201d along with the traceback. In Python 2, it appears this use case has no good answer (as described by Ian Bicking and Ned Batchelder). 
Bummer."} +{"question_id": 800197, "score": 219, "creation_date": 1240959551, "tags": ["python", "file"], "instruction": "How to get all of the immediate subdirectories in Python\n\nI'm trying to write a simple Python script that will copy a index.tpl to index.html in all of the subdirectories (with a few exceptions). I'm getting bogged down by trying to get the list of subdirectories.", "output": "I did some speed testing on various functions to return the full path to all current subdirectories. tl;dr: Always use scandir: list_subfolders_with_paths = [f.path for f in os.scandir(path) if f.is_dir()] Bonus: With scandir you can also simply only get folder names by using f.name instead of f.path. This (as well as all other functions below) will not use natural sorting. This means results will be sorted like this: 1, 10, 2. To get natural sorting (1, 2, 10), please have a look at https://stackoverflow.com/a/48030307/2441026 Results: scandir is: 3x faster than walk, 32x faster than listdir (with filter), 35x faster than Pathlib and 36x faster than listdir and 37x (!) faster than glob. Scandir: 0.977 Walk: 3.011 Listdir (filter): 31.288 Pathlib: 34.075 Listdir: 35.501 Glob: 36.277 Tested with W7x64, Python 3.8.1. Folder with 440 subfolders. In case you wonder if listdir could be speed up by not doing os.path.join() twice, yes, but the difference is basically nonexistent. Code: import os import pathlib import timeit import glob path = r\"<example_path>\" def a(): list_subfolders_with_paths = [f.path for f in os.scandir(path) if f.is_dir()] # print(len(list_subfolders_with_paths)) def b(): list_subfolders_with_paths = [os.path.join(path, f) for f in os.listdir(path) if os.path.isdir(os.path.join(path, f))] # print(len(list_subfolders_with_paths)) def c(): list_subfolders_with_paths = [] for root, dirs, files in os.walk(path): for dir in dirs: list_subfolders_with_paths.append( os.path.join(root, dir) ) break # print(len(list_subfolders_with_paths)) def d(): list_subfolders_with_paths = glob.glob(path + '/*/') # print(len(list_subfolders_with_paths)) def e(): list_subfolders_with_paths = list(filter(os.path.isdir, [os.path.join(path, f) for f in os.listdir(path)])) # print(len(list(list_subfolders_with_paths))) def f(): p = pathlib.Path(path) list_subfolders_with_paths = [x for x in p.iterdir() if x.is_dir()] # print(len(list_subfolders_with_paths)) print(f\"Scandir: {timeit.timeit(a, number=1000):.3f}\") print(f\"Listdir: {timeit.timeit(b, number=1000):.3f}\") print(f\"Walk: {timeit.timeit(c, number=1000):.3f}\") print(f\"Glob: {timeit.timeit(d, number=1000):.3f}\") print(f\"Listdir (filter): {timeit.timeit(e, number=1000):.3f}\") print(f\"Pathlib: {timeit.timeit(f, number=1000):.3f}\")"} +{"question_id": 23853553, "score": 219, "creation_date": 1401007800, "tags": ["python", "pandas", "dataframe", "csv", "file-io"], "instruction": "Python Pandas: How to read only first n rows of CSV files in?\n\nI have a very large data set and I can't afford to read the entire data set in. So, I'm thinking of reading only one chunk of it to train but I have no idea how to do it.", "output": "If you only want to read the first 999,999 (non-header) rows: read_csv(..., nrows=999999) If you only want to read rows 1,000,000 ... 1,999,999 read_csv(..., skiprows=1000000, nrows=999999) nrows : int, default None Number of rows of file to read. 
Useful for reading pieces of large files* skiprows : list-like or integer Row numbers to skip (0-indexed) or number of rows to skip (int) at the start of the file and for large files, you'll probably also want to use chunksize: chunksize : int, default None Return TextFileReader object for iteration pandas.io.parsers.read_csv documentation"} +{"question_id": 8900166, "score": 219, "creation_date": 1326826954, "tags": ["python", "list", "tuples"], "instruction": "What's the difference between lists enclosed by square brackets and parentheses in Python?\n\n>>> x=[1,2] >>> x[1] 2 >>> x=(1,2) >>> x[1] 2 Are they both valid? Is one preferred for some reason?", "output": "Square brackets are lists while parentheses are tuples. A list is mutable, meaning you can change its contents: >>> x = [1,2] >>> x.append(3) >>> x [1, 2, 3] while tuples are not: >>> x = (1,2) >>> x (1, 2) >>> x.append(3) Traceback (most recent call last): File \"<stdin>\", line 1, in <module> AttributeError: 'tuple' object has no attribute 'append' The other main difference is that a tuple is hashable, meaning that you can use it as a key to a dictionary, among other things. For example: >>> x = (1,2) >>> y = [1,2] >>> z = {} >>> z[x] = 3 >>> z {(1, 2): 3} >>> z[y] = 4 Traceback (most recent call last): File \"<stdin>\", line 1, in <module> TypeError: unhashable type: 'list' Note that, as many people have pointed out, you can add tuples together. For example: >>> x = (1,2) >>> x += (3,) >>> x (1, 2, 3) However, this does not mean tuples are mutable. In the example above, a new tuple is constructed by adding together the two tuples as arguments. The original tuple is not modified. To demonstrate this, consider the following: >>> x = (1,2) >>> y = x >>> x += (3,) >>> x (1, 2, 3) >>> y (1, 2) Whereas, if you were to construct this same example with a list, y would also be updated: >>> x = [1, 2] >>> y = x >>> x += [3] >>> x [1, 2, 3] >>> y [1, 2, 3]"} +{"question_id": 9170288, "score": 218, "creation_date": 1328582267, "tags": ["python", "json", "twitter", "pretty-print"], "instruction": "Pretty-Print JSON Data to a File using Python\n\nA project for class involves parsing Twitter JSON data. I'm getting the data and setting it to the file without much trouble, but it's all in one line. This is fine for the data manipulation I'm trying to do, but the file is ridiculously hard to read and I can't examine it very well, making the code writing for the data manipulation part very difficult. Does anyone know how to do that from within Python (i.e. not using the command line tool, which I can't get to work)? Here's my code so far: header, output = client.request(twitterRequest, method=\"GET\", body=None, headers=None, force_auth_header=True) # now write output to a file twitterDataFile = open(\"twitterData.json\", \"wb\") # magic happens here to make it pretty-printed twitterDataFile.write(output) twitterDataFile.close() Note I appreciate people pointing me to simplejson documentation and such, but as I have stated, I have already looked at that and continue to need assistance. A truly helpful reply will be more detailed and explanatory than the examples found there. Thanks Also: Trying this in the windows command line: more twitterData.json | python -mjson.tool > twitterData-pretty.json results in this: Invalid control character at: line 1 column 65535 (char 65535) I'd give you the data I'm using, but it's very large and you've already seen the code I used to make the file.", "output": "You should use the optional argument indent. 
header, output = client.request(twitterRequest, method=\"GET\", body=None, headers=None, force_auth_header=True) # now write output to a file with open(\"twitterData.json\", \"w\") as twitterDataFile: # magic happens here to make it pretty-printed twitterDataFile.write( simplejson.dumps(simplejson.loads(output), indent=4, sort_keys=True) )"} +{"question_id": 55409641, "score": 218, "creation_date": 1553826270, "tags": ["python", "python-3.x", "jupyter-notebook", "python-asyncio"], "instruction": "\"asyncio.run() cannot be called from a running event loop\" when using Jupyter Notebook\n\nI would like to use asyncio to get webpage html. I run the following code in jupyter notebook: import aiofiles import aiohttp from aiohttp import ClientSession async def get_info(url, session): resp = await session.request(method=\"GET\", url=url) resp.raise_for_status() html = await resp.text(encoding='GB18030') with open('test_asyncio.html', 'w', encoding='utf-8-sig') as f: f.write(html) return html async def main(urls): async with ClientSession() as session: tasks = [get_info(url, session) for url in urls] return await asyncio.gather(*tasks) if __name__ == \"__main__\": url = ['http://huanyuntianxiazh.fang.com/house/1010123799/housedetail.htm', 'http://zhaoshangyonghefu010.fang.com/house/1010126863/housedetail.htm'] result = asyncio.run(main(url)) However, it returns RuntimeError: asyncio.run() cannot be called from a running event loop What is the problem? How to solve it?", "output": "The asyncio.run() documentation says: This function cannot be called when another asyncio event loop is running in the same thread. In your case, jupyter (IPython \u2265 7.0) is already running an event loop: You can now use async/await at the top level in the IPython terminal and in the notebook, it should \u2014 in most of the cases \u2014 \u201cjust work\u201d. Update IPython to version 7+, IPykernel to version 5+, and you\u2019re off to the races. Therefore you don't need to start the event loop yourself and can instead call await main(url) directly, even if your code lies outside any asynchronous function. Modern Jupyter lab/notebook Use the following for newer versions of Jupyter (IPython \u2265 7.0): async def main(): print(1) await main() Python or older IPython If you are using Python \u2265 3.7 or IPython < 7.0, use the following: import asyncio async def main(): print(1) asyncio.run(main()) That's also the form you should use if you are running this in a python REPL or in an independent script (a bot, a web scraper, etc.). If you are using an older version of python (< 3.7), the API to run asynchronous code was a bit less elegant: import asyncio async def main(): print(1) loop = asyncio.get_event_loop() loop.run_until_complete(main()) Using await in your code In your case, you can call await main(url) as follows: url = ['url1', 'url2'] result = await main(url) for text in result: pass # text contains your html (text) response This change to recent versions of IPython makes notebook code simpler and more intuitive for beginners. Further notices A few remarks that might help you in different use cases. Jupyter vs. IPython caution There is a slight difference in how Jupyter uses the loop compared to IPython. [...] IPykernel having a persistent asyncio loop running, while Terminal IPython starts and stops a loop for each code block. This can lead to unexpected issues. Google Colab In the past, Google colab required you to do more complex loop manipulations like presented in some other answers here.
Now plain await main() should just work like in IPython \u2265 7.0 (tested on Colab version 2023/08/18). Python REPL You can also run the python REPL using the asyncio concurrent context. As explained in asyncio's documentation: $ python -m asyncio asyncio REPL ... Use \"await\" directly instead of \"asyncio.run()\". >>> import asyncio >>> await asyncio.sleep(10, result='hello') 'hello' The asyncio REPL should be available for python \u2265 3.8.1. When does asyncio.run matters and why? Older versions of IPython were running in a synchronous context, which is why calling asyncio.run was mandatory. The asyncio.run function allows to run asynchronous code from a synchronous context by doing the following: starts an event loop, runs the async function passed as argument in this (new) event loop, stops the event loop once the function returned In more technical terms (notice how the argument function is called a coroutine): This function runs the passed coroutine, taking care of managing the asyncio event loop, finalizing asynchronous generators, and closing the threadpool. What happen when using await in synchronous context? If you happen to use await in a synchronous context you would get the one of the following errors: SyntaxError: 'await' outside function SyntaxError: 'await' outside async function In that case that means you need to use asyncio.run(main()) instead of await main()."} +{"question_id": 43298872, "score": 218, "creation_date": 1491676278, "tags": ["python", "pip"], "instruction": "How to solve ReadTimeoutError: HTTPSConnectionPool(host='pypi.python.org', port=443) with pip?\n\nI recently need to install some packages pip install future pip install scikit-learn pip install numpy pip install scipy I also tried by writin sudo before them but all it came up with the following errors in red lines: Exception: Traceback (most recent call last): File \"/usr/lib/python2.7/dist-packages/pip/basecommand.py\", line 122, in main status = self.run(options, args) File \"/usr/lib/python2.7/dist-packages/pip/commands/install.py\", line 290, in run requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle) File \"/usr/lib/python2.7/dist-packages/pip/req.py\", line 1198, in prepare_files do_download, File \"/usr/lib/python2.7/dist-packages/pip/req.py\", line 1376, in unpack_url self.session, File \"/usr/lib/python2.7/dist-packages/pip/download.py\", line 572, in unpack_http_url download_hash = _download_url(resp, link, temp_location) File \"/usr/lib/python2.7/dist-packages/pip/download.py\", line 433, in _download_url for chunk in resp_read(4096): File \"/usr/lib/python2.7/dist-packages/pip/download.py\", line 421, in resp_read chunk_size, decode_content=False): File \"/usr/lib/python2.7/dist-packages/urllib3/response.py\", line 256, in stream data = self.read(amt=amt, decode_content=decode_content) File \"/usr/lib/python2.7/dist-packages/urllib3/response.py\", line 201, in read raise ReadTimeoutError(self._pool, None, 'Read timed out.') ReadTimeoutError: HTTPSConnectionPool(host='pypi.python.org', port=443): Read timed out. 
Storing debug log for failure in /root/.pip/pip.log'", "output": "Use --default-timeout=100 parameter with the install: sudo pip install --default-timeout=100 future"} +{"question_id": 1574088, "score": 218, "creation_date": 1255630030, "tags": ["python", "matplotlib", "timestamp", "x-axis"], "instruction": "Plotting time on the independent axis\n\nI have an array of timestamps in the format (HH:MM:SS.mmmmmm) and another array of floating point numbers, each corresponding to a value in the timestamp array. Can I plot time on the x axis and the numbers on the y-axis using Matplotlib? I was trying to, but somehow it was only accepting arrays of floats. How can I get it to plot the time? Do I have to modify the format in any way?", "output": "Update: This answer is outdated since matplotlib version 3.5. The plot function now handles datetime data directly. See https://matplotlib.org/3.5.1/api/_as_gen/matplotlib.pyplot.plot_date.html The use of plot_date is discouraged. This method exists for historic reasons and may be deprecated in the future. datetime-like data should directly be plotted using plot. If you need to plot plain numeric data as Matplotlib date format or need to set a timezone, call ax.xaxis.axis_date / ax.yaxis.axis_date before plot. See Axis.axis_date. Old, outdated answer: You must first convert your timestamps to Python datetime objects (use datetime.strptime). Then use date2num to convert the dates to matplotlib format. Plot the dates and values using plot_date: import matplotlib.pyplot as plt import matplotlib.dates from datetime import datetime x_values = [datetime(2021, 11, 18, 12), datetime(2021, 11, 18, 14), datetime(2021, 11, 18, 16)] y_values = [1.0, 3.0, 2.0] dates = matplotlib.dates.date2num(x_values) plt.plot_date(dates, y_values)"} +{"question_id": 22241420, "score": 218, "creation_date": 1394166338, "tags": ["python", "module", "package"], "instruction": "Execution of Python code with -m option or not\n\nThe python interpreter has -m module option that \"Runs library module module as a script\". With this python code a.py: if __name__ == \"__main__\": print __package__ print __name__ I tested python -m a to get \"\" <-- Empty String __main__ whereas python a.py returns None <-- None __main__ To me, those two invocation seems to be the same except __package__ is not None when invoked with -m option. Interestingly, with python -m runpy a, I get the same as python -m a with python module compiled to get a.pyc. What's the (practical) difference between these invocations? Any pros and cons between them? Also, David Beazley's Python Essential Reference explains it as \"The -m option runs a library module as a script which executes inside the __main__ module prior to the execution of the main script\". What does it mean?", "output": "When you use the -m command-line flag, Python will import a module or package for you, then run it as a script. When you don't use the -m flag, the file you named is run as just a script. The distinction is important when you try to run a package. There is a big difference between: python foo/bar/baz.py and python -m foo.bar.baz as in the latter case, foo.bar is imported and relative imports will work correctly with foo.bar as the starting point. 
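As a concrete illustration (using the same hypothetical foo/bar layout as the demo below, plus an imagined sibling module foo/bar/helper.py), a relative import inside baz.py only resolves in the -m form:

# foo/bar/baz.py
from . import helper  # works with: python -m foo.bar.baz
                       # fails with: python foo/bar/baz.py
                       # (the relative import has no parent package when run by file path)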
Demo: $ mkdir -p test/foo/bar $ touch test/foo/__init__.py $ touch test/foo/bar/__init__.py $ cat << EOF > test/foo/bar/baz.py > if __name__ == \"__main__\": > print __package__ > print __name__ > > EOF $ PYTHONPATH=test python test/foo/bar/baz.py None __main__ $ PYTHONPATH=test python -m foo.bar.baz foo.bar __main__ As a result, Python has to actually care about packages when using the -m switch. A normal script can never be a package, so __package__ is set to None. But run a package or module inside a package with -m and now there is at least the possibility of a package, so the __package__ variable is set to a string value; in the above demonstration it is set to 'foo.bar', for plain modules not inside a package it is set to an empty string. As for the __main__ module, Python imports scripts being run as it would import regular modules. A new module object is created to hold the global namespace and is stored in sys.modules['__main__']. This is what the __name__ variable refers to, it is a key in that structure. For packages, you can create a __main__.py module inside and have that run when running python -m package_name; in fact that is the only way you can run a package as a script: $ PYTHONPATH=test python -m foo.bar python: No module named foo.bar.__main__; 'foo.bar' is a package and cannot be directly executed $ cp test/foo/bar/baz.py test/foo/bar/__main__.py $ PYTHONPATH=test python -m foo.bar foo.bar __main__ So, when naming a package for running with -m, Python looks for a __main__ module contained in that package and executes that as a script. Its name is then still set to '__main__' and the module object is still stored in sys.modules['__main__']."} +{"question_id": 8406242, "score": 218, "creation_date": 1323202610, "tags": ["python", "pycharm"], "instruction": "Why does Pycharm's inspector complain about \"d = {}\"?\n\nWhen initializing a dictionary with d = {} Pycharm's code inspector generates a warning, saying This dictionary creation could be rewritten as a dictionary literal. If I rewrite it d = dict() the warning goes away. Since {} already is a dictionary literal, I'm pretty sure the message is erroneous. Furthermore, it seems like both d = {} and d = dict() are valid and Pythonic. This related question seems to conclude that the choice is just a matter of style/preference: differences between \"d = dict()\" and \"d = {}\" Why would Pycharm complain about d = {}? UPDATE: Mac nailed it. The warning actually applied to multiple lines, not just the one that was flagged. Pycharm seems to look for a sequence of consecutive statements where you initialize a dictionary and then set values in the dictionary. For example, this will trigger the warning: d = {} d['a'] = 1 But this code will not: d = {} pass d['a'] = 1", "output": "What is the code following your dictionary declaration? I think PyCharm will trigger the error if you have something like: dic = {} dic['aaa'] = 5 as you could have written dic = {'aaa': 5} Note: The fact that the error goes away if you use the function dict(). This doesn't necessarily mean that pycharm believes dict() is a literal. It could just mean that it doesn't complain about it: dic = dict() dic['aaa'] = 5"} +{"question_id": 7479777, "score": 218, "creation_date": 1316489217, "tags": ["python", "datetime", "time"], "instruction": "Difference between Python datetime vs time modules\n\nI am trying to figure out the differences between the datetime and time modules, and what each should be used for. 
I know that datetime provides both dates and time. What is the use of the time module? Examples would be appreciated and differences concerning timezones would especially be of interest.", "output": "The time module is principally for working with Unix time stamps; expressed as a floating point number taken to be seconds since the Unix epoch. the datetime module can support many of the same operations, but provides a more object oriented set of types, and also has some limited support for time zones."} +{"question_id": 15058462, "score": 217, "creation_date": 1361752418, "tags": ["python", "selenium-webdriver", "cookies", "webdriver"], "instruction": "How to save and load cookies using Python + Selenium WebDriver\n\nHow can I save all cookies in Python's Selenium WebDriver to a .txt file, and then load them later? The documentation doesn't say much of anything about the getCookies function.", "output": "You can save the current cookies as a Python object using pickle. For example: import pickle import selenium.webdriver driver = selenium.webdriver.Firefox() driver.get(\"http://www.google.com\") pickle.dump(driver.get_cookies(), open(\"cookies.pkl\", \"wb\")) And later to add them back: import pickle import selenium.webdriver driver = selenium.webdriver.Firefox() driver.get(\"http://www.google.com\") cookies = pickle.load(open(\"cookies.pkl\", \"rb\")) for cookie in cookies: driver.add_cookie(cookie)"} +{"question_id": 18262293, "score": 217, "creation_date": 1376602583, "tags": ["python", "file", "pipe", "stdout", "stdin"], "instruction": "How to open every file in a folder\n\nI have a python script parse.py, which in the script open a file, say file1, and then do something maybe print out the total number of characters. filename = 'file1' f = open(filename, 'r') content = f.read() print filename, len(content) Right now, I am using stdout to direct the result to my output file - output python parse.py >> output However, I don't want to do this file by file manually, is there a way to take care of every single file automatically? Like ls | awk '{print}' | python parse.py >> output Then the problem is how could I read the file name from standardin? or there are already some built-in functions to do the ls and those kind of work easily? 
Thanks!", "output": "Os You can list all files in the current directory using os.listdir: import os for filename in os.listdir(os.getcwd()): with open(os.path.join(os.getcwd(), filename), 'r') as f: # open in readonly mode # do your stuff Glob Or you can list only some files, depending on the file pattern using the glob module: import os, glob for filename in glob.glob('*.txt'): with open(os.path.join(os.getcwd(), filename), 'r') as f: # open in readonly mode # do your stuff It doesn't have to be the current directory you can list them in any path you want: import os, glob path = '/some/path/to/file' for filename in glob.glob(os.path.join(path, '*.txt')): with open(os.path.join(os.getcwd(), filename), 'r') as f: # open in readonly mode # do your stuff Pipe Or you can even use the pipe as you specified using fileinput import fileinput for line in fileinput.input(): # do your stuff And you can then use it with piping: ls -1 | python parse.py"} +{"question_id": 44597662, "score": 217, "creation_date": 1497645654, "tags": ["python", "python-3.x", "windows-10", "anaconda", "conda"], "instruction": "Conda command is not recognized on Windows 10\n\nI installed Anaconda 4.4.0 (Python 3.6 version) on Windows 10 by following the instructions here: https://www.continuum.io/downloads. However, when I open the Command prompt window and try to write conda list I get the 'conda' command is not recognized... error. I tried to run set PATH=%PATH%;C:\\Users\\Alex\\Anaconda3 but it didn't help. I also read that I might need to edit my .bashrc file, but I don't know how to access this file, and how I should edit it.", "output": "In Windows, you will have to set the path to the location where you installed Anaconda3 to. For me, I installed anaconda3 into C:\\Anaconda3. Therefore you need to add C:\\Anaconda3 as well as C:\\Anaconda3\\Scripts\\ to your path variable, e.g. set PATH=%PATH%;C:\\Anaconda3;C:\\Anaconda3\\Scripts\\. You can do this via powershell (see above, https://learn.microsoft.com/en-us/windows/win32/shell/user-environment-variables ), or hit the windows key \u2192 enter environment \u2192 choose from settings \u2192 edit environment variables for your account \u2192 select Path variable \u2192 Edit \u2192 New. To test it, open a new dos shell, and you should be able to use conda commands now. E.g., try conda --version."} +{"question_id": 8109122, "score": 217, "creation_date": 1321150438, "tags": ["python", "mongodb", "pymongo"], "instruction": "How to sort mongodb with pymongo\n\nI'm trying to use the sort feature when querying my mongoDB, but it is failing. The same query works in the MongoDB console but not here. 
Code is as follows: import pymongo from pymongo import Connection connection = Connection() db = connection.myDB print db.posts.count() for post in db.posts.find({}, {'entities.user_mentions.screen_name':1}).sort({u'entities.user_mentions.screen_name':1}): print post The error I get is as follows: Traceback (most recent call last): File \"find_ow.py\", line 7, in <module> for post in db.posts.find({}, {'entities.user_mentions.screen_name':1}).sort({'entities.user_mentions.screen_name':1},1): File \"/Library/Python/2.6/site-packages/pymongo-2.0.1-py2.6-macosx-10.6-universal.egg/pymongo/cursor.py\", line 430, in sort File \"/Library/Python/2.6/site-packages/pymongo-2.0.1-py2.6-macosx-10.6-universal.egg/pymongo/helpers.py\", line 67, in _index_document TypeError: first item in each key pair must be a string I found a link elsewhere that says I need to place a 'u' infront of the key if using pymongo, but that didn't work either. Anyone else get this to work or is this a bug.", "output": ".sort(), in pymongo, takes key and direction as parameters. So if you want to sort by, let's say, id then you should .sort(\"_id\", 1) For multiple fields: .sort([(\"field1\", pymongo.ASCENDING), (\"field2\", pymongo.DESCENDING)])"} +{"question_id": 19859840, "score": 217, "creation_date": 1383915991, "tags": ["python"], "instruction": "Excluding directories in os.walk\n\nI'm writing a script that descends into a directory tree (using os.walk()) and then visits each file matching a certain file extension. However, since some of the directory trees that my tool will be used on also contain sub directories that in turn contain a LOT of useless (for the purpose of this script) stuff, I figured I'd add an option for the user to specify a list of directories to exclude from the traversal. This is easy enough with os.walk(). After all, it's up to me to decide whether I actually want to visit the respective files / dirs yielded by os.walk() or just skip them. The problem is that if I have, for example, a directory tree like this: root-- | --- dirA | --- dirB | --- uselessStuff -- | --- moreJunk | --- yetMoreJunk and I want to exclude uselessStuff and all its children, os.walk() will still descend into all the (potentially thousands of) sub directories of uselessStuff, which, needless to say, slows things down a lot. In an ideal world, I could tell os.walk() to not even bother yielding any more children of uselessStuff, but to my knowledge there is no way of doing that (is there?). Does anyone have an idea? 
Maybe there's a third-party library that provides something like that?", "output": "Modifying dirs in-place will prune the (subsequent) files and directories visited by os.walk: # exclude = set(['New folder', 'Windows', 'Desktop']) for root, dirs, files in os.walk(top, topdown=True): dirs[:] = [d for d in dirs if d not in exclude] From help(os.walk): When topdown is true, the caller can modify the dirnames list in-place (e.g., via del or slice assignment), and walk will only recurse into the subdirectories whose names remain in dirnames; this can be used to prune the search..."} +{"question_id": 69919970, "score": 216, "creation_date": 1636576956, "tags": ["python", "ubuntu", "distutils"], "instruction": "No module named 'distutils.util' ...but distutils is installed?\n\nI was wanting to upgrade my Python version (to 3.10 in this case), so after installing Python 3.10, I proceeded to try adding some modules I use, e.g., opencv, which ran into: python3.10 -m pip install opencv-python Output: Traceback (most recent call last): File \"/usr/lib/python3.10/runpy.py\", line 196, in _run_module_as_main return _run_code(code, main_globals, None, File \"/usr/lib/python3.10/runpy.py\", line 86, in _run_code exec(code, run_globals) File \"/usr/lib/python3/dist-packages/pip/__main__.py\", line 16, in <module> from pip._internal.cli.main import main as _main # isort:skip # noqa File \"/usr/lib/python3/dist-packages/pip/_internal/cli/main.py\", line 10, in <module> from pip._internal.cli.autocompletion import autocomplete File \"/usr/lib/python3/dist-packages/pip/_internal/cli/autocompletion.py\", line 9, in <module> from pip._internal.cli.main_parser import create_main_parser File \"/usr/lib/python3/dist-packages/pip/_internal/cli/main_parser.py\", line 7, in <module> from pip._internal.cli import cmdoptions File \"/usr/lib/python3/dist-packages/pip/_internal/cli/cmdoptions.py\", line 19, in <module> from distutils.util import strtobool ModuleNotFoundError: No module named 'distutils.util' And sudo apt-get install python3-distutils Output: [sudo] password for jeremy: Reading package lists... Done Building dependency tree Reading state information... Done python3-distutils is already the newest version (3.8.10-0ubuntu1~20.04). ... 0 upgraded, 0 newly installed, 0 to remove and 4 not upgraded. Since distutils already seems to be installed, I can't grok how to proceed.", "output": "It looks like distutils has versioning, so after the following, it seems to be able to proceed. sudo apt-get install python3.10-distutils Output: Reading package lists... Done Building dependency tree Reading state information... Done ... Setting up python3.10-lib2to3 (3.10.0-1+focal1) ... Setting up python3.10-distutils (3.10.0-1+focal1) ... And: python3.10 -m pip install opencv-python"} +{"question_id": 1208118, "score": 216, "creation_date": 1248975147, "tags": ["python", "arrays", "numpy", "multidimensional-array", "cartesian-product"], "instruction": "Using NumPy to build an array of all combinations of two arrays\n\nI'm trying to run over the parameters space of a six-parameter function to study its numerical behavior before trying to do anything complex with it, so I'm searching for an efficient way to do this. My function takes float values given in a 6-dim NumPy array as input. 
What I tried to do initially was this: First, I created a function that takes two arrays and generates an array with all combinations of values from the two arrays: from numpy import * def comb(a, b): c = [] for i in a: for j in b: c.append(r_[i,j]) return c Then, I used reduce() to apply that to m copies of the same array: def combs(a, m): return reduce(comb, [a]*m) Finally, I evaluate my function like this: values = combs(np.arange(0, 1, 0.1), 6) for val in values: print F(val) This works, but it's way too slow. I know the space of parameters is huge, but this shouldn't be so slow. I have only sampled 10^6 (a million) points in this example and it took more than 15 seconds just to create the array values. Is there a more efficient way of doing this with NumPy? I can modify the way the function F takes its arguments if it's necessary.", "output": "In newer versions of NumPy (>1.8.x), numpy.meshgrid() provides a much faster implementation: For pv's solution: In [113]: %timeit cartesian(([1, 2, 3], [4, 5], [6, 7])) 10000 loops, best of 3: 135 \u00b5s per loop In [114]: cartesian(([1, 2, 3], [4, 5], [6, 7])) Out[114]: array([[1, 4, 6], [1, 4, 7], [1, 5, 6], [1, 5, 7], [2, 4, 6], [2, 4, 7], [2, 5, 6], [2, 5, 7], [3, 4, 6], [3, 4, 7], [3, 5, 6], [3, 5, 7]]) numpy.meshgrid() used to be two-dimensional only, but now it is capable of being multidimensional. In this case, three-dimensional: In [115]: %timeit np.array(np.meshgrid([1, 2, 3], [4, 5], [6, 7])).T.reshape(-1,3) 10000 loops, best of 3: 74.1 \u00b5s per loop In [116]: np.array(np.meshgrid([1, 2, 3], [4, 5], [6, 7])).T.reshape(-1,3) Out[116]: array([[1, 4, 6], [1, 5, 6], [2, 4, 6], [2, 5, 6], [3, 4, 6], [3, 5, 6], [1, 4, 7], [1, 5, 7], [2, 4, 7], [2, 5, 7], [3, 4, 7], [3, 5, 7]]) Note that the order of the final result is slightly different."} +{"question_id": 997797, "score": 216, "creation_date": 1245092710, "tags": ["python", "string-formatting"], "instruction": "What does %s mean in a Python format string?\n\nWhat does %s mean in Python? And what does the following bit of code do? For instance... if len(sys.argv) < 2: sys.exit('Usage: %s database-name' % sys.argv[0]) if not os.path.exists(sys.argv[1]): sys.exit('ERROR: Database %s was not found!' % sys.argv[1])", "output": "It is a string formatting syntax (which it borrows from C). Please see \"PyFormat\": Python supports formatting values into strings. Although this can include very complicated expressions, the most basic usage is to insert values into a string with the %s placeholder. Here is a really simple example: #Python 2 name = raw_input(\"who are you? \") print \"hello %s\" % (name,) #Python 3+ name = input(\"who are you? \") print(\"hello %s\" % (name,)) The %s token allows me to insert (and potentially format) a string. Notice that the %s token is replaced by whatever I pass to the string after the % symbol. 
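As a small made-up example (the values here are purely illustrative), several placeholders and other conversion types such as %d can be mixed in one string: print('hello %s, you are %d' % ('world', 3)) prints hello world, you are 3. 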
Notice also that I am using a tuple here as well (when you only have one string using a tuple is optional) to illustrate that multiple strings can be inserted and formatted in one statement."} +{"question_id": 2965271, "score": 216, "creation_date": 1275562854, "tags": ["python", "function", "coding-style", "parameter-passing"], "instruction": "How can we force naming of parameters when calling a function?\n\nIn Python you may have a function definition: def info(obj, spacing=10, collapse=1) which could be called in any of the following ways: info(odbchelper) info(odbchelper, 12) info(odbchelper, collapse=0) info(spacing=15, object=odbchelper) thanks to Python's allowing of any-order arguments, so long as they're named. The problem we're having is that as some of our larger functions grow, people might be adding parameters between spacing and collapse, meaning that the wrong values may be going to parameters that aren't named. In addition, sometimes it's not always clear what needs to go in. How can we force people to name certain parameters - not just a coding standard, but ideally a flag or pydev plugin? So that in the above 4 examples, only the last would pass the check as all the parameters are named.", "output": "In Python 3 - Yes, you can specify * in the argument list. From docs: Parameters after \u201c*\u201d or \u201c*identifier\u201d are keyword-only parameters and may only be passed by keyword arguments. >>> def foo(pos, *, forcenamed): ... print(pos, forcenamed) ... >>> foo(pos=10, forcenamed=20) 10 20 >>> foo(10, forcenamed=20) 10 20 >>> foo(10, 20) Traceback (most recent call last): File \"<stdin>\", line 1, in <module> TypeError: foo() takes exactly 1 positional argument (2 given) This can also be combined with **kwargs: def foo(pos, *, forcenamed, **kwargs): A complete example: def foo(pos, *, forcenamed): print(pos, forcenamed) foo(pos=10, forcenamed=20) foo(10, forcenamed=20) # basically you always have to give the value! foo(10) output: Traceback (most recent call last): File \"/Users/brando/anaconda3/envs/metalearning/lib/python3.9/site-packages/IPython/core/interactiveshell.py\", line 3444, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File \"<ipython-input-12-ab74191b3e9e>\", line 7, in <module> foo(10) TypeError: foo() missing 1 required keyword-only argument: 'forcenamed' So you are forced to always give the value, and it must be passed as a named argument."} +{"question_id": 7877522, "score": 216, "creation_date": 1319467861, "tags": ["python", "pylint", "docstring"], "instruction": "How do I disable \"missing docstring\" warnings at a file-level in Pylint?\n\nPylint throws errors that some of the files are missing docstrings. I try and add docstrings to each class, method and function, but it seems that Pylint also checks that files should have a docstring at the beginning of them. Can I disable this somehow? I would like to be notified if a docstring is missing inside a class, function or method, but it shouldn't be mandatory for a file to have a docstring. (Is there a term for the legal jargon often found at the beginning of a proprietary source file? Any examples? I don't know whether it is okay to post such a trivial question separately.)", "output": "It is nice for a Python module to have a docstring, explaining what the module does, what it provides, examples of how to use the classes. 
This is different from the comments that you often see at the beginning of a file giving the copyright and license information, which IMO should not go in the docstring (some even argue that they should disappear altogether, see e.g. Get Rid of Source Code Templates). With Pylint 2.4 and above, you can differentiate between the various missing-docstring warnings by using the following three sub-messages: C0114 (missing-module-docstring) C0115 (missing-class-docstring) C0116 (missing-function-docstring) So the following .pylintrc file should work: [MASTER] disable= C0114, # missing-module-docstring For previous versions of Pylint, it does not have a separate code for the various places where docstrings can occur, so all you can do is disable C0111. The problem is that if you disable this at module scope, then it will be disabled everywhere in the module (i.e., you won't get any C line for a missing function / class / method docstring), which arguably is not nice. So I suggest adding that small missing docstring, saying something like: \"\"\" high level support for doing this and that. \"\"\" Soon enough, you'll be finding useful things to put in there, such as providing examples of how to use the various classes / functions of the module which do not necessarily belong to the individual docstrings of the classes / functions (such as how these interact, or something like a quick start guide)."} +{"question_id": 34833327, "score": 216, "creation_date": 1452986978, "tags": ["python", "pytest"], "instruction": "How to test single file under pytest\n\nHow do you test a single file in pytest? I could only find ignore options and no \"test this file only\" option in the docs. Preferably this would work on the command line instead of setup.cfg, as I would like to run different file tests in the ide. The entire suite takes too long.", "output": "Simply run pytest with the path to the file, something like pytest tests/test_file.py Use the :: syntax to run a specific test in the test file: pytest test_mod.py::test_func Here test_func can be a test method or a class (e.g.: pytest test_mod.py::TestClass). For more ways and details, see \"Specifying which tests to run\" in the docs."} +{"question_id": 118370, "score": 215, "creation_date": 1222129029, "tags": ["python", "numpy", "subclass", "slice", "ellipsis"], "instruction": "How do you use the ellipsis slicing syntax in Python?\n\nThis came up in Hidden features of Python, but I can't see good documentation or examples that explain how the feature works.", "output": "Ellipsis, or ..., is not a hidden feature, it's just a constant. It's quite different to, say, JavaScript ES6 where it's a part of the language syntax. No builtin class or Python language construct makes use of it. So the syntax for it depends entirely on you, or someone else, having written code to understand it. Numpy uses it, as stated in the documentation. Some examples here. In your own class, you'd use it like this: >>> class TestEllipsis(object): ... def __getitem__(self, item): ... if item is Ellipsis: ... return \"Returning all items\" ... else: ... return \"return %r items\" % item ... >>> x = TestEllipsis() >>> print x[2] return 2 items >>> print x[...] Returning all items Of course, there is the python documentation, and language reference. 
But those aren't very helpful."} +{"question_id": 647071, "score": 215, "creation_date": 1237080259, "tags": ["python", "xml"], "instruction": "Python xml ElementTree from a string source?\n\nThe ElementTree.parse reads from a file, how can I use this if I already have the XML data in a string? Maybe I am missing something here, but there must be a way to use the ElementTree without writing out the string to a file and reading it again. xml.etree.elementtree", "output": "If you're using xml.etree.ElementTree.parse to parse from a file, then you can use xml.etree.ElementTree.fromstring to get the root Element of the document. Often you don't actually need an ElementTree. See xml.etree.ElementTree"} +{"question_id": 30227466, "score": 215, "creation_date": 1431563770, "tags": ["python", "python-imaging-library", "paste"], "instruction": "Combine several images horizontally with Python\n\nI am trying to horizontally combine some JPEG images in Python. Problem I have 3 images - each is 148 x 95 - see attached. I just made 3 copies of the same image - that is why they are the same. My attempt I am trying to horizontally join them using the following code: from PIL import Image list_im = ['Test1.jpg','Test2.jpg','Test3.jpg'] # creates a new empty image, RGB mode, and size 444 by 95 new_im = Image.new('RGB', (444,95)) for elem in list_im: for i in xrange(0,444,95): im=Image.open(elem) new_im.paste(im, (i,0)) new_im.save('test.jpg') However, this is producing the output attached as test.jpg. Question Is there a way to horizontally concatenate these images such that the sub-images in test.jpg do not have an extra partial image showing? Additional Information I am looking for a way to horizontally concatenate n images. I would like to use this code generally so I would prefer to: not to hard-code image dimensions, if possible specify dimensions in one line so that they can be easily changed", "output": "You can do something like this: from PIL import Image images = [Image.open(x) for x in ['Test1.jpg', 'Test2.jpg', 'Test3.jpg']] widths, heights = zip(*(i.size for i in images)) total_width = sum(widths) max_height = max(heights) new_im = Image.new('RGB', (total_width, max_height)) x_offset = 0 for im in images: new_im.paste(im, (x_offset,0)) x_offset += im.size[0] new_im.save('test.jpg') Test1.jpg Test2.jpg Test3.jpg test.jpg The nested for for i in xrange(0,444,95): is pasting each image 5 times, staggered 95 pixels apart. Each outer loop iteration pasting over the previous. for elem in list_im: for i in xrange(0,444,95): im=Image.open(elem) new_im.paste(im, (i,0)) new_im.save('new_' + elem + '.jpg')"} +{"question_id": 3041986, "score": 214, "creation_date": 1276564434, "tags": ["python"], "instruction": "APT command line interface-like yes/no input?\n\nIs there any short way to achieve what the APT (Advanced Package Tool) command line interface does in Python? I mean, when the package manager prompts a yes/no question followed by [Yes/no], the script accepts YES/Y/yes/y or Enter (defaults to Yes as hinted by the capital letter). The only thing I find in the official docs is input and raw_input... I know it's not that hard to emulate, but it's annoying to rewrite :|", "output": "As you mentioned, the easiest way is to use raw_input() (or simply input() for Python 3). There is no built-in way to do this. From Recipe 577058: import sys def query_yes_no(question, default=\"yes\"): \"\"\"Ask a yes/no question via raw_input() and return their answer. 
\"question\" is a string that is presented to the user. \"default\" is the presumed answer if the user just hits <Enter>. It must be \"yes\" (the default), \"no\" or None (meaning an answer is required of the user). The \"answer\" return value is True for \"yes\" or False for \"no\". \"\"\" valid = {\"yes\": True, \"y\": True, \"ye\": True, \"no\": False, \"n\": False} if default is None: prompt = \" [y/n] \" elif default == \"yes\": prompt = \" [Y/n] \" elif default == \"no\": prompt = \" [y/N] \" else: raise ValueError(\"invalid default answer: '%s'\" % default) while True: sys.stdout.write(question + prompt) choice = input().lower() if default is not None and choice == \"\": return valid[default] elif choice in valid: return valid[choice] else: sys.stdout.write(\"Please respond with 'yes' or 'no' \" \"(or 'y' or 'n').\\n\") (For Python 2, use raw_input instead of input.) Usage example: >>> query_yes_no(\"Is cabbage yummier than cauliflower?\") Is cabbage yummier than cauliflower? [Y/n] oops Please respond with 'yes' or 'no' (or 'y' or 'n'). Is cabbage yummier than cauliflower? [Y/n] [ENTER] >>> True >>> query_yes_no(\"Is cabbage yummier than cauliflower?\", None) Is cabbage yummier than cauliflower? [y/n] [ENTER] Please respond with 'yes' or 'no' (or 'y' or 'n'). Is cabbage yummier than cauliflower? [y/n] y >>> True"} +{"question_id": 49201236, "score": 214, "creation_date": 1520625323, "tags": ["python", "pytorch"], "instruction": "Check the total number of parameters in a PyTorch model\n\nHow do I count the total number of parameters in a PyTorch model? Something similar to model.count_params() in Keras.", "output": "PyTorch doesn't have a function to calculate the total number of parameters as Keras does, but it's possible to sum the number of elements for every parameter group: pytorch_total_params = sum(p.numel() for p in model.parameters()) If you want to calculate only the trainable parameters: pytorch_total_params = sum(p.numel() for p in model.parameters() if p.requires_grad) Answer inspired by this answer on PyTorch Forums."} +{"question_id": 3136689, "score": 214, "creation_date": 1277765112, "tags": ["python", "string", "list"], "instruction": "Find and replace string values in list\n\nI got this list: words = ['how', 'much', 'is[br]', 'the', 'fish[br]', 'no', 'really'] What I would like is to replace [br] with some fantastic value similar to <br /> and thus getting a new list: words = ['how', 'much', 'is<br />', 'the', 'fish<br />', 'no', 'really']", "output": "words = [w.replace('[br]', '<br />') for w in words] This is called a list comprehension."} +{"question_id": 2020014, "score": 214, "creation_date": 1262865028, "tags": ["python", "python-datamodel"], "instruction": "Get fully qualified class name of an object in Python\n\nFor logging purposes I want to retrieve the fully qualified class name of a Python object. (With fully qualified I mean the class name including the package and module name.) I know about x.__class__.__name__, but is there a simple method to get the package and module?", "output": "With the following program #!/usr/bin/env python import foo def fullname(o): klass = o.__class__ module = klass.__module__ if module == 'builtins': return klass.__qualname__ # avoid outputs like 'builtins.str' return module + '.' 
+ klass.__qualname__ bar = foo.Bar() print(fullname(bar)) and Bar defined as class Bar(object): def __init__(self, v=42): self.val = v the output is $ ./prog.py foo.Bar If you're still stuck on Python 2, you'll have to use __name__ instead of __qualname__, which is less informative for nested classes - a class Bar nested in a class Foo will show up as Bar instead of Foo.Bar: def fullname(o): klass = o.__class__ module = klass.__module__ if module == '__builtin__': return klass.__name__ # avoid outputs like '__builtin__.str' return module + '.' + klass.__name__"} +{"question_id": 20638040, "score": 213, "creation_date": 1387293988, "tags": ["python", "glob"], "instruction": "glob exclude pattern\n\nI have a directory with a bunch of files inside: eee2314, asd3442 ... and eph. I want to exclude all files that start with eph with the glob function. How can I do it?", "output": "The pattern rules for glob are not regular expressions. Instead, they follow standard Unix path expansion rules. There are only a few special characters: two different wild-cards, and character ranges are supported [from pymotw: glob \u2013 Filename pattern matching]. So you can exclude some files with patterns. For example to exclude manifests files (files starting with _) with glob, you can use: files = glob.glob('files_path/[!_]*')"} +{"question_id": 7971618, "score": 213, "creation_date": 1320174935, "tags": ["python", "dictionary"], "instruction": "Return first N key:value pairs from dict\n\nConsider the following dictionary, d: d = {'a': 3, 'b': 2, 'c': 3, 'd': 4, 'e': 5} I want to return the first N key:value pairs from d (N <= 4 in this case). What is the most efficient method of doing this?", "output": "Prior to Python 3.6 there is no such thing a the \"first n\" keys because a dict doesn't remember which keys were inserted first. You can get first n iteration sorted key-value pairs though: n_items = take(n, d.items()) This uses the implementation of take from the itertools recipes: from itertools import islice def take(n, iterable): \"\"\"Return the first n items of the iterable as a list.\"\"\" return list(islice(iterable, n)) See it working online: ideone For Python < 3.6 n_items = take(n, d.iteritems())"} +{"question_id": 34716454, "score": 213, "creation_date": 1452498473, "tags": ["python", "keras", "neural-network", "batch-normalization"], "instruction": "Where do I call the BatchNormalization function in Keras?\n\nIf I want to use the BatchNormalization function in Keras, then do I need to call it once only at the beginning? I read this documentation for it: http://keras.io/layers/normalization/ I don't see where I'm supposed to call it. Below is my code attempting to use it: model = Sequential() keras.layers.normalization.BatchNormalization(epsilon=1e-06, mode=0, momentum=0.9, weights=None) model.add(Dense(64, input_dim=14, init='uniform')) model.add(Activation('tanh')) model.add(Dropout(0.5)) model.add(Dense(64, init='uniform')) model.add(Activation('tanh')) model.add(Dropout(0.5)) model.add(Dense(2, init='uniform')) model.add(Activation('softmax')) sgd = SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True) model.compile(loss='binary_crossentropy', optimizer=sgd) model.fit(X_train, y_train, nb_epoch=20, batch_size=16, show_accuracy=True, validation_split=0.2, verbose = 2) I ask because if I run the code with the second line including the batch normalization and if I run the code without the second line I get similar outputs. 
So either I'm not calling the function in the right place, or I guess it doesn't make that much of a difference.", "output": "As Pavel said, Batch Normalization is just another layer, so you can use it as such to create your desired network architecture. The general use case is to use BN between the linear and non-linear layers in your network, because it normalizes the input to your activation function, so that you're centered in the linear section of the activation function (such as Sigmoid). There's a small discussion of it here In your case above, this might look like: # import BatchNormalization from keras.layers.normalization import BatchNormalization # instantiate model model = Sequential() # we can think of this chunk as the input layer model.add(Dense(64, input_dim=14, init='uniform')) model.add(BatchNormalization()) model.add(Activation('tanh')) model.add(Dropout(0.5)) # we can think of this chunk as the hidden layer model.add(Dense(64, init='uniform')) model.add(BatchNormalization()) model.add(Activation('tanh')) model.add(Dropout(0.5)) # we can think of this chunk as the output layer model.add(Dense(2, init='uniform')) model.add(BatchNormalization()) model.add(Activation('softmax')) # setting up the optimization of our weights sgd = SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True) model.compile(loss='binary_crossentropy', optimizer=sgd) # running the fitting model.fit(X_train, y_train, nb_epoch=20, batch_size=16, show_accuracy=True, validation_split=0.2, verbose = 2)"} +{"question_id": 1726391, "score": 213, "creation_date": 1258072255, "tags": ["python", "matplotlib", "grid"], "instruction": "Matplotlib: draw grid lines behind other graph elements\n\nIn Matplotlib, I make dashed grid lines as follows: fig = pylab.figure() ax = fig.add_subplot(1,1,1) ax.yaxis.grid(color='gray', linestyle='dashed') however, I can't find out how (or even if it is possible) to make the grid lines be drawn behind other graph elements, such as bars. Changing the order of adding the grid versus adding other elements makes no difference. Is it possible to make it so that the grid lines appear behind everything else?", "output": "According to this - https://web.archive.org/web/20200131000410/http://matplotlib.1069221.n5.nabble.com/axis-elements-and-zorder-td5346.html - you can use Axis.set_axisbelow(True) (I am currently installing matplotlib for the first time, so have no idea if that's correct - I just found it by googling \"matplotlib z order grid\" - \"z order\" is typically used to describe this kind of thing (z being the axis \"out of the page\"))"} +{"question_id": 2058802, "score": 213, "creation_date": 1263403489, "tags": ["python", "setuptools"], "instruction": "How can I get the version defined in setup.py (setuptools) in my package?\n\nHow could I get the version defined in setup.py from my package (for --version, or other purposes)?", "output": "Interrogate version string of already-installed distribution To retrieve the version from inside your package at runtime (what your question appears to actually be asking), you can use: import pkg_resources # part of setuptools version = pkg_resources.require(\"MyProject\")[0].version Store version string for use during install If you want to go the other way 'round (which appears to be what other answer authors here appear to have thought you were asking), put the version string in a separate file and read that file's contents in setup.py. 
You could make a version.py in your package with a __version__ line, then read it from setup.py using execfile('mypackage/version.py'), so that it sets __version__ in the setup.py namespace. Warning about race condition during install By the way, DO NOT import your package from your setup.py as suggested in another answer here: it will seem to work for you (because you already have your package's dependencies installed), but it will wreak havoc upon new users of your package, as they will not be able to install your package without manually installing the dependencies first."} +{"question_id": 39870642, "score": 212, "creation_date": 1475660740, "tags": ["python", "matplotlib"], "instruction": "How to plot a high resolution graph\n\nI've used matplotlib for plotting some experimental results (discussed it in here: Looping over files and plotting. However, saving the picture by clicking right to the image gives very bad quality / low resolution images. from glob import glob import numpy as np import matplotlib.pyplot as plt import matplotlib as mpl # loop over all files in the current directory ending with .txt for fname in glob(\"./*.txt\"): # read file, skip header (1 line) and unpack into 3 variables WL, ABS, T = np.genfromtxt(fname, skip_header=1, unpack=True) # first plot plt.plot(WL, T, label='BN', color='blue') plt.xlabel('Wavelength (nm)') plt.xlim(200,1000) plt.ylim(0,100) plt.ylabel('Transmittance, %') mpl.rcParams.update({'font.size': 14}) #plt.legend(loc='lower center') plt.title('') plt.show() plt.clf() # second plot plt.plot(WL, ABS, label='BN', color='red') plt.xlabel('Wavelength (nm)') plt.xlim(200,1000) plt.ylabel('Absorbance, A') mpl.rcParams.update({'font.size': 14}) #plt.legend() plt.title('') plt.show() plt.clf() Example graph of what I'm looking for: example graph", "output": "You can use savefig() to export to an image file: plt.savefig('filename.png') In addition, you can specify the dpi argument to some scalar value (default is 100). For example: plt.savefig('filename.png', dpi=300)"} +{"question_id": 3897499, "score": 212, "creation_date": 1286651523, "tags": ["python", "list", "dictionary", "search", "key"], "instruction": "Check if value already exists within list of dictionaries in Python?\n\nI've got a Python list of dictionaries: a = [ {'main_color': 'red', 'second_color':'blue'}, {'main_color': 'yellow', 'second_color':'green'}, {'main_color': 'yellow', 'second_color':'blue'}, ] I'd like to check whether a dictionary with a particular key/value already exists in the list: // is a dict with 'main_color'='red' in the list already? // if not: add item", "output": "if not any(d['main_color'] == 'red' for d in a): # does not exist The part in parentheses is a generator expression that returns True for each dictionary that has the key-value pair you are looking for, otherwise False. If the key could also be missing the above code can give you a KeyError. You can fix this by using get and providing a default value. If you don't provide a default value, None is returned. if not any(d.get('main_color', default_value) == 'red' for d in a): # does not exist"} +{"question_id": 11360858, "score": 212, "creation_date": 1341572104, "tags": ["python"], "instruction": "What is the EAFP principle in Python?\n\nWhat is meant by \"using the EAFP principle\" in Python? Could you provide any examples?", "output": "From the glossary: Easier to ask for forgiveness than permission. 
This common Python coding style assumes the existence of valid keys or attributes and catches exceptions if the assumption proves false. This clean and fast style is characterized by the presence of many try and except statements. The technique contrasts with the LBYL style common to many other languages such as C. An example would be an attempt to access a dictionary key. EAFP: try: x = my_dict[\"key\"] except KeyError: # handle missing key LBYL: if \"key\" in my_dict: x = my_dict[\"key\"] else: # handle missing key The LBYL version has to search the key inside the dictionary twice, and might also be considered slightly less readable."} +{"question_id": 8011797, "score": 211, "creation_date": 1320420865, "tags": ["python", "readfile"], "instruction": "open read and close a file in 1 line of code\n\nNow I use: pageHeadSectionFile = open('pagehead.section.htm','r') output = pageHeadSectionFile.read() pageHeadSectionFile.close() But to make the code look better, I can do: output = open('pagehead.section.htm','r').read() When using the above syntax, how do I close the file to free up system resources?", "output": "You don't really have to close it - Python will do it automatically either during garbage collection or at program exit. But as @delnan noted, it's better practice to explicitly close it for various reasons. So, what you can do to keep it short, simple and explicit: with open('pagehead.section.htm', 'r') as f: output = f.read() Now it's just two lines and pretty readable, I think."} +{"question_id": 16333296, "score": 211, "creation_date": 1367482480, "tags": ["python", "python-2.7", "dictionary", "mapping", "nested"], "instruction": "How do you create nested dict in Python?\n\nI have 2 CSV files: 'Data' and 'Mapping': 'Mapping' file has 4 columns: Device_Name, GDN, Device_Type, and Device_OS. All four columns are populated. 'Data' file has these same columns, with Device_Name column populated and the other three columns blank. I want my Python code to open both files and for each Device_Name in the Data file, map its GDN, Device_Type, and Device_OS value from the Mapping file. I know how to use dict when only 2 columns are present (1 is needed to be mapped) but I don't know how to accomplish this when 3 columns need to be mapped. Following is the code using which I tried to accomplish mapping of Device_Type: x = dict([]) with open(\"Pricing Mapping_2013-04-22.csv\", \"rb\") as in_file1: file_map = csv.reader(in_file1, delimiter=',') for row in file_map: typemap = [row[0],row[2]] x.append(typemap) with open(\"Pricing_Updated_Cleaned.csv\", \"rb\") as in_file2, open(\"Data Scraper_GDN.csv\", \"wb\") as out_file: writer = csv.writer(out_file, delimiter=',') for row in csv.reader(in_file2, delimiter=','): try: row[27] = x[row[11]] except KeyError: row[27] = \"\" writer.writerow(row) It returns Attribute Error. After some researching, I think I need to create a nested dict, but I don't have any idea how to do this.", "output": "A nested dict is a dictionary within a dictionary. A very simple thing. >>> d = {} >>> d['dict1'] = {} >>> d['dict1']['innerkey'] = 'value' >>> d['dict1']['innerkey2'] = 'value2' >>> d {'dict1': {'innerkey': 'value', 'innerkey2': 'value2'}} You can also use a defaultdict from the collections package to facilitate creating nested dictionaries. 
>>> import collections >>> d = collections.defaultdict(dict) >>> d['dict1']['innerkey'] = 'value' >>> d # currently a defaultdict type defaultdict(<type 'dict'>, {'dict1': {'innerkey': 'value'}}) >>> dict(d) # but is exactly like a normal dictionary. {'dict1': {'innerkey': 'value'}} You can populate that however you want. I would recommend in your code something like the following: d = {} # can use defaultdict(dict) instead for row in file_map: # derive row key from something # when using defaultdict, we can skip the next step creating a dictionary on row_key d[row_key] = {} for idx, col in enumerate(row): d[row_key][idx] = col According to your comment: may be above code is confusing the question. My problem in nutshell: I have 2 files a.csv b.csv, a.csv has 4 columns i j k l, b.csv also has these columns. i is kind of key columns for these csvs'. j k l column is empty in a.csv but populated in b.csv. I want to map values of j k l columns using 'i` as key column from b.csv to a.csv file My suggestion would be something like this (without using defaultdict): a_file = \"path/to/a.csv\" b_file = \"path/to/b.csv\" # read from file a.csv with open(a_file) as f: # skip headers f.next() # get first column as keys (use a set so membership tests still work after the file is closed) keys = set(line.split(',')[0] for line in f) # create empty dictionary: d = {} # read from file b.csv with open(b_file) as f: # gather headers except first key header headers = f.next().split(',')[1:] # iterate lines for line in f: # gather the columns cols = line.strip().split(',') # check to make sure this key should be mapped. if cols[0] not in keys: continue # add key to dict d[cols[0]] = dict( # inner keys are the header names, values are columns (headers[idx], v) for idx, v in enumerate(cols[1:])) Please note, though, that for parsing csv files there is a csv module."} +{"question_id": 4474395, "score": 211, "creation_date": 1292616699, "tags": ["python", "abstract-class", "static-methods", "python-decorators", "abstract-methods"], "instruction": "`staticmethod` and `abc.abstractmethod`: Will it blend?\n\nIn my Python app I want to make a method that is both a staticmethod and an abc.abstractmethod. How do I do this? I tried applying both decorators, but it doesn't work. If I do this: import abc class C(object): __metaclass__ = abc.ABCMeta @abc.abstractmethod @staticmethod def my_function(): pass I get an exception*, and if I do this: class C(object): __metaclass__ = abc.ABCMeta @staticmethod @abc.abstractmethod def my_function(): pass The abstract method is not enforced. How can I make an abstract static method? *The exception: File \"c:\\Python26\\Lib\\abc.py\", line 29, in abstractmethod funcobj.__isabstractmethod__ = True AttributeError: 'staticmethod' object has no attribute '__isabstractmethod__'", "output": "Starting with Python 3.3, it is possible to combine @staticmethod and @abstractmethod, so none of the other suggestions are necessary anymore: @staticmethod @abstractmethod def my_abstract_staticmethod(...): @abstractstaticmethod has been deprecated since version 3.3 (but is still there in Python 3.14)."} +{"question_id": 37278647, "score": 211, "creation_date": 1463494380, "tags": ["python", "python-3.5", "python-asyncio"], "instruction": "\"Fire and forget\" python async/await\n\nSometimes there is some non-critical asynchronous operation that needs to happen but I don't want to wait for it to complete. In Tornado's coroutine implementation you can \"fire & forget\" an asynchronous function by simply omitting the yield keyword. 
I've been trying to figure out how to \"fire & forget\" with the new async/await syntax released in Python 3.5. E.g., a simplified code snippet: async def async_foo(): print(\"Do some stuff asynchronously here...\") def bar(): async_foo() # fire and forget \"async_foo()\" bar() What happens though is that bar() never executes and instead we get a runtime warning: RuntimeWarning: coroutine 'async_foo' was never awaited async_foo() # fire and forget \"async_foo()\"", "output": "Editor's note: According to a comment below, According to the documentation the event loop only keeps a \"weak reference\" to a task, see https://docs.python.org/3.13/library/asyncio-task.html#creating-tasks so it appears to me that you MUST save a reference in order to ensure that the task is not destroyed before it is done. It seems to me that this makes this answer wrong. Upd: Replace asyncio.ensure_future with asyncio.create_task everywhere if you're using Python >= 3.7 It's a newer, nicer way to spawn tasks. asyncio.Task to \"fire and forget\" According to python docs for asyncio.Task it is possible to start some coroutine to execute \"in the background\". The task created by asyncio.ensure_future won't block the execution (therefore the function will return immediately!). This looks like a way to \"fire and forget\" as you requested. import asyncio async def async_foo(): print(\"async_foo started\") await asyncio.sleep(1) print(\"async_foo done\") async def main(): asyncio.ensure_future(async_foo()) # fire and forget async_foo() # btw, you can also create tasks inside non-async funcs print('Do some actions 1') await asyncio.sleep(1) print('Do some actions 2') await asyncio.sleep(1) print('Do some actions 3') if __name__ == '__main__': loop = asyncio.get_event_loop() loop.run_until_complete(main()) Output: Do some actions 1 async_foo started Do some actions 2 async_foo done Do some actions 3 What if tasks are executing after the event loop has completed? Note that asyncio expects tasks to be completed at the moment the event loop completes. So if you'll change main() to: async def main(): asyncio.ensure_future(async_foo()) # fire and forget print('Do some actions 1') await asyncio.sleep(0.1) print('Do some actions 2') You'll get this warning after the program finished: Task was destroyed but it is pending! task: <Task pending coro=<async_foo() running at [...] To prevent that you can just await all pending tasks after the event loop has completed: async def main(): asyncio.ensure_future(async_foo()) # fire and forget print('Do some actions 1') await asyncio.sleep(0.1) print('Do some actions 2') if __name__ == '__main__': loop = asyncio.get_event_loop() loop.run_until_complete(main()) # Let's also finish all running tasks: pending = asyncio.Task.all_tasks() loop.run_until_complete(asyncio.gather(*pending)) Kill tasks instead of awaiting them Sometimes you don't want to await tasks to be done (for example, some tasks may be created to run forever). 
In that case, you can just cancel() them instead of awaiting them: import asyncio from contextlib import suppress async def echo_forever(): while True: print(\"echo\") await asyncio.sleep(1) async def main(): asyncio.ensure_future(echo_forever()) # fire and forget print('Do some actions 1') await asyncio.sleep(1) print('Do some actions 2') await asyncio.sleep(1) print('Do some actions 3') if __name__ == '__main__': loop = asyncio.get_event_loop() loop.run_until_complete(main()) # Let's also cancel all running tasks: pending = asyncio.Task.all_tasks() for task in pending: task.cancel() # Now we should await task to execute it's cancellation. # Cancelled task raises asyncio.CancelledError that we can suppress: with suppress(asyncio.CancelledError): loop.run_until_complete(task) Output: Do some actions 1 echo Do some actions 2 echo Do some actions 3 echo"} +{"question_id": 17978092, "score": 210, "creation_date": 1375295260, "tags": ["python", "pandas", "datetime", "time-series"], "instruction": "Combine Date and Time columns using pandas\n\nI have a pandas dataframe with the following columns: data = {'Date': ['01-06-2013', '02-06-2013', '02-06-2013', '02-06-2013', '02-06-2013', '03-06-2013', '03-06-2013', '03-06-2013', '03-06-2013', '04-06-2013'], 'Time': ['23:00:00', '01:00:00', '21:00:00', '22:00:00', '23:00:00', '01:00:00', '21:00:00', '22:00:00', '23:00:00', '01:00:00']} df = pd.DataFrame(data) Date Time 0 01-06-2013 23:00:00 1 02-06-2013 01:00:00 2 02-06-2013 21:00:00 3 02-06-2013 22:00:00 4 02-06-2013 23:00:00 5 03-06-2013 01:00:00 6 03-06-2013 21:00:00 7 03-06-2013 22:00:00 8 03-06-2013 23:00:00 9 04-06-2013 01:00:00 How do I combine data['Date'] & data['Time'] to get the following? Is there a way of doing it using pd.to_datetime? Date 01-06-2013 23:00:00 02-06-2013 01:00:00 02-06-2013 21:00:00 02-06-2013 22:00:00 02-06-2013 23:00:00 03-06-2013 01:00:00 03-06-2013 21:00:00 03-06-2013 22:00:00 03-06-2013 23:00:00 04-06-2013 01:00:00", "output": "It's worth mentioning that you may have been able to read this in directly e.g. if you were using read_csv using parse_dates=[['Date', 'Time']]. Assuming these are just strings you could simply add them together (with a space), allowing you to use to_datetime, which works without specifying the format= parameter In [11]: df['Date'] + ' ' + df['Time'] Out[11]: 0 01-06-2013 23:00:00 1 02-06-2013 01:00:00 2 02-06-2013 21:00:00 3 02-06-2013 22:00:00 4 02-06-2013 23:00:00 5 03-06-2013 01:00:00 6 03-06-2013 21:00:00 7 03-06-2013 22:00:00 8 03-06-2013 23:00:00 9 04-06-2013 01:00:00 dtype: object In [12]: pd.to_datetime(df['Date'] + ' ' + df['Time']) Out[12]: 0 2013-01-06 23:00:00 1 2013-02-06 01:00:00 2 2013-02-06 21:00:00 3 2013-02-06 22:00:00 4 2013-02-06 23:00:00 5 2013-03-06 01:00:00 6 2013-03-06 21:00:00 7 2013-03-06 22:00:00 8 2013-03-06 23:00:00 9 2013-04-06 01:00:00 dtype: datetime64[ns] Alternatively, without the + ' ', but the format= parameter must be used. Additionally, pandas is good at inferring the format to be converted to a datetime, however, specifying the exact format is faster. pd.to_datetime(df['Date'] + df['Time'], format='%m-%d-%Y%H:%M:%S') Note: surprisingly (for me), this works fine with NaNs being converted to NaT, but it is worth worrying that the conversion (perhaps using the raise argument). 
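If some rows might fail to parse, one possible sketch (errors='coerce' is a real pandas option; whether you want it depends on your data) is pd.to_datetime(df['Date'] + ' ' + df['Time'], errors='coerce'), which turns unparseable rows into NaT instead of raising. 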
%%timeit # sample dataframe with 10000000 rows using df from the OP df = pd.concat([df for _ in range(1000000)]).reset_index(drop=True) %%timeit pd.to_datetime(df['Date'] + ' ' + df['Time']) [result]: 1.73 s \u00b1 10.4 ms per loop (mean \u00b1 std. dev. of 7 runs, 1 loop each) %%timeit pd.to_datetime(df['Date'] + df['Time'], format='%m-%d-%Y%H:%M:%S') [result]: 1.33 s \u00b1 9.88 ms per loop (mean \u00b1 std. dev. of 7 runs, 1 loop each)"} +{"question_id": 7745952, "score": 210, "creation_date": 1318450265, "tags": ["python", "arguments"], "instruction": "How to expand a list to function arguments in Python\n\nIs there syntax that allows you to expand a list into the arguments of a function call? Example: # Trivial example function, not meant to do anything useful. def foo(x,y,z): return \"%d, %d, %d\" %(x,y,z) # List of values that I want to pass into foo. values = [1,2,3] # I want to do something like this, and get the result \"1, 2, 3\": foo( values.howDoYouExpandMe() )", "output": "It exists, but it's hard to search for. I think most people call it the \"splat\" operator. It's in the documentation as \"Unpacking argument lists\". You'd use it like this for positional arguments: values = [1, 2] foo(*values) There's also one for dictionaries to call with named arguments: d = {'a': 1, 'b': 2} def foo(a, b): pass foo(**d)"} +{"question_id": 28002897, "score": 210, "creation_date": 1421519766, "tags": ["python", "installation", "python-wheel"], "instruction": "Wheel file installation\n\nHow do I install a .whl file? I have the wheel library, but I don't know how to use it to install those files. I have the .whl file, but I don't know how to run it.", "output": "You normally use a tool like pip to install wheels. Leave it to the tool to discover and download the file if this is for a project hosted on PyPI. For this to work, you do need to install the wheel package: pip install wheel You can then tell pip to install the project (and it'll download the wheel if available), or the wheel file directly: pip install project_name # discover, download and install pip install wheel_file.whl # directly install the wheel Also see the wheel project documentation."} +{"question_id": 27516849, "score": 210, "creation_date": 1418778990, "tags": ["python", "list", "numpy"], "instruction": "How to convert list of numpy arrays into single numpy array?\n\nSuppose I have ; LIST = [[array([1, 2, 3, 4, 5]), array([1, 2, 3, 4, 5],[1,2,3,4,5])] # inner lists are numpy arrays I try to convert; array([[1, 2, 3, 4, 5], [1, 2, 3, 4, 5], [1, 2, 3, 4, 5]) I am solving it by iteration on vstack right now but it is really slow for especially large LIST What do you suggest for the best efficient way?", "output": "In general you can concatenate a whole sequence of arrays along any axis: numpy.concatenate( LIST, axis=0 ) but you do have to worry about the shape and dimensionality of each array in the list (for a 2-dimensional 3x5 output, you need to ensure that they are all 2-dimensional n-by-5 arrays already). If you want to concatenate 1-dimensional arrays as the rows of a 2-dimensional output, you need to expand their dimensionality. As Jorge's answer points out, there is also the function stack, introduced in numpy 1.10: numpy.stack( LIST, axis=0 ) This takes the complementary approach: it creates a new view of each input array and adds an extra dimension (in this case, on the left, so each n-element 1D array becomes a 1-by-n 2D array) before concatenating. It will only work if all the input arrays have the same shape. 
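As a tiny made-up illustration of what stack does, numpy.stack([numpy.array([1, 2]), numpy.array([3, 4]), numpy.array([5, 6])], axis=0) returns array([[1, 2], [3, 4], [5, 6]]), which has shape (3, 2). 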
vstack (or equivalently row_stack) is often an easier-to-use solution because it will take a sequence of 1- and/or 2-dimensional arrays and expand the dimensionality automatically where necessary and only where necessary, before concatenating the whole list together. Where a new dimension is required, it is added on the left. Again, you can concatenate a whole list at once without needing to iterate: numpy.vstack( LIST ) This flexible behavior is also exhibited by the syntactic shortcut numpy.r_[ array1, ...., arrayN ] (note the square brackets). This is good for concatenating a few explicitly-named arrays but it becomes less readable in your situation because [] subscripting will not accept a list. You would need to convert your sequence to a tuple: numpy.r_[tuple(LIST)]. It's more readable to simply use vstack(). There is also an analogous function column_stack and shortcut c_[...], for horizontal (column-wise) stacking, as well as an almost-analogous function hstack\u2014although for some reason the latter is less flexible (it is stricter about input arrays' dimensionality, and tries to concatenate 1-D arrays end-to-end instead of treating them as columns). Finally, in the specific case of vertical stacking of 1-D arrays, the following also works: numpy.array( LIST ) ...because arrays can be constructed out of a sequence of other arrays, adding a new dimension to the beginning."} +{"question_id": 6918493, "score": 210, "creation_date": 1312318481, "tags": ["python", "logging", "printing", "python-logging"], "instruction": "In python, why use logging instead of print?\n\nFor simple debugging in a complex project is there a reason to use the python logger instead of print? What about other use-cases? Is there an accepted best use-case for each (especially when you're only looking for stdout)? I've always heard that this is a \"best practice\" but I haven't been able to figure out why.", "output": "The logging package has a lot of useful features: Easy to see where and when (even what line no.) a logging call is being made from. You can log to files, sockets, pretty much anything, all at the same time. You can differentiate your logging based on severity. Print doesn't have any of these. Also, if your project is meant to be imported by other python tools, it's bad practice for your package to print things to stdout, since the user likely won't know where the print messages are coming from. With logging, users of your package can choose whether they want to propogate logging messages from your tool or not."} +{"question_id": 9383014, "score": 210, "creation_date": 1329848839, "tags": ["python", "import", "module", "package"], "instruction": "Can't import my own modules in Python\n\nI'm having a hard time understanding how module importing works in Python (I've never done it in any other language before either). Let's say I have: myapp/__init__.py myapp/myapp/myapp.py myapp/myapp/SomeObject.py myapp/tests/TestCase.py Now I'm trying to get something like this: myapp.py =================== from myapp import SomeObject # stuff ... TestCase.py =================== from myapp import SomeObject # some tests on SomeObject However, I'm definitely doing something wrong as Python can't see that myapp is a module: ImportError: No module named myapp", "output": "In your particular case it looks like you're trying to import SomeObject from the myapp.py and TestCase.py scripts. From myapp.py, do import SomeObject since it is in the same folder. 
For TestCase.py, do from ..myapp import SomeObject However, this will work only if you are importing TestCase from the package. If you want to directly run python TestCase.py, you would have to mess with your path. This can be done within Python: import sys sys.path.append(\"..\") from myapp import SomeObject though that is generally not recommended. In general, if you want other people to use your Python package, you should use distutils to create a setup script. That way, anyone can install your package easily using a command like python setup.py install and it will be available everywhere on their machine. If you're serious about the package, you could even add it to the Python Package Index, PyPI."} +{"question_id": 6103825, "score": 209, "creation_date": 1306190305, "tags": ["python", "unit-testing"], "instruction": "How to properly use unit-testing's assertRaises() with NoneType objects\n\nI did a simple test case: def setUp(self): self.testListNone = None def testListSlicing(self): self.assertRaises(TypeError, self.testListNone[:1]) And I am expecting the test to pass, but I am getting an exception: Traceback (most recent call last): self.assertRaises(TypeError, self.testListNone[:1]) TypeError: 'NoneType' object is unsubscriptable I thought that assertRaises will pass since TypeError exception will be raised. What is the explanation?", "output": "If you are using Python 2.7 or above, you can use the ability of assertRaises to be used as a context manager and do: with self.assertRaises(TypeError): self.testListNone[:1] If you are using Python 2.6, another way beside the one given until now is to use unittest2 which is a back port of unittest's new features to Python 2.6, and you can make it work using the code above. N.B: I'm a big fan of the new feature (SkipTest, test discovery, etc.) of unittest, so I intend to use unittest2 as much as I can. I advise to do the same, because there is a lot more than what unittest come with in Python 2.6 or lower."} +{"question_id": 46184239, "score": 209, "creation_date": 1505245493, "tags": ["python", "image", "flask", "pdf"], "instruction": "Python - Extract a PDF page as a jpeg\n\nHow can I efficiently save a particular page of a PDF as a jpeg file using Python? I have a Python Flask web server where PDFs will be uploaded and I want to also store jpeg files that correspond to each PDF page. This solution is close but it does not result in the entire page being converted to a jpeg.", "output": "The pdf2image library can be used. You can install it simply using, pip install pdf2image Once installed you can use following code to get images. from pdf2image import convert_from_path pages = convert_from_path('pdf_file', 500) Saving pages in jpeg format for count, page in enumerate(pages): page.save(f'out{count}.jpg', 'JPEG') Edit: the Github repo pdf2image also mentions that it uses pdftoppm and that it requires other installations: pdftoppm is the piece of software that does the actual magic. It is distributed as part of a greater package called poppler. Windows users will have to install [poppler for Windows] see ** below Mac users will have to install poppler for Mac. Linux users will have pdftoppm pre-installed with the distro (Tested on Ubuntu and Archlinux) if it's not, run sudo apt install poppler-utils. 
You can install the latest version under Windows using anaconda by doing: conda install -c conda-forge poppler ** note: Windows 64 bit versions upto 24.08 are available at https://github.com/oschwartz10612/poppler-windows but note that for 32 bit 22.02 was the last one included in TeXLive 2022 (https://poppler.freedesktop.org/releases.html) so you'll not be getting the latest features or bug fixes."} +{"question_id": 2170228, "score": 209, "creation_date": 1264902491, "tags": ["python", "django", "django-templates"], "instruction": "Iterate over model instance field names and values in template\n\nI'm trying to create a basic template to display the selected instance's field values, along with their names. Think of it as just a standard output of the values of that instance in table format, with the field name (verbose_name specifically if specified on the field) in the first column and the value of that field in the second column. For example, let's say we have the following model definition: class Client(Model): name = CharField(max_length=150) email = EmailField(max_length=100, verbose_name=\"E-mail\") I would want it to be output in the template like so (assume an instance with the given values): Field Name Field Value ---------- ----------- Name Wayne Koorts E-mail waynes@email.com What I'm trying to achieve is being able to pass an instance of the model to a template and be able to iterate over it dynamically in the template, something like this: <table> {% for field in fields %} <tr> <td>{{ field.name }}</td> <td>{{ field.value }}</td> </tr> {% endfor %} </table> Is there a neat, \"Django-approved\" way to do this? It seems like a very common task, and I will need to do it often for this particular project.", "output": "Since Django 1.9: use model._meta.get_fields() to get the model's fields and field.name to get each field name. Previous to Django 1.9: model._meta.get_all_field_names() will give you all the model's field names, then you can use model._meta.get_field() to work your way to the verbose name, and getattr(model_instance, 'field_name') to get the value from the model."} +{"question_id": 538551, "score": 209, "creation_date": 1234383225, "tags": ["python", "optimization", "largenumber"], "instruction": "Handling very large numbers in Python\n\nI've been considering fast poker hand evaluation in Python. It occurred to me that one way to speed the process up would be to represent all the card faces and suits as prime numbers and multiply them together to represent the hands. To whit: class PokerCard: faces = '23456789TJQKA' suits = 'cdhs' facePrimes = [11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 53, 59, 61] suitPrimes = [2, 3, 5, 7] AND def HashVal(self): return PokerCard.facePrimes[self.cardFace] * PokerCard.suitPrimes[self.cardSuit] This would give each hand a numeric value that, through modulo could tell me how many kings are in the hand or how many hearts. For example, any hand with five or more clubs in it would divide evenly by 2^5; any hand with four kings would divide evenly by 59^4, etc. The problem is that a seven-card hand like AcAdAhAsKdKhKs has a hash value of approximately 62.7 quadrillion, which would take considerably more than 32 bits to represent internally. Is there a way to store such large numbers in Python that will allow me to perform arithmetic operations on it?", "output": "Python supports a \"bignum\" integer type which can work with arbitrarily large numbers. 
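A quick interactive sanity check (the number is chosen only for illustration): >>> 2**100 1267650600228229401496703205376 shows that ordinary arithmetic operators keep working at any size. 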
In Python 2.5+, this type is called long and is separate from the int type, but the interpreter will automatically use whichever is more appropriate. In Python 3.0+, long has been renamed int and the old int type has been dropped completely. That's just an implementation detail, though \u2014 as long as you have version 2.5 or better, just perform standard math operations and any number which exceeds the boundaries of 32-bit math will be automatically (and transparently) converted to a bignum. You can find all the gory details in PEP 0237."} +{"question_id": 1316887, "score": 209, "creation_date": 1250970821, "tags": ["python", "string"], "instruction": "What is the most efficient string concatenation method in Python?\n\nIs there an efficient mass string concatenation method in Python (like StringBuilder in C# or StringBuffer in Java)? I found following methods here: Simple concatenation using + Using a string list and the join method Using UserString from the MutableString module Using a character array and the array module Using cStringIO from the StringIO module What should be used and why? (A related question is here.)", "output": "If you know all components beforehand once, use the literal string interpolation, also known as f-strings or formatted strings, introduced in Python 3.6. Given the test case from mkoistinen's answer, having strings domain = 'some_really_long_example.com' lang = 'en' path = 'some/really/long/path/' The contenders and their execution time on my computer using Python 3.6 on Linux as timed by IPython and the timeit module are f'http://{domain}/{lang}/{path}' - 0.151 \u00b5s 'http://%s/%s/%s' % (domain, lang, path) - 0.321 \u00b5s 'http://' + domain + '/' + lang + '/' + path - 0.356 \u00b5s ''.join(('http://', domain, '/', lang, '/', path)) - 0.249 \u00b5s (notice that building a constant-length tuple is slightly faster than building a constant-length list). Thus the shortest and the most beautiful code possible is also fastest. The speed can be contrasted with the fastest method for Python 2, which is + concatenation on my computer; and that takes 0.203 \u00b5s with 8-bit strings, and 0.259 \u00b5s if the strings are all Unicode. (In alpha versions of Python 3.6 the implementation of f'' strings was the slowest possible - actually the generated byte code is pretty much equivalent to the ''.join() case with unnecessary calls to str.__format__ which without arguments would just return self unchanged. These inefficiencies were addressed before 3.6 final.)"} +{"question_id": 3755136, "score": 208, "creation_date": 1285013714, "tags": ["python", "algorithm", "list", "sorting"], "instruction": "Pythonic way to check if a list is sorted or not\n\nIs there a pythonic way to check if a list is already sorted in ASC or DESC listtimestamps = [1, 2, 3, 5, 6, 7] something like isttimestamps.isSorted() that returns True or False. I want to input a list of timestamps for some messages and check if the the transactions appeared in the correct order.", "output": "Here is a one liner: all(l[i] <= l[i+1] for i in range(len(l) - 1)) If using Python 2, use xrange instead of range. 
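A quick sanity check with made-up lists: for l = [1, 2, 2, 3] the expression returns True, and for l = [3, 1, 2] it returns False. 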
For reverse=True, use >= instead of <=."} +{"question_id": 35286540, "score": 208, "creation_date": 1455004258, "tags": ["python", "opencv", "matplotlib", "imshow"], "instruction": "How to display an image\n\nI tried to use IPython.display with the following code: from IPython.display import display, Image display(Image(filename='MyImage.png')) I also tried to use matplotlib with the following code: import matplotlib.pyplot as plt import matplotlib.image as mpimg plt.imshow(mpimg.imread('MyImage.png')) In both cases, nothing is displayed, not even an error message.", "output": "If you are using matplotlib and want to show the image in your interactive notebook, try the following: %matplotlib inline import matplotlib.pyplot as plt import matplotlib.image as mpimg img = mpimg.imread('your_image.png') imgplot = plt.imshow(img) plt.show()"} +{"question_id": 15511349, "score": 208, "creation_date": 1363730471, "tags": ["python", "list", "select", "random"], "instruction": "Select 50 items from list at random\n\nI have a function which reads a list of items from a file. How can I select only 50 items from the list randomly to write to another file? def randomizer(input, output='random.txt'): query = open(input).read().split() out_file = open(output, 'w') random.shuffle(query) for item in query: out_file.write(item + '\\n') For example, if the total randomization file was random_total = ['9', '2', '3', '1', '5', '6', '8', '7', '0', '4'] and I would want a random set of 3, the result could be random = ['9', '2', '3'] How can I select 50 from the list that I randomized? Even better, how could I select 50 at random from the original list?", "output": "If the list is in random order, you can just take the first 50. Otherwise, use import random random.sample(the_list, 50) random.sample help text: sample(self, population, k) method of random.Random instance Chooses k unique random elements from a population sequence. Returns a new list containing elements from the population while leaving the original population unchanged. The resulting list is in selection order so that all sub-slices will also be valid random samples. This allows raffle winners (the sample) to be partitioned into grand prize and second place winners (the subslices). Members of the population need not be hashable or unique. If the population contains repeats, then each occurrence is a possible selection in the sample. To choose a sample in a range of integers, use xrange as an argument. This is especially fast and space efficient for sampling from a large population: sample(xrange(10000000), 60)"} +{"question_id": 5268404, "score": 208, "creation_date": 1299811317, "tags": ["python"], "instruction": "What is the fastest way to check if a class has a function defined?\n\nI'm writing an AI state space search algorithm, and I have a generic class which can be used to quickly implement a search algorithm. A subclass would define the necessary operations, and the algorithm does the rest. Here is where I get stuck: I want to avoid regenerating the parent state over and over again, so I have the following function, which returns the operations that can be legally applied to any state: def get_operations(self, include_parent=True): ops = self._get_operations() if not include_parent and self.path.parent_op: try: parent_inverse = self.invert_op(self.path.parent_op) ops.remove(parent_inverse) except NotImplementedError: pass return ops And the invert_op function throws by default. 
Is there a faster way to check to see if the function is not defined than catching an exception? I was thinking something on the lines of checking for present in dir, but that doesn't seem right. hasattr is implemented by calling getattr and checking if it raises, which is not what I want.", "output": "Yes, use getattr() to get the attribute, and callable() to verify it is a method: invert_op = getattr(self, \"invert_op\", None) if callable(invert_op): invert_op(self.path.parent_op) Note that getattr() normally throws exception when the attribute doesn't exist. However, if you specify a default value (None, in this case), it will return that instead."} +{"question_id": 45273731, "score": 208, "creation_date": 1500876788, "tags": ["python", "pandas", "numpy", "dataframe", "binning"], "instruction": "Binning a column with pandas\n\nI have a data frame column with numeric values: df['percentage'].head() 46.5 44.2 100.0 42.12 I want to see the column as bin counts: bins = [0, 1, 5, 10, 25, 50, 100] How can I get the result as bins with their value counts? [0, 1] bin amount [1, 5] etc [5, 10] etc ...", "output": "You can use pandas.cut: bins = [0, 1, 5, 10, 25, 50, 100] df['binned'] = pd.cut(df['percentage'], bins) print (df) percentage binned 0 46.50 (25, 50] 1 44.20 (25, 50] 2 100.00 (50, 100] 3 42.12 (25, 50] bins = [0, 1, 5, 10, 25, 50, 100] labels = [1,2,3,4,5,6] df['binned'] = pd.cut(df['percentage'], bins=bins, labels=labels) print (df) percentage binned 0 46.50 5 1 44.20 5 2 100.00 6 3 42.12 5 Or numpy.searchsorted: bins = [0, 1, 5, 10, 25, 50, 100] df['binned'] = np.searchsorted(bins, df['percentage'].values) print (df) percentage binned 0 46.50 5 1 44.20 5 2 100.00 6 3 42.12 5 ...and then value_counts or groupby and aggregate size: s = pd.cut(df['percentage'], bins=bins).value_counts() print (s) (25, 50] 3 (50, 100] 1 (10, 25] 0 (5, 10] 0 (1, 5] 0 (0, 1] 0 Name: percentage, dtype: int64 s = df.groupby(pd.cut(df['percentage'], bins=bins)).size() print (s) percentage (0, 1] 0 (1, 5] 0 (5, 10] 0 (10, 25] 0 (25, 50] 3 (50, 100] 1 dtype: int64 By default cut returns categorical. Series methods like Series.value_counts() will use all categories, even if some categories are not present in the data, operations in categorical."} +{"question_id": 1984162, "score": 208, "creation_date": 1262239599, "tags": ["python", "magic-methods", "repr"], "instruction": "What is the purpose of the __repr__ method?\n\nFor example, def __repr__(self): return '<%s %s (%s:%s) %s>' % ( self.__class__.__name__, self.urlconf_name, self.app_name, self.namespace, self.regex.pattern) What is the significance/purpose of this method?", "output": "__repr__ should return a printable representation of the object, most likely one of the ways possible to create this object. See also documentation for repr() (which calls __repr__). __repr__ is more for developers, while __str__ is for end users. A simple example (interactive): class Point: def __init__(self, x, y): self.x = x self.y = y def __repr__(self): cls = self.__class__.__name__ return f'{cls}(x={self.x!r}, y={self.y!r})' p = Point(1, 2) p Output: Point(x=1, y=2)"} +{"question_id": 8236380, "score": 207, "creation_date": 1322012909, "tags": ["python"], "instruction": "Why is semicolon allowed in this Python snippet?\n\nPython does not warrant the use of semicolons to end statements. So why is this (below) allowed? import pdb; pdb.set_trace()", "output": "Python does not require semicolons to terminate statements. 
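(For illustration, and not part of the original answer, the snippet from the question could equally be written without the semicolon:)
import pdb
pdb.set_trace()   # identical behaviour to the one-liner: import pdb; pdb.set_trace()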
Semicolons can be used to delimit statements if you wish to put multiple statements on the same line. Now, why is this allowed? It's a simple design decision. I don't think Python needs this semicolon thing, but somebody thought it would be nice to have and added it to the language."} +{"question_id": 59882884, "score": 206, "creation_date": 1579796188, "tags": ["python", "visual-studio-code", "python-poetry"], "instruction": "VSCode doesn't show poetry virtualenvs in select interpreter option\n\nI need help. VSCode will NEVER find poetry virtualenv interpreter no matter what I try. Installed poetry Python package manager using a standard $ curl method as explained in the official documentation. Started a project by $ poetry new finance-essentials_37-64, installed poetry environment with $ poetry install. So now I can see that I indeed have a virtual environment by: Jaepil@Jaepil-PC MINGW64 /e/VSCodeProjects/finance_essentials_37-64 $ poetry env list >> finance-essentials-37-64-SCQrHB_N-py3.7 (Activated) and this virtualenv is installed at: C:\\Users\\Jaepil\\AppData\\Local\\pypoetry\\Cache\\virtualenvs, which has finance-essentials-37-64-SCQrHB_N-py3.7 directory. However, VSCode is unable to find this virtualenv in its 'select interpreter' command. I only see a bunch of Anaconda and Pipenv environments but not the poetry environment's interpreter that I've just made. I also added \"python.venvPath\": \"~/.cache/pypoetry/virtualenvs\", to my settings.json as suggested in here, but to no avail. Still doesn't work. I also tried an absolute path, by adding \"python.venvPath\": \"C:\\\\Users\\\\Jaepil\\\\AppData\\\\Local\\\\pypoetry\\\\Cache\\\\virtualenvs\", to the same settings, but it also doesn't work. VSCode settings reference states that it has python.poetryPath as a default but it doesn't seem to work either. Should I change the default value \"poetry\" in this case? python.poetryPath \"poetry\" Specifies the location of the Poetry dependency manager executable, if installed. The default value \"poetry\" assumes the executable is in the current path. The Python extension uses this setting to install packages when Poetry is available and there's a poetry.lock file in the workspace folder. I'm on Windows 10 pro 64bit & Has Python 3.7.6 installed on the system. PS C:\\Users\\Jaepil> python Python 3.7.6 (tags/v3.7.6:43364a7ae0, Dec 19 2019, 00:42:30) [MSC v.1916 64 bit (AMD64)] on win32", "output": "You just need to type in your shell: poetry config virtualenvs.in-project true The virtualenv will be created inside the project path and vscode will recognize. Consider adding this to your .bashrc or .zshrc. If you already have created your project, you need to re-create the virtualenv to make it appear in the correct place: poetry env list # shows the name of the current environment poetry env remove <current environment> poetry install # will create a new environment using your updated configuration"} +{"question_id": 18713086, "score": 206, "creation_date": 1378798530, "tags": ["python", "virtualenv"], "instruction": "'virtualenv' won't activate on Windows\n\nEssentially I cannot seem to activate my virtualenv environment which I create. I'm doing this inside of Windows PowerShell through using scripts\\activate but I get an error message: \"cannot be loaded because the execution of scripts is disabled on this system\". 
Could this be because I don't have administrator privileges on my computer?", "output": "According to Microsoft Tech Support it might be a problem with Execution Policy Settings. To fix it, you should try executing Set-ExecutionPolicy Unrestricted -Scope Process (as mentioned in the comment section by @wtsiamruk) in your PowerShell window. This would allow running virtualenv in the current PowerShell session. There is also another approach that is less safe but nevertheless recommended by MS Tech Support: using Set-ExecutionPolicy Unrestricted -Force (which does give you enough power to break your system). However, before you use this unsafe way, be sure to check what your current ExecutionPolicy setting is by using get-ExecutionPolicy. Then, when you are done, you can revert to this ExecutionPolicy by using Set-ExecutionPolicy %the value the get-ExecutionPolicy command gave you% -Force."} +{"question_id": 62764148, "score": 206, "creation_date": 1594067909, "tags": ["python", "pip", "python-poetry"], "instruction": "How to import an existing requirements.txt into a Poetry project?\n\nI am trying out Poetry in an existing project. It used pyenv and virtual env originally so I have a requirements.txt file with the project's dependencies. I want to import the requirements.txt file using Poetry, so that I can load the dependencies for the first time. I've looked through poetry's documentation, but I haven't found a way to do this. Is there a way to do it? I know that I can add all packages manually, but I was hoping for a more automated process, because there are a lot of packages.", "output": "poetry doesn't support this directly. But if you have a handmade list of required packages (at best without any version numbers) that only contains the main dependencies and not the dependencies of a dependency, you could do this: $ cat requirements.txt | xargs poetry add"} +{"question_id": 8744113, "score": 206, "creation_date": 1325773731, "tags": ["python", "list", "reference"], "instruction": "python list by value not by reference\n\nLet's take an example a=['help', 'copyright', 'credits', 'license'] b=a b.append('XYZ') b ['help', 'copyright', 'credits', 'license', 'XYZ'] a ['help', 'copyright', 'credits', 'license', 'XYZ'] I wanted to append a value to list 'b', but the value of list 'a' has also changed. I think I have some idea why it's like this (Python passes lists by reference). My question is \"how can I pass it by value so that appending to 'b' doesn't change the values in 'a'?\"", "output": "You cannot pass anything by value in Python. If you want to make a copy of a, you can do so explicitly, as described in the official Python FAQ: b = a[:]"} +{"question_id": 7173033, "score": 206, "creation_date": 1314177258, "tags": ["python"], "instruction": "Duplicate log output when using Python logging module\n\nI am using the Python logger. The following is my code: import os import time import datetime import logging class Logger : def myLogger(self): logger = logging.getLogger('ProvisioningPython') logger.setLevel(logging.DEBUG) now = datetime.datetime.now() handler=logging.FileHandler('/root/credentials/Logs/ProvisioningPython'+ now.strftime(\"%Y-%m-%d\") +'.log') formatter = logging.Formatter('%(asctime)s %(levelname)s %(message)s') handler.setFormatter(formatter) logger.addHandler(handler) return logger The problem I have is that I get multiple entries in the log file for each logger.info call.
How can I solve this?", "output": "Function logging.getLogger() returns the same instance for a given name. The problem is that every time you call myLogger(), it's adding another handler to the instance, which causes the duplicate logs. Perhaps something like this? import os import time import datetime import logging loggers = {} def myLogger(name): global loggers if loggers.get(name): return loggers.get(name) else: logger = logging.getLogger(name) logger.setLevel(logging.DEBUG) now = datetime.datetime.now() handler = logging.FileHandler( '/root/credentials/Logs/ProvisioningPython' + now.strftime(\"%Y-%m-%d\") + '.log') formatter = logging.Formatter('%(asctime)s %(levelname)s %(message)s') handler.setFormatter(formatter) logger.addHandler(handler) loggers[name] = logger return logger"} +{"question_id": 70319606, "score": 206, "creation_date": 1639262655, "tags": ["python", "django", "django-urls", "django-4.0"], "instruction": "ImportError: cannot import name 'url' from 'django.conf.urls' after upgrading to Django 4.0\n\nAfter upgrading to Django 4.0, I get the following error when running python manage.py runserver ... File \"/path/to/myproject/myproject/urls.py\", line 16, in <module> from django.conf.urls import url ImportError: cannot import name 'url' from 'django.conf.urls' (/path/to/my/venv/lib/python3.9/site-packages/django/conf/urls/__init__.py) My urls.py is as follows: from django.conf.urls from myapp.views import home urlpatterns = [ url(r'^$', home, name=\"home\"), url(r'^myapp/', include('myapp.urls'), ]", "output": "django.conf.urls.url() was deprecated in Django 3.0, and is removed in Django 4.0+. The easiest fix is to replace url() with re_path(). re_path uses regexes like url, so you only have to update the import and replace url with re_path. from django.urls import include, re_path from myapp.views import home urlpatterns = [ re_path(r'^$', home, name='home'), re_path(r'^myapp/', include('myapp.urls'), ] Alternatively, you could switch to using path. path() does not use regexes, so you'll have to update your URL patterns if you switch to path. from django.urls import include, path from myapp.views import home urlpatterns = [ path('', home, name='home'), path('myapp/', include('myapp.urls'), ] If you have a large project with many URL patterns to update, you may find the django-upgrade library useful to update your urls.py files."} +{"question_id": 36059194, "score": 206, "creation_date": 1458214037, "tags": ["python", "json", "python-2.7"], "instruction": "What is the difference between json.dump() and json.dumps() in python?\n\nI searched in this official document to find difference between the json.dump() and json.dumps() in python. It is clear that they are related with file write option. But what is the detailed difference between them and in what situations one has more advantage than other?", "output": "If you want to dump the JSON into a file/socket or whatever, then you should go with dump(). If you only need it as a string (for printing, parsing or whatever) then use dumps() (dump string) As mentioned by Antti Haapala in this answer, there are some minor differences on the ensure_ascii behaviour. This is mostly due to how the underlying write() function works, being that it operates on chunks rather than the whole string. Check his answer for more details on that. 
json.dump() Serialize obj as a JSON formatted stream to fp (a .write()-supporting file-like object If ensure_ascii is False, some chunks written to fp may be unicode instances json.dumps() Serialize obj to a JSON formatted str If ensure_ascii is False, the result may contain non-ASCII characters and the return value may be a unicode instance"} +{"question_id": 18188563, "score": 206, "creation_date": 1376314959, "tags": ["python", "exception", "nested", "raise"], "instruction": "How to re-raise an exception in nested try/except blocks?\n\nI know that if I want to re-raise an exception, I simple use raise without arguments in the respective except block. But given a nested expression like try: something() except SomeError as e: try: plan_B() except AlsoFailsError: raise e # I'd like to raise the SomeError as if plan_B() # didn't raise the AlsoFailsError how can I re-raise the SomeError without breaking the stack trace? raise alone would in this case re-raise the more recent AlsoFailsError. Or how could I refactor my code to avoid this issue?", "output": "As of Python 3, the traceback is stored in the exception, so a simple raise e will do the (mostly) right thing: try: something() except SomeError as e: try: plan_B() except AlsoFailsError: raise e # or raise e from None - see below The traceback produced will include an additional notice that SomeError occurred while handling AlsoFailsError (because of raise e being inside except AlsoFailsError). This is misleading because what actually happened is the other way around - we encountered AlsoFailsError, and handled it, while trying to recover from SomeError. To obtain a traceback that doesn't include AlsoFailsError, replace raise e with raise e from None. In Python 2 you'd store the exception type, value, and traceback in local variables and use the three-argument form of raise: try: something() except SomeError: t, v, tb = sys.exc_info() try: plan_B() except AlsoFailsError: raise t, v, tb"} +{"question_id": 4588628, "score": 205, "creation_date": 1294090808, "tags": ["python", "numpy"], "instruction": "Find indices of elements equal to zero in a NumPy array\n\nNumPy has the efficient function/method nonzero() to identify the indices of non-zero elements in an ndarray object. What is the most efficient way to obtain the indices of the elements that do have a value of zero?", "output": "numpy.where() is my favorite. >>> x = numpy.array([1,0,2,0,3,0,4,5,6,7,8]) >>> numpy.where(x == 0)[0] array([1, 3, 5]) The method where returns a tuple of ndarrays, each corresponding to a different dimension of the input. 
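For a two-dimensional input, for instance (an added sketch, not from the original answer), each dimension gets its own index array:
import numpy
a2d = numpy.array([[1, 0], [0, 3]])
rows, cols = numpy.where(a2d == 0)
print(rows)   # [0 1]
print(cols)   # [1 0]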
Since the input is one-dimensional, the [0] unboxes the tuple's only element."} +{"question_id": 14853243, "score": 205, "creation_date": 1360757317, "tags": ["python", "xml", "xml-parsing", "xml-namespaces", "elementtree"], "instruction": "Parsing XML with namespace in Python via 'ElementTree'\n\nI have the following XML which I want to parse using Python's ElementTree: <rdf:RDF xml:base=\"http://dbpedia.org/ontology/\" xmlns:rdf=\"http://www.w3.org/1999/02/22-rdf-syntax-ns#\" xmlns:owl=\"http://www.w3.org/2002/07/owl#\" xmlns:xsd=\"http://www.w3.org/2001/XMLSchema#\" xmlns:rdfs=\"http://www.w3.org/2000/01/rdf-schema#\" xmlns=\"http://dbpedia.org/ontology/\"> <owl:Class rdf:about=\"http://dbpedia.org/ontology/BasketballLeague\"> <rdfs:label xml:lang=\"en\">basketball league</rdfs:label> <rdfs:comment xml:lang=\"en\"> a group of sports teams that compete against each other in Basketball </rdfs:comment> </owl:Class> </rdf:RDF> I want to find all owl:Class tags and then extract the value of all rdfs:label instances inside them. I am using the following code: tree = ET.parse(\"filename\") root = tree.getroot() root.findall('owl:Class') Because of the namespace, I am getting the following error. SyntaxError: prefix 'owl' not found in prefix map I tried reading the document at http://effbot.org/zone/element-namespaces.htm but I am still not able to get this working since the above XML has multiple nested namespaces. Kindly let me know how to change the code to find all the owl:Class tags.", "output": "You need to give the .find(), findall() and iterfind() methods an explicit namespace dictionary: namespaces = {'owl': 'http://www.w3.org/2002/07/owl#'} # add more as needed root.findall('owl:Class', namespaces) Prefixes are only looked up in the namespaces parameter you pass in. This means you can use any namespace prefix you like; the API splits off the owl: part, looks up the corresponding namespace URL in the namespaces dictionary, then changes the search to look for the XPath expression {http://www.w3.org/2002/07/owl}Class instead. You can use the same syntax yourself too of course: root.findall('{http://www.w3.org/2002/07/owl#}Class') Also see the Parsing XML with Namespaces section of the ElementTree documentation. As of Python 3.8, the ElementTree library also understands the {*} namespace wildcard, so root.findall('{*}Class') would also work (but don't do that if your document can have multiple namespaces that define the Class element). If you can switch to the lxml library things are better; that library supports the same ElementTree API, but collects namespaces for you in .nsmap attribute on elements and generally has superior namespaces support."} +{"question_id": 11786157, "score": 205, "creation_date": 1343943380, "tags": ["python"], "instruction": "If list index exists, do X\n\nI need to code such that if a certain list index exists, then run a function. 
My failing code: n = int(input(\"Define number of actors: \")) count = 0 nams = [] while count < n: count = count + 1 print(\"Define name for actor\", count, \":\") nams.append(input()) if nams[2]: # 'if nams[2] exists, do something depending on len(nams) if len(nams) > 3: do_something", "output": "Could it be more useful for you to use the length of the list len(nams) to inform your decision rather than checking nams[i] for each possible length?"} +{"question_id": 17315737, "score": 204, "creation_date": 1372237284, "tags": ["python", "pandas"], "instruction": "Split a large pandas dataframe\n\nI have a large dataframe with 423244 lines. I want to split this in to 4. I tried the following code which gave an error? ValueError: array split does not result in an equal division for item in np.split(df, 4): print item How to split this dataframe in to 4 groups?", "output": "Use np.array_split: Docstring: Split an array into multiple sub-arrays. Please refer to the ``split`` documentation. The only difference between these functions is that ``array_split`` allows `indices_or_sections` to be an integer that does *not* equally divide the axis. In [1]: import pandas as pd In [2]: df = pd.DataFrame({'A' : ['foo', 'bar', 'foo', 'bar', ...: 'foo', 'bar', 'foo', 'foo'], ...: 'B' : ['one', 'one', 'two', 'three', ...: 'two', 'two', 'one', 'three'], ...: 'C' : randn(8), 'D' : randn(8)}) In [3]: print df A B C D 0 foo one -0.174067 -0.608579 1 bar one -0.860386 -1.210518 2 foo two 0.614102 1.689837 3 bar three -0.284792 -1.071160 4 foo two 0.843610 0.803712 5 bar two -1.514722 0.870861 6 foo one 0.131529 -0.968151 7 foo three -1.002946 -0.257468 In [4]: import numpy as np In [5]: np.array_split(df, 3) Out[5]: [ A B C D 0 foo one -0.174067 -0.608579 1 bar one -0.860386 -1.210518 2 foo two 0.614102 1.689837, A B C D 3 bar three -0.284792 -1.071160 4 foo two 0.843610 0.803712 5 bar two -1.514722 0.870861, A B C D 6 foo one 0.131529 -0.968151 7 foo three -1.002946 -0.257468]"} +{"question_id": 15222754, "score": 204, "creation_date": 1362483278, "tags": ["python", "pandas", "dataframe", "group-by", "mode"], "instruction": "GroupBy pandas DataFrame and select most common value\n\nI have a data frame with three string columns. I know that the only one value in the 3rd column is valid for every combination of the first two. To clean the data I have to group by data frame by first two columns and select most common value of the third column for each combination. My code: import pandas as pd from scipy import stats source = pd.DataFrame({ 'Country': ['USA', 'USA', 'Russia', 'USA'], 'City': ['New-York', 'New-York', 'Sankt-Petersburg', 'New-York'], 'Short name': ['NY', 'New', 'Spb', 'NY']}) source.groupby(['Country','City']).agg(lambda x: stats.mode(x['Short name'])[0]) Last line of code doesn't work, it says KeyError: 'Short name' and if I try to group only by City, then I got an AssertionError. What can I do fix it?", "output": "You can use value_counts() to get a count series, and get the first row: source.groupby(['Country','City']).agg(lambda x: x.value_counts().index[0]) In case you are wondering about performing other agg functions in the .agg(), try this. 
# Let's add a new col, \"account\" source['account'] = [1, 2, 3, 3] source.groupby(['Country','City']).agg( mod=('Short name', lambda x: x.value_counts().index[0]), avg=('account', 'mean'))"} +{"question_id": 3170055, "score": 204, "creation_date": 1278123315, "tags": ["list", "python", "intersection"], "instruction": "Test if lists share any items in python\n\nI want to check if any of the items in one list are present in another list. I can do it simply with the code below, but I suspect there might be a library function to do this. If not, is there a more pythonic method of achieving the same result. In [78]: a = [1, 2, 3, 4, 5] In [79]: b = [8, 7, 6] In [80]: c = [8, 7, 6, 5] In [81]: def lists_overlap(a, b): ....: for i in a: ....: if i in b: ....: return True ....: return False ....: In [82]: lists_overlap(a, b) Out[82]: False In [83]: lists_overlap(a, c) Out[83]: True In [84]: def lists_overlap2(a, b): ....: return len(set(a).intersection(set(b))) > 0 ....:", "output": "Short answer: use not set(a).isdisjoint(b), it's generally the fastest. There are four common ways to test if two lists a and b share any items. The first option is to convert both to sets and check their intersection, as such: bool(set(a) & set(b)) Because sets are stored using a hash table in Python, searching them is O(1) (see here for more information about complexity of operators in Python). Theoretically, this is O(n+m) on average for n and m objects in lists a and b. But it must first create sets out of the lists, which can take a non-negligible amount of time, and it supposes that hashing collisions are sparse among your data. The second way to do it is using a generator expression performing iteration on the lists, such as: any(i in a for i in b) This allows searching in place, so no new memory is allocated for intermediary variables. It also bails out on the first find. But the in operator is always O(n) on lists (see here). Another proposed option is a hybrid: iterate through one of the lists, convert the other one to a set and test for membership on that set, like so: a = set(a); any(i in a for i in b) A fourth approach is to take advantage of the isdisjoint() method of the (frozen)sets (see here), for example: not set(a).isdisjoint(b) If the elements you search are near the beginning of an array (e.g. it is sorted), the generator expression is favored, as the sets intersection method has to allocate new memory for the intermediary variables: from timeit import timeit >>> timeit('bool(set(a) & set(b))', setup=\"a=list(range(1000));b=list(range(1000))\", number=100000) 26.077727576019242 >>> timeit('any(i in a for i in b)', setup=\"a=list(range(1000));b=list(range(1000))\", number=100000) 0.16220548999262974 Here's a graph of the execution time for this example as a function of list size: Note that both axes are logarithmic. This represents the best case for the generator expression. As can be seen, the isdisjoint() method is better for very small list sizes, whereas the generator expression is better for larger list sizes. On the other hand, as the search begins at the beginning for the hybrid and generator expression, if the shared elements are systematically at the end of the array (or both lists do not share any values), the disjoint and set intersection approaches are then way faster than the generator expression and the hybrid approach.
>>> timeit('any(i in a for i in b)', setup=\"a=list(range(1000));b=[x+998 for x in range(999,0,-1)]\", number=1000)) 13.739536046981812 >>> timeit('bool(set(a) & set(b))', setup=\"a=list(range(1000));b=[x+998 for x in range(999,0,-1)]\", number=1000)) 0.08102107048034668 It is interesting to note that the generator expression is way slower for bigger list sizes. This is only for 1000 repetitions, instead of the 100000 for the previous figure. This setup also approximates well when when no elements are shared, and is the best case for the disjoint and set intersection approaches. Here are two analysis using random numbers (instead of rigging the setup to favor one technique or another): High chance of sharing: elements are randomly taken from [1, 2*len(a)]. Low chance of sharing: elements are randomly taken from [1, 1000*len(a)]. Up to now, this analysis supposed both lists are of the same size. In case of two lists of different sizes, for example a is much smaller, isdisjoint() is always faster: Make sure that the a list is the smaller, otherwise the performance decreases. In this experiment, the a list size was set constant to 5. In summary: If the lists are very small (< 10 elements), not set(a).isdisjoint(b) is always the fastest. If the elements in the lists are sorted or have a regular structure that you can take advantage of, the generator expression any(i in a for i in b) is the fastest on large list sizes; Test the set intersection with not set(a).isdisjoint(b), which is always faster than bool(set(a) & set(b)). The hybrid \"iterate through list, test on set\" a = set(a); any(i in a for i in b) is generally slower than other methods. The generator expression and the hybrid are much slower than the two other approaches when it comes to lists without sharing elements. In most cases, using the isdisjoint() method is the best approach as the generator expression will take much longer to execute, as it is very inefficient when no elements are shared."} +{"question_id": 64717302, "score": 204, "creation_date": 1604676159, "tags": ["python", "selenium"], "instruction": "DeprecationWarning: executable_path has been deprecated selenium python\n\nI am using sublime to code python scripts. The following code is for selenium in python to install the driver automatically by using the webdriver_manager package # pip install webdriver-manager from selenium import webdriver from selenium.webdriver.chrome.service import Service from webdriver_manager.chrome import ChromeDriverManager from selenium.webdriver.common.by import By driver = webdriver.Chrome(ChromeDriverManager().install()) driver.maximize_window() #s=Service(path) #driver=webdriver.Chrome(service=s) driver.get('https://www.google.com') driver.find_element(By.NAME, 'q').send_keys('Yasser Khalil') The code works fine but I got a warning like that Demo.py:7: DeprecationWarning: executable_path has been deprecated, please pass in a Service object driver = webdriver.Chrome(ChromeDriverManager().install()) How to fix such a bug?", "output": "This error message... DeprecationWarning: executable_path has been deprecated, please pass in a Service object ...implies that the key executable_path will be deprecated in the upcoming releases. This change is inline with the Selenium 4.0 Beta 1 changelog which mentions: Deprecate all but Options and Service arguments in driver instantiation. 
(#9125,#9128) Solution With selenium4 as the key executable_path is deprecated you have to use an instance of the Service() class along with ChromeDriverManager().install() command as discussed below. Pre-requisites Ensure that: Selenium is upgraded to v4.0.0 pip3 install -U selenium Webdriver Manager for Python is installed pip3 install webdriver-manager You can find a detailed discussion on installing Webdriver Manager for Python in ModuleNotFoundError: No module named 'webdriver_manager' error even after installing webdrivermanager Selenium v4 compatible Code Block from selenium import webdriver from selenium.webdriver.chrome.service import Service from webdriver_manager.chrome import ChromeDriverManager driver = webdriver.Chrome(service=Service(ChromeDriverManager().install())) driver.get(\"https://www.google.com\") Console Output: [WDM] - ====== WebDriver manager ====== [WDM] - Current google-chrome version is 96.0.4664 [WDM] - Get LATEST driver version for 96.0.4664 [WDM] - Driver [C:\\Users\\Admin\\.wdm\\drivers\\chromedriver\\win32\\96.0.4664.45\\chromedriver.exe] found in cache You can find a detailed discussion on installing Webdriver Manager for Python in Selenium ChromeDriver issue using Webdriver Manager for Python Incase you want to pass the Options() object you can use: from selenium import webdriver from selenium.webdriver.chrome.options import Options from selenium.webdriver.chrome.service import Service from webdriver_manager.chrome import ChromeDriverManager options = Options() options.add_argument(\"start-maximized\") driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()), options=options) driver.get(\"https://www.google.com\") TL; DR You can find the relevant Bug Report/Pull Request in: Bug Report: deprecate all but Options and Service arguments in driver instantiation Pull Request: deprecate all but Options and Service arguments in driver instantiation"} +{"question_id": 8463008, "score": 204, "creation_date": 1323595717, "tags": ["python", "performance", "queue", "multiprocessing", "pipe"], "instruction": "Multiprocessing - Pipe vs Queue\n\nWhat are the fundamental differences between queues and pipes in Python's multiprocessing package? In what scenarios should one choose one over the other? When is it advantageous to use Pipe()? When is it advantageous to use Queue()?", "output": "What are the fundamental differences between queues and pipes in Python's multiprocessing package? Major Edit of this answer (CY2024): concurrency As of modern python versions if you don't need your producers and consumers to communicate, that's the only real use-case for python multiprocessing. If you only need python concurrency, use concurrent.futures. This example uses concurrent.futures to make four calls to do_something_slow(), which has a one-second delay. If your machine has at least four cores, running this four-second-aggregate series of function calls only takes one-second. By default, concurrent.futures spawns workers corresponding to the number of CPU cores you have. import concurrent.futures import time def do_slow_thing(input_str: str) -> str: \"\"\"Return modified input string after a 1-second delay\"\"\" if isinstance(input_str, str): time.sleep(1) return \"1-SECOND-DELAY \" + input_str else: return \"INPUT ERROR\" if __name__==\"__main__\": # Define some inputs for process pool all_inputs = [ \"do\", \"foo\", \"moo\", \"chew\", ] # Spawn a process pool with the default number of workers... 
with concurrent.futures.ProcessPoolExecutor(max_workers=None) as executor: # For each string in all_inputs, call do_slow_thing() # in parallel across the process worker pool these_futures = [executor.submit(do_slow_thing, ii) for ii in all_inputs] # Wait for all processes to finish concurrent.futures.wait(these_futures) # Get the results from the process pool execution... each # future.result() call is the return value from do_slow_thing() string_outputs = [future.result() for future in these_futures] for tmp in string_outputs: print(tmp) With at least four CPU cores, you'll see this printed after roughly one-second... $ time python stackoverflow.py 1-SECOND-DELAY do 1-SECOND-DELAY foo 1-SECOND-DELAY moo 1-SECOND-DELAY chew real 0m1.058s user 0m0.060s sys 0m0.017s $ Original Answer At this point, the only major use-case for multiprocessing is to facilitate your producers and consumers talking to each other during execution. Most people don't need that. However, if you want communication via queue / pipes, you can find my original answer to the OP's question below (which profiles how fast they are). The existing comments on this answer refer to the aforementioned answer below"} +{"question_id": 5434891, "score": 204, "creation_date": 1301068491, "tags": ["python"], "instruction": "How can I iterate over overlapping (current, next) pairs of values from a list?\n\nI sometimes need to iterate a list in Python looking at the \"current\" element and the \"next\" element. I have, till now, done so with code like: for current, next in zip(the_list, the_list[1:]): # Do something This works and does what I expect, but is there's a more idiomatic or efficient way to do the same thing? Some answers to this problem can simplify by addressing the specific case of taking only two elements at a time. For the general case of N elements at a time, see Rolling or sliding window iterator?.", "output": "The documentation for 3.8 provides this recipe: import itertools def pairwise(iterable): \"s -> (s0, s1), (s1, s2), (s2, s3), ...\" a, b = itertools.tee(iterable) next(b, None) return zip(a, b) For Python 2, use itertools.izip instead of zip to get the same kind of lazy iterator (zip will instead create a list): import itertools def pairwise(iterable): \"s -> (s0, s1), (s1, s2), (s2, s3), ...\" a, b = itertools.tee(iterable) next(b, None) return itertools.izip(a, b) How this works: First, two parallel iterators, a and b are created (the tee() call), both pointing to the first element of the original iterable. The second iterator, b is moved 1 step forward (the next(b, None)) call). At this point a points to s0 and b points to s1. Both a and b can traverse the original iterator independently - the izip function takes the two iterators and makes pairs of the returned elements, advancing both iterators at the same pace. Since tee() can take an n parameter (the number of iterators to produce), the same technique can be adapted to produce a larger \"window\". For example: def threes(iterator): \"s -> (s0, s1, s2), (s1, s2, s3), (s2, s3, 4), ...\" a, b, c = itertools.tee(iterator, 3) next(b, None) next(c, None) next(c, None) return zip(a, b, c) Caveat: If one of the iterators produced by tee advances further than the others, then the implementation needs to keep the consumed elements in memory until every iterator has consumed them (it cannot 'rewind' the original iterator). 
Here it doesn't matter because one iterator is only 1 step ahead of the other, but in general it's easy to use a lot of memory this way."} +{"question_id": 31026754, "score": 204, "creation_date": 1435149339, "tags": ["python", "if-statement", "equality"], "instruction": "Python if not == vs if !=\n\nWhat is the difference between these two lines of code: if not x == 'val': and if x != 'val': Is one more efficient than the other? Would it be better to use if x == 'val': pass else:", "output": "Using dis to look at the bytecode generated for the two versions: not == 4 0 LOAD_FAST 0 (foo) 3 LOAD_FAST 1 (bar) 6 COMPARE_OP 2 (==) 9 UNARY_NOT 10 RETURN_VALUE != 4 0 LOAD_FAST 0 (foo) 3 LOAD_FAST 1 (bar) 6 COMPARE_OP 3 (!=) 9 RETURN_VALUE The latter has fewer operations, and is therefore likely to be slightly more efficient. It was pointed out in the commments (thanks, @Quincunx) that where you have if foo != bar vs. if not foo == bar the number of operations is exactly the same, it's just that the COMPARE_OP changes and POP_JUMP_IF_TRUE switches to POP_JUMP_IF_FALSE: not == : 2 0 LOAD_FAST 0 (foo) 3 LOAD_FAST 1 (bar) 6 COMPARE_OP 2 (==) 9 POP_JUMP_IF_TRUE 16 != 2 0 LOAD_FAST 0 (foo) 3 LOAD_FAST 1 (bar) 6 COMPARE_OP 3 (!=) 9 POP_JUMP_IF_FALSE 16 In this case, unless there was a difference in the amount of work required for each comparison, it's unlikely you'd see any performance difference at all. However, note that the two versions won't always be logically identical, as it will depend on the implementations of __eq__ and __ne__ for the objects in question. Per the data model documentation: There are no implied relationships among the comparison operators. The truth of x==y does not imply that x!=y is false. For example: >>> class Dummy(object): def __eq__(self, other): return True def __ne__(self, other): return True >>> not Dummy() == Dummy() False >>> Dummy() != Dummy() True Finally, and perhaps most importantly: in general, where the two are logically identical, x != y is much more readable than not x == y."} +{"question_id": 12059509, "score": 204, "creation_date": 1345567678, "tags": ["python", "compilation", "exe", "packaging", "software-distribution"], "instruction": "Create a single executable from a Python project\n\nI want to create a single executable from my Python project. A user should be able to download and run it without needing Python installed. If I were just distributing a package, I could use pip, wheel, and PyPI to build and distribute it, but this requires that the user has Python and knows how to install packages. What can I use to build a self-contained executable from a Python project?", "output": "There are several different ways of doing this. The first -- and likely most common -- way is to use \"freeze\" style programs. These programs work by bundling together Python and your program, essentially combining them into a single executable: PyInstaller: Website || Repo || PyPi Supports Python 3.7 - 3.10 on Windows, Mac, and Linux. cx_Freeze: Website || Repo || PyPi Supports Python 3.6 - 3.10 on Windows, Mac, and Linux. py2exe: Website || Repo || PyPi Supports Python 3.7 - 3.10 on Windows only. py2app: Website || Repo || PyPi Supports Python 3.6 - 3.10 on Macs only. The main thing to keep in mind is that these types of programs will generally only produce an exe for the operating system you run it in. So for example, running Pyinstaller in Windows will produce a Windows exe, but running Pyinstaller in Linux will produce a Linux exe. 
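As a minimal, hedged illustration (the script name is hypothetical; this sketch assumes PyInstaller's documented Python entry point and is not part of the original answer):
import PyInstaller.__main__

# Builds a single-file executable for the OS this is run on;
# repeat on each target OS to get that platform's native binary.
PyInstaller.__main__.run(['my_script.py', '--onefile'])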
If you want to produce an exe for multiple operating systems, you will have to look into using virtual machines or something like Wine. Of course, that's not the only way of doing things: pynsist: Website || Repo || PyPi Pynsist will create a Windows installer for your program which will directly install Python on the user's computer instead of bundling it with your code and create shortcuts that link to your Python script. The pynsist tool itself requires Python 3.5+ to run, but supports bundling any version of Python with your program. Pynsist will create Windows installers only, but can be run from Windows, Mac, and Linux. See their FAQ for more details. Nuitka: Website || Repo (Github mirror) || PyPi Nuitka will literally compile your Python code and produce an exe (as opposed to the other projects, which simply include Python) to try and speed up your code. As a side effect, you'll also get a handy exe you can distribute. Note that you need to have a C++ compiler available on your system. Supports Python 2.6 - 2.7 and Python 3.3 - 3.10 on Windows, Mac, and Linux. cython: Website || Repo || PyPi Cython is similar to Nuitka in that it is a Python compiler. However, instead of directly compiling your code, it'll compile it to C. You can then take that C code and turn it into an exe. You'll need to have a C compiler available on your system. Supports Python 2.7 and Python 3.3 - 3.11 on Windows, Mac, and Linux. My personal preference is to use PyInstaller since it was the easiest for me to get up and running, was designed to work nicely with various popular libraries such as numpy or pygame, and has great compatibility with various OSes and Python versions. However, I've also successfully built various exes using cx_Freeze without too much difficulty, so you should also consider trying that program out. I haven't yet had a chance to try pynsist, Nuitka, or Cython extensively, but they seem like pretty interesting and innovative solutions. If you run into trouble using the first group of programs, it might be worthwhile to try one of these three. Since they work fundamentally differently than the Pyinstaller/cx_freeze-style programs, they might succeed in those odd edge cases where the first group fails. In particular, I think pynsist is a good way of sidestepping the entire issue of distributing your code altogether: Macs and Linux already have native support for Python, and just installing Python on Windows might genuinely be the cleanest solution. (The downside is now that you need to worry about targeting multiple versions of Python + installing libraries). Nuitka and Cython (in my limited experience) seem to work fairly well. Again, I haven't tested them extensively myself, and so my main observation is that they seem to take much longer to produce an exe than the \"freeze\" style programs do. All this being said, converting your Python program into an executable isn't necessarily the only way of distributing your code. To learn more about what other options are available, see the following links: https://packaging.python.org/overview/#packaging-python-applications https://docs.python-guide.org/shipping/packaging/#for-linux-distributions"} +{"question_id": 70851048, "score": 204, "creation_date": 1643123383, "tags": ["python", "machine-learning", "package", "conda", "python-poetry"], "instruction": "Does it make sense to use Conda + Poetry?\n\nDoes it make sense to use Conda + Poetry for a Machine Learning project?
Allow me to share my (novice) understanding and please correct or enlighten me: As far as I understand, Conda and Poetry have different purposes but are largely redundant: Conda is primarily a environment manager (in fact not necessarily Python), but it can also manage packages and dependencies. Poetry is primarily a Python package manager (say, an upgrade of pip), but it can also create and manage Python environments (say, an upgrade of Pyenv). My idea is to use both and compartmentalize their roles: let Conda be the environment manager and Poetry the package manager. My reasoning is that (it sounds like) Conda is best for managing environments and can be used for compiling and installing non-python packages, especially CUDA drivers (for GPU capability), while Poetry is more powerful than Conda as a Python package manager. I've managed to make this work fairly easily by using Poetry within a Conda environment. The trick is to not use Poetry to manage the Python environment: I'm not using commands like poetry shell or poetry run, only poetry init, poetry install etc (after activating the Conda environment). For full disclosure, my environment.yml file (for Conda) looks like this: name: N channels: - defaults - conda-forge dependencies: - python=3.9 - cudatoolkit - cudnn and my poetry.toml file looks like that: [tool.poetry] name = \"N\" authors = [\"B\"] [tool.poetry.dependencies] python = \"3.9\" torch = \"^1.10.1\" [build-system] requires = [\"poetry-core>=1.0.0\"] build-backend = \"poetry.core.masonry.api\" To be honest, one of the reasons I proceeded this way is that I was struggling to install CUDA (for GPU support) without Conda. Does this project design look reasonable to you?", "output": "2024-04-05 update: It looks like my tips proved to be useful to many people, but they are not needed anymore. Just use Pixi. It's still alpha, but it works great, and provides the features of the Conda + Poetry setup in a simpler and more unified way. In particular, Pixi supports: installing packages both from Conda channels and from PyPi, lockfiles, creating multiple features and environments (prod, dev, etc.), very efficient package version resolution, not just faster than Conda (which is very slow), but in my experience also faster than Mamba, Poetry and pip. Making a Pixi env look like a Conda env One non-obvious tip about Pixi is that you can easily make your project's Pixi environment visible as a Conda environment, which may be useful e.g. in VS Code, which allows choosing Python interpreters and Jupyter kernels from detected Conda environments. All you need to do is something like: ln -s /path/to/my/project/.pixi/envs/default /path/to/conda/base/envs/conda-name-of-my-env The first path is the path to your Pixi environment, which resides in your project directory, under .pixi/envs, and the second path needs to be within one of Conda's environment directories, which can be found with conda config --show envs_dirs. Original answer: I have experience with a Conda + Poetry setup, and it's been working fine. The great majority of my dependencies are specified in pyproject.toml, but when there's something that's unavailable in PyPI, or installing it with Conda is easier, I add it to environment.yml. Moreover, Conda is used as a virtual environment manager, which works well with Poetry: there is no need to use poetry run or poetry shell, it is enough to activate the right Conda environment. 
Tips for creating a reproducible environment Add Poetry, possibly with a version number (if needed), as a dependency in environment.yml, so that you get Poetry installed when you run conda create, along with Python and other non-PyPI dependencies. Add conda-lock, which gives you lock files for Conda dependencies, just like you have poetry.lock for Poetry dependencies. Consider using mamba which is generally compatible with conda, but is better at resolving conflicts, and is also much faster. An additional benefit is that all users of your setup will use the same package resolver, independent from the locally-installed version of Conda. By default, use Poetry for adding Python dependencies. Install packages via Conda if there's a reason to do so (e.g. in order to get a CUDA-enabled version). In such a case, it is best to specify the package's exact version in environment.yml, and after it's installed, to add an entry with the same version specification to Poetry's pyproject.toml (without ^ or ~ before the version number). This will let Poetry know that the package is there and should not be upgraded. If you use a different channels that provide the same packages, it might be not obvious which channel a particular package will be downloaded from. One solution is to specify the channel for the package using the :: notation (see the pytorch entry below), and another solution is to enable strict channel priority. Unfortunately, in Conda 4.x there is no way to enable this option through environment.yml. Note that Python adds user site-packages to sys.path, which may cause lack of reproducibility if the user has installed Python packages outside Conda environments. One possible solution is to make sure that the PYTHONNOUSERSITE environment variable is set to True (or to any other non-empty value). Example environment.yml: name: my_project_env channels: - pytorch - conda-forge # We want to have a reproducible setup, so we don't want default channels, # which may be different for different users. All required channels should # be listed explicitly here. - nodefaults dependencies: - python=3.10.* # or don't specify the version and use the latest stable Python - mamba - pip # pip must be mentioned explicitly, or conda-lock will fail - poetry=1.* # or 1.1.*, or no version at all -- as you want - tensorflow=2.8.0 - pytorch::pytorch=1.11.0 - pytorch::torchaudio=0.11.0 - pytorch::torchvision=0.12.0 # Non-standard section listing target platforms for conda-lock: platforms: - linux-64 virtual-packages.yml (may be used e.g. when we want conda-lock to generate CUDA-enabled lock files even on platforms without CUDA): subdirs: linux-64: packages: __cuda: 11.5 First-time setup You can avoid playing with the bootstrap env and simplify the example below if you have conda-lock, mamba and poetry already installed outside your target environment. 
# Create a bootstrap env conda create -p /tmp/bootstrap -c conda-forge mamba conda-lock poetry='1.*' conda activate /tmp/bootstrap # Create Conda lock file(s) from environment.yml conda-lock -k explicit --conda mamba # Set up Poetry poetry init --python=~3.10 # version spec should match the one from environment.yml # Fix package versions installed by Conda to prevent upgrades poetry add --lock tensorflow=2.8.0 torch=1.11.0 torchaudio=0.11.0 torchvision=0.12.0 # Add conda-lock (and other packages, as needed) to pyproject.toml and poetry.lock poetry add --lock conda-lock # Remove the bootstrap env conda deactivate rm -rf /tmp/bootstrap # Add Conda spec and lock files git add environment.yml virtual-packages.yml conda-linux-64.lock # Add Poetry spec and lock files git add pyproject.toml poetry.lock git commit Usage The above setup may seem complex, but it can be used in a fairly simple way. Creating the environment conda create --name my_project_env --file conda-linux-64.lock conda activate my_project_env poetry install Activating the environment conda activate my_project_env Updating the environment # Re-generate Conda lock file(s) based on environment.yml conda-lock -k explicit --conda mamba # Update Conda packages based on re-generated lock file mamba update --file conda-linux-64.lock # Update Poetry packages and re-generate poetry.lock poetry update"} +{"question_id": 7279810, "score": 204, "creation_date": 1314943956, "tags": ["python", "module"], "instruction": "What does a leading . (dot, period) in an import statement in Python mean?\n\nI'm looking over the code for Python's multiprocessing module, and it contains this line: from ._multiprocessing import win32, Connection, PipeConnection instead of from _multiprocessing import win32, Connection, PipeConnection the subtle difference being the period before _multiprocessing. What does that mean? Why the period?", "output": "That's the syntax for explicit relative imports. It means import from the current package."} +{"question_id": 9641440, "score": 203, "creation_date": 1331329650, "tags": ["python", "hex", "ascii"], "instruction": "Convert from ASCII string encoded in Hex to plain ASCII?\n\nHow can I convert from hex to plain ASCII in Python? Note that, for example, I want to convert \"0x7061756c\" to \"paul\".", "output": "A slightly simpler solution (python 2 only): >>> \"7061756c\".decode(\"hex\") 'paul'"} +{"question_id": 7207309, "score": 203, "creation_date": 1314373605, "tags": ["python", "multithreading", "multiprocessing"], "instruction": "How to run functions in parallel?\n\nI am trying to run multiple functions in parallel in Python. I have something like this: files.py import common #common is a util class that handles all the IO stuff dir1 = 'C:\\folder1' dir2 = 'C:\\folder2' filename = 'test.txt' addFiles = [25, 5, 15, 35, 45, 25, 5, 15, 35, 45] def func1(): c = common.Common() for i in range(len(addFiles)): c.createFiles(addFiles[i], filename, dir1) c.getFiles(dir1) time.sleep(10) c.removeFiles(addFiles[i], dir1) c.getFiles(dir1) def func2(): c = common.Common() for i in range(len(addFiles)): c.createFiles(addFiles[i], filename, dir2) c.getFiles(dir2) time.sleep(10) c.removeFiles(addFiles[i], dir2) c.getFiles(dir2) I want to call func1 and func2 and have them run at the same time. The functions do not interact with each other or on the same object. Right now I have to wait for func1 to finish before func2 to start. 
How do I do something like below: process.py from files import func1, func2 runBothFunc(func1(), func2()) I want to be able to create both directories pretty close to the same time because every min I am counting how many files are being created. If the directory isn't there it will throw off my timing.", "output": "You could use threading or multiprocessing. Due to peculiarities of CPython, threading is unlikely to achieve true parallelism. For this reason, multiprocessing is generally a better bet. Here is a complete example: from multiprocessing import Process def func1(): print(\"func1: starting\") for i in range(10000000): pass print(\"func1: finishing\") def func2(): print(\"func2: starting\") for i in range(10000000): pass print(\"func2: finishing\") if __name__ == \"__main__\": p1 = Process(target=func1) p1.start() p2 = Process(target=func2) p2.start() p1.join() p2.join() The mechanics of starting/joining child processes can easily be encapsulated into a function along the lines of your runBothFunc: def runInParallel(*fns): proc = [] for fn in fns: p = Process(target=fn) p.start() proc.append(p) for p in proc: p.join() runInParallel(func1, func2)"} +{"question_id": 77213053, "score": 203, "creation_date": 1696215720, "tags": ["python", "flask", "pytest", "werkzeug"], "instruction": "Why did Flask start failing with \"ImportError: cannot import name 'url_quote' from 'werkzeug.urls'\"?\n\nEnvironment: Python 3.10.11 Flask==2.2.2 I run my Flask backend code in docker container, with BASE Image: FROM pytorch/pytorch:2.0.1-cuda11.7-cudnn8-runtime But when I run the pytest with version pytest 7.4.2, pip install pytest pytest it raised an Error, with logs: ==================================== ERRORS ==================================== _____________ ERROR collecting tests/test_fiftyone_utils_utils.py ______________ ImportError while importing test module '/builds/kw/data-auto-analysis-toolkit-backend/tests/test_fiftyone_utils_utils.py'. Hint: make sure your test modules/packages have valid Python names. Traceback: /opt/conda/lib/python3.10/importlib/__init__.py:126: in import_module return _bootstrap._gcd_import(name[level:], package, level) tests/test_fiftyone_utils_utils.py:2: in <module> import daat # noqa: F401 /opt/conda/lib/python3.10/site-packages/daat-1.0.0-py3.10.egg/daat/__init__.py:1: in <module> from daat.app import app /opt/conda/lib/python3.10/site-packages/daat-1.0.0-py3.10.egg/daat/app/__init__.py:6: in <module> from flask import Flask, jsonify, request /opt/conda/lib/python3.10/site-packages/flask/__init__.py:5: in <module> from .app import Flask as Flask /opt/conda/lib/python3.10/site-packages/flask/app.py:30: in <module> from werkzeug.urls import url_quote E ImportError: cannot import name 'url_quote' from 'werkzeug.urls' (/opt/conda/lib/python3.10/site-packages/werkzeug/urls.py) My codes works well when I directly run it with python run.py run.py shown below from daat import app app.run(host='0.0.0.0') I guess it should be the pytest versions issue, because it used to work well without changing any related code, and I use pip install pytest without defined a specific version. And my backend runs well without pytest.", "output": "I had the same problem. It is because Werkzeug 3.0.0 was released and Flask doesn't specify the dependency correctly (requirements says Werkzeug>=2.2.0). This is why, Werkzeug 3.0.0 is still installed and Flask 2.2.2 isn't made for Werkzeug 3.0.0. 
Solution: Just set a fix version for Werkzeug such as Werkzeug==2.2.2 in your requirements.txt and it should work."} +{"question_id": 9755538, "score": 203, "creation_date": 1332038116, "tags": ["python", "random"], "instruction": "How do I create a list of random numbers without duplicates?\n\nI tried using random.randint(0, 100), but some numbers were the same. Is there a method/module to create a list unique random numbers?", "output": "This will return a list of 10 numbers selected from the range 0 to 99, without duplicates. import random random.sample(range(100), 10)"} +{"question_id": 2677185, "score": 203, "creation_date": 1271783844, "tags": ["python", "arguments", "python-inspect"], "instruction": "How can I read a function's signature including default argument values?\n\nGiven a function object, how can I get its signature? For example, for: def my_method(first, second, third='something'): pass I would like to get \"my_method(first, second, third='something')\".", "output": "import inspect def foo(a, b, x='blah'): pass print(inspect.signature(foo)) # (a, b, x='blah') Python 3.5+ recommends inspect.signature()."} +{"question_id": 29647749, "score": 203, "creation_date": 1429093506, "tags": ["python", "seaborn", "heatmap", "plot-annotations"], "instruction": "Seaborn showing scientific notation in heatmap for 3-digit numbers\n\nI'm creating a heatmap from a pandas pivot_table as below: table2 = pd.pivot_table(df,values='control',columns='Year',index='Region',aggfunc=np.sum) sns.heatmap(table2,annot=True,cmap='Blues') It creates a heat map as shown below. You can see the numbers are not huge (max 750), but it's showing them in scientific notation. If I view the table itself this is not the case. Any idea on how I could get it to show the numbers in plain notation?", "output": "According to the docs, the param fmt='.2g' is being applied because you've set annot=True so you can modify the format being applied to: sns.heatmap(table2,annot=True,cmap='Blues', fmt='g')"} +{"question_id": 6167127, "score": 203, "creation_date": 1306664054, "tags": ["python"], "instruction": "How can I put multiple statements in one line?\n\nI know a little bit of comprehensions in Python, but they seem very hard to 'read'. The way I see it, a comprehension might accomplish the same as the following code: for i in range(10): if i == 9: print('i equals 9') This code is much easier to read than how comprehensions currently work, but I've noticed you can\u2019t have two :s in one line. This brings me to: Is there a way I can get the following example into one line? try: if sam[0] != 'harry': print('hello', sam) except: pass Something like this would be great: try: if sam[0] != 'harry': print('hellp', sam) except:pass But again I encounter the conflicting :s. I'd also love to know if there's a way to run try (or something like it) without except. It seems entirely pointless that I need to put except:pass in there. It\u2019s a wasted line.", "output": "Unfortunately, what you want is not possible with Python (which makes Python close to useless for command-line one-liner programs). Even explicit use of parentheses does not avoid the syntax exception. You can get away with a sequence of simple statements, separated by semicolon: for i in range(10): print \"foo\"; print \"bar\" But as soon as you add a construct that introduces an indented block (like if), you need the line break. Also, for i in range(10): print \"i equals 9\" if i==9 else None is legal and might approximate what you want. 
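In Python 3 syntax (a direct translation of the same two tricks, nothing new added), those read:
for i in range(10): print('foo'); print('bar')
for i in range(10): print('i equals 9') if i == 9 else None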
If you are still determined to use one-liners, see eprovst's answer. As for the try ... except thing: It would be totally useless without the except. try says \"I want to run this code, but it might throw an exception\". If you don't care about the exception, leave out the try. But as soon as you put it in, you're saying \"I want to handle a potential exception\". The pass then says you wish to not handle it specifically. But that means your code will continue running, which it wouldn't otherwise."} +{"question_id": 12904912, "score": 203, "creation_date": 1350339918, "tags": ["python", "matplotlib", "mplot3d", "matplotlib-3d"], "instruction": "how to set \"camera position\" for 3d plots using python/matplotlib?\n\nI'm learning how to use mplot3d to produce nice plots of 3d data and I'm pretty happy so far. What I am trying to do at the moment is a little animation of a rotating surface. For that purpose, I need to set a camera position for the 3D projection. I guess this must be possible since a surface can be rotated using the mouse when using matplotlib interactively. But how can I do this from a script? I found a lot of transforms in mpl_toolkits.mplot3d.proj3d but I could not find out how to use these for my purpose and I didn't find any example for what I'm trying to do.", "output": "By \"camera position,\" it sounds like you want to adjust the elevation and the azimuth angle that you use to view the 3D plot. You can set this with ax.view_init. I've used the below script to first create the plot, then I determined a good elevation, or elev, from which to view my plot. I then adjusted the azimuth angle, or azim, to vary the full 360deg around my plot, saving the figure at each instance (and noting which azimuth angle as I saved the plot). For a more complicated camera pan, you can adjust both the elevation and angle to achieve the desired effect. from mpl_toolkits.mplot3d import Axes3D ax = Axes3D(fig) ax.scatter(xx,yy,zz, marker='o', s=20, c=\"goldenrod\", alpha=0.6) for ii in xrange(0,360,1): ax.view_init(elev=10., azim=ii) savefig(\"movie%d.png\" % ii)"} +{"question_id": 44327999, "score": 202, "creation_date": 1496403486, "tags": ["python", "pandas", "dataframe", "merge", "data-analysis"], "instruction": "How to merge multiple dataframes\n\nI have different dataframes and need to merge them together based on the date column. If I only had two dataframes, I could use df1.merge(df2, on='date'), to do it with three dataframes, I use df1.merge(df2.merge(df3, on='date'), on='date'), however it becomes really complex and unreadable to do it with multiple dataframes. All dataframes have one column in common -date, but they don't have the same number of rows nor columns and I only need those rows in which each date is common to every dataframe. So, I'm trying to write a recursion function that returns a dataframe with all data but it didn't work. How should I merge multiple dataframes then? I tried different ways and got errors like out of range, keyerror 0/1/2/3 and can not merge DataFrame with instance of type <class 'NoneType'>. 
This is the script I wrote: dfs = [df1, df2, df3] # list of dataframes def mergefiles(dfs, countfiles, i=0): if i == (countfiles - 2): # it gets to the second to last and merges it with the last return dfm = dfs[i].merge(mergefiles(dfs[i+1], countfiles, i=i+1), on='date') return dfm print(mergefiles(dfs, len(dfs))) An example: df_1: May 19, 2017;1,200.00;0.1% May 18, 2017;1,100.00;0.1% May 17, 2017;1,000.00;0.1% May 15, 2017;1,901.00;0.1% df_2: May 20, 2017;2,200.00;1000000;0.2% May 18, 2017;2,100.00;1590000;0.2% May 16, 2017;2,000.00;1230000;0.2% May 15, 2017;2,902.00;1000000;0.2% df_3: May 21, 2017;3,200.00;2000000;0.3% May 17, 2017;3,100.00;2590000;0.3% May 16, 2017;3,000.00;2230000;0.3% May 15, 2017;3,903.00;2000000;0.3% Expected merge result: May 15, 2017; 1,901.00;0.1%; 2,902.00;1000000;0.2%; 3,903.00;2000000;0.3%", "output": "Short answer df_merged = reduce(lambda left,right: pd.merge(left,right,on=['DATE'], how='outer'), data_frames) Long answer Below, is the most clean, comprehensible way of merging multiple dataframe if complex queries aren't involved. Just simply merge with DATE as the index and merge using OUTER method (to get all the data). import pandas as pd from functools import reduce df1 = pd.read_table('file1.csv', sep=',') df2 = pd.read_table('file2.csv', sep=',') df3 = pd.read_table('file3.csv', sep=',') Now, basically load all the files you have as data frame into a list. And, then merge the files using merge or reduce function. # compile the list of dataframes you want to merge data_frames = [df1, df2, df3] Note: you can add as many data-frames inside the above list. This is the good part about this method. No complex queries involved. To keep the values that belong to the same date you need to merge it on the DATE df_merged = reduce(lambda left,right: pd.merge(left,right,on=['DATE'], how='outer'), data_frames) # if you want to fill the values that don't exist in the lines of merged dataframe simply fill with required strings as df_merged = reduce(lambda left,right: pd.merge(left,right,on=['DATE'], how='outer'), data_frames).fillna('void') Now, the output will the values from the same date on the same lines. You can fill the non existing data from different frames for different columns using fillna(). Then write the merged data to the csv file if desired. pd.DataFrame.to_csv(df_merged, 'merged.txt', sep=',', na_rep='.', index=False) This should give you DATE VALUE1 VALUE2 VALUE3 ...."} +{"question_id": 3579568, "score": 202, "creation_date": 1282857191, "tags": ["python", "user-interface", "dialog", "filechooser"], "instruction": "Choosing a file in Python with simple Dialog\n\nI would like to get file path as input in my Python console application. Currently I can only ask for full path as an input in the console. Is there a way to trigger a simple user interface where users can select file instead of typing the full path?", "output": "How about using tkinter? from tkinter import Tk from tkinter.filedialog import askopenfilename Tk().withdraw() # we don't want a full GUI, so keep the root window from appearing filename = askopenfilename() # show an \"Open\" dialog box and return the path to the selected file print(filename) Done!"} +{"question_id": 8685790, "score": 202, "creation_date": 1325296928, "tags": ["python", "http-headers", "python-requests"], "instruction": "Adding headers to the Requests module\n\nEarlier, I used the httplib module to add a header in the request. Now I am trying the same thing with the Requests module. 
This is the Python Requests module I am using: requests 2.32.5 How can I add a header to request.post() and request.get()? Say, I have to add a foobar key in each request in the header.", "output": "From Requests' Quickstart: url = 'https://api.github.com/some/endpoint' payload = {'some': 'data'} headers = {'content-type': 'application/json'} r = requests.post(url, data=json.dumps(payload), headers=headers) You just need to create a dict with your headers (key: value pairs where the key is the name of the header and the value is, well, the value of the pair) and pass that dict to the headers parameter on the .get or .post method. So more specific to your question: headers = {'foobar': 'raboof'} requests.get('http://himom.com', headers=headers)"} +{"question_id": 46753393, "score": 201, "creation_date": 1508057905, "tags": ["python", "python-3.x", "selenium-webdriver", "selenium-firefoxdriver", "firefox-headless"], "instruction": "How to run headless Firefox with Selenium in Python?\n\nI am running this code with python, selenium, and firefox but still get 'head' version of firefox: binary = FirefoxBinary('C:\\\\Program Files (x86)\\\\Mozilla Firefox\\\\firefox.exe', log_file=sys.stdout) binary.add_command_line_options('-headless') self.driver = webdriver.Firefox(firefox_binary=binary) I also tried some variations of binary: binary = FirefoxBinary('C:\\\\Program Files\\\\Nightly\\\\firefox.exe', log_file=sys.stdout) binary.add_command_line_options(\"--headless\")", "output": "To invoke Firefox Browser headlessly, you can set the headless property through Options() class as follows: from selenium import webdriver from selenium.webdriver.firefox.options import Options options = Options() options.add_argument(\"--headless\") driver = webdriver.Firefox(options=options) driver.get(\"http://google.com/\") print(\"Headless Firefox Initialized\") driver.quit() There's another way to accomplish headless mode. If you need to disable or enable the headless mode in Firefox, without changing the code, you can set the environment variable MOZ_HEADLESS to whatever if you want Firefox to run headless, or don't set it at all. This is very useful when you are using for example continuous integration and you want to run the functional tests in the server but still be able to run the tests in normal mode in your PC. $ MOZ_HEADLESS=1 python manage.py test # testing example in Django with headless Firefox or $ export MOZ_HEADLESS=1 # this way you only have to set it once $ python manage.py test functional/tests/directory $ unset MOZ_HEADLESS # if you want to disable headless mode"} +{"question_id": 20895429, "score": 201, "creation_date": 1388715371, "tags": ["python", "django"], "instruction": "How exactly do Django content types work?\n\nI'm really having a difficult time grasping the concept of Django's content types. It feels very hackish and, ultimately, against how Python tends to do things. That being said, if I'm going to use Django then I have to work within the confines of the framework. So I'm coming here wondering if anyone can give a practical real world example of how a content type works and how you would implement it. Almost all the tutorials (mostly on blogs) I have reviewed don't do a great job really covering the concept. They seem to pick up where the Django documentation left off (what seems like nowhere).", "output": "So you want to use the Content Types framework on your work? 
Start by asking yourself this question: \"Do any of these models need to be related in the same way to other models and/or will I be reusing these relationships in unforseen ways later down the road?\" The reason why we ask this question is because this is what the Content Types framework does best: it creates generic relations between models. Blah blah, let's dive into some code and see what I mean. # ourapp.models from django.conf import settings from django.db import models # Assign the User model in case it has been \"swapped\" User = settings.AUTH_USER_MODEL # Create your models here class Post(models.Model): author = models.ForeignKey(User) title = models.CharField(max_length=75) slug = models.SlugField(unique=True) body = models.TextField(blank=True) class Picture(models.Model): author = models.ForeignKey(User) image = models.ImageField() caption = models.TextField(blank=True) class Comment(models.Model): author = models.ForeignKey(User) body = models.TextField(blank=True) post = models.ForeignKey(Post) picture = models.ForeignKey(Picture) Okay, so we do have a way to theoretically create this relationship. However, as a Python programmer, your superior intellect is telling you this sucks and you can do better. High five! Enter the Content Types framework! Well, now we're going to take a close look at our models and rework them to be more \"reusable\" and intuitive. Let's start by getting rid of the two foreign keys on our Comment model and replace them with a GenericForeignKey. # ourapp.models from django.contrib.contenttypes.fields import GenericForeignKey from django.contrib.contenttypes.models import ContentType ... class Comment(models.Model): author = models.ForeignKey(User) body = models.TextField(blank=True) content_type = models.ForeignKey(ContentType) object_id = models.PositiveIntegerField() content_object = GenericForeignKey() So, what happened? Well, we went in and added the necessary code to allow for a generic relation to other models. Notice how there is more than just a GenericForeignKey, but also a ForeignKey to ContentType and a PositiveIntegerField for the object_id. These fields are for telling Django what type of object this is related to and what the id is for that object. In reality, this makes sense because Django will need both to lookup these related objects. Well, that's not very Python-like... it\u2019s kinda ugly! You are probably looking for air-tight, spotless, intuitive code that would make Guido van Rossum proud. I get you. Let's look at the GenericRelation field so we can put a pretty bow on this. # ourapp.models from django.contrib.contenttypes.fields import GenericRelation ... class Post(models.Model): author = models.ForeignKey(User) title = models.CharField(max_length=75) slug = models.SlugField(unique=True) body = models.TextField(blank=True) comments = GenericRelation('Comment') class Picture(models.Model): author = models.ForeignKey(User) image = models.ImageField() caption = models.TextField(blank=True) comments = GenericRelation('Comment') Bam! Just like that you can work with the Comments for these two models. In fact, let's go ahead and do that in our shell (type python manage.py shell from your Django project directory). 
>>> from django.contrib.auth import get_user_model >>> from ourapp.models import Picture, Post # We use get_user_model() since we are referencing directly User = get_user_model() # Grab our own User object >>> me = User.objects.get(username='myusername') # Grab the first of our own pictures so we can comment on it >>> pic = Picture.objects.get(author=me) # Let's start making a comment for our own picture >>> pic.comments.create(author=me, body=\"Man, I'm cool!\") # Let's go ahead and retrieve the comments for this picture now >>> pic.comments.all() [<Comment: \"Man, I'm cool!\">] # Same for Post comments >>> post = Post.objects.get(author=me) >>> post.comments.create(author=me, body=\"So easy to comment now!\") >>> post.comments.all() [<Comment: \"So easy to comment now!\"] It's that simple. What are the other practical implications of these \"generic\" relations? Generic foreign keys allow for less intrusive relations between various applications. For example, let's say we pulled the Comment model out into its own app named chatterly. Now we want to create another application named noise_nimbus where people store their music to share with others. What if we want to add comments to those songs? Well, we can just draw a generic relation: # noise_nimbus.models from django.conf import settings from django.contrib.contenttypes.fields import GenericRelation from django.db import models from chatterly.models import Comment # For a third time, we take the time to ensure custom Auth isn't overlooked User = settings.AUTH_USER_MODEL # Create your models here class Song(models.Model): ''' A song which can be commented on. ''' file = models.FileField() author = models.ForeignKey(User) title = models.CharField(max_length=75) slug = models.SlugField(unique=True) description = models.TextField(blank=True) comments = GenericRelation(Comment) I hope you guys found this helpful as I would have loved to have come across something that showed me the more realistic application of GenericForeignKey and GenericRelation fields. Is this too good to be true? As with anything in life, there are pros and cons. Anytime you add more code and more abstraction, the underlying processes becomes heavier and a bit slower. Adding generic relations can add a little bit of a performance dampener despite the fact it will try and smart cache its results. All in all, it comes down to whether the cleanliness and simplicity outweighs the small performance costs. For me, the answer is a million times yes. There is more to the Content Types framework than I have displayed here. There is a whole level of granularity and more verbose usage, but for the average individual, this is how you will be using it 9 out of 10 times in my opinion. Generic relationizers(?) beware! A rather large caveat is that when you use a GenericRelation, if the model which has the GenericRelation applied (Picture) is deleted, all related (Comment) objects will also be deleted. Or at least as of the time of this writing."} +{"question_id": 29177498, "score": 201, "creation_date": 1426894981, "tags": ["python", "pandas", "dataframe", "nan", "fillna"], "instruction": "Replace NaN in one column with value from corresponding row of second column\n\nI am working with this Pandas DataFrame in Python. 
File heat Farheit Temp_Rating 1 YesQ 75 N/A 1 NoR 115 N/A 1 YesA 63 N/A 1 NoT 83 41 1 NoY 100 80 1 YesZ 56 12 2 YesQ 111 N/A 2 NoR 60 N/A 2 YesA 19 N/A 2 NoT 106 77 2 NoY 45 21 2 YesZ 40 54 3 YesQ 84 N/A 3 NoR 67 N/A 3 YesA 94 N/A 3 NoT 68 39 3 NoY 63 46 3 YesZ 34 81 I need to replace all NaNs in the Temp_Rating column with the value from the Farheit column. This is what I need: File heat Temp_Rating 1 YesQ 75 1 NoR 115 1 YesA 63 1 YesQ 41 1 NoR 80 1 YesA 12 2 YesQ 111 2 NoR 60 2 YesA 19 2 NoT 77 2 NoY 21 2 YesZ 54 3 YesQ 84 3 NoR 67 3 YesA 94 3 NoT 39 3 NoY 46 3 YesZ 81 If I do a Boolean selection, I can pick out only one of these columns at a time. The problem is if I then try to join them, I am not able to do this while preserving the correct order. How can I only find Temp_Rating rows with the NaNs and replace them with the value in the same row of the Farheit column?", "output": "Assuming your DataFrame is in df: df.Temp_Rating.fillna(df.Farheit, inplace=True) del df['Farheit'] df.columns = 'File heat Observations'.split() First replace any NaN values with the corresponding value of df.Farheit. Delete the 'Farheit' column. Then rename the columns. Here's the resulting DataFrame: File heat Observations 0 1 YesQ 75 1 1 NoR 115 2 1 YesA 63 3 1 NoT 41 4 1 NoY 80 5 1 YesZ 12 6 2 YesQ 111 7 2 NoR 60 8 2 YesA 19 9 2 NoT 77 10 2 NoY 21 11 2 YesZ 54 12 3 YesQ 84 13 3 NoR 67 14 3 YesA 94 15 3 NoT 39 16 3 NoY 46 17 3 YesZ 81"} +{"question_id": 33311258, "score": 200, "creation_date": 1445633820, "tags": ["python", "types", "isinstance"], "instruction": "Python check if variable isinstance of any type in list\n\nHow do I compactly perform the following: var = 7.0 var_is_good = ( isinstance(var, classinfo1) or isinstance(var, classinfo2) or isinstance(var, classinfo3) )", "output": "isinstance() takes a tuple of classes for the second argument. It'll return true if the first argument is an instance of any of the types in that sequence: isinstance(var, (classinfo1, classinfo2, classinfo3)) In other words, isinstance() already offers this functionality, out of the box. From the isinstance() documentation: If classinfo is neither a class object nor a type object, it may be a tuple of class or type objects, or may recursively contain other such tuples (other sequence types are not accepted). Emphasis mine; note the recursive nature; (classinfo1, (classinfo2, classinfo3)) is also a valid option."} +{"question_id": 3300464, "score": 200, "creation_date": 1279723340, "tags": ["python", "sql", "sqlite", "dictionary", "dataformat"], "instruction": "How can I get dict from sqlite query?\n\ndb = sqlite.connect(\"test.sqlite\") res = db.execute(\"select * from table\") With iteration I get lists coresponding to the rows. for row in res: print row I can get name of the columns col_name_list = [tuple[0] for tuple in res.description] But is there some function or setting to get dictionaries instead of list? 
{'col1': 'value', 'col2': 'value'} or I have to do myself?", "output": "You could use row_factory, as in the example in the docs: import sqlite3 def dict_factory(cursor, row): d = {} for idx, col in enumerate(cursor.description): d[col[0]] = row[idx] return d con = sqlite3.connect(\":memory:\") con.row_factory = dict_factory cur = con.cursor() cur.execute(\"select 1 as a\") print cur.fetchone()[\"a\"] or follow the advice that's given right after this example in the docs: If returning a tuple doesn\u2019t suffice and you want name-based access to columns, you should consider setting row_factory to the highly-optimized sqlite3.Row type. Row provides both index-based and case-insensitive name-based access to columns with almost no memory overhead. It will probably be better than your own custom dictionary-based approach or even a db_row based solution. Here is the code for this second solution: con = sqlite3.connect(\u2026) con.row_factory = sqlite3.Row # add this row cursor = con.cursor()"} +{"question_id": 19226488, "score": 200, "creation_date": 1381153348, "tags": ["python", "pandas"], "instruction": "Change values in one column on the basis of the values in another column\n\nI'm trying to reproduce my Stata code in Python, and I was pointed in the direction of Pandas. I am, however, having a hard time wrapping my head around how to process the data. Let's say I want to iterate over all values in the column head 'ID.' If that ID matches a specific number, then I want to change two corresponding values FirstName and LastName. In Stata it looks like this: replace FirstName = \"Matt\" if ID==103 replace LastName = \"Jones\" if ID==103 So this replaces all values in FirstName that correspond with values of ID == 103 to Matt. In Pandas, I'm trying something like this df = read_csv(\"test.csv\") for i in df['ID']: if i ==103: ... Not sure where to go from here. Any ideas?", "output": "One option is to use Python's slicing and indexing features to logically evaluate the places where your condition holds and overwrite the data there. Assuming you can load your data directly into pandas with pandas.read_csv then the following code might be helpful for you. import pandas df = pandas.read_csv(\"test.csv\") df.loc[df.ID == 103, 'FirstName'] = \"Matt\" df.loc[df.ID == 103, 'LastName'] = \"Jones\" As mentioned in the comments, you can also do the assignment to both columns in one shot: df.loc[df.ID == 103, ['FirstName', 'LastName']] = 'Matt', 'Jones' Note that you'll need pandas version 0.11 or newer to make use of loc for overwrite assignment operations. Indeed, for older versions like 0.8 (despite what critics of chained assignment may say), chained assignment is the correct way to do it, hence why it's useful to know about even if it should be avoided in more modern versions of pandas. Another way to do it is to use what is called chained assignment. 
The behavior of this is less stable and so it is not considered the best solution (it is explicitly discouraged in the docs), but it is useful to know about: import pandas df = pandas.read_csv(\"test.csv\") df['FirstName'][df.ID == 103] = \"Matt\" df['LastName'][df.ID == 103] = \"Jones\""} +{"question_id": 50168647, "score": 199, "creation_date": 1525415765, "tags": ["python", "python-3.x", "multithreading", "macos"], "instruction": "Multiprocessing causes Python to crash and gives an error may have been in progress in another thread when fork() was called\n\nI am relatively new to Python and trying to implement a Multiprocessing module for my for loop. I have an array of Image url's stored in img_urls which I need to download and apply some Google vision. if __name__ == '__main__': img_urls = [ALL_MY_Image_URLS] runAll(img_urls) print(\"--- %s seconds ---\" % (time.time() - start_time)) This is my runAll() method def runAll(img_urls): num_cores = multiprocessing.cpu_count() print(\"Image URLS {}\",len(img_urls)) if len(img_urls) > 2: numberOfImages = 0 else: numberOfImages = 1 start_timeProcess = time.time() pool = multiprocessing.Pool() pool.map(annotate,img_urls) end_timeProcess = time.time() print('\\n Time to complete ', end_timeProcess-start_timeProcess) print(full_matching_pages) def annotate(img_path): file = requests.get(img_path).content print(\"file is\",file) \"\"\"Returns web annotations given the path to an image.\"\"\" print('Process Working under ',os.getpid()) image = types.Image(content=file) web_detection = vision_client.web_detection(image=image).web_detection report(web_detection) I am getting this as the warning when I run it and python crashes objc[67570]: +[__NSPlaceholderDate initialize] may have been in progress in another thread when fork() was called. objc[67570]: +[__NSPlaceholderDate initialize] may have been in progress in another thread when fork() was called. We cannot safely call it or ignore it in the fork() child process. Crashing instead. Set a breakpoint on objc_initializeAfterForkError to debug. objc[67567]: +[__NSPlaceholderDate initialize] may have been in progress in another thread when fork() was called. objc[67567]: +[__NSPlaceholderDate initialize] may have been in progress in another thread when fork() was called. We cannot safely call it or ignore it in the fork() child process. Crashing instead. Set a breakpoint on objc_initializeAfterForkError to debug. objc[67568]: +[__NSPlaceholderDate initialize] may have been in progress in another thread when fork() was called. objc[67568]: +[__NSPlaceholderDate initialize] may have been in progress in another thread when fork() was called. We cannot safely call it or ignore it in the fork() child process. Crashing instead. Set a breakpoint on objc_initializeAfterForkError to debug. objc[67569]: +[__NSPlaceholderDate initialize] may have been in progress in another thread when fork() was called. objc[67569]: +[__NSPlaceholderDate initialize] may have been in progress in another thread when fork() was called. We cannot safely call it or ignore it in the fork() child process. Crashing instead. Set a breakpoint on objc_initializeAfterForkError to debug. objc[67571]: +[__NSPlaceholderDate initialize] may have been in progress in another thread when fork() was called. objc[67571]: +[__NSPlaceholderDate initialize] may have been in progress in another thread when fork() was called. We cannot safely call it or ignore it in the fork() child process. Crashing instead. 
Set a breakpoint on objc_initializeAfterForkError to debug. objc[67572]: +[__NSPlaceholderDate initialize] may have been in progress in another thread when fork() was called. objc[67572]: +[__NSPlaceholderDate initialize] may have been in progress in another thread when fork() was called. We cannot safely call it or ignore it in the fork() child process. Crashing instead. Set a breakpoint on objc_initializeAfterForkError to debug.", "output": "This error occurs because of added security to restrict multithreading in macOS High Sierra and later versions of macOS. I know this answer is a bit late, but I solved the problem using the following method: Set an environment variable .bash_profile (or .zshrc for recent macOS) to allow multithreading applications or scripts under the new macOS High Sierra security rules. Open a terminal: $ nano .bash_profile Add the following line to the end of the file: OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES Save, exit, close terminal and re-open the terminal. Check to see that the environment variable is now set: $ env You will see output similar to: TERM_PROGRAM=Apple_Terminal SHELL=/bin/bash TERM=xterm-256color TMPDIR=/var/folders/pn/vasdlj3ojO#OOas4dasdffJq/T/ Apple_PubSub_Socket_Render=/private/tmp/com.apple.launchd.E7qLFJDSo/Render TERM_PROGRAM_VERSION=404 TERM_SESSION_ID=NONE OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES You should now be able to run your Python script with multithreading."} +{"question_id": 1251692, "score": 199, "creation_date": 1249835582, "tags": ["python", "reflection", "properties"], "instruction": "How to enumerate an object's properties in Python?\n\nI C# we do it through reflection. In Javascript it is simple as: for(var propertyName in objectName) var currentPropertyValue = objectName[propertyName]; How to do it in Python?", "output": "for property, value in vars(theObject).items(): print(property, \":\", value) Be aware that in some rare cases there's a __slots__ property, such classes often have no __dict__."} +{"question_id": 1832940, "score": 199, "creation_date": 1259760847, "tags": ["python", "eval"], "instruction": "Why is using 'eval' a bad practice?\n\nI use the following class to easily store data of my songs. class Song: \"\"\"The class to store the details of each song\"\"\" attsToStore=('Name', 'Artist', 'Album', 'Genre', 'Location') def __init__(self): for att in self.attsToStore: exec 'self.%s=None'%(att.lower()) in locals() def setDetail(self, key, val): if key in self.attsToStore: exec 'self.%s=val'%(key.lower()) in locals() I feel that this is just much more extensible than writing out an if/else block. However, I have heard that eval is unsafe. Is it? What is the risk? How can I solve the underlying problem in my class (setting attributes of self dynamically) without incurring that risk?", "output": "Yes, using eval is a bad practice. Just to name a few reasons: There is almost always a better way to do it Very dangerous and insecure Makes debugging difficult Slow In your case you can use setattr instead: class Song: \"\"\"The class to store the details of each song\"\"\" attsToStore=('Name', 'Artist', 'Album', 'Genre', 'Location') def __init__(self): for att in self.attsToStore: setattr(self, att.lower(), None) def setDetail(self, key, val): if key in self.attsToStore: setattr(self, key.lower(), val) There are some cases where you have to use eval or exec. But they are rare. Using eval in your case is a bad practice for sure. 
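To make the 'very dangerous and insecure' point concrete, here is a sketch with a deliberately hostile, made-up payload (not from your class, and not something to actually run):
payload = \"__import__('os').system('rm -rf ~')\"  # attacker-controlled text
eval(payload)  # eval turns untrusted input straight into arbitrary code execution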
I'm emphasizing on bad practice because eval and exec are frequently used in the wrong place. Replying to the comments: It looks like some disagree that eval is 'very dangerous and insecure' in the OP case. That might be true for this specific case but not in general. The question was general and the reasons I listed are true for the general case as well."} +{"question_id": 59013109, "score": 199, "creation_date": 1574550512, "tags": ["python", "python-3.x", "machine-learning", "deep-learning", "pytorch"], "instruction": "RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same\n\nThis: device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\") model.to(device) for data in dataloader: inputs, labels = data outputs = model(inputs) Gives the error: RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same", "output": "You get this error because your model is on the GPU, but your data is on the CPU. So, you need to send your input tensors to the GPU. inputs, labels = data # this is what you had inputs, labels = inputs.cuda(), labels.cuda() # add this line Or like this, to stay consistent with the rest of your code: device = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\") inputs, labels = inputs.to(device), labels.to(device) The same error will be raised if your input tensors are on the GPU but your model weights aren't. In this case, you need to send your model weights to the GPU. model = MyModel() if torch.cuda.is_available(): model.cuda() See the documentation for cuda(), and its opposite, cpu()."} +{"question_id": 26131607, "score": 199, "creation_date": 1412118330, "tags": ["python", "matplotlib", "plot"], "instruction": "matplotlib get ylim values\n\nI'm using matplotlib to plot data (using plot and errorbar functions) from Python. I have to plot a set of totally separate and independent plots, and then adjust their ylim values so they can be easily visually compared. How can I retrieve the ylim values from each plot, so that I can take the min and max of the lower and upper ylim values, respectively, and adjust the plots so they can be visually compared? Of course, I could just analyze the data and come up with my own custom ylim values... but I'd like to use matplotlib to do that for me. Any suggestions on how to easily (and efficiently) do this? Here's my Python function that plots using matplotlib: import matplotlib.pyplot as plt def myplotfunction(title, values, errors, plot_file_name): # plot errorbars indices = range(0, len(values)) fig = plt.figure() plt.errorbar(tuple(indices), tuple(values), tuple(errors), marker='.') # axes axes = plt.gca() axes.set_xlim([-0.5, len(values) - 0.5]) axes.set_xlabel('My x-axis title') axes.set_ylabel('My y-axis title') # title plt.title(title) # save as file plt.savefig(plot_file_name) # close figure plt.close(fig)", "output": "Just use axes.get_ylim(), it is very similar to set_ylim. From the docs: get_ylim() Get the y-axis range [bottom, top]"} +{"question_id": 15857647, "score": 199, "creation_date": 1365295621, "tags": ["python", "matplotlib", "plot", "transparency"], "instruction": "How to export plots from matplotlib with transparent background?\n\nI am using matplotlib to make some graphs and unfortunately I cannot export them without the white background. 
In other words, when I export a plot like this and position it on top of another image, the white background hides what is behind it rather than allowing it to show through. How can I export plots with a transparent background instead?", "output": "Use the matplotlib savefig function with the keyword argument transparent=True to save the image as a png file. In [28]: import numpy as np In [29]: from matplotlib.pyplot import plot, savefig In [30]: x = np.linspace(0,6,31) In [31]: y = np.exp(-0.5*x) * np.sin(x) In [32]: plot(x, y, 'bo-') Out[32]: [<matplotlib.lines.Line2D at 0x3f29750>] In [33]: savefig('demo.png', transparent=True) Result: Of course, that plot doesn't demonstrate the transparency. Here's a screenshot of the PNG file displayed using the ImageMagick display command. The checkerboard pattern is the background that is visible through the transparent parts of the PNG file."} +{"question_id": 23296282, "score": 199, "creation_date": 1398437047, "tags": ["python", "pandas", "dataframe", "indexing", "chained-assignment"], "instruction": "What rules does Pandas use to generate a view vs a copy?\n\nI'm confused about the rules Pandas uses when deciding that a selection from a dataframe is a copy of the original dataframe, or a view on the original. If I have, for example, df = pd.DataFrame(np.random.randn(8,8), columns=list('ABCDEFGH'), index=range(1,9)) I understand that a query returns a copy so that something like foo = df.query('2 < index <= 5') foo.loc[:,'E'] = 40 will have no effect on the original dataframe, df. I also understand that scalar or named slices return a view, so that assignments to these, such as df.iloc[3] = 70 or df.ix[1,'B':'E'] = 222 will change df. But I'm lost when it comes to more complicated cases. For example, df[df.C <= df.B] = 7654321 changes df, but df[df.C <= df.B].ix[:,'B':'E'] does not. Is there a simple rule that Pandas is using that I'm just missing? What's going on in these specific cases; and in particular, how do I change all values (or a subset of values) in a dataframe that satisfy a particular query (as I'm attempting to do in the last example above)? Note: This is not the same as this question; and I have read the documentation, but am not enlightened by it. I've also read through the \"Related\" questions on this topic, but I'm still missing the simple rule Pandas is using, and how I'd apply it to \u2014 for example \u2014 modify the values (or a subset of values) in a dataframe that satisfy a particular query.", "output": "Here's the rules, subsequent override: All operations generate a copy If inplace=True is provided, it will modify in-place; only some operations support this An indexer that sets, e.g. .loc/.iloc/.iat/.at will set inplace. An indexer that gets on a single-dtyped object is almost always a view (depending on the memory layout it may not be that's why this is not reliable). This is mainly for efficiency. (the example from above is for .query; this will always return a copy as its evaluated by numexpr) An indexer that gets on a multiple-dtyped object is always a copy. Your example of chained indexing df[df.C <= df.B].loc[:,'B':'E'] is not guaranteed to work (and thus you should never do this). Instead do: df.loc[df.C <= df.B, 'B':'E'] as this is faster and will always work The chained indexing is 2 separate python operations and thus cannot be reliably intercepted by pandas (you will oftentimes get a SettingWithCopyWarning, but that is not 100% detectable either). 
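Tying that back to your last example, a minimal sketch reusing the df from your question: the reliable way to modify the subset matching the query is to select and assign in a single .loc call:
df.loc[df.C <= df.B, 'B':'E'] = 7654321  # one indexing operation, so the write is guaranteed to land in df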
The dev docs, which you pointed, offer a much more full explanation."} +{"question_id": 58441514, "score": 199, "creation_date": 1571351313, "tags": ["python", "tensorflow", "keras", "performance-testing", "tensorflow2.0"], "instruction": "Why is TensorFlow 2 much slower than TensorFlow 1?\n\nIt's been cited by many users as the reason for switching to Pytorch, but I've yet to find a justification/explanation for sacrificing the most important practical quality, speed, for eager execution. Below is code benchmarking performance, TF1 vs. TF2 - with TF1 running anywhere from 47% to 276% faster. My question is: what is it, at the graph or hardware level, that yields such a significant slowdown? Looking for a detailed answer - am already familiar with broad concepts. Relevant Git Specs: CUDA 10.0.130, cuDNN 7.4.2, Python 3.7.4, Windows 10, GTX 1070 Benchmark results: UPDATE: Disabling Eager Execution per below code does not help. The behavior, however, is inconsistent: sometimes running in graph mode helps considerably, other times it runs slower relative to Eager. Benchmark code: # use tensorflow.keras... to benchmark tf.keras; used GPU for all above benchmarks from keras.layers import Input, Dense, LSTM, Bidirectional, Conv1D from keras.layers import Flatten, Dropout from keras.models import Model from keras.optimizers import Adam import keras.backend as K import numpy as np from time import time batch_shape = (32, 400, 16) X, y = make_data(batch_shape) model_small = make_small_model(batch_shape) model_small.train_on_batch(X, y) # skip first iteration which builds graph timeit(model_small.train_on_batch, 200, X, y) K.clear_session() # in my testing, kernel was restarted instead model_medium = make_medium_model(batch_shape) model_medium.train_on_batch(X, y) # skip first iteration which builds graph timeit(model_medium.train_on_batch, 10, X, y) Functions used: def timeit(func, iterations, *args): t0 = time() for _ in range(iterations): func(*args) print(\"Time/iter: %.4f sec\" % ((time() - t0) / iterations)) def make_small_model(batch_shape): ipt = Input(batch_shape=batch_shape) x = Conv1D(128, 400, strides=4, padding='same')(ipt) x = Flatten()(x) x = Dropout(0.5)(x) x = Dense(64, activation='relu')(x) out = Dense(1, activation='sigmoid')(x) model = Model(ipt, out) model.compile(Adam(lr=1e-4), 'binary_crossentropy') return model def make_medium_model(batch_shape): ipt = Input(batch_shape=batch_shape) x = Bidirectional(LSTM(512, activation='relu', return_sequences=True))(ipt) x = LSTM(512, activation='relu', return_sequences=True)(x) x = Conv1D(128, 400, strides=4, padding='same')(x) x = Flatten()(x) x = Dense(256, activation='relu')(x) x = Dropout(0.5)(x) x = Dense(128, activation='relu')(x) x = Dense(64, activation='relu')(x) out = Dense(1, activation='sigmoid')(x) model = Model(ipt, out) model.compile(Adam(lr=1e-4), 'binary_crossentropy') return model def make_data(batch_shape): return np.random.randn(*batch_shape), np.random.randint(0, 2, (batch_shape[0], 1))", "output": "UPDATE 8/1730/2020: TF 2.3 has finally done it: all cases run as fast, or notably faster, than any previous version. Further, my previous update was unfair to TF; my GPU was to blame, has been overheating lately. If you see a rising stem plot of iteration times, it's a reliable symptom. Lastly, see a dev's note on Eager vs Graph. This might be my last update on this answer. The true stats on your model's speed can only be found by you, on your device. 
UPDATE 5/19/2020: TF 2.2, using same tests: only a minor improvement in Eager speed. Plots for Large-Large Numpy train_on_batch case below, x-axis is successive fit iterations; my GPU isn't near its full capacity, so doubt it's throttling, but iterations do get slower over time. Per above, Graph and Eager are 1.56x and 1.97x slower than their TF1 counterparts, respectively. Unsure I'll debug this further, as I'm considering switching to Pytorch per TensorFlow's poor support for custom / low-level functionality. I did, however, open an Issue to get devs' feedback. UPDATE 2/18/2020: I've benched 2.1 and 2.1-nightly; the results are mixed. All but one configs (model & data size) are as fast as or much faster than the best of TF2 & TF1. The one that's slower, and slower dramatically, is Large-Large - esp. in Graph execution (1.6x to 2.5x slower). Furthermore, there are extreme reproducibility differences between Graph and Eager for a large model I tested - one not explainable via randomness/compute-parallelism. I can't currently present reproducible code for these claims per time constraints, so instead I strongly recommend testing this for your own models. Haven't opened a Git issue on these yet, but I did comment on the original - no response yet. I'll update the answer(s) once progress is made. VERDICT: it isn't, IF you know what you're doing. But if you don't, it could cost you, lots - by a few GPU upgrades on average, and by multiple GPUs worst-case. THIS ANSWER: aims to provide a high-level description of the issue, as well as guidelines for how to decide on the training configuration specific to your needs. For a detailed, low-level description, which includes all benchmarking results + code used, see my other answer. I'll be updating my answer(s) w/ more info if I learn any - can bookmark / \"star\" this question for reference. ISSUE SUMMARY: as confirmed by a TensorFlow developer, Q. Scott Zhu, TF2 focused development on Eager execution & tight integration w/ Keras, which involved sweeping changes in TF source - including at graph-level. Benefits: greatly expanded processing, distribution, debug, and deployment capabilities. The cost of some of these, however, is speed. The matter, however, is fairly more complex. It isn't just TF1 vs. TF2 - factors yielding significant differences in train speed include: TF2 vs. TF1 Eager vs. Graph mode keras vs. tf.keras numpy vs. tf.data.Dataset vs. ... train_on_batch() vs. fit() GPU vs. CPU model(x) vs. model.predict(x) vs. ... Unfortunately, almost none of the above are independent of the other, and each can at least double execution time relative to another. Fortunately, you can determine what'll work best systematically, and with a few shortcuts - as I'll be showing. WHAT SHOULD I DO? Currently, the only way is - experiment for your specific model, data, and hardware. 
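While experimenting, it also helps to know which mode a given piece of code actually runs in; this small check is mine, not part of the original benchmarks:
import tensorflow as tf
print(tf.executing_eagerly())      # True under TF2 defaults (Eager)
@tf.function
def step(x):
    print(tf.executing_eagerly())  # False while the graph version is traced
    return x * 2
step(tf.constant(1.0))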
No single configuration will always work best - but there are do's and don't's to simplify your search: >> DO: train_on_batch() + numpy + tf.keras + TF1 + Eager/Graph train_on_batch() + numpy + tf.keras + TF2 + Graph fit() + numpy + tf.keras + TF1/TF2 + Graph + large model & data >> DON'T: fit() + numpy + keras for small & medium models and data fit() + numpy + tf.keras + TF1/TF2 + Eager train_on_batch() + numpy + keras + TF1 + Eager [Major] tf.python.keras; it can run 10-100x slower, and w/ plenty of bugs; more info This includes layers, models, optimizers, & related \"out-of-box\" usage imports; ops, utils, & related 'private' imports are fine - but to be sure, check for alts, & whether they're used in tf.keras Refer to code at bottom of my other answer for an example benchmarking setup. The list above is based mainly on the \"BENCHMARKS\" tables in the other answer. LIMITATIONS of the above DO's & DON'T's: This question's titled \"Why is TF2 much slower than TF1?\", and while its body concerns training explicitly, the matter isn't limited to it; inference, too, is subject to major speed differences, even within the same TF version, import, data format, etc. - see this answer. RNNs are likely to notably change the data grid in the other answer, as they've been improved in TF2 Models primarily used Conv1D and Dense - no RNNs, sparse data/targets, 4/5D inputs, & other configs Input data limited to numpy and tf.data.Dataset, while many other formats exist; see other answer GPU was used; results will differ on a CPU. In fact, when I asked the question, my CUDA wasn't properly configured, and some of the results were CPU-based. Why did TF2 sacrifice the most practical quality, speed, for eager execution? It hasn't, clearly - graph is still available. But if the question is \"why eager at all\": Superior debugging: you've likely come across multitudes of questions asking \"how do I get intermediate layer outputs\" or \"how do I inspect weights\"; with eager, it's (almost) as simple as .__dict__. Graph, in contrast, requires familiarity with special backend functions - greatly complicating the entire process of debugging & introspection. Faster prototyping: per ideas similar to above; faster understanding = more time left for actual DL. HOW TO ENABLE/DISABLE EAGER? tf.enable_eager_execution() # TF1; must be done before any model/tensor creation tf.compat.v1.disable_eager_execution() # TF2; above holds Misleading in TF2; see here. ADDITIONAL INFO: Careful with _on_batch() methods in TF2; according to the TF dev, they still use a slower implementation, but not intentionally - i.e. it's to be fixed. See other answer for details. REQUESTS TO TENSORFLOW DEVS: Please fix train_on_batch(), and the performance aspect of calling fit() iteratively; custom train loops are important to many, especially to me. Add documentation / docstring mention of these performance differences for users' knowledge. Improve general execution speed to keep peeps from hopping to Pytorch. ACKNOWLEDGEMENTS: Thanks to Q. Scott Zhu, TensorFlow developer, for his detailed clarification on the matter. P. Andrey for sharing useful testing, and discussion. UPDATES: 11/14/19 - found a model (in my real application) that runs slower on TF2 for all* configurations w/ Numpy input data. Differences ranged 13-19%, averaging 17%. Differences between keras and tf.keras, however, were more dramatic: 18-40%, avg. 32% (both TF1 & 2).
(* - except Eager, for which TF2 OOM'd) 11/17/19 - devs updated on_batch() methods in a recent commit, stating to have improved speed - to be released in TF 2.1, or available now as tf-nightly. As I'm unable to get latter running, will delay benching until 2.1. 2/20/20 - prediction performance is also worth benching; in TF2, for example, CPU prediction times can involve periodic spikes"} +{"question_id": 73830524, "score": 198, "creation_date": 1663949925, "tags": ["python", "google-analytics-api"], "instruction": "AttributeError: module 'lib' has no attribute 'X509_V_FLAG_CB_ISSUER_CHECK'\n\nRecently I had to reinstall Python due to a corrupt executable. This made one of our Python scripts bomb with the following error: AttributeError: module 'lib' has no attribute 'X509_V_FLAG_CB_ISSUER_CHECK' The line of code that caused it to bomb was: from apiclient.discovery import build I tried pip uninstalling and pip upgrading the google-api-python-client, but I can\u2019t seem to find any information on this particular error. For what it is worth, I am trying to pull Google Analytics information down via an API call. Here is an output of the command prompt error: File \"C:\\Analytics\\Puritan_GoogleAnalytics\\Google_Conversions\\mcfTest.py\", line 1, in <module> from apiclient.discovery import build File \"C:\\ProgramData\\Anaconda3\\lib\\site-packages\\apiclient\\__init__.py\", line 3, in <module> from googleapiclient import channel, discovery, errors, http, mimeparse, model File \"C:\\ProgramData\\Anaconda3\\lib\\site-packages\\googleapiclient\\discovery.py\", line 57, in <module> from googleapiclient import _auth, mimeparse File \"C:\\ProgramData\\Anaconda3\\lib\\site-packages\\googleapiclient\\_auth.py\", line 34, in <module> import oauth2client.client File \"C:\\ProgramData\\Anaconda3\\lib\\site-packages\\oauth2client\\client.py\", line 45, in <module> from oauth2client import crypt File \"C:\\ProgramData\\Anaconda3\\lib\\site-packages\\oauth2client\\crypt.py\", line 45, in <module> from oauth2client import _openssl_crypt File \"C:\\ProgramData\\Anaconda3\\lib\\site-packages\\oauth2client\\_openssl_crypt.py\", line 16, in <module> from OpenSSL import crypto File \"C:\\ProgramData\\Anaconda3\\lib\\site-packages\\OpenSSL\\__init__.py\", line 8, in <module> from OpenSSL import crypto, SSL File \"C:\\ProgramData\\Anaconda3\\lib\\site-packages\\OpenSSL\\crypto.py\", line 1517, in <module> class X509StoreFlags(object): File \"C:\\ProgramData\\Anaconda3\\lib\\site-packages\\OpenSSL\\crypto.py\", line 1537, in X509StoreFlags CB_ISSUER_CHECK = _lib.X509_V_FLAG_CB_ISSUER_CHECK AttributeError: module 'lib' has no attribute 'X509_V_FLAG_CB_ISSUER_CHECK'", "output": "Upgrade the latest version of PyOpenSSL. python3 -m pip install pip --upgrade pip install pyopenssl --upgrade"} +{"question_id": 11783875, "score": 198, "creation_date": 1343933220, "tags": ["python", "beautifulsoup"], "instruction": "ImportError: No Module Named bs4 (BeautifulSoup)\n\nI'm working in Python and using Flask. When I run my main Python file on my computer, it works perfectly, but when I activate venv and run the Flask Python file in the terminal, it says that my main Python file has \"No Module Named bs4.\" Any comments or advice is greatly appreciated.", "output": "Activate the virtualenv, and then install BeautifulSoup4: $ pip install beautifulsoup4 When you installed bs4 with easy_install, you installed it system-wide. So your system python can import it, but not your virtualenv python. 
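A quick way to see which interpreter and which copy of bs4 a given shell is using (a generic diagnostic of mine, not specific to your Flask project):
$ python -c 'import sys, bs4; print(sys.executable, bs4.__file__)'
Run it once with the virtualenv activated and once without; the differing paths (or an ImportError inside the venv before you install it there) show exactly where the package lives.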
If you do not need bs4 to be installed in your system python path, uninstall it and keep it in your virtualenv. For more information about virtualenvs, read this"} +{"question_id": 16008670, "score": 198, "creation_date": 1366006219, "tags": ["python", "arrays", "algorithm", "random", "hash"], "instruction": "How to hash a string into 8 digits?\n\nIs there anyway that I can hash a random string into a 8 digit number without implementing any algorithms myself?", "output": "Yes, you can use the built-in hashlib module or the built-in hash function. Then, chop-off the last eight digits using modulo operations or string slicing operations on the integer form of the hash: >>> s = 'she sells sea shells by the sea shore' >>> # Use hashlib >>> import hashlib >>> int(hashlib.sha1(s.encode(\"utf-8\")).hexdigest(), 16) % (10 ** 8) 58097614L >>> # Use hash() >>> abs(hash(s)) % (10 ** 8) 82148974"} +{"question_id": 2927993, "score": 198, "creation_date": 1275040002, "tags": ["python", "directory", "module"], "instruction": "Where are the python modules stored?\n\nI have recently started learning Python and I have 2 questions relating to modules. Is there a way to obtain a list of Python modules available (i.e. installed) on a machine? I am using Ubuntu Karmic and Synaptic for package management. I have just installed a python module.Where is the module code actually stored on my machine? (is there a default [recommended] location that modules are stored)?", "output": "Is there a way to obtain a list of Python modules available (i.e. installed) on a machine? This works for me: help('modules') Where is the module code actually stored on my machine? Usually in /lib/site-packages in your Python folder. (At least, on Windows.) You can use sys.path to find out what directories are searched for modules."} +{"question_id": 22086116, "score": 198, "creation_date": 1393561262, "tags": ["python", "filter", "pandas"], "instruction": "How do you filter pandas dataframes by multiple columns?\n\nTo filter a DataFrame (df) by a single column, if we consider data with male and females we might: males = df[df[Gender]=='Male'] Question 1: But what if the data spanned multiple years and I wanted to only see males for 2014? In other languages I might do something like: if A = \"Male\" and if B = \"2014\" then (except I want to do this and get a subset of the original DataFrame in a new dataframe object) Question 2: How do I do this in a loop, and create a dataframe object for each unique sets of year and gender (i.e. a df for: 2013-Male, 2013-Female, 2014-Male, and 2014-Female? for y in year: for g in gender: df = .....", "output": "Using & operator, don't forget to wrap the sub-statements with (): males = df[(df[Gender]=='Male') & (df[Year]==2014)] To store your DataFrames in a dict using a for loop: from collections import defaultdict dic={} for g in ['male', 'female']: dic[g]=defaultdict(dict) for y in [2013, 2014]: dic[g][y]=df[(df[Gender]==g) & (df[Year]==y)] #store the DataFrames to a dict of dict A demo for your getDF: def getDF(dic, gender, year): return dic[gender][year] print genDF(dic, 'male', 2014)"} +{"question_id": 7811556, "score": 198, "creation_date": 1318960379, "tags": ["python", "django"], "instruction": "How do I convert a Django QuerySet into list of dicts?\n\nHow can I convert a Django QuerySet into a list of dicts? 
I haven't found an answer to this so I'm wondering if I'm missing some sort of common helper function that everyone uses.", "output": "Use the .values() method: >>> Blog.objects.values() [{'id': 1, 'name': 'Beatles Blog', 'tagline': 'All the latest Beatles news.'}], >>> Blog.objects.values('id', 'name') [{'id': 1, 'name': 'Beatles Blog'}] Note: the result is a QuerySet which mostly behaves like a list, but isn't actually an instance of list. Use list(Blog.objects.values(\u2026)) if you really need an instance of list."} +{"question_id": 16645083, "score": 198, "creation_date": 1369036952, "tags": ["python", "string", "algorithm", "parsing", "split"], "instruction": "When splitting an empty string in Python, why does split() return an empty list while split('\\n') returns ['']?\n\nI am using split('\\n') to get lines in one string, and found that ''.split() returns an empty list, [], while ''.split('\\n') returns ['']. Is there any specific reason for such a difference? And is there any more convenient way to count lines in a string?", "output": "Question: I am using split('\\n') to get lines in one string, and found that ''.split() returns an empty list, [], while ''.split('\\n') returns ['']. The str.split() method has two algorithms. If no arguments are given, it splits on repeated runs of whitespace. However, if an argument is given, it is treated as a single delimiter with no repeated runs. In the case of splitting an empty string, the first mode (no argument) will return an empty list because the whitespace is eaten and there are no values to put in the result list. In contrast, the second mode (with an argument such as \\n) will produce the first empty field. Consider if you had written '\\n'.split('\\n'), you would get two fields (one split, gives you two halves). Question: Is there any specific reason for such a difference? This first mode is useful when data is aligned in columns with variable amounts of whitespace. For example: >>> data = '''\\ Shasta California 14,200 McKinley Alaska 20,300 Fuji Japan 12,400 ''' >>> for line in data.splitlines(): print(line.split()) ['Shasta', 'California', '14,200'] ['McKinley', 'Alaska', '20,300'] ['Fuji', 'Japan', '12,400'] The second mode is useful for delimited data such as CSV where repeated commas denote empty fields. For example: >>> data = '''\\ Guido,BDFL,,Amsterdam Barry,FLUFL,,USA Tim,,,USA ''' >>> for line in data.splitlines(): print(line.split(',')) ['Guido', 'BDFL', '', 'Amsterdam'] ['Barry', 'FLUFL', '', 'USA'] ['Tim', '', '', 'USA'] Note, the number of result fields is one greater than the number of delimiters. Think of cutting a rope. If you make no cuts, you have one piece. Making one cut, gives two pieces. Making two cuts, gives three pieces. And so it is with Python's str.split(delimiter) method: >>> ''.split(',') # No cuts [''] >>> ','.split(',') # One cut ['', ''] >>> ',,'.split(',') # Two cuts ['', '', ''] Question: And is there any more convenient way to count lines in a string? Yes, there are a couple of easy ways. One uses str.count() and the other uses str.splitlines(). Both ways will give the same answer unless the final line is missing the \\n. If the final newline is missing, the str.splitlines approach will give the accurate answer. 
A faster technique that is also accurate uses the count method but then corrects it for the final newline: >>> data = '''\\ Line 1 Line 2 Line 3 Line 4''' >>> data.count('\\n') # Inaccurate 3 >>> len(data.splitlines()) # Accurate, but slow 4 >>> data.count('\\n') + (not data.endswith('\\n')) # Accurate and fast 4 Question from @Kaz: Why the heck are two very different algorithms shoe-horned into a single function? The signature for str.split is about 20 years old, and a number of the APIs from that era are strictly pragmatic. While not perfect, the method signature isn't \"terrible\" either. For the most part, Guido's API design choices have stood the test of time. The current API is not without advantages. Consider strings such as: ps_aux_header = 'USER PID %CPU %MEM VSZ' patient_header = 'name,age,height,weight' When asked to break these strings into fields, people tend to describe both using the same English word, \"split\". When asked to read code such as fields = line.split() or fields = line.split(','), people tend to correctly interpret the statements as \"splits a line into fields\". Microsoft Excel's text-to-columns tool made a similar API choice and incorporates both splitting algorithms in the same tool. People seem to mentally model field-splitting as a single concept even though more than one algorithm is involved."} +{"question_id": 55266154, "score": 198, "creation_date": 1553100701, "tags": ["python", "pytorch", "copy", "tensor"], "instruction": "PyTorch preferred way to copy a tensor\n\nThere seems to be several ways to create a copy of a tensor in PyTorch, including y = tensor.new_tensor(x) #a y = x.clone().detach() #b y = torch.empty_like(x).copy_(x) #c y = torch.tensor(x) #d b is explicitly preferred over a and d according to a UserWarning I get if I execute either a or d. Why is it preferred? Performance? I'd argue it's less readable. Any reasons for/against using c?", "output": "TL;DR Use .clone().detach() (or preferrably .detach().clone()) If you first detach the tensor and then clone it, the computation path is not copied, the other way around it is copied and then abandoned. Thus, .detach().clone() is very slightly more efficient.-- pytorch forums as it's slightly fast and explicit in what it does. Using perfplot, I plotted the timing of various methods to copy a pytorch tensor. y = tensor.new_tensor(x) # method a y = x.clone().detach() # method b y = torch.empty_like(x).copy_(x) # method c y = torch.tensor(x) # method d y = x.detach().clone() # method e The x-axis is the dimension of tensor created, y-axis shows the time. The graph is in linear scale. As you can clearly see, the tensor() or new_tensor() takes more time compared to other three methods. Note: In multiple runs, I noticed that out of b, c, e, any method can have lowest time. The same is true for a and d. But, the methods b, c, e consistently have lower timing than a and d. 
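As a quick sketch of the recommended pattern (a minimal illustration with a small float tensor, not part of the benchmark): x = torch.ones(3, requires_grad=True) y = x.detach().clone() # same values, separate storage, no autograd history y += 1 # modifying y leaves x and its gradient graph untouched The perfplot script used to produce the comparison follows. 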
import torch import perfplot perfplot.show( setup=lambda n: torch.randn(n), kernels=[ lambda a: a.new_tensor(a), lambda a: a.clone().detach(), lambda a: torch.empty_like(a).copy_(a), lambda a: torch.tensor(a), lambda a: a.detach().clone(), ], labels=[\"new_tensor()\", \"clone().detach()\", \"empty_like().copy()\", \"tensor()\", \"detach().clone()\"], n_range=[2 ** k for k in range(15)], xlabel=\"len(a)\", logx=False, logy=False, title='Timing comparison for copying a pytorch tensor', )"} +{"question_id": 3788870, "score": 197, "creation_date": 1285344075, "tags": ["python", "nltk", "wordnet"], "instruction": "How to check if a word is an English word with Python?\n\nI want to check in a Python program if a word is in the English dictionary. I believe nltk wordnet interface might be the way to go but I have no clue how to use it for such a simple task. def is_english_word(word): pass # how to I implement is_english_word? is_english_word(token.lower()) In the future, I might want to check if the singular form of a word is in the dictionary (e.g., properties -> property -> english word). How would I achieve that?", "output": "For (much) more power and flexibility, use a dedicated spellchecking library like PyEnchant. There's a tutorial, or you could just dive straight in: >>> import enchant >>> d = enchant.Dict(\"en_US\") >>> d.check(\"Hello\") True >>> d.check(\"Helo\") False >>> d.suggest(\"Helo\") ['He lo', 'He-lo', 'Hello', 'Helot', 'Help', 'Halo', 'Hell', 'Held', 'Helm', 'Hero', \"He'll\"] >>> PyEnchant comes with a few dictionaries (en_GB, en_US, de_DE, fr_FR), but can use any of the OpenOffice ones if you want more languages. There appears to be a pluralisation library called inflect, but I've no idea whether it's any good."} +{"question_id": 2801008, "score": 197, "creation_date": 1273476812, "tags": ["python", "mongodb", "bulkinsert", "mongodb-query"], "instruction": "mongodb: insert if not exists\n\nEvery day, I receive a stock of documents (an update). What I want to do is insert each item that does not already exist. I also want to keep track of the first time I inserted them, and the last time I saw them in an update. I don't want to have duplicate documents. I don't want to remove a document which has previously been saved, but is not in my update. 95% (estimated) of the records are unmodified from day to day. I am using the Python driver (pymongo). What I currently do is (pseudo-code): for each document in update: existing_document = collection.find_one(document) if not existing_document: document['insertion_date'] = now else: document = existing_document document['last_update_date'] = now my_collection.save(document) My problem is that it is very slow (40 mins for less than 100 000 records, and I have millions of them in the update). I am pretty sure there is something builtin for doing this, but the document for update() is mmmhhh.... a bit terse.... (http://www.mongodb.org/display/DOCS/Updating ) Can someone advise how to do it faster?", "output": "Sounds like you want to do an upsert. MongoDB has built-in support for this. Pass an extra parameter to your update() call: {upsert:true}. For example: key = {'key':'value'} data = {'key2':'value2', 'key3':'value3'}; coll.update(key, data, upsert=True); #In python upsert must be passed as a keyword argument This replaces your if-find-else-update block entirely. It will insert if the key doesn't exist and will update if it does. 
Before: {\"key\":\"value\", \"key2\":\"Ohai.\"} After: {\"key\":\"value\", \"key2\":\"value2\", \"key3\":\"value3\"} You can also specify what data you want to write: data = {\"$set\":{\"key2\":\"value2\"}} Now your selected document will update the value of key2 only and leave everything else untouched."} +{"question_id": 78634235, "score": 197, "creation_date": 1718650377, "tags": ["python", "numpy", "matlab", "spacy"], "instruction": "numpy.dtype size changed, may indicate binary incompatibility. Expected 96 from C header, got 88 from PyObject\n\nI want to call my Python module from the Matlab. I received the error: Error using numpy_ops>init thinc.backends.numpy_ops Python Error: ValueError: numpy.dtype size changed, may indicate binary incompatibility. Expected 96 from C header, got 88 from PyObject. The Python script is as follows import spacy def text_recognizer(model_path, text): try: # Load the trained model nlp = spacy.load(model_path) print(\"Model loaded successfully.\") # Process the given text doc = nlp(text) ent_labels = [(ent.text, ent.label_) for ent in doc.ents] return ent_labels The Matlab script is as follows % Set up the Python environment pe = pyenv; py.importlib.import_module('final_output'); % Add the directory containing the Python script to the Python path path_add = fileparts(which('final_output.py')); if count(py.sys.path, path_add) == 0 insert(py.sys.path, int64(0), path_add); end % Define model path and text to process model_path = 'D:\\trained_model\\\\output\\\\model-best'; text = 'Roses are red'; % Call the Python function pyOut = py.final_output.text_recognizer(model_path, text); % Convert the output to a MATLAB cell array entity_labels = cell(pyOut); disp(entity_labels); I found one solution to update Numpy, what I did, but nothing changed. I am using Python 3.9 and Numpy version 2.0.0 The error was received when I tried to call the Python module using a Matlab script. How can I fix the issue?", "output": "The reason is that pandas defines its numpy dependency freely as \"anything newer than certain version of numpy\". The problem occured, when numpy==2.0.0 has been released on June 16th 2024, because it is no longer compatible with your pandas version. The solution is to pin down the numpy version to any before the 2.0.0. Today it could be (this is the most recent numpy 1 release): numpy==1.26.4 To be added in your requirements or to the pip command you use (but together with installing pandas). Nowadays pip is very flexible and can handle the issue flawesly. You just need to ask it to install both pandas and numpy of given versions in the same pip install invocation."} +{"question_id": 5387208, "score": 196, "creation_date": 1300771618, "tags": ["python", "string", "list"], "instruction": "How to convert a string with comma-delimited items to a list in Python?\n\nHow do you convert a string into a list? Say the string is like text = \"a,b,c\". After the conversion, text == ['a', 'b', 'c'] and hopefully text[0] == 'a', text[1] == 'b'?", "output": "Like this: >>> text = 'a,b,c' >>> text = text.split(',') >>> text [ 'a', 'b', 'c' ]"} +{"question_id": 21784851, "score": 196, "creation_date": 1392396242, "tags": ["python", "sqlalchemy"], "instruction": "sqlalchemy IS NOT NULL select\n\nHow can I add the filter as in SQL to select values that are NOT NULL from a certain column ? SELECT * FROM table WHERE YourColumn IS NOT NULL; How can I do the same with SQLAlchemy filters? 
select = select(table).select_from(table).where(all_filters)", "output": "column_obj != None will produce an IS NOT NULL constraint: In a column context, produces the clause a != b. If the target is None, produces an IS NOT NULL. or use is_not()*: Implement the IS NOT operator. Normally, IS NOT is generated automatically when comparing to a value of None, which resolves to NULL. However, explicit usage of IS NOT may be desirable if comparing to boolean values on certain platforms. Demo: >>> from sqlalchemy.sql import column >>> column('YourColumn') != None <sqlalchemy.sql.elements.BinaryExpression object at 0x10f81aa90> >>> print(column('YourColumn') != None) \"YourColumn\" IS NOT NULL >>> column('YourColumn').is_not(None) <sqlalchemy.sql.elements.BinaryExpression object at 0x11081edf0> >>> print(column('YourColumn').is_not(None)) \"YourColumn\" IS NOT NULL You can't use is not None here, because the is not object identity inequality test can't be overloaded the way != can; you'll just get True instead as a ColumnClause instance is not the same object as the None singleton: >>> column('YourColumn') is not None True *) The method was formerly named isnot() and was renamed in SQLAlchemy 1.4. The old name is still available for backwards compatibility."} +{"question_id": 1061697, "score": 196, "creation_date": 1246335354, "tags": ["python", "html"], "instruction": "What's the easiest way to escape HTML in Python?\n\ncgi.escape seems like one possible choice. Does it work well? Is there something that is considered better?", "output": "html.escape is the correct answer now; it used to be cgi.escape in Python before 3.2. It escapes: < to &lt; > to &gt; & to &amp; That is enough for all HTML. EDIT: If you have non-ascii chars you also want to escape, for inclusion in another encoded document that uses a different encoding, like Craig says, just use: data.encode('ascii', 'xmlcharrefreplace') Don't forget to decode data to unicode first, using whatever encoding it was encoded in. However in my experience that kind of encoding is useless if you just work with unicode all the time from the start. Just encode at the end to the encoding specified in the document header (utf-8 for maximum compatibility). Example: >>> cgi.escape(u'<a>b\u00e1</a>').encode('ascii', 'xmlcharrefreplace') '&lt;a&gt;b&#225;&lt;/a&gt;' Also worth noting (thanks Greg) is the extra quote parameter cgi.escape takes. With it set to True, cgi.escape also escapes double quote chars (\") so you can use the resulting value in an XML/HTML attribute. EDIT: Note that cgi.escape has been deprecated in Python 3.2 in favor of html.escape, which does the same except that quote defaults to True."} +{"question_id": 18810777, "score": 196, "creation_date": 1379236632, "tags": ["python", "python-requests"], "instruction": "How do I read a response from Python Requests?\n\nI have two Python scripts. One uses the Urllib2 library and one uses the Requests library. I have found Requests easier to implement, but I can't find an equivalent for urllib2's read() function. For example: ... response = url.urlopen(req) print response.geturl() print response.getcode() data = response.read() print data Once I have built up my post url, data = response.read() gives me the content - I am trying to connect to a vcloud director api instance and the response shows the endpoints that I have access to. However if I use the Requests library as follows..... .... 
def post_call(username, org, password, key, secret): endpoint = '<URL ENDPOINT>' post_url = endpoint + 'sessions' get_url = endpoint + 'org' headers = {'Accept':'application/*+xml;version=5.1', \\ 'Authorization':'Basic '+ base64.b64encode(username + \"@\" + org + \":\" + password), \\ 'x-id-sec':base64.b64encode(key + \":\" + secret)} print headers post_call = requests.post(post_url, data=None, headers = headers) print post_call, \"POST call\" print post_call.text, \"TEXT\" print post_call.content, \"CONTENT\" post_call.status_code, \"STATUS CODE\" .... ....the print post_call.text and print post_call.content returns nothing, even though the status code equals 200 in the requests post call. Why isn't my response from Requests returning any text or content?", "output": "Requests doesn't have an equivalent to Urlib2's read(). >>> import requests >>> response = requests.get(\"http://www.google.com\") >>> print response.content '<!doctype html><html itemscope=\"\" itemtype=\"http://schema.org/WebPage\"><head>....' >>> print response.content == response.text True It looks like the POST request you are making is returning no content. Which is often the case with a POST request. Perhaps it set a cookie? The status code is telling you that the POST succeeded after all. Edit for Python 3: Python now handles data types differently. response.content returns a sequence of bytes (integers that represent ASCII) while response.text is a string (sequence of chars). Thus, >>> print response.content == response.text False >>> print str(response.content) == response.text True"} +{"question_id": 2414667, "score": 196, "creation_date": 1268197722, "tags": ["python", "string"], "instruction": "Python string class like StringBuilder in C#?\n\nIs there some string class in Python like StringBuilder in C#?", "output": "There is no one-to-one correlation. For a really good article please see Efficient String Concatenation in Python: Building long strings in the Python progamming language can sometimes result in very slow running code. In this article I investigate the computational performance of various string concatenation methods. TLDR the fastest method is below. It's extremely compact, and also pretty understandable: def method6(): return ''.join([`num` for num in xrange(loop_count)])"} +{"question_id": 23639113, "score": 195, "creation_date": 1400006435, "tags": ["python", "django", "django-views", "django-rest-framework"], "instruction": "Disable a method in a ViewSet, django-rest-framework\n\nViewSets have automatic methods to list, retrieve, create, update, delete, ... I would like to disable some of those, and the solution I came up with is probably not a good one, since OPTIONS still states those as allowed. Any idea on how to do this the right way? class SampleViewSet(viewsets.ModelViewSet): queryset = api_models.Sample.objects.all() serializer_class = api_serializers.SampleSerializer def list(self, request): return Response(status=status.HTTP_405_METHOD_NOT_ALLOWED) def create(self, request): return Response(status=status.HTTP_405_METHOD_NOT_ALLOWED)", "output": "The definition of ModelViewSet is: class ModelViewSet(mixins.CreateModelMixin, mixins.RetrieveModelMixin, mixins.UpdateModelMixin, mixins.DestroyModelMixin, mixins.ListModelMixin, GenericViewSet) So rather than extending ModelViewSet, why not just use whatever you need? 
So for example: from rest_framework import viewsets, mixins class SampleViewSet(mixins.RetrieveModelMixin, mixins.UpdateModelMixin, mixins.DestroyModelMixin, viewsets.GenericViewSet): ... With this approach, the router should only generate routes for the included methods. Reference: ModelViewSet Update: In DRF 3.14.0, using one of the methods not implemented in the mixins gives a 405 - Method Not Allowed: Method Not Allowed: /status/ [06/Mar/2023 01:03:01] \"POST /status/ HTTP/1.1\" 405 41"} +{"question_id": 22272003, "score": 195, "creation_date": 1394296507, "tags": ["python"], "instruction": "What is the difference between os.path.basename() and os.path.dirname()?\n\nWhat is the difference between os.path.basename() and os.path.dirname()? I already searched for answers and read some links, but didn't understand. Can anyone give a simple explanation?", "output": "Both functions use the os.path.split(path) function to split the pathname path into a pair; (head, tail). The os.path.dirname(path) function returns the head of the path. E.g.: The dirname of '/foo/bar/item' is '/foo/bar'. The os.path.basename(path) function returns the tail of the path. E.g.: The basename of '/foo/bar/item' returns 'item' From: http://docs.python.org/3/library/os.path.html#os.path.basename"} +{"question_id": 17091769, "score": 195, "creation_date": 1371139352, "tags": ["python", "dataframe", "row", "pandas"], "instruction": "Python pandas: fill a dataframe row by row\n\nThe simple task of adding a row to a pandas.DataFrame object seems to be hard to accomplish. There are 3 stackoverflow questions relating to this, none of which give a working answer. Here is what I'm trying to do. I have a DataFrame of which I already know the shape as well as the names of the rows and columns. >>> df = pandas.DataFrame(columns=['a','b','c','d'], index=['x','y','z']) >>> df a b c d x NaN NaN NaN NaN y NaN NaN NaN NaN z NaN NaN NaN NaN Now, I have a function to compute the values of the rows iteratively. How can I fill in one of the rows with either a dictionary or a pandas.Series ? Here are various attempts that have failed: >>> y = {'a':1, 'b':5, 'c':2, 'd':3} >>> df['y'] = y AssertionError: Length of values does not match length of index Apparently it tried to add a column instead of a row. >>> y = {'a':1, 'b':5, 'c':2, 'd':3} >>> df.join(y) AttributeError: 'builtin_function_or_method' object has no attribute 'is_unique' Very uninformative error message. >>> y = {'a':1, 'b':5, 'c':2, 'd':3} >>> df.set_value(index='y', value=y) TypeError: set_value() takes exactly 4 arguments (3 given) Apparently that is only for setting individual values in the dataframe. >>> y = {'a':1, 'b':5, 'c':2, 'd':3} >>> df.append(y) Exception: Can only append a Series if ignore_index=True Well, I don't want to ignore the index, otherwise here is the result: >>> df.append(y, ignore_index=True) a b c d 0 NaN NaN NaN NaN 1 NaN NaN NaN NaN 2 NaN NaN NaN NaN 3 1 5 2 3 It did align the column names with the values, but lost the row labels. >>> y = {'a':1, 'b':5, 'c':2, 'd':3} >>> df.ix['y'] = y >>> df a b \\ x NaN NaN y {'a': 1, 'c': 2, 'b': 5, 'd': 3} {'a': 1, 'c': 2, 'b': 5, 'd': 3} z NaN NaN c d x NaN NaN y {'a': 1, 'c': 2, 'b': 5, 'd': 3} {'a': 1, 'c': 2, 'b': 5, 'd': 3} z NaN NaN That also failed miserably. 
So how do you do it ?", "output": "df['y'] will set a column since you want to set a row, use .loc Note that .ix is equivalent here, yours failed because you tried to assign a dictionary to each element of the row y probably not what you want; converting to a Series tells pandas that you want to align the input (for example you then don't have to to specify all of the elements) In [6]: import pandas as pd In [7]: df = pd.DataFrame(columns=['a','b','c','d'], index=['x','y','z']) In [8]: df.loc['y'] = pd.Series({'a':1, 'b':5, 'c':2, 'd':3}) In [9]: df Out[9]: a b c d x NaN NaN NaN NaN y 1 5 2 3 z NaN NaN NaN NaN"} +{"question_id": 34009653, "score": 195, "creation_date": 1448924669, "tags": ["python", "python-3.x", "int", "type-conversion", "byte"], "instruction": "Convert bytes to int?\n\nI'm currently working on an encryption/decryption program and I need to be able to convert bytes to an integer. I know that: bytes([3]) = b'\\x03' Yet I cannot find out how to do the inverse. What am I doing terribly wrong?", "output": "Assuming you're on at least 3.2, there's a built in for this: int.from_bytes( bytes, byteorder, *, signed=False ) ... The argument bytes must either be a bytes-like object or an iterable producing bytes. The byteorder argument determines the byte order used to represent the integer. If byteorder is \"big\", the most significant byte is at the beginning of the byte array. If byteorder is \"little\", the most significant byte is at the end of the byte array. To request the native byte order of the host system, use sys.byteorder as the byte order value. The signed argument indicates whether two\u2019s complement is used to represent the integer. ## Examples: int.from_bytes(b'\\x00\\x01', \"big\") # 1 int.from_bytes(b'\\x00\\x01', \"little\") # 256 int.from_bytes(b'\\x00\\x10', byteorder='little') # 4096 int.from_bytes(b'\\xfc\\x00', byteorder='big', signed=True) #-1024"} +{"question_id": 3051241, "score": 195, "creation_date": 1276671533, "tags": ["python", "class", "documentation", "docstring", "class-attributes"], "instruction": "How to document class attributes in Python?\n\nI'm writing a lightweight class whose attributes are intended to be publicly accessible, and only sometimes overridden in specific instantiations. There's no provision in the Python language for creating docstrings for class attributes, or any sort of attributes, for that matter. What is the expected and supported way, should there be one, to document these attributes? Currently I'm doing this sort of thing: class Albatross(object): \"\"\"A bird with a flight speed exceeding that of an unladen swallow. Attributes: \"\"\" flight_speed = 691 __doc__ += \"\"\" flight_speed (691) The maximum speed that such a bird can attain. \"\"\" nesting_grounds = \"Raymond Luxury-Yacht\" __doc__ += \"\"\" nesting_grounds (\"Raymond Luxury-Yacht\") The locale where these birds congregate to reproduce. \"\"\" def __init__(self, **keyargs): \"\"\"Initialize the Albatross from the keyword arguments.\"\"\" self.__dict__.update(keyargs) This will result in the class's docstring containing the initial standard docstring section, as well as the lines added for each attribute via augmented assignment to __doc__. Although this style doesn't seem to be expressly forbidden in the docstring style guidelines, it's also not mentioned as an option. 
The advantage here is that it provides a way to document attributes alongside their definitions, while still creating a presentable class docstring, and avoiding having to write comments that reiterate the information from the docstring. I'm still kind of annoyed that I have to actually write the attributes twice; I'm considering using the string representations of the values in the docstring to at least avoid duplication of the default values. Is this a heinous breach of the ad hoc community conventions? Is it okay? Is there a better way? For example, it's possible to create a dictionary containing values and docstrings for the attributes and then add the contents to the class __dict__ and docstring towards the end of the class declaration; this would alleviate the need to type the attribute names and values twice. edit: this last idea is, I think, not actually possible, at least not without dynamically building the entire class from data, which seems like a really bad idea unless there's some other reason to do that. I'm pretty new to python and still working out the details of coding style, so unrelated critiques are also welcome.", "output": "In short: class attributes cannot have doc strings in the way that classes and functions have. To avoid confusion, the term property has a specific meaning in python. What you're talking about is what we call class attributes. Since they are always acted upon through their class, I find that it makes sense to document them within the class' doc string. Something like this: class Albatross(object): \"\"\"A bird with a flight speed exceeding that of an unladen swallow. Attributes: flight_speed The maximum speed that such a bird can attain. nesting_grounds The locale where these birds congregate to reproduce. \"\"\" flight_speed = 691 nesting_grounds = \"Throatwarbler Man Grove\" I think that's a lot easier on the eyes than the approach in your example. If I really wanted a copy of the attribute values to appear in the doc string, I would put them beside or below the description of each attribute. Keep in mind that in Python, doc strings are actual members of the objects they document, not merely source code annotations. Since class attribute variables are not objects themselves but references to objects, they have no way of holding doc strings of their own. I guess you could make a case for doc strings on references, perhaps to describe \"what should go here\" instead of \"what is actually here\", but I find it easy enough to do that in the containing class doc string."} +{"question_id": 39187886, "score": 195, "creation_date": 1472356830, "tags": ["python"], "instruction": "What is the difference between subprocess.popen and subprocess.run\n\nI'm new to the subprocess module and the documentation leaves me wondering what the difference is between subprocess.popen and subprocess.run. Is there a difference in what the command does? Is one just newer? Which is better to use?", "output": "subprocess.run() was added in Python 3.5 as a simplification over subprocess.Popen when you just want to execute a command and wait until it finishes, but you don't want to do anything else in the mean time. For other cases, you still need to use subprocess.Popen. The main difference is that subprocess.run() executes a command and waits for it to finish, while with subprocess.Popen you can continue doing your stuff while the process finishes and then just repeatedly call Popen.communicate() yourself to pass and receive data to your process. 
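A minimal sketch of that difference (assuming a Unix-like system where the echo and sleep commands exist): import subprocess result = subprocess.run(['echo', 'hello'], capture_output=True, text=True) # blocks until the command finishes print(result.stdout) # result holds the exit status and captured output proc = subprocess.Popen(['sleep', '2']) # returns immediately print('free to do other work here') proc.wait() # or proc.communicate() to exchange data and then wait Note that capture_output and text require Python 3.7+; on older versions pass stdout=subprocess.PIPE and universal_newlines=True instead. 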
Secondly, subprocess.run() returns subprocess.CompletedProcess. subprocess.run() just wraps Popen and Popen.communicate() so you don't need to make a loop to pass/receive data or wait for the process to finish. Check the official documentation for info on which params subprocess.run() pass to Popen and communicate()."} +{"question_id": 1450957, "score": 195, "creation_date": 1253451160, "tags": ["python", "json"], "instruction": "Python's 'json' module, converts int dictionary keys to strings\n\nI have found that when the following is run, Python's json module (included since 2.6) converts int dictionary keys to strings. import json releases = {1: \"foo-v0.1\"} json.dumps(releases) Output: '{\"1\": \"foo-v0.1\"}' Is there an easy way to preserve the key as an int, without needing to parse the string on dump and load? I believe it would be possible using the hooks provided by the json module, but again this still requires parsing. Is there possibly an argument I have overlooked? Sub-question: Thanks for the answers. Seeing as json works as I feared, is there an easy way to convey key type by maybe parsing the output of dumps? Also I should note the code doing the dumping and the code downloading the JSON object from a server and loading it, are both written by me.", "output": "This is one of those subtle differences among various mapping collections that can bite you. JSON treats keys as strings; Python supports distinct keys differing only in type. In Python (and apparently in Lua) the keys to a mapping (dictionary or table, respectively) are object references. In Python they must be immutable types, or they must be objects which implement a __hash__ method. (The Lua docs suggest that it automatically uses the object's ID as a hash/key even for mutable objects and relies on string interning to ensure that equivalent strings map to the same objects). In Perl, JavaScript, awk and many other languages the keys for hashes, associative arrays or whatever they're called for the given language, are strings (or \"scalars\" in Perl). In Perl, $foo{1}, $foo{1.0}, and $foo{\"1\"} are all references to the same mapping in %foo --- the key is evaluated as a scalar! JSON started as a JavaScript serialization technology. (JSON stands for JavaScript Object Notation.) Naturally it implements semantics for its mapping notation which are consistent with its mapping semantics. If both ends of your serialization are going to be Python then you'd be better off using pickles. If you really need to convert these back from JSON into native Python objects I guess you have a couple of choices. First you could try (try: ... except: ...) to convert any key to a number in the event of a dictionary look-up failure. Alternatively, if you add code to the other end (the serializer or generator of this JSON data) then you could have it perform a JSON serialization on each of the key values\u2014providing those as a list of keys. (Then your Python code would first iterate over the list of keys, instantiating/deserializing them into native Python objects ... and then use those for access the values out of the mapping)."} +{"question_id": 2183233, "score": 194, "creation_date": 1265105763, "tags": ["python", "logging", "python-logging"], "instruction": "How to add a custom loglevel to Python's logging facility\n\nI'd like to have loglevel TRACE (5) for my application, as I don't think that debug() is sufficient. Additionally log(5, msg) isn't what I want. How can I add a custom loglevel to a Python logger? 
I've a mylogger.py with the following content: import logging @property def log(obj): myLogger = logging.getLogger(obj.__class__.__name__) return myLogger In my code I use it in the following way: class ExampleClass(object): from mylogger import log def __init__(self): '''The constructor with the logger''' self.log.debug(\"Init runs\") Now I'd like to call self.log.trace(\"foo bar\") Edit (Dec 8th 2016): I changed the accepted answer to pfa's which is, IMHO, an excellent solution based on the very good proposal from Eric S.", "output": "To people reading in 2022 and beyond: you should probably check out the currently next-highest-rated answer here: https://stackoverflow.com/a/35804945/1691778 My original answer is below. -- @Eric S. Eric S.'s answer is excellent, but I learned by experimentation that this will always cause messages logged at the new debug level to be printed -- regardless of what the log level is set to. So if you make a new level number of 9, if you call setLevel(50), the lower level messages will erroneously be printed. To prevent that from happening, you need another line inside the \"debugv\" function to check if the logging level in question is actually enabled. Fixed example that checks if the logging level is enabled: import logging DEBUG_LEVELV_NUM = 9 logging.addLevelName(DEBUG_LEVELV_NUM, \"DEBUGV\") def debugv(self, message, *args, **kws): if self.isEnabledFor(DEBUG_LEVELV_NUM): # Yes, logger takes its '*args' as 'args'. self._log(DEBUG_LEVELV_NUM, message, args, **kws) logging.Logger.debugv = debugv If you look at the code for class Logger in logging.__init__.py for Python 2.7, this is what all the standard log functions do (.critical, .debug, etc.). I apparently can't post replies to others' answers for lack of reputation... hopefully Eric will update his post if he sees this. =)"} +{"question_id": 25122099, "score": 194, "creation_date": 1407165691, "tags": ["python", "pandas", "move", "dataframe", "shift"], "instruction": "Move column by name to front of table in pandas\n\nHere is my df: Net Upper Lower Mid Zsore Answer option More than once a day 0% 0.22% -0.12% 2 65 Once a day 0% 0.32% -0.19% 3 45 Several times a week 2% 2.45% 1.10% 4 78 Once a week 1% 1.63% -0.40% 6 65 How can I move a column by name (\"Mid\") to the front of the table, index 0. 
This is what the result should look like: Mid Upper Lower Net Zsore Answer option More than once a day 2 0.22% -0.12% 0% 65 Once a day 3 0.32% -0.19% 0% 45 Several times a week 4 2.45% 1.10% 2% 78 Once a week 6 1.63% -0.40% 1% 65 My current code moves the column by index using df.columns.tolist() but I'd like to shift it by name.", "output": "We can use loc to reorder by passing a list: In [27]: # get a list of columns cols = list(df) # move the column to head of list using index, pop and insert cols.insert(0, cols.pop(cols.index('Mid'))) cols Out[27]: ['Mid', 'Net', 'Upper', 'Lower', 'Zsore'] In [28]: # use ix to reorder df = df.loc[:, cols] df Out[28]: Mid Net Upper Lower Zsore Answer_option More_than_once_a_day 2 0% 0.22% -0.12% 65 Once_a_day 3 0% 0.32% -0.19% 45 Several_times_a_week 4 2% 2.45% 1.10% 78 Once_a_week 6 1% 1.63% -0.40% 65 Another method is to take a reference to the column and reinsert it at the front: In [39]: mid = df['Mid'] df.drop(labels=['Mid'], axis=1,inplace = True) df.insert(0, 'Mid', mid) df Out[39]: Mid Net Upper Lower Zsore Answer_option More_than_once_a_day 2 0% 0.22% -0.12% 65 Once_a_day 3 0% 0.32% -0.19% 45 Several_times_a_week 4 2% 2.45% 1.10% 78 Once_a_week 6 1% 1.63% -0.40% 65 You can, with very early versions of Pandas, also use ix to achieve the same results: df = df.ix[:, cols] But ix was deprecated from pandas 0.20.0 onwards and was discontinued as of Pandas 1.0."} +{"question_id": 14007545, "score": 194, "creation_date": 1356219957, "tags": ["python", "regex", "regex-group"], "instruction": "Python Regex instantly replace groups\n\nIs there any way to directly replace all groups using regex syntax? The normal way: re.match(r\"(?:aaa)(_bbb)\", string1).group(1) But I want to achieve something like this: re.match(r\"(\\d.*?)\\s(\\d.*?)\", \"(CALL_GROUP_1) (CALL_GROUP_2)\") I want to build the new string instantaneously from the groups the Regex just captured.", "output": "Have a look at re.sub: result = re.sub(r\"(\\d.*?)\\s(\\d.*?)\", r\"\\1 \\2\", string1) This is Python's regex substitution (replace) function. The replacement string can be filled with so-called backreferences (backslash, group number) which are replaced with what was matched by the groups. Groups are counted the same as by the group(...) function, i.e. starting from 1, from left to right, by opening parentheses."} +{"question_id": 33738467, "score": 194, "creation_date": 1447686339, "tags": ["python", "flask", "flask-sqlalchemy"], "instruction": "How do I know if I can disable SQLALCHEMY_TRACK_MODIFICATIONS?\n\nEvery time I run my app that uses Flask-SQLAlchemy I get the following warning that the SQLALCHEMY_TRACK_MODIFICATIONS option will be disabled. /home/david/.virtualenvs/flask-sqlalchemy/lib/python3.5/site-packages/flask_sqlalchemy/__init__.py:800: UserWarning: SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and will be disabled by default in the future. Set it to True to suppress this warning. warnings.warn('SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and will be disabled by default in the future. Set it to True to suppress this warning.') I tried to find out what this option does, but the Flask-SQLAlchemy documentation isn't clear about what uses this tracking. SQLALCHEMY_TRACK_MODIFICATIONS If set to True (the default) Flask-SQLAlchemy will track modifications of objects and emit signals. This requires extra memory and can be disabled if not needed. 
How do I find out if my project requires SQLALCHEMY_TRACK_MODIFICATIONS = True or if I can safely disable this feature and save memory on my server?", "output": "Most likely your application doesn't use the Flask-SQLAlchemy event system, so you're probably safe to turn off. You'll need to audit the code to verify--you're looking for anything that hooks into models_committed or before_models_committed. If you do find that you're using the Flask-SQLAlchemy event system, you probably should update the code to use SQLAlchemy's built-in event system instead. The default value as of Flask-SQLAlchemy 2.1 is None, which is a falsy value, so the event system is disabled. In older versions, the default value was True, so you'll need to explicitly disable it. However, in both cases, the warning won't be silenced until this is explicitly set to False. To do that, add: SQLALCHEMY_TRACK_MODIFICATIONS = False to your app config. Background--here's what the warning is telling you: Flask-SQLAlchemy has its own event notification system that gets layered on top of SQLAlchemy. To do this, it tracks modifications to the SQLAlchemy session. This takes extra resources, so the option SQLALCHEMY_TRACK_MODIFICATIONS allows you to disable the modification tracking system. The rationale for the change is three-fold: Not many people use Flask-SQLAlchemy's event system, but most people don't realize they can save system resources by disabling it. So a saner default is to disable it and those who want it can turn it on. The event system in Flask-SQLAlchemy has been rather buggy (see issues linked to in the pull request mentioned below), requiring additional maintenance for a feature that few people use. In v0.7, SQLAlchemy itself added a powerful event system including the ability to create custom events. Ideally, the Flask-SQLAlchemy event system should do nothing more than create a few custom SQLAlchemy event hooks and listeners, and then let SQLAlchemy itself manage the event trigger. You can see more in the discussion around the pull request that started triggering this warning."} +{"question_id": 37642573, "score": 194, "creation_date": 1465134064, "tags": ["python", "loops", "for-loop", "while-loop"], "instruction": "How can I make sense of the `else` clause of Python loops?\n\nMany Python programmers are probably unaware that the syntax of while loops and for loops includes an optional else: clause: for val in iterable: do_something(val) else: clean_up() The body of the else clause is a good place for certain kinds of clean-up actions, and is executed on normal termination of the loop: I.e., exiting the loop with return or break skips the else clause; exiting after a continue executes it. I know this only because I just looked it up (yet again), because I can never remember when the else clause is executed. Always? On \"failure\" of the loop, as the name suggests? On regular termination? Even if the loop is exited with return? I can never be entirely sure without looking it up. I blame my persisting uncertainty on the choice of keyword: I find else incredibly unmnemonic for this semantics. My question is not \"why is this keyword used for this purpose\" (which I would probably vote to close, though only after reading the answers and comments), but how can I think about the else keyword so that its semantics make sense, and I can therefore remember it? 
I'm sure there was a fair amount of discussion about this, and I can imagine that the choice was made for consistency with the try statement's else: clause (which I also have to look up), and with the goal of not adding to the list of Python's reserved words. Perhaps the reasons for choosing else will clarify its function and make it more memorable, but I'm after connecting name to function, not after historical explanation per se. The answers to this question, which my question was briefly closed as a duplicate of, contain a lot of interesting back story. My question has a different focus (how to connect the specific semantics of else with the keyword choice), but I feel there should be a link to this question somewhere.", "output": "The while statement with an else clause while condition: iteration else: conclusion is exactly equivalent to while True: if not condition: conclusion break iteration The for statement with an else clause for item in iterable: iteration else: conclusion is exactly equivalent to iterator = iter(iterable) while True: try: item = next(iterator) except StopIteration: conclusion break iteration It helps understand the effect of a break or continue statement in the iteration statement. Note. \u2014 For the while and for statements without an else clause, replace the conclusion statement with a pass statement in the equivalent code."} +{"question_id": 23951641, "score": 194, "creation_date": 1401443436, "tags": ["python", "python-2.7", "enums", "int", "type-conversion"], "instruction": "How to convert int to Enum in python?\n\nUsing the new Enum feature (via backport enum34) with python 2.7.6. Given the following definition, how can I convert an int to the corresponding Enum value? from enum import Enum class Fruit(Enum): Apple = 4 Orange = 5 Pear = 6 I know I can hand craft a series of if-statements to do the conversion but is there an easy pythonic way to convert? Basically, I'd like a function ConvertIntToFruit(int) that returns an enum value. My use case is I have a csv file of records where I'm reading each record into an object. One of the file fields is an integer field that represents an enumeration. As I'm populating the object I'd like to convert that integer field from the file into the corresponding Enum value in the object.", "output": "You 'call' the Enum class: Fruit(5) to turn 5 into Fruit.Orange: >>> from enum import Enum >>> class Fruit(Enum): ... Apple = 4 ... Orange = 5 ... Pear = 6 ... >>> Fruit(5) <Fruit.Orange: 5> From the Programmatic access to enumeration members and their attributes section of the documentation: Sometimes it\u2019s useful to access members in enumerations programmatically (i.e. situations where Color.red won\u2019t do because the exact color is not known at program-writing time). Enum allows such access: >>> Color(1) <Color.red: 1> >>> Color(3) <Color.blue: 3> In a related note: to map a string value containing the name of an enum member, use subscription: >>> s = 'Apple' >>> Fruit[s] <Fruit.Apple: 4>"} +{"question_id": 19158339, "score": 194, "creation_date": 1380800696, "tags": ["python", "global-variables", "side-effects"], "instruction": "Why are global variables evil?\n\nWhy is the global keyword considered bad practice in Python (and programming in general)? Links with more information would also be appreciated.", "output": "This has nothing to do with Python; global variables are bad in any programming language. 
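For a tiny illustration of the hidden-side-effect problem described below (a hypothetical counter, purely for illustration): _count = 0 # module-level global def record_visit(): global _count _count += 1 # mutates state far away from the call site, invisible to callers def record_visit_explicit(count): return count + 1 # the same logic with the data flow visible at the call site 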
However, global constants are not conceptually the same as global variables; global constants are perfectly harmless. In Python the distinction between the two is purely by convention: CONSTANTS_ARE_CAPITALIZED and globals_are_not. The reason global variables are bad is that they enable functions to have hidden (non-obvious, surprising, hard to detect, hard to diagnose) side effects, leading to an increase in complexity, potentially leading to spaghetti code. However, sane use of global state is acceptable (as is local state and mutability) even in functional programming, either for algorithm optimization, reduced complexity, caching and memoization, or the practicality of porting structures originating in a predominantly imperative codebase. All in all, your question can be answered in many ways, so your best bet is to just google \"why are global variables bad\". Some examples: Global Variables Are Bad - Wiki Wiki Web Why is Global State so Evil? - Software Engineering Stack Exchange What are the pros and cons in use of global variables? If you want to go deeper and find out why side effects are all about, and many other enlightening things, you should learn functional programming: Side effect (computer science) - Wikipedia Why are side-effects considered evil in functional programming? - Software Engineering Stack Exchange Functional programming - Wikipedia"} +{"question_id": 4441539, "score": 194, "creation_date": 1292344355, "tags": ["python", "django", "django-models", "django-validation"], "instruction": "Why doesn't django's model.save() call full_clean()?\n\nI'm just curious if anyone knows if there's good reason why Django's ORM doesn't call full_clean on a model unless it is being saved as part of a model form? Note that full_clean() will not be called automatically when you call your model\u2019s save() method. You\u2019ll need to call it manually when you want to run one-step model validation for your own manually created models. - from Django's full_clean doc (NOTE: quote updated for Django 1.6... previous Django docs had a caveat about ModelForms as well.) Are there good reasons why people wouldn't want this behavior? I'd think if you took the time to add validation to a model, you'd want that validation run every time the model is saved. I know how to get everything to work properly, I'm just looking for an explanation.", "output": "AFAIK, this is because of backwards compatibility. There are also problems with ModelForms with excluded fields, models with default values, pre_save() signals, etc. Sources you might be intrested in: http://code.djangoproject.com/ticket/13100 http://groups.google.com/group/django-developers/browse_frm/thread/b888734b05878f87"} +{"question_id": 4617291, "score": 193, "creation_date": 1294332185, "tags": ["python", "sql", "mysql", "sqlalchemy"], "instruction": "How do I get a raw, compiled SQL query from a SQLAlchemy expression?\n\nI have a SQLAlchemy query object and want to get the text of the compiled SQL statement, with all its parameters bound (e.g. no %s or other variables waiting to be bound by the statement compiler or MySQLdb dialect engine, etc). Calling str() on the query reveals something like this: SELECT id WHERE date_added <= %s AND date_added >= %s ORDER BY count DESC I've tried looking in query._params but it's an empty dict. I wrote my own compiler using this example of the sqlalchemy.ext.compiler.compiles decorator but even the statement there still has %s where I want data. 
I can't quite figure out when my parameters get mixed in to create the query; when examining the query object they're always an empty dictionary (though the query executes fine and the engine prints it out when you turn echo logging on). I'm starting to get the message that SQLAlchemy doesn't want me to know the underlying query, as it breaks the general nature of the expression API's interface all the different DB-APIs. I don't mind if the query gets executed before I found out what it was; I just want to know!", "output": "This blogpost by Nicolas Cadou provides an updated answer. Quoting from the blog post, this is suggested and worked for me: from sqlalchemy.dialects import postgresql print str(q.statement.compile(dialect=postgresql.dialect())) Where q is defined as: q = DBSession.query(model.Name).distinct(model.Name.value) \\ .order_by(model.Name.value) Or just any kind of session.query()."} +{"question_id": 15033511, "score": 193, "creation_date": 1361568590, "tags": ["python", "numpy", "scipy", "statistics", "confidence-interval"], "instruction": "Compute a confidence interval from sample data\n\nI have sample data which I would like to compute a confidence interval for, assuming a normal distribution. I have found and installed the numpy and scipy packages and have gotten numpy to return a mean and standard deviation (numpy.mean(data) with data being a list). Any advice on getting a sample confidence interval would be much appreciated.", "output": "import numpy as np import scipy.stats def mean_confidence_interval(data, confidence=0.95): a = 1.0 * np.array(data) n = len(a) m, se = np.mean(a), scipy.stats.sem(a) h = se * scipy.stats.t.ppf((1 + confidence) / 2., n-1) return m, m-h, m+h You can calculate like this."} +{"question_id": 1080411, "score": 193, "creation_date": 1246645796, "tags": ["python", "web-scraping", "hyperlink", "beautifulsoup"], "instruction": "retrieve links from web page using python and BeautifulSoup\n\nHow can I retrieve the links of a webpage and copy the url address of the links using Python?", "output": "Here's a short snippet using the SoupStrainer class in BeautifulSoup: import httplib2 from bs4 import BeautifulSoup, SoupStrainer http = httplib2.Http() status, response = http.request('http://www.nytimes.com') for link in BeautifulSoup(response, 'html.parser', parse_only=SoupStrainer('a')): if link.has_attr('href'): print(link['href']) The BeautifulSoup documentation is actually quite good, and covers a number of typical scenarios: BeautifulSoup documentation Edit: Note that I used the SoupStrainer class because it's a bit more efficient (memory and speed wise), if you know what you're parsing in advance."} +{"question_id": 4768941, "score": 193, "creation_date": 1295712860, "tags": ["python", "coding-style", "pep8"], "instruction": "How can I break a line of chained methods in Python?\n\nI have a line of the following code (don't blame for naming conventions, they are not mine): subkeyword = Session.query( Subkeyword.subkeyword_id, Subkeyword.subkeyword_word ).filter_by( subkeyword_company_id=self.e_company_id ).filter_by( subkeyword_word=subkeyword_word ).filter_by( subkeyword_active=True ).one() I don't like how it looks like (not too readable), but I don't have any better idea to limit lines to 79 characters in this situation. 
Is there a better way of breaking it (preferably without backslashes)?", "output": "You could use additional parentheses: subkeyword = ( Session.query(Subkeyword.subkeyword_id, Subkeyword.subkeyword_word) .filter_by(subkeyword_company_id=self.e_company_id) .filter_by(subkeyword_word=subkeyword_word) .filter_by(subkeyword_active=True) .one() )"} +{"question_id": 12737740, "score": 193, "creation_date": 1349395596, "tags": ["python", "python-requests"], "instruction": "Python Requests and persistent sessions\n\nI am using the requests module. I have figured out how to submit data to a login form on a website and retrieve the session key, but I can't see an obvious way to use this session key in subsequent requests. Can someone fill in the ellipsis in the code below or suggest another approach? >>> import requests >>> login_data = {'formPosted': '1', 'login_email': 'me@example.com', 'password': 'pw'} >>> r = requests.post('https://localhost/login.py', login_data) >>> >>> r.text 'You are being redirected <a href=\"profilePage?_ck=1349394964\">here</a>' >>> r.cookies {'session_id_myapp': '127-0-0-1-825ff22a-6ed1-453b-aebc-5d3cf2987065'} >>> >>> r2 = requests.get('https://localhost/profile_data.json', ...)", "output": "You can easily create a persistent session using: s = requests.Session() After that, continue with your requests as you would: s.post('https://localhost/login.py', login_data) # logged in! cookies saved for future requests. r2 = s.get('https://localhost/profile_data.json', ...) # cookies sent automatically! # do whatever, s will keep your cookies intact :) For more about Sessions: https://requests.readthedocs.io/en/latest/user/advanced/#session-objects"} +{"question_id": 38542419, "score": 193, "creation_date": 1469281150, "tags": ["python", "pandas"], "instruction": "Could pandas use column as index?\n\nI have a spreadsheet like this: Locality 2005 2006 2007 2008 2009 ABBOTSFORD 427000 448000 602500 600000 638500 ABERFELDIE 534000 600000 735000 710000 775000 AIREYS INLET459000 440000 430000 517500 512500 I don't want to manually swap the column with the row. Could it be possible to use pandas reading data to a list as this: data['ABBOTSFORD']=[427000,448000,602500,600000,638500] data['ABERFELDIE']=[534000,600000,735000,710000,775000] data['AIREYS INLET']=[459000,440000,430000,517500,512500]", "output": "Yes, with pandas.DataFrame.set_index you can make 'Locality' your row index. data.set_index('Locality', inplace=True) If inplace=True is not provided, set_index returns the modified dataframe as a result. Example: > import pandas as pd > df = pd.DataFrame([['ABBOTSFORD', 427000, 448000], ['ABERFELDIE', 534000, 600000]], columns=['Locality', 2005, 2006]) > df Locality 2005 2006 0 ABBOTSFORD 427000 448000 1 ABERFELDIE 534000 600000 > df.set_index('Locality', inplace=True) > df 2005 2006 Locality ABBOTSFORD 427000 448000 ABERFELDIE 534000 600000 > df.loc['ABBOTSFORD'] 2005 427000 2006 448000 Name: ABBOTSFORD, dtype: int64 > df.loc['ABBOTSFORD'][2005] 427000 > df.loc['ABBOTSFORD'].values array([427000, 448000]) > df.loc['ABBOTSFORD'].tolist() [427000, 448000]"} +{"question_id": 30487767, "score": 193, "creation_date": 1432742903, "tags": ["python", "argparse"], "instruction": "Check if argparse optional argument is set or not\n\nI would like to check whether an optional argparse argument has been set by the user or not. Can I safely check using isset? 
Something like this: if(isset(args.myArg)): #do something else: #do something else Does this work the same for float / int / string type arguments? I could set a default parameter and check it (e.g., set myArg = -1, or \"\" for a string, or \"NOT_SET\"). However, the value I ultimately want to use is only calculated later in the script. So I would be setting it to -1 as a default, and then updating it to something else later. This seems a little clumsy in comparison with simply checking if the value was set by the user.", "output": "I think that optional arguments (specified with --) are initialized to None if they are not supplied. So you can test with is not None. Try the example below: import argparse def main(): parser = argparse.ArgumentParser(description=\"My Script\") parser.add_argument(\"--myArg\") args, leftovers = parser.parse_known_args() if args.myArg is not None: print \"myArg has been set (value is %s)\" % args.myArg"} +{"question_id": 33813815, "score": 193, "creation_date": 1447965059, "tags": ["python", "pandas", "dataframe", "parquet", "blaze"], "instruction": "How to read a Parquet file into Pandas DataFrame?\n\nHow to read a modestly sized Parquet data-set into an in-memory Pandas DataFrame without setting up a cluster computing infrastructure such as Hadoop or Spark? This is only a moderate amount of data that I would like to read in-memory with a simple Python script on a laptop. The data does not reside on HDFS. It is either on the local file system or possibly in S3. I do not want to spin up and configure other services like Hadoop, Hive or Spark. I thought Blaze/Odo would have made this possible: the Odo documentation mentions Parquet, but the examples seem all to be going through an external Hive runtime.", "output": "pandas 0.21 introduces new functions for Parquet: import pandas as pd pd.read_parquet('example_pa.parquet', engine='pyarrow') or import pandas as pd pd.read_parquet('example_fp.parquet', engine='fastparquet') The above link explains: These engines are very similar and should read/write nearly identical parquet format files. These libraries differ by having different underlying dependencies (fastparquet by using numba, while pyarrow uses a c-library)."} +{"question_id": 59129812, "score": 193, "creation_date": 1575233173, "tags": ["python", "deep-learning", "pytorch", "object-detection", "low-memory"], "instruction": "How to avoid \"CUDA out of memory\" in PyTorch\n\nI think it's a pretty common message for PyTorch users with low GPU memory: RuntimeError: CUDA out of memory. Tried to allocate X MiB (GPU X; X GiB total capacity; X GiB already allocated; X MiB free; X cached) I tried to process an image by loading each layer to GPU and then loading it back: for m in self.children(): m.cuda() x = m(x) m.cpu() torch.cuda.empty_cache() But it doesn't seem to be very effective. I'm wondering is there any tips and tricks to train large deep learning models while using little GPU memory.", "output": "Although import torch torch.cuda.empty_cache() provides a good alternative for clearing the occupied cuda memory and we can also manually clear the not in use variables by using, import gc del variables gc.collect() But still after using these commands, the error might appear again because pytorch doesn't actually clears the memory instead clears the reference to the memory occupied by the variables. So reducing the batch_size after restarting the kernel and finding the optimum batch_size is the best possible option (but sometimes not a very feasible one). 
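If even a small batch size is not feasible for the model you want to train, gradient accumulation is a common workaround (a rough sketch; model, criterion, optimizer and loader are assumed to already exist and the names are illustrative): accum_steps = 4 # effective batch = loader batch size * accum_steps optimizer.zero_grad() for i, (inputs, targets) in enumerate(loader): loss = criterion(model(inputs.cuda()), targets.cuda()) / accum_steps # scale so the accumulated gradient averages correctly loss.backward() # gradients add up across the small batches if (i + 1) % accum_steps == 0: optimizer.step() optimizer.zero_grad() This keeps the per-step memory of a small batch while approximating the update of a larger one. 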
Another way to get a deeper insight into the allocation of memory in gpu is to use: torch.cuda.memory_summary(device=None, abbreviated=False) wherein both the arguments are optional. This gives a readable summary of memory allocation and allows you to figure out the reason CUDA is running out of memory, and to restart the kernel to avoid the error from happening again (just like I did in my case). Passing the data iteratively might help, but changing the size of the layers of your network or breaking them down would also prove effective (as sometimes the model also occupies a significant amount of memory, for example, while doing transfer learning)."} +{"question_id": 50186904, "score": 193, "creation_date": 1525505379, "tags": ["python", "directory", "pathlib"], "instruction": "PathLib recursively remove directory?\n\nIs there any way to remove a directory and its contents in the PathLib module? With path.unlink() it only removes a file, with path.rmdir() the directory has to be empty. Is there no way to do it in one function call?", "output": "As you already know, the only two Path methods for removing files/directories are .unlink() and .rmdir() and neither does what you want. Pathlib is a module that provides object oriented paths across different OS's; it isn't meant to have lots of diverse methods. The aim of this library is to provide a simple hierarchy of classes to handle filesystem paths and the common operations users do over them. The \"uncommon\" file system alterations, such as recursively removing a directory, are stored in different modules. If you want to recursively remove a directory, you should use the shutil module. (It works with Path instances too!) import shutil import pathlib import os # for checking results print(os.listdir()) # [\"a_directory\", \"foo.py\", ...] path = pathlib.Path(\"a_directory\") shutil.rmtree(path) print(os.listdir()) # [\"foo.py\", ...]"} +{"question_id": 6800193, "score": 192, "creation_date": 1311422418, "tags": ["python", "algorithm", "performance", "factorization"], "instruction": "What is the most efficient way of finding all the factors of a number in Python?\n\nCan someone explain to me an efficient way of finding all the factors of a number in Python (2.7)? I can create an algorithm to do this, but I think it is poorly coded and takes too long to produce a result for a large number.", "output": "from functools import reduce def factors(n): return set(reduce( list.__add__, ([i, n//i] for i in range(1, int(n**0.5) + 1) if n % i == 0))) This will return all of the factors, very quickly, of a number n. Why square root as the upper limit? sqrt(x) * sqrt(x) = x. So if the two factors are the same, they're both the square root. If you make one factor bigger, you have to make the other factor smaller. This means that one of the two will always be less than or equal to sqrt(x), so you only have to search up to that point to find one of the two matching factors. You can then use x // fac1 to get fac2. The reduce(list.__add__, ...) is taking the little lists of [fac1, fac2] and joining them together in one long list. The [i, n//i] for i in range(1, int(sqrt(n)) + 1) if n % i == 0 returns a pair of factors if the remainder when you divide n by the smaller one is zero (it doesn't need to check the larger one too; it just gets that by dividing n by the smaller one.) The set(...) on the outside is getting rid of duplicates, which only happens for perfect squares. 
For n = 4, this will return 2 twice, so set gets rid of one of them."} +{"question_id": 26545051, "score": 192, "creation_date": 1414142433, "tags": ["python", "memory-management", "dir"], "instruction": "Is there a way to delete created variables, functions, etc from the memory of the interpreter?\n\nI've been searching for the accurate answer to this question for a couple of days now but haven't got anything good. I'm not a complete beginner in programming, but not yet even on the intermediate level. When I'm in the shell of Python, I type: dir() and I can see all the names of all the objects in the current scope (main block), there are 6 of them: ['__builtins__', '__doc__', '__loader__', '__name__', '__package__', '__spec__'] Then, when I'm declaring a variable, for example x = 10, it automatically adds to that lists of objects under built-in module dir(), and when I type dir() again, it shows now: ['__builtins__', '__doc__', '__loader__', '__name__', '__package__', '__spec__', 'x'] The same goes for functions, classes and so on. How do I delete all those new objects without erasing the standard 6 which where available at the beginning? I've read here about \"memory cleaning\", \"cleaning of the console\", which erases all the text from the command prompt window: >>> import sys >>> clear = lambda: os.system('cls') >>> clear() But all this has nothing to do with what I'm trying to achieve, it doesn't clean out all used objects.", "output": "You can delete individual names with del: del x or you can remove them from the globals() object: for name in dir(): if not name.startswith('_'): del globals()[name] This is just an example loop; it defensively only deletes names that do not start with an underscore, making a (not unreasoned) assumption that you only used names without an underscore at the start in your interpreter. You could use a hard-coded list of names to keep instead (whitelisting) if you really wanted to be thorough. There is no built-in function to do the clearing for you, other than just exit and restart the interpreter. Modules you've imported (import os) are going to remain imported because they are referenced by sys.modules; subsequent imports will reuse the already imported module object. You just won't have a reference to them in your current global namespace. Python doesn\u2019t make any security guarantees about data in memory however. When objects no longer are referenced the interpreter marks the memory as no longer in use but does not take steps to overwrite that memory to prevent access to data. 
If you need that level of security protection you\u2019ll need to use third-party extensions that manage their own memory with security in mind."} +{"question_id": 33625931, "score": 192, "creation_date": 1447145109, "tags": ["python", "file", "copy", "python-2.x", "pathlib"], "instruction": "Copy file with pathlib in Python\n\nI try to copy a file with pathlib import pathlib import shutil my_file=pathlib.Path('/etc/hosts') to_file=pathlib.Path('/tmp/foo') shutil.copy(my_file, to_file) I get this exception: /home/foo_egs_d/bin/python /home/foo_egs_d/src/test-pathlib-copy.py Traceback (most recent call last): File \"/home/foo_egs_d/src/test-pathlib-copy.py\", line 6, in <module> shutil.copy(my_file, to_file) File \"/usr/lib/python2.7/shutil.py\", line 117, in copy if os.path.isdir(dst): File \"/home/foo_egs_d/lib/python2.7/genericpath.py\", line 41, in isdir st = os.stat(s) TypeError: coercing to Unicode: need string or buffer, PosixPath found Process finished with exit code ... how to copy file with pathlib in Python 2.7?", "output": "To use shutil.copy: import pathlib import shutil my_file = pathlib.Path('/etc/hosts') to_file = pathlib.Path('/tmp/foo') shutil.copy(str(my_file), str(to_file)) # For Python <= 3.7. shutil.copy(my_file, to_file) # For Python 3.8+. The problem is pathlib.Path create a PosixPath object if you're using Unix/Linux, WindowsPath if you're using Microsoft Windows. With older versions of Python, shutil.copy requires a string as its arguments. For them, use the str function here."} +{"question_id": 138250, "score": 192, "creation_date": 1222416650, "tags": ["python", "graphics", "rgb", "pixel"], "instruction": "How to read the RGB value of a given pixel in Python?\n\nIf I open an image with open(\"image.jpg\"), how can I get the RGB values of a pixel assuming I have the coordinates of the pixel? Then, how can I do the reverse of this? Starting with a blank graphic, 'write' a pixel with a certain RGB value? I would prefer if I didn't have to download any additional libraries.", "output": "It's probably best to use the Python Image Library to do this which I'm afraid is a separate download. The easiest way to do what you want is via the load() method on the Image object which returns a pixel access object which you can manipulate like an array: from PIL import Image im = Image.open('dead_parrot.jpg') # Can be many different formats. pix = im.load() print im.size # Get the width and hight of the image for iterating over print pix[x,y] # Get the RGBA Value of the a pixel of an image pix[x,y] = value # Set the RGBA Value of the image (tuple) im.save('alive_parrot.png') # Save the modified pixels as .png Alternatively, look at ImageDraw which gives a much richer API for creating images."} +{"question_id": 21519203, "score": 192, "creation_date": 1391397712, "tags": ["python", "list", "matplotlib", "plot", "coordinates"], "instruction": "Plotting a list of (x, y) coordinates\n\nI have a list of pairs (a, b) that I would like to plot with matplotlib in python as actual x-y coordinates. Currently, it is making two plots, where the index of the list gives the x-coordinate, and the first plot's y values are the as in the pairs and the second plot's y values are the bs in the pairs. To clarify, my data looks like this: li = [(a,b), (c,d), ... , (t, u)] and I want to do a one-liner that just calls plt.plot(). 
If I didn't require a one-liner I could trivially do: xs = [x[0] for x in li] ys = [x[1] for x in li] plt.plot(xs, ys) How can I get matplotlib to plot these pairs as x-y coordinates? Sample data # sample data li = list(zip(range(1, 14), range(14, 27))) li \u2192 [(1, 14), (2, 15), (3, 16), (4, 17), (5, 18), (6, 19), (7, 20), (8, 21), (9, 22), (10, 23), (11, 24), (12, 25), (13, 26)] Incorrect Plot plt.plot(li) plt.title('Incorrect Plot:\\nEach index of the tuple plotted as separate lines') Desired Plot This produces the correct plot, but to many lines of code are used to unpack li. I need to unpack and plot with a single line of code, not multiple list-comprehensions. xs = [x[0] for x in li] ys = [x[1] for x in li] plt.plot(xs, ys) plt.title('Correct Plot:\\nBut uses to many lines to unpack li')", "output": "Given li in the question: li = list(zip(range(1, 14), range(14, 27))) To unpack the data from pairs into lists use zip: x, y = zip(*li) x \u2192 (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13) y \u2192 (14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26) The one-liner uses the unpacking operator (*), to unpack the list of tuples for zip, and unpacks the zip object into the plot API. plt.scatter(*zip(*li)) plt.plot(*zip(*li))"} +{"question_id": 42014484, "score": 192, "creation_date": 1486079480, "tags": ["python", "pytest", "parametrized-testing", "pytest-fixtures", "parametrize"], "instruction": "pytest using fixtures as arguments in parametrize\n\nI would like to use fixtures as arguments of pytest.mark.parametrize or something that would have the same results. For example: import pytest import my_package @pytest.fixture def dir1_fixture(): return '/dir1' @pytest.fixture def dir2_fixture(): return '/dir2' @pytest.parametrize('dirname, expected', [(dir1_fixture, 'expected1'), (dir2_fixture, 'expected2')]) def test_directory_command(dirname, expected): result = my_package.directory_command(dirname) assert result == expected The problem with fixture params is that every param of the fixture will get run every time it's used, but I don't want that. I want to be able to choose which fixtures will get used depending on the test.", "output": "Will was on the right path, you should use request.getfixturevalue to retrieve the fixture. But you can do it right in the test, which is simpler. @pytest.mark.parametrize('dirname, expected', [ ('dir1_fixture', 'expected1'), ('dir2_fixture', 'expected2')]) def test_directory_command(dirname, expected, request): result = my_package.directory_command(request.getfixturevalue(dirname)) assert result == expected Another way is to use lazy-fixture plugin: @pytest.mark.parametrize('dirname, expected', [ (pytest.lazy_fixture('dir1_fixture'), 'expected1'), (pytest.lazy_fixture('dir2_fixture'), 'expected2')]) def test_directory_command(dirname, expected): result = my_package.directory_command(dirname) assert result == expected"} +{"question_id": 2276200, "score": 191, "creation_date": 1266353188, "tags": ["python", "encoding", "utf-8", "console"], "instruction": "Changing default encoding of Python?\n\nI have many \"can't encode\" and \"can't decode\" problems with Python when I run my applications from the console. But in the Eclipse PyDev IDE, the default character encoding is set to UTF-8, and I'm fine. I searched around for setting the default encoding, and people say that Python deletes the sys.setdefaultencoding function on startup, and we can not use it. 
So what's the best solution for it?", "output": "Here is a simpler method (hack) that gives you back the setdefaultencoding() function that was deleted from sys: import sys # sys.setdefaultencoding() does not exist, here! reload(sys) # Reload does the trick! sys.setdefaultencoding('UTF8') (Note for Python 3.4+: reload() is in the importlib library.) This is not a safe thing to do, though: this is obviously a hack, since sys.setdefaultencoding() is purposely removed from sys when Python starts. Reenabling it and changing the default encoding can break code that relies on ASCII being the default (this code can be third-party, which would generally make fixing it impossible or dangerous). PS: This hack doesn't seem to work with Python 3.9 anymore."} +{"question_id": 36268749, "score": 191, "creation_date": 1459190293, "tags": ["python", "list", "list-comprehension"], "instruction": "How to remove multiple items from a list in just one statement?\n\nIn python, I know how to remove items from a list: item_list = ['item', 5, 'foo', 3.14, True] item_list.remove('item') item_list.remove(5) The above code removes the values 5 and 'item' from item_list. But when there is a lot of stuff to remove, I have to write many lines of: item_list.remove(\"something_to_remove\") If I know the index of what I am removing, I use: del item_list[x] where x is the index of the item I want to remove. If I know the index of all of the numbers that I want to remove, I'll use some sort of loop to del the items at the indices. But what if I don't know the indices of the items I want to remove? I tried item_list.remove('item', 'foo'), but I got an error saying that remove only takes one argument. Is there a way to remove multiple items from a list in a single statement? P.S. I've used del and remove. Can someone explain the difference between these two, or are they the same?", "output": "In Python, creating a new object e.g. with a list comprehension is often better than modifying an existing one: item_list = ['item', 5, 'foo', 3.14, True] item_list = [e for e in item_list if e not in ('item', 5)] ... which is equivalent to: item_list = ['item', 5, 'foo', 3.14, True] new_list = [] for e in item_list: if e not in ('item', 5): new_list.append(e) item_list = new_list In case of a big list of filtered out values (here, ('item', 5) is a small set of elements), using a set is faster as the in operation is O(1) time complexity on average. It's also a good idea to build the iterable you're removing first, so that you're not creating it on every iteration of the list comprehension: unwanted = {'item', 5} item_list = [e for e in item_list if e not in unwanted] A bloom filter is also a good solution if memory is not cheap."} +{"question_id": 35339139, "score": 191, "creation_date": 1455192998, "tags": ["python", "pandas", "dataframe", "datetime", "frequency"], "instruction": "What values are valid in Pandas 'Freq' tags?\n\nI am trying to use date_range. I came across some values valid for freq, like BME and BMS and I would like to be able to quickly look up the proper strings to get what I want. What values are valid in Pandas 'Freq' tags?", "output": "You can find it called Offset Aliases: A number of string aliases are given to useful common time series frequencies. We will refer to these aliases as offset aliases. 
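For example, here are a couple of these aliases passed to date_range; this is an illustrative sketch with made-up dates, and the spellings follow pandas 2.2+, where 'M' and 'H' were renamed to 'ME' and 'h':
import pandas as pd
pd.date_range('2024-01-01', periods=3, freq='ME')   # month ends: 2024-01-31, 2024-02-29, 2024-03-31
pd.date_range('2024-01-01', periods=3, freq='h')    # hourly: 00:00, 01:00, 02:00 on 2024-01-01
The full table of aliases from the documentation follows.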
Alias Description B business day frequency C custom business day frequency D calendar day frequency W weekly frequency ME month end frequency SME semi-month end frequency (15th and end of month) BME business month end frequency CBME custom business month end frequency MS month start frequency SMS semi-month start frequency (1st and 15th) BMS business month start frequency CBMS custom business month start frequency QE quarter end frequency BQE business quarter end frequency QS quarter start frequency BQS business quarter start frequency YE year end frequency BYE business year end frequency YS year start frequency BYS business year start frequency h hourly frequency bh business hour frequency cbh custom business hour frequency min minutely frequency s secondly frequency ms milliseconds us microseconds ns nanoseconds"} +{"question_id": 20002503, "score": 191, "creation_date": 1384523134, "tags": ["python", "boolean", "boolean-expression"], "instruction": "Why does \"a == x or y or z\" always evaluate to True? How can I compare \"a\" to all of those?\n\nI am writing a security system that denies access to unauthorized users. name = input(\"Hello. Please enter your name: \") if name == \"Kevin\" or \"Jon\" or \"Inbar\": print(\"Access granted.\") else: print(\"Access denied.\") It grants access to authorized users as expected, but it also lets in unauthorized users! Hello. Please enter your name: Bob Access granted. Why does this occur? I've plainly stated to only grant access when name equals Kevin, Jon, or Inbar. I have also tried the opposite logic, if \"Kevin\" or \"Jon\" or \"Inbar\" == name, but the result is the same. This question is intended as the canonical duplicate target of this very common problem. There is another popular question How to test multiple variables for equality against a single value? that has the same fundamental problem, but the comparison targets are reversed. This question should not be closed as a duplicate of that one as this problem is encountered by newcomers to Python who might have difficulties applying the knowledge from the reversed question to their problem. For in instead of ==, there are solutions here: How to test the membership of multiple values in a list", "output": "In many cases, Python looks and behaves like natural English, but this is one case where that abstraction fails. People can use context clues to determine that \"Jon\" and \"Inbar\" are objects joined to the verb \"equals\", but the Python interpreter is more literal minded. if name == \"Kevin\" or \"Jon\" or \"Inbar\": is logically equivalent to: if (name == \"Kevin\") or (\"Jon\") or (\"Inbar\"): Which, for user Bob, is equivalent to: if (False) or (\"Jon\") or (\"Inbar\"): The or operator chooses the first operand that is \"truthy\", i.e. which would satisfy an if condition (or the last one, if none of them are \"truthy\"): if \"Jon\": Since \"Jon\" is truthy, the if block executes. That is what causes \"Access granted\" to be printed regardless of the name given. All of this reasoning also applies to the expression if \"Kevin\" or \"Jon\" or \"Inbar\" == name. the first value, \"Kevin\", is true, so the if block executes. There are three common ways to properly construct this conditional. 
Use multiple == operators to explicitly check against each value: if name == \"Kevin\" or name == \"Jon\" or name == \"Inbar\": Compose a collection of valid values (a set, a list or a tuple for example), and use the in operator to test for membership: if name in {\"Kevin\", \"Jon\", \"Inbar\"}: Use any() and a generator expression to explicitly check against each value in a loop: if any(name == auth for auth in [\"Kevin\", \"Jon\", \"Inbar\"]): In general the second should be preferred as it's easier to read and also faster: >>> import timeit >>> timeit.timeit('name == \"Kevin\" or name == \"Jon\" or name == \"Inbar\"', setup=\"name='Inbar'\") 0.0960568820592016 >>> timeit.timeit('name in {\"Kevin\", \"Jon\", \"Inbar\"}', setup=\"name='Inbar'\") 0.034957461059093475 >>> timeit.timeit('any(name == auth for auth in [\"Kevin\", \"Jon\", \"Inbar\"])', setup=\"name='Inbar'\") 0.6511583919636905 For those who may want proof that if a == b or c or d or e: ... is indeed parsed like this. The built-in ast module provides an answer: >>> import ast >>> ast.parse(\"a == b or c or d or e\", \"<string>\", \"eval\") <ast.Expression object at 0x7f929c898220> >>> print(ast.dump(_, indent=4)) Expression( body=BoolOp( op=Or(), values=[ Compare( left=Name(id='a', ctx=Load()), ops=[ Eq()], comparators=[ Name(id='b', ctx=Load())]), Name(id='c', ctx=Load()), Name(id='d', ctx=Load()), Name(id='e', ctx=Load())])) As one can see, it's the boolean operator or applied to four sub-expressions: comparison a == b; and simple expressions c, d, and e."} +{"question_id": 1802971, "score": 191, "creation_date": 1259232474, "tags": ["python", "nameerror"], "instruction": "NameError: name 'self' is not defined\n\nWhy such structure class A: def __init__(self, a): self.a = a def p(self, b=self.a): print b gives an error NameError: name 'self' is not defined?", "output": "Default argument values are evaluated at function define-time, but self is an argument only available at function call time. Thus arguments in the argument list cannot refer each other. It's a common pattern to default an argument to None and add a test for that in code: def p(self, b=None): if b is None: b = self.a print b Update 2022: Python developers are now considering late-bound argument defaults for future Python versions."} +{"question_id": 5419389, "score": 190, "creation_date": 1300971006, "tags": ["python"], "instruction": "How to overwrite the previous print to stdout?\n\nIf I had the following code: for x in range(10): print(x) I would get the output of 1 2 etc.. What I would like to do is instead of printing a newline, I want to replace the previous value and overwrite it with the new value on the same line.", "output": "Simple Version One way is to use the carriage return ('\\r') character to return to the start of the line without advancing to the next line. Python 3 for x in range(10): print(x, end='\\r') print() Python 2.7 forward compatible from __future__ import print_function for x in range(10): print(x, end='\\r') print() Python 2.7 for x in range(10): print '{}\\r'.format(x), print Python 2.0-2.6 for x in range(10): print '{0}\\r'.format(x), print In the latter two (Python 2-only) cases, the comma at the end of the print statement tells it not to go to the next line. The last print statement advances to the next line so your prompt won't overwrite your final output. 
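One practical detail not covered above (a general note about terminal buffering, not something from the original answer): some consoles buffer output, so a progress display written this way can appear frozen unless you flush explicitly:
import time
for x in range(10):
    print(x, end='\\r', flush=True)   # flush=True makes the text show up immediately
    time.sleep(0.1)                   # stand-in for real work
print()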
Line Cleaning If you can\u2019t guarantee that the new line of text is not shorter than the existing line, then you just need to add a \u201cclear to end of line\u201d escape sequence, '\\x1b[1K' ('\\x1b' = ESC): for x in range(75): print('*' * (75 - x), x, end='\\x1b[1K\\r') print() Long Line Wrap All these methods assume you\u2019re not writing more than the length of the line. The carriage return only returns to the start of the current line, so if your output is longer than a line, you\u2019ll only erase the last line. If this is enough of a problem that you need to control it, you can disable line wrapping to keep the cursor from wrapping to the next line. (Instead, the cursor sticks to the end of the line, and successive characters overwrite.) Line wrap is disabled with print('\\x1b[7l', end='') and re-enabled with print('\\x1b[7h', end=''). Note that there is no automatic re-enable of line wrap at any point: don\u2019t leave the terminal broken if an exception ends your program!"} +{"question_id": 16627227, "score": 190, "creation_date": 1368899226, "tags": ["python", "http", "web-scraping", "http-status-code-403"], "instruction": "How do I avoid HTTP error 403 when web scraping with Python?\n\nWhen I try this code to scrape a web page: #import requests import urllib.request from bs4 import BeautifulSoup #from urllib import urlopen import re webpage = urllib.request.urlopen('http://www.cmegroup.com/trading/products/#sortField=oi&sortAsc=false&venues=3&page=1&cleared=1&group=1').read findrows = re.compile('<tr class=\"- banding(?:On|Off)>(.*?)</tr>') findlink = re.compile('<a href =\">(.*)</a>') row_array = re.findall(findrows, webpage) links = re.finall(findlink, webpate) print(len(row_array)) iterator = [] I get an error like: File \"C:\\Python33\\lib\\urllib\\request.py\", line 160, in urlopen return opener.open(url, data, timeout) File \"C:\\Python33\\lib\\urllib\\request.py\", line 479, in open response = meth(req, response) File \"C:\\Python33\\lib\\urllib\\request.py\", line 591, in http_response 'http', request, response, code, msg, hdrs) File \"C:\\Python33\\lib\\urllib\\request.py\", line 517, in error return self._call_chain(*args) File \"C:\\Python33\\lib\\urllib\\request.py\", line 451, in _call_chain result = func(*args) File \"C:\\Python33\\lib\\urllib\\request.py\", line 599, in http_error_default raise HTTPError(req.full_url, code, msg, hdrs, fp) urllib.error.HTTPError: HTTP Error 403: Forbidden Does the website think I'm a bot? How can I fix the problem?", "output": "This is probably because of mod_security or some similar server security feature which blocks known spider/bot user agents (urllib uses something like python urllib/3.3.0, it's easily detected). Try setting a known browser user agent with: from urllib.request import Request, urlopen req = Request( url='http://www.cmegroup.com/trading/products/#sortField=oi&sortAsc=false&venues=3&page=1&cleared=1&group=1', headers={'User-Agent': 'Mozilla/5.0'} ) webpage = urlopen(req).read() This works for me. By the way, in your code you are missing the () after .read in the urlopen line, but I think that it's a typo. TIP: since this is exercise, choose a different, non restrictive site. 
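If the third-party requests library happens to be available, the same User-Agent trick is a little shorter; this is just a sketch of an alternative, not part of the original answer:
import requests
resp = requests.get(
    'http://www.cmegroup.com/trading/products/#sortField=oi&sortAsc=false&venues=3&page=1&cleared=1&group=1',
    headers={'User-Agent': 'Mozilla/5.0'},
)
webpage = resp.content   # raw bytes, roughly what urlopen(req).read() gives you
Note that requests has its own default User-Agent, which some servers treat just as suspiciously, so the explicit header is still worth setting.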
Maybe they are blocking urllib for some reason..."} +{"question_id": 16874598, "score": 190, "creation_date": 1370102709, "tags": ["python", "md5", "md5sum"], "instruction": "How to calculate the MD5 checksum of a file in Python?\n\nI have written some code in Python that checks for an MD5 hash in a file and makes sure the hash matches that of the original. Here is what I have developed: # Defines filename filename = \"file.exe\" # Gets MD5 from file def getmd5(filename): return m.hexdigest() md5 = dict() for fname in filename: md5[fname] = getmd5(fname) # If statement for alerting the user whether the checksum passed or failed if md5 == '>md5 will go here<': print(\"MD5 Checksum passed. You may now close this window\") input (\"press enter\") else: print(\"MD5 Checksum failed. Incorrect MD5 in file 'filename'. Please download a new copy\") input(\"press enter\") exit But whenever I run the code, I get the following error: Traceback (most recent call last): File \"C:\\Users\\Username\\md5check.py\", line 13, in <module> md5[fname] = getmd5(fname) File \"C:\\Users\\Username\\md5check.py, line 9, in getmd5 return m.hexdigest() NameError: global name 'm' is not defined Is there anything I am missing in my code?", "output": "In regards to your error and what's missing in your code. m is a name which is not defined for getmd5() function. No offence, I know you are a beginner, but your code is all over the place. Let's look at your issues one by one :) First, you are not using hashlib.md5.hexdigest() method correctly. Please refer explanation on hashlib functions in Python Doc Library. The correct way to return MD5 for provided string is to do something like this: >>> import hashlib >>> hashlib.md5(\"example string\").hexdigest() '2a53375ff139d9837e93a38a279d63e5' However, you have a bigger problem here. You are calculating MD5 on a file name string, where in reality MD5 is calculated based on file contents. You will need to basically read file contents and pipe it though MD5. My next example is not very efficient, but something like this: >>> import hashlib >>> hashlib.md5(open('filename.exe','rb').read()).hexdigest() 'd41d8cd98f00b204e9800998ecf8427e' As you can clearly see second MD5 hash is totally different from the first one. The reason for that is that we are pushing contents of the file through, not just file name. A simple solution could be something like that: # Import hashlib library (md5 method is part of it) import hashlib # File to check file_name = 'filename.exe' # Correct original md5 goes here original_md5 = '5d41402abc4b2a76b9719d911017c592' # Open,close, read file and calculate MD5 on its contents with open(file_name, 'rb') as file_to_check: # read contents of the file data = file_to_check.read() # pipe contents of the file through md5_returned = hashlib.md5(data).hexdigest() # Finally compare original MD5 with freshly calculated if original_md5 == md5_returned: print \"MD5 verified.\" else: print \"MD5 verification failed!.\" Please look at the post Python: Generating a MD5 checksum of a file. It explains in detail a couple of ways how it can be achieved efficiently. 
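The usual pattern from those discussions is to feed the hash in fixed-size chunks so a large file never has to sit in memory at once; here is a small sketch (the 8192-byte chunk size is an arbitrary choice):
import hashlib

def md5_of_file(path, chunk_size=8192):
    md5 = hashlib.md5()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):   # keep reading until EOF yields b''
            md5.update(chunk)
    return md5.hexdigest()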
Best of luck."} +{"question_id": 30787901, "score": 190, "creation_date": 1434044915, "tags": ["python", "pandas", "dataframe"], "instruction": "How to get a value from a Pandas DataFrame and not the index and object type\n\nSay I have the following DataFrame Letter Number A 1 B 2 C 3 D 4 Which can be obtained through the following code import pandas as pd letters = pd.Series(('A', 'B', 'C', 'D')) numbers = pd.Series((1, 2, 3, 4)) keys = ('Letters', 'Numbers') df = pd.concat((letters, numbers), axis=1, keys=keys) Now I want to get the value C from the column Letters. The command line df[df.Letters=='C'].Letters will return 2 C Name: Letters, dtype: object How can I get only the value C and not the whole two line output?", "output": "df[df.Letters=='C'].Letters.item() This returns the first element in the Index/Series returned from that selection. In this case, the value is always the first element. EDIT: Or you can run a loc() and access the first element that way. This was shorter and is the way I have implemented it in the past. Pandas Index doc Pandas Series doc"} +{"question_id": 42579908, "score": 190, "creation_date": 1488546959, "tags": ["python", "pandas", "correlation"], "instruction": "Use .corr to get the correlation between two columns\n\nI have the following pandas dataframe Top15: I create a column that estimates the number of citable documents per person: Top15['PopEst'] = Top15['Energy Supply'] / Top15['Energy Supply per Capita'] Top15['Citable docs per Capita'] = Top15['Citable documents'] / Top15['PopEst'] I want to know the correlation between the number of citable documents per capita and the energy supply per capita. So I use the .corr() method (Pearson's correlation): data = Top15[['Citable docs per Capita','Energy Supply per Capita']] correlation = data.corr(method='pearson') I want to return a single number, but the result is:", "output": "Without actual data it is hard to answer the question but I guess you are looking for something like this: Top15['Citable docs per Capita'].corr(Top15['Energy Supply per Capita']) That calculates the correlation between your two columns 'Citable docs per Capita' and 'Energy Supply per Capita'. To give an example: import pandas as pd df = pd.DataFrame({'A': range(4), 'B': [2*i for i in range(4)]}) A B 0 0 0 1 1 2 2 2 4 3 3 6 Then df['A'].corr(df['B']) gives 1 as expected. Now, if you change a value, e.g. df.loc[2, 'B'] = 4.5 A B 0 0 0.0 1 1 2.0 2 2 4.5 3 3 6.0 the command df['A'].corr(df['B']) returns 0.99586 which is still close to 1, as expected. If you apply .corr() directly to your dataframe, it will return all pairwise correlations between your columns; that's why you then observe 1s at the diagonal of your matrix (each column is perfectly correlated with itself). df.corr() will therefore return A B A 1.000000 0.995862 B 0.995862 1.000000 In the graphic you show, only the upper left corner of the correlation matrix is represented (I assume). There can be cases, where you get NaNs in your solution - check this post for an example. If you want to filter entries above/below a certain threshold, you can check this question. 
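As a rough sketch of that kind of thresholding, reusing the toy df from above (the 0.9 cutoff is an arbitrary example value):
corr_matrix = df.corr()
strong = corr_matrix[corr_matrix.abs() > 0.9]   # entries not above the cutoff become NaN (in this tiny example everything survives, since the only pair is highly correlated)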
If you want to plot a heatmap of the correlation coefficients, you can check this answer and if you then run into the issue with overlapping axis-labels check the following post."} +{"question_id": 5424716, "score": 189, "creation_date": 1300996313, "tags": ["python", "input", "types", "numbers"], "instruction": "How can I check if string input is a number?\n\nHow do I check if a user's string input is a number (e.g., -1, 0, 1, etc.)? user_input = input(\"Enter something:\") if type(user_input) == int: print(\"Is a number\") else: print(\"Not a number\") The above won't work since input always returns a string.", "output": "Simply try converting it to an int and then bailing out if it doesn't work. try: val = int(userInput) except ValueError: print(\"That's not an int!\") See Handling Exceptions in the official tutorial."} +{"question_id": 4764932, "score": 189, "creation_date": 1295652468, "tags": ["python", "image", "python-imaging-library", "exif"], "instruction": "In Python, how do I read the exif data for an image?\n\nI'm using PIL. How do I turn the EXIF data of a picture into a dictionary?", "output": "You can use the _getexif() protected method of a PIL Image. import PIL.Image img = PIL.Image.open('img.jpg') exif_data = img._getexif() This should give you a dictionary indexed by EXIF numeric tags. If you want the dictionary indexed by the actual EXIF tag name strings, try something like: import PIL.ExifTags exif = { PIL.ExifTags.TAGS[k]: v for k, v in img._getexif().items() if k in PIL.ExifTags.TAGS }"} +{"question_id": 12007686, "score": 188, "creation_date": 1345213614, "tags": ["python", "string", "list"], "instruction": "Join a list of strings in python and wrap each string in quotation marks\n\nI've got: words = ['hello', 'world', 'you', 'look', 'nice'] I want to have: '\"hello\", \"world\", \"you\", \"look\", \"nice\"' What's the easiest way to do this with Python?", "output": "Update 2021: With f strings in Python3 >>> words = ['hello', 'world', 'you', 'look', 'nice'] >>> ', '.join(f'\"{w}\"' for w in words) '\"hello\", \"world\", \"you\", \"look\", \"nice\"' Original Answer (Supports Python 2.6+) >>> words = ['hello', 'world', 'you', 'look', 'nice'] >>> ', '.join('\"{0}\"'.format(w) for w in words) '\"hello\", \"world\", \"you\", \"look\", \"nice\"'"} +{"question_id": 5189199, "score": 188, "creation_date": 1299207874, "tags": ["python", "pip", "easy-install"], "instruction": "Bypass confirmation prompt for pip uninstall\n\nI'm trying to uninstall all django packages in my superuser environment to ensure that all my webapp dependencies are installed to my virtualenv. sudo su sudo pip freeze | grep -E '^django-' | xargs pip -q uninstall But pip wants to confirm every package uninstall, and there doesn't seem to be a -y option for pip. Is there a better way to uninstall a batch of python modules? Is rm -rf .../site-packages/ a proper way to go? Is there an easy_install alternative? Alternatively, would it be better to force pip to install all dependencies to the virtualenv rather than relying on the system python modules to meet those dependencies, e.g. pip --upgrade install, but forcing even equally old versions to be installed to override any system modules. I tried activating my virtualenv and then pip install --upgrade -r requirements.txt and that does seem to install the dependencies, even those existing in my system path, but I can't be sure if that's because my system modules were old. And man pip doesn't seem to guarantee this behavior (i.e. 
installing the same version of a package that already exists in the system site-packages).", "output": "starting with pip version 7.1.2 you can run pip uninstall -y <python package(s)> pip uninstall -y package1 package2 package3 or from file pip uninstall -y -r requirements.txt"} +{"question_id": 6287529, "score": 188, "creation_date": 1307587237, "tags": ["python", "html", "beautifulsoup"], "instruction": "How to find children of nodes using BeautifulSoup\n\nI want to get all the <a> tags which are children of <li>: <div> <li class=\"test\"> <a>link1</a> <ul> <li> <a>link2</a> </li> </ul> </li> </div> I know how to find element with particular class like this: soup.find(\"li\", { \"class\" : \"test\" }) But I don't know how to find all <a> which are children of <li class=test> but not any others. Like I want to select: <a>link1</a>", "output": "Try this li = soup.find('li', {'class': 'test'}) children = li.findChildren(\"a\" , recursive=False) for child in children: print(child)"} +{"question_id": 42086276, "score": 188, "creation_date": 1486460092, "tags": ["python", "matplotlib"], "instruction": "Get default line color cycle\n\nI noticed when you plot that the first line is blue, then orange, then green, and so on. Is there some way to access this list of colors? I've seen a million posts on how to change the color cycle or access the iterator, but not on how to just get the list of colors that matplotlib cycles through by default.", "output": "In matplotlib versions >= 1.5, you can print the rcParam called axes.prop_cycle: print(plt.rcParams['axes.prop_cycle'].by_key()['color']) # [u'#1f77b4', u'#ff7f0e', u'#2ca02c', u'#d62728', u'#9467bd', u'#8c564b', u'#e377c2', u'#7f7f7f', u'#bcbd22', u'#17becf'] Or equivalently, in python2: print plt.rcParams['axes.prop_cycle'].by_key()['color'] In versions < 1.5, this was called color_cycle: print plt.rcParams['axes.color_cycle'] # [u'b', u'g', u'r', u'c', u'm', u'y', u'k'] Note that the default color cycle changed in version 2.0.0 http://matplotlib.org/users/dflt_style_changes.html#colors-in-default-property-cycle"} +{"question_id": 60819376, "score": 188, "creation_date": 1584987372, "tags": ["python", "fastapi", "uvicorn"], "instruction": "FastAPI throws an error (Error loading ASGI app. Could not import module \"api\")\n\nI tried to run FastAPI using uvicorn webserver but it throws an error. I run this command, uvicorn api:app --reload --host 0.0.0.0 but there is an error in the terminal. Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit) Started reloader process [23445] Error loading ASGI app. Could not import module \"api\". Stopping reloader process [23445]", "output": "TL;DR Add the directory name in front of your filename uvicorn src.main:app or cd into that directory cd src uvicorn main:app Long Answer It happens because you are not in the same folder with your FastAPI app instance more specifically: Let's say i have an app-tree like this; my_fastapi_app/ \u251c\u2500\u2500 app.yaml \u251c\u2500\u2500 docker-compose.yml \u251c\u2500\u2500 src \u2502 \u2514\u2500\u2500 main.py \u2514\u2500\u2500 tests \u251c\u2500\u2500 test_xx.py \u2514\u2500\u2500 test_yy.py $ pwd # Present Working Directory /home/yagiz/Desktop/my_fastapi_app I'm not inside the same folder with my app instance, so if I try to run my app with uvicorn I'll get an error like yours $ uvicorn main:app --reload INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit) INFO: Started reloader process [40645] using statreload ERROR: Error loading ASGI app. 
Could not import module \"main\". The answer is so simple, add the folder name in front of your filename uvicorn src.main:app --reload or you can change your working directory cd src Now i'm inside of the folder with my app instance src \u2514\u2500\u2500 main.py Run your uvicorn again $ uvicorn main:app --reload INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit) INFO: Started reloader process [40726] using statreload INFO: Started server process [40728] INFO: Waiting for application startup. INFO: Application startup complete."} +{"question_id": 8420705, "score": 188, "creation_date": 1323283407, "tags": ["python", "continue"], "instruction": "Example use of \"continue\" statement in Python?\n\nThe definition of the continue statement is: The continue statement continues with the next iteration of the loop. I can't find any good examples of code. Could someone suggest some simple cases where continue is necessary?", "output": "Here's a simple example: for letter in 'Django': if letter == 'D': continue print(\"Current Letter: \" + letter) Output will be: Current Letter: j Current Letter: a Current Letter: n Current Letter: g Current Letter: o It skips the rest of the current iteration (here: print) and continues to the next iteration of the loop."} +{"question_id": 62885911, "score": 188, "creation_date": 1594683418, "tags": ["python", "path", "pip", "requirements.txt"], "instruction": "pip freeze creates some weird path instead of the package version\n\nI am working on developing a python package. I use pip freeze > requirements.txt to add the required package into the requirement.txt file. However, I realized that some of the packages, instead of the package version, have some path in front of them. numpy==1.19.0 packaging==20.4 pandas @ file:///opt/concourse/worker/volumes/live/38d1301c-8fa9-4d2f-662e-34dddf33b183/volume/pandas_1592841668171/work pandocfilters==1.4.2 Whereas, inside the environment, I get: >>> pandas.__version__ '1.0.5' Do you have any idea how to address this problem?", "output": "It looks like this is an open issue with pip freeze in version 20.1, the current workaround is to use: pip list --format=freeze > requirements.txt In a nutshell, this is caused by changing the behavior of pip freeze to include direct references for distributions installed from direct URL references. You can read more about the issue on GitHub: pip freeze does not show version for in-place installs Output of \"pip freeze\" and \"pip list --format=freeze\" differ for packages installed via Direct URLs Better freeze of distributions installed from direct URL references"} +{"question_id": 35803027, "score": 187, "creation_date": 1457114659, "tags": ["python", "amazon-web-services", "amazon-s3", "boto3"], "instruction": "Retrieving subfolders names in S3 bucket from Boto3\n\nUsing Boto3, I can access my AWS S3 bucket: s3 = boto3.resource('s3') bucket = s3.Bucket('my-bucket-name') Now, the bucket contains folder first-level, which itself contains several sub-folders named with a timestamp, for instance 1456753904534. I need to know the name of these sub-folders for another job I'm doing and I wonder whether I could have boto3 retrieve those for me. 
So I tried: objs = bucket.meta.client.list_objects(Bucket='my-bucket-name') which gives a dictionary, whose key 'Contents' gives me all the third-level files instead of the second-level timestamp directories, in fact I get a list containing things as {u'ETag': '\"etag\"', u'Key': first-level/1456753904534/part-00014', u'LastModified': datetime.datetime(2016, 2, 29, 13, 52, 24, tzinfo=tzutc()), u'Owner': {u'DisplayName': 'owner', u'ID': 'id'}, u'Size': size, u'StorageClass': 'storageclass'} You can see that the specific files, in this case part-00014 are retrieved, while I'd like to get the name of the directory alone. In principle I could strip out the directory name from all the paths but it's ugly and expensive to retrieve everything at third level to get the second level! I also tried something reported here: for o in bucket.objects.filter(Delimiter='/'): print(o.key) but I do not get the folders at the desired level. Is there a way to solve this?", "output": "S3 is an object storage, it doesn't have real directory structure. The \"/\" is rather cosmetic. One reason that people want to have a directory structure, because they can maintain/prune/add a tree to the application. For S3, you treat such structure as sort of index or search tag. To manipulate object in S3, you need boto3.client or boto3.resource, e.g. To list all object import boto3 s3 = boto3.client(\"s3\") all_objects = s3.list_objects(Bucket = 'bucket-name') http://boto3.readthedocs.org/en/latest/reference/services/s3.html#S3.Client.list_objects In fact, if the s3 object name is stored using '/' separator. The more recent version of list_objects (list_objects_v2) allows you to limit the response to keys that begin with the specified prefix. To limit the items to items under certain sub-folders: import boto3 s3 = boto3.client(\"s3\") response = s3.list_objects_v2( Bucket=BUCKET, Prefix ='DIR1/DIR2', MaxKeys=100 ) Documentation Another option is using python os.path function to extract the folder prefix. Problem is that this will require listing objects from undesired directories. import os s3_key = 'first-level/1456753904534/part-00014' filename = os.path.basename(s3_key) foldername = os.path.dirname(s3_key) # if you are not using conventional delimiter like '#' s3_key = 'first-level#1456753904534#part-00014' filename = s3_key.split(\"#\")[-1] A reminder about boto3 : boto3.resource is a nice high level API. There are pros and cons using boto3.client vs boto3.resource. If you develop internal shared library, using boto3.resource will give you a blackbox layer over the resources used."} +{"question_id": 21388541, "score": 187, "creation_date": 1390846774, "tags": ["python", "string", "numbers"], "instruction": "How do you check in Python whether a string contains only numbers?\n\nHow do you check whether a string contains only numbers? I've given it a go here. I'd like to see the simplest way to accomplish this. import string def main(): isbn = input(\"Enter your 10 digit ISBN number: \") if len(isbn) == 10 and string.digits == True: print (\"Works\") else: print(\"Error, 10 digit number was not inputted and/or letters were inputted.\") main() if __name__ == \"__main__\": main() input(\"Press Enter to exit: \")", "output": "You'll want to use the isdigit method on your str object: if len(isbn) == 10 and isbn.isdigit(): From the isdigit documentation: str.isdigit() Return True if all characters in the string are digits and there is at least one character, False otherwise. 
Digits include decimal characters and digits that need special handling, such as the compatibility superscript digits. This covers digits which cannot be used to form numbers in base 10, like the Kharosthi numbers. Formally, a digit is a character that has the property value Numeric_Type=Digit or Numeric_Type=Decimal."} +{"question_id": 14895599, "score": 187, "creation_date": 1360934163, "tags": ["python", "list", "insert"], "instruction": "Insert an element at a specific index in a list and return the updated list\n\nI have this: >>> a = [1, 2, 4] >>> print(a) [1, 2, 4] >>> print(a.insert(2, 3)) None >>> print(a) [1, 2, 3, 4] >>> b = a.insert(3, 6) >>> print(b) None >>> print a [1, 2, 3, 6, 4] Is there a way I can get the updated list as the result, instead of updating the original list in place?", "output": "The shortest I got: b = a[:2] + [3] + a[2:] >>> >>> a = [1, 2, 4] >>> print a [1, 2, 4] >>> b = a[:2] + [3] + a[2:] >>> print a [1, 2, 4] >>> print b [1, 2, 3, 4]"} +{"question_id": 3927628, "score": 187, "creation_date": 1287000756, "tags": ["python", "profiling", "line-by-line"], "instruction": "How can I profile Python code line-by-line?\n\nI've been using cProfile to profile my code, and it's been working great. I also use gprof2dot.py to visualize the results (makes it a little clearer). However, cProfile (and most other Python profilers I've seen so far) seem to only profile at the function-call level. This causes confusion when certain functions are called from different places - I have no idea if call #1 or call #2 is taking up the majority of the time. This gets even worse when the function in question is six levels deep, called from seven other places. How do I get a line-by-line profiling? Instead of this: function #12, total time: 2.0s I'd like to see something like this: function #12 (called from somefile.py:102) 0.5s function #12 (called from main.py:12) 1.5s cProfile does show how much of the total time \"transfers\" to the parent, but again this connection is lost when you have a bunch of layers and interconnected calls. Ideally, I'd love to have a GUI that would parse through the data, then show me my source file with a total time given to each line. Something like this: main.py: a = 1 # 0.0s result = func(a) # 0.4s c = 1000 # 0.0s result = func(c) # 5.0s Then I'd be able to click on the second \"func(c)\" call to see what's taking up time in that call, separate from the \"func(a)\" call. Does that make sense?", "output": "I believe that's what Robert Kern's line_profiler is intended for. From the link: File: pystone.py Function: Proc2 at line 149 Total time: 0.606656 s Line # Hits Time Per Hit % Time Line Contents ============================================================== 149 @profile 150 def Proc2(IntParIO): 151 50000 82003 1.6 13.5 IntLoc = IntParIO + 10 152 50000 63162 1.3 10.4 while 1: 153 50000 69065 1.4 11.4 if Char1Glob == 'A': 154 50000 66354 1.3 10.9 IntLoc = IntLoc - 1 155 50000 67263 1.3 11.1 IntParIO = IntLoc - IntGlob 156 50000 65494 1.3 10.8 EnumLoc = Ident1 157 50000 68001 1.4 11.2 if EnumLoc == Ident1: 158 50000 63739 1.3 10.5 break 159 50000 61575 1.2 10.1 return IntParIO"} +{"question_id": 37211115, "score": 187, "creation_date": 1463145493, "tags": ["python", "environment-variables", "virtualenv"], "instruction": "How to enable a virtualenv in a systemd service unit?\n\nI want to \"activate\" a virtualenv in a systemd service file. I would like to avoid having a shell process between the systemd process and the python interpreter. 
My current solution looks like this: [Unit] Description=fooservice After=syslog.target network.target [Service] Type=simple User=fooservice WorkingDirectory={{ venv_home }} ExecStart={{ venv_home }}/fooservice --serve-in-foreground Restart=on-abort EnvironmentFile=/etc/sysconfig/fooservice.env [Install] WantedBy=multi-user.target /etc/sysconfig/fooservice.env PATH={{ venv_home }}/bin:/usr/local/bin:/usr/bin:/bin PYTHONIOENCODING=utf-8 PYTHONPATH={{ venv_home }}/... VIRTUAL_ENV={{ venv_home }} But I am having trouble. I get ImportErrors since some entries in sys.path are missing.", "output": "The virtualenv is \"baked into the Python interpreter in the virtualenv\". So for most cases this means you can launch python or console_scripts directly in that virtualenv and don't need to activate the virtualenv first or manage PATH yourself. Note: Replace {{ venv_home }} with the venv home dir to use: ExecStart={{ venv_home }}/bin/fooservice --serve-in-foreground or ExecStart={{ venv_home }}/bin/python {{ venv_home }}/fooservice.py --serve-in-foreground and remove the EnvironmentFile entry. To verify that it is indeed correct you can check sys.path by running {{ venv_home }}/bin/python -m site and comparing the output to python -m site"} +{"question_id": 12214801, "score": 186, "creation_date": 1346413905, "tags": ["python", "string", "hex", "ordinal-indicator"], "instruction": "Print a string as hexadecimal bytes\n\nI have this string: Hello, World! and I want to print it using Python as '48:65:6c:6c:6f:2c:20:57:6f:72:6c:64:21'. hex() works only for integers. How can it be done?", "output": "You can transform your string to an integer generator. Apply hexadecimal formatting for each element and intercalate with a separator: >>> s = \"Hello, World!\" >>> \":\".join(\"{:02x}\".format(ord(c)) for c in s) '48:65:6c:6c:6f:2c:20:57:6f:72:6c:64:21"} +{"question_id": 18140838, "score": 186, "creation_date": 1376028005, "tags": ["python", "sql", "django", "django-models", "django-orm"], "instruction": "What is the SQL ''LIKE\" equivalent on Django ORM queries?\n\nWhat is the equivalent of the following SQL statement in Django? SELECT * FROM table_name WHERE string LIKE pattern; I tried this: result = table.objects.filter( pattern in string ) but it didn't work. How can I implement it?", "output": "Use __contains or __icontains (case-insensitive): result = table.objects.filter(string__contains='pattern') The SQL equivalent is SELECT ... WHERE string LIKE '%pattern%'; @Dmitri's answer below covers patterns like 'pattern%' or '%pattern'"} +{"question_id": 32244019, "score": 186, "creation_date": 1440663094, "tags": ["python", "pandas", "matplotlib"], "instruction": "How to rotate x-axis tick labels in a pandas plot\n\nWith the following code: import matplotlib matplotlib.style.use('ggplot') import matplotlib.pyplot as plt import pandas as pd df = pd.DataFrame({ 'celltype':[\"foo\",\"bar\",\"qux\",\"woz\"], 's1':[5,9,1,7], 's2':[12,90,13,87]}) df = df[[\"celltype\",\"s1\",\"s2\"]] df.set_index([\"celltype\"],inplace=True) df.plot(kind='bar',alpha=0.75) plt.xlabel(\"\") I made this plot: How can I rotate the x-axis tick labels to 0 degrees? 
I tried adding this but did not work: plt.set_xticklabels(df.index,rotation=90)", "output": "Pass param rot=0 to rotate the xticklabels: import matplotlib matplotlib.style.use('ggplot') import matplotlib.pyplot as plt import pandas as pd df = pd.DataFrame({ 'celltype':[\"foo\",\"bar\",\"qux\",\"woz\"], 's1':[5,9,1,7], 's2':[12,90,13,87]}) df = df[[\"celltype\",\"s1\",\"s2\"]] df.set_index([\"celltype\"],inplace=True) df.plot(kind='bar',alpha=0.75, rot=0) plt.xlabel(\"\") plt.show() yields plot:"} +{"question_id": 25082410, "score": 186, "creation_date": 1406903116, "tags": ["python", "string", "list", "function", "list-comprehension"], "instruction": "Apply function to each element of a list\n\nSuppose I have a list like: mylis = ['this is test', 'another test'] How do I apply a function to each element in the list? For example, how do I apply str.upper to get: ['THIS IS TEST', 'ANOTHER TEST']", "output": "Using the built-in standard library map: >>> mylis = ['this is test', 'another test'] >>> list(map(str.upper, mylis)) ['THIS IS TEST', 'ANOTHER TEST'] In Python 2.x, map constructed the desired new list by applying a given function to every element in a list. In Python 3.x, map constructs an iterator instead of a list, so the call to list is necessary. If you are using Python 3.x and require a list the list comprehension approach would be better suited."} +{"question_id": 31252791, "score": 186, "creation_date": 1436206320, "tags": ["python", "flask"], "instruction": "Flask ImportError: No Module Named Flask\n\nI'm following the Flask tutorial here: http://blog.miguelgrinberg.com/post/the-flask-mega-tutorial-part-i-hello-world I get to the point where I try ./run.py and I get: Traceback (most recent call last): File \"./run.py\", line 3, in <module> from app import app File \"/Users/benjaminclayman/Desktop/microblog/app/__init__.py\", line 1, in <module> from flask import Flask ImportError: No module named flask This looks similar to: ImportError: No module named flask But their solutions aren't helpful. For reference, I do have a folder named flask which one user mentioned may cause issues.", "output": "Try deleting the virtualenv you created. Then create a new virtualenv with: virtualenv flask Then: cd flask Now let's activate the virtualenv source bin/activate Now you should see (flask) on the left of the command line. Edit: In windows there is no \"source\" that's a linux thing, instead execute the activate.bat file, here I do it using Powershell: PS C:\\DEV\\aProject> & .\\Flask\\Scripts\\activate) Let's install flask: pip install flask Then create a file named hello.py (NOTE: see UPDATE Flask 1.0.2 below): from flask import Flask app = Flask(__name__) @app.route(\"/\") def hello(): return \"Hello World!\" if __name__ == \"__main__\": app.run() and run it with: python hello.py UPDATE Flask 1.0.2 With the new flask release there is no need to run the app from your script. hello.py should look like this now: from flask import Flask app = Flask(__name__) @app.route(\"/\") def hello(): return \"Hello World!\" and run it with: FLASK_APP=hello.py flask run Make sure to be inside the folder where hello.py is when running the latest command. 
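One caveat, based on general shell behaviour rather than anything in the original answer: the FLASK_APP=hello.py flask run form only works in Unix-style shells; in PowerShell you would set the variable first and then run flask:
$env:FLASK_APP = 'hello.py'
flask run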
All the steps before the creation of the hello.py apply for this case as well"} +{"question_id": 30752268, "score": 186, "creation_date": 1433928175, "tags": ["python", "django", "django-models", "django-aggregation"], "instruction": "How to filter objects for count annotation in Django?\n\nConsider simple Django models Event and Participant: class Event(models.Model): title = models.CharField(max_length=100) class Participant(models.Model): event = models.ForeignKey(Event, db_index=True) is_paid = models.BooleanField(default=False, db_index=True) It's easy to annotate events query with total number of participants: events = Event.objects.all().annotate(participants=models.Count('participant')) How to annotate with count of participants filtered by is_paid=True? I need to query all events regardless of number of participants, e.g. I don't need to filter by annotated result. If there are 0 participants, that's ok, I just need 0 in annotated value. The example from documentation doesn't work here, because it excludes objects from query instead of annotating them with 0. Update. Django 1.8 has new conditional expressions feature, so now we can do like this: events = Event.objects.all().annotate(paid_participants=models.Sum( models.Case( models.When(participant__is_paid=True, then=1), default=0, output_field=models.IntegerField() ))) Update 2. Django 2.0 has new Conditional aggregation feature, see the accepted answer below. This also works in Django 3.x", "output": "Conditional aggregation in Django 2.0+ allows you to further reduce the amount of faff this has been in the past. This will also use Postgres' filter logic, which is somewhat faster than a sum-case (I've seen numbers like 20-30% bandied around). Anyway, in your case, we're looking at something as simple as: from django.db.models import Q, Count events = Event.objects.annotate( paid_participants=Count('participant', filter=Q(participant__is_paid=True)) ) There's a separate section in the docs about filtering on annotations. It's the same stuff as conditional aggregation but more like my example above. Either which way, this is a lot healthier than the gnarly subqueries I was doing before. For more complex annotation filters, it may be more understandable to structure the filter as a separate queryset that is passed to an __in expression: from django.db.models import Q, Count paid_participants = Participant.objects.filter(is_paid=True) events = Event.objects.annotate( paid_participants=Count( 'participant', filter=Q(participant__in=paid_participants) ) ) In this context, the Participant queryset does not need to explicitly filter on the outer Event ID like a subquery because Count (and other aggregation functions) implicitly filter for reverse foreign key relationships of the current row. In other words, with this format specify your filters globally for the Model that is being counted."} +{"question_id": 1393324, "score": 186, "creation_date": 1252407368, "tags": ["python"], "instruction": "Given a URL to a text file, what is the simplest way to read the contents of the text file?\n\nIn Python, when given the URL for a text file, what is the simplest way to access the contents off the text file and print the contents of the file out locally line-by-line without saving a local copy of the text file? 
TargetURL=http://www.myhost.com/SomeFile.txt #read the file #print first line #print second line #etc", "output": "Edit 09/2016: In Python 3 and up use urllib.request instead of urllib2 Actually the simplest way is: import urllib2 # the lib that handles the url stuff data = urllib2.urlopen(target_url) # it's a file like object and works just like a file for line in data: # files are iterable print line You don't even need \"readlines\", as Will suggested. You could even shorten it to: * import urllib2 for line in urllib2.urlopen(target_url): print line But remember in Python, readability matters. However, this is the simplest way but not the safe way because most of the time with network programming, you don't know if the amount of data to expect will be respected. So you'd generally better read a fixed and reasonable amount of data, something you know to be enough for the data you expect but will prevent your script from been flooded: import urllib2 data = urllib2.urlopen(\"http://www.google.com\").read(20000) # read only 20 000 chars data = data.split(\"\\n\") # then split it into lines for line in data: print line * Second example in Python 3: import urllib.request # the lib that handles the url stuff for line in urllib.request.urlopen(target_url): print(line.decode('utf-8')) #utf-8 or iso8859-1 or whatever the page encoding scheme is"} +{"question_id": 47022070, "score": 186, "creation_date": 1509388476, "tags": ["python", "python-3.x", "dataframe", "jupyter-notebook"], "instruction": "Display all dataframe columns in a Jupyter Python Notebook\n\nI want to show all columns in a dataframe in a Jupyter Notebook. Jupyter shows some of the columns and adds dots to the last columns like in the following picture: How can I display all columns?", "output": "Try the display max_columns setting as follows: import pandas as pd from IPython.display import display df = pd.read_csv(\"some_data.csv\") pd.options.display.max_columns = None display(df) Or pd.set_option('display.max_columns', None) Pandas 0.11.0 backwards This is deprecated but in versions of Pandas older than 0.11.0 the max_columns setting is specified as follows: pd.set_printoptions(max_columns=500)"} +{"question_id": 52390576, "score": 186, "creation_date": 1537286619, "tags": ["python", "python-3.x", "hash", "python-dataclasses"], "instruction": "How can I make a python dataclass hashable?\n\nI have a dataclass whose instances I want to hash and order, using the id member as a key. from dataclasses import dataclass, field @dataclass(eq=True, order=True) class Category: id: str = field(compare=True) name: str = field(default=\"set this in post_init\", compare=False) I know that I can implement __hash__ myself. However, I would like dataclasses to do the work for me because they are intended to handle this. Unfortunately, the above dataclass fails: a = sorted(list(set([ Category(id='x'), Category(id='y')]))) Traceback (most recent call last): File \"<stdin>\", line 1, in <module> TypeError: unhashable type: 'Category'", "output": "From the docs: Here are the rules governing implicit creation of a __hash__() method: [...] If eq and frozen are both true, by default dataclass() will generate a __hash__() method for you. If eq is true and frozen is false, __hash__() will be set to None, marking it unhashable (which it is, since it is mutable). If eq is false, __hash__() will be left untouched meaning the __hash__() method of the superclass will be used (if the superclass is object, this means it will fall back to id-based hashing). 
Since you set eq=True and left frozen at the default (False), your dataclass is unhashable. You have 3 options: Set frozen=True (in combination with the default eq=True), which will make your class immutable and hashable. @dataclass(frozen=True) Set unsafe_hash=True, which will create a __hash__ method but leave your class mutable. @dataclass(unsafe_hash=True) Mutability risks problems if an instance of your class is modified while stored in a dict or set: cat = Category('foo', 'bar') categories = {cat} cat.id = 'baz' print(cat in categories) # False Manually implement a __hash__ method."} +{"question_id": 22646463, "score": 186, "creation_date": 1395782303, "tags": ["python", "numpy", "bit-manipulation", "boolean-expression", "ampersand"], "instruction": "'and' (boolean) vs '&' (bitwise) - Why difference in behavior with lists vs numpy arrays?\n\nWhat explains the difference in behavior of boolean and bitwise operations on lists vs NumPy arrays? I'm confused about the appropriate use of & vs and in Python, illustrated in the following examples. mylist1 = [True, True, True, False, True] mylist2 = [False, True, False, True, False] >>> len(mylist1) == len(mylist2) True # ---- Example 1 ---- >>> mylist1 and mylist2 [False, True, False, True, False] # I would have expected [False, True, False, False, False] # ---- Example 2 ---- >>> mylist1 & mylist2 TypeError: unsupported operand type(s) for &: 'list' and 'list' # Why not just like example 1? >>> import numpy as np # ---- Example 3 ---- >>> np.array(mylist1) and np.array(mylist2) ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() # Why not just like Example 4? # ---- Example 4 ---- >>> np.array(mylist1) & np.array(mylist2) array([False, True, False, False, False], dtype=bool) # This is the output I was expecting! This answer and this answer helped me understand that and is a boolean operation but & is a bitwise operation. I read about bitwise operations to better understand the concept, but I am struggling to use that information to make sense of my above 4 examples. Example 4 led me to my desired output, so that is fine, but I am still confused about when/how/why I should use and vs &. Why do lists and NumPy arrays behave differently with these operators? Can anyone help me understand the difference between boolean and bitwise operations to explain why they handle lists and NumPy arrays differently?", "output": "and tests whether both expressions are logically True while & (when used with True/False values) tests if both are True. In Python, empty built-in objects are typically treated as logically False while non-empty built-ins are logically True. This facilitates the common use case where you want to do something if a list is empty and something else if the list is not. Note that this means that the list [False] is logically True: >>> if [False]: ... print('True') ... True So in Example 1, the first list is non-empty and therefore logically True, so the truth value of the and is the same as that of the second list. (In our case, the second list is non-empty and therefore logically True, but identifying that would require an unnecessary step of calculation.) For example 2, lists cannot meaningfully be combined in a bitwise fashion because they can contain arbitrary unlike elements. Things that can be combined bitwise include: Trues and Falses, integers. NumPy objects, by contrast, support vectorized calculations. 
That is, they let you perform the same operations on multiple pieces of data. Example 3 fails because NumPy arrays (of length > 1) have no truth value as this prevents vector-based logic confusion. Example 4 is simply a vectorized bit and operation. Bottom Line If you are not dealing with arrays and are not performing math manipulations of integers, you probably want and. If you have vectors of truth values that you wish to combine, use numpy with &."} +{"question_id": 65122957, "score": 186, "creation_date": 1606986952, "tags": ["python", "pip"], "instruction": "Resolving new pip backtracking runtime issue\n\nThe new pip dependency resolver that was released with version 20.3 takes an inappropriately long time to install a package. On our CI pipeline yesterday, a docker build that used to take ~10 minutes timed out after 1h of pip installation messages like this (almost for every library that is installed by any dependency there is a similar log output): INFO: pip is looking at multiple versions of setuptools to determine which version is compatible with other requirements. This could take a while. Downloading setuptools-50.0.0-py3-none-any.whl (783 kB) Downloading setuptools-49.6.0-py3-none-any.whl (803 kB) Downloading setuptools-49.5.0-py3-none-any.whl (803 kB) Downloading setuptools-49.4.0-py3-none-any.whl (803 kB) Downloading setuptools-49.3.2-py3-none-any.whl (790 kB) INFO: This is taking longer than usual. You might need to provide the dependency resolver with stricter constraints to reduce runtime. If you want to abort this run, you can press Ctrl + C to do so. To improve how pip performs, tell us what happened here: https://pip.pypa.io/surveys/backtracking Downloading setuptools-49.3.1-py3-none-any.whl (790 kB) Downloading setuptools-49.3.0-py3-none-any.whl (790 kB) Downloading setuptools-49.2.1-py3-none-any.whl (789 kB) Downloading setuptools-49.2.0-py3-none-any.whl (789 kB) Downloading setuptools-49.1.3-py3-none-any.whl (789 kB) Downloading setuptools-49.1.2-py3-none-any.whl (789 kB) Downloading setuptools-49.1.1-py3-none-any.whl (789 kB) Downloading setuptools-49.1.0-py3-none-any.whl (789 kB) Downloading setuptools-49.0.1-py3-none-any.whl (789 kB) Downloading setuptools-49.0.0-py3-none-any.whl (789 kB) Downloading setuptools-48.0.0-py3-none-any.whl (786 kB) Downloading setuptools-47.3.2-py3-none-any.whl (582 kB) Downloading setuptools-47.3.1-py3-none-any.whl (582 kB) Downloading setuptools-47.3.0-py3-none-any.whl (583 kB) Downloading setuptools-47.2.0-py3-none-any.whl (583 kB) Downloading setuptools-47.1.1-py3-none-any.whl (583 kB) Downloading setuptools-47.1.0-py3-none-any.whl (583 kB) Downloading setuptools-47.0.0-py3-none-any.whl (583 kB) Downloading setuptools-46.4.0-py3-none-any.whl (583 kB) Downloading setuptools-46.3.1-py3-none-any.whl (582 kB) Downloading setuptools-46.3.0-py3-none-any.whl (582 kB) Downloading setuptools-46.2.0-py3-none-any.whl (582 kB) Downloading setuptools-46.1.3-py3-none-any.whl (582 kB) Downloading setuptools-46.1.2-py3-none-any.whl (582 kB) Downloading setuptools-46.1.1-py3-none-any.whl (582 kB) Downloading setuptools-46.1.0-py3-none-any.whl (582 kB) Downloading setuptools-46.0.0-py3-none-any.whl (582 kB) Downloading setuptools-45.3.0-py3-none-any.whl (585 kB) Downloading setuptools-45.2.0-py3-none-any.whl (584 kB) Downloading setuptools-45.1.0-py3-none-any.whl (583 kB) Downloading setuptools-45.0.0-py2.py3-none-any.whl (583 kB) Downloading setuptools-44.1.1-py2.py3-none-any.whl (583 kB) Downloading setuptools-44.1.0-py2.py3-none-any.whl (583 kB) 
Downloading setuptools-44.0.0-py2.py3-none-any.whl (583 kB) Downloading setuptools-43.0.0-py2.py3-none-any.whl (583 kB) Downloading setuptools-42.0.2-py2.py3-none-any.whl (583 kB) Downloading setuptools-42.0.1-py2.py3-none-any.whl (582 kB) Downloading setuptools-42.0.0-py2.py3-none-any.whl (582 kB) Downloading setuptools-41.6.0-py2.py3-none-any.whl (582 kB) Downloading setuptools-41.5.1-py2.py3-none-any.whl (581 kB) Downloading setuptools-41.5.0-py2.py3-none-any.whl (581 kB) Downloading setuptools-41.4.0-py2.py3-none-any.whl (580 kB) Downloading setuptools-41.3.0-py2.py3-none-any.whl (580 kB) Downloading setuptools-41.2.0-py2.py3-none-any.whl (576 kB) Downloading setuptools-41.1.0-py2.py3-none-any.whl (576 kB) Downloading setuptools-41.0.1-py2.py3-none-any.whl (575 kB) Downloading setuptools-41.0.0-py2.py3-none-any.whl (575 kB) Downloading setuptools-40.9.0-py2.py3-none-any.whl (575 kB) Downloading setuptools-40.8.0-py2.py3-none-any.whl (575 kB) Downloading setuptools-40.7.3-py2.py3-none-any.whl (574 kB) Downloading setuptools-40.7.2-py2.py3-none-any.whl (574 kB) Downloading setuptools-40.7.1-py2.py3-none-any.whl (574 kB) Downloading setuptools-40.7.0-py2.py3-none-any.whl (573 kB) Downloading setuptools-40.6.3-py2.py3-none-any.whl (573 kB) Downloading setuptools-40.6.2-py2.py3-none-any.whl (573 kB) Downloading setuptools-40.6.1-py2.py3-none-any.whl (573 kB) Downloading setuptools-40.6.0-py2.py3-none-any.whl (573 kB) Downloading setuptools-40.5.0-py2.py3-none-any.whl (569 kB) Downloading setuptools-40.4.3-py2.py3-none-any.whl (569 kB) Downloading setuptools-40.4.2-py2.py3-none-any.whl (569 kB) Downloading setuptools-40.4.1-py2.py3-none-any.whl (569 kB) Downloading setuptools-40.4.0-py2.py3-none-any.whl (568 kB) Downloading setuptools-40.3.0-py2.py3-none-any.whl (568 kB) I am quite confused whether we are using the new pip resolver correctly, especially since - Substantial improvements in new resolver for performance, output and error messages, avoiding infinite loops, and support for constraints files. The behavior seen is described as backtracking in the release notes. I understand why it is there. It specifies that I can use a constraint file (looks like a requirements.txt) that fixes the version of the dependencies to reduce the runtime using pip install -c constraints.txt setup.py. What is the best way to produce this constraints file? Currently, the best way I can think of is running pip install setup.py locally in a new virtual environment, then using pip freeze > constraints.txt. However, this still takes a lot of time for the local install (it's been stuck for about 10 minutes now). The notes do mention that This means the \u201cwork\u201d is done once during development process, and so will save users this work during deployment. With the old dependency resolver, I was able to install this package in less than a minute locally. What is the recommended process here? Edit: I just found out that some of the dependencies are pointing directly to out internal gitlab server. If I instead install directly from our internal package registry, it works in a couple of minutes again.", "output": "Latest update (2022-02) There seems to be major update in pip just few days old (version 22.0, release notes + relevant issue on github). I haven't tested it in more detail but it really seems to me that they optimized installation order calculation in complex case in such way that it resolves many issues we all encountered earlier. But I will need more time to check it. 
Anyway, the rest of this answer is still valid, and smart requirements pinning suited to your particular project is a good practice imo. Since I encountered a similar issue, I agree this is quite annoying. Backtracking might be a useful feature, but you don't want to wait hours for it to complete with uncertain success. I found several options that might help: Use the old resolver (--use-deprecated=legacy-resolver) proposed in the answer by @Daniel Davee, but this is more of a temporary solution than a proper one. Skip resolving dependencies with the --no-deps option. I would not recommend this generally, but in some cases you can have a working set of package versions even though there are some conflicts. Reduce the number of versions pip will try to backtrack through and be more strict about package dependencies. This means instead of putting e.g. numpy in my requirements.txt, I could try numpy >= 1.18.0 or be even more strict with numpy == 1.18.0. The strictness might help a lot. Check the following sources: Fixing conflicts Github pip discussion Reducing backtracking I still do not have a proper answer that would always help, but the best practice for requirements.txt seems to be to \"pin\" package versions. I found pip-tools, which could help you manage this even with constraints.txt (but I am in an experimental phase so I cannot tell you more). Update (2021-04) It seems the author of the question was able to fix the issue (something with a custom gitlab server), but I would like to extend this answer since it might be useful for others. After reading and experimenting, I ended up pinning all my package versions to a specific one. This really should be the correct way. Although everything can still work without it, there might be cases where, if you don't pin your dependencies, your package manager will silently install a new version (when it's released) with possible bugs or incompatibilities (this happened to me with dask last year). There are several tools which might help you; I would recommend one of these approaches: Easiest one with pipreqs pipreqs is a library which generates a pip requirements.txt file based on the imports of any project. You can start by running pip install pipreqs and then just pipreqs in your project root (optionally with the --force flag if your requirements file already exists). It will easily create a requirements.txt with pinned versions based on the imports in your project and the versions taken from your environment. Then you can at any time create a new environment based on this requirements.txt. This is a really simple tool (you do not even need to write your requirements.txt yourself). It does not allow you to create something complex (it might not be a good choice for bigger projects), and last week I found one strange behavior (see this), but generally I'm happy with this tool as it usually works perfectly. Using pip-tools There are several other tools commonly used like pip-tools, Pipenv or Poetry. You can read more in Faster Docker builds with pipenv, poetry, or pip-tools or Python Application Dependency Management in 2018 (older but seems still valid to me). And it still seems to me that the best option (although it depends on your project/use case) is pip-tools.
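For a concrete picture before the step-by-step below, the basic pip-tools flow looks roughly like this (the package names are just placeholders):

# requirements.in -- loose, human-edited constraints
requests
pandas>=1.3

# shell
pip install pip-tools
pip-compile requirements.in     # writes requirements.txt with every transitive dependency pinned
pip-sync requirements.txt       # makes the current environment match the pinned file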
You can (this is one option, see more in docs): create requirements.in (the same format as requirements.txt, it's up to you whether you pin some package dependency or not) then you can use it by pip install pip-tools and running pip-compile requirements.in this will generate new requirements.txt file where all versions are pinned, it's clear, what is the origin (Optionally) you can run it with --generate-hashes option then you can (as with pipreqs) at any time create new environment based on this requirements.txt pip-tools offer you --upgrade option to upgrade the final reqs supports layered requirements (e.g. having dev and prod versions) there is integration with pre-commit offers pip-sync tool to update your environment based on requirements.txt There are few more stuff you can do with it and I really love the integration with pre-commit. This allows you to use the same requirements as before (just with .in suffix) and add pre-commit hook that automatically updates requirements.txt (so you will never experience having different local environment from the generated requirements.txt which might easily happen when you run something manually)."} +{"question_id": 48409128, "score": 186, "creation_date": 1516734685, "tags": ["python", "pandas", "dataframe"], "instruction": "What is the difference between using loc and using just square brackets to filter for columns in Pandas/Python?\n\nI've noticed three methods of selecting a column in a Pandas DataFrame: First method of selecting a column using loc: df_new = df.loc[:, 'col1'] Second method - seems simpler and faster: df_new = df['col1'] Third method - most convenient: df_new = df.col1 Is there a difference between these three methods? I don't think so, in which case I'd rather use the third method. I'm mostly curious as to why there appear to be three methods for doing the same thing.", "output": "In the following situations, they behave the same: Selecting a single column (df['A'] is the same as df.loc[:, 'A'] -> selects column A) Selecting a list of columns (df[['A', 'B', 'C']] is the same as df.loc[:, ['A', 'B', 'C']] -> selects columns A, B and C) Slicing by rows (df[1:3] is the same as df.iloc[1:3] -> selects rows 1 and 2. Note, however, if you slice rows with loc, instead of iloc, you'll get rows 1, 2 and 3 assuming you have a RangeIndex. See details here.) However, [] does not work in the following situations: You can select a single row with df.loc[row_label] You can select a list of rows with df.loc[[row_label1, row_label2]] You can slice columns with df.loc[:, 'A':'C'] These three cannot be done with []. More importantly, if your selection involves both rows and columns, then assignment becomes problematic. df[1:3]['A'] = 5 This selects rows 1 and 2 then selects column 'A' of the returning object and assigns value 5 to it. The problem is, the returning object might be a copy so this may not change the actual DataFrame. This raises SettingWithCopyWarning. The correct way of making this assignment is: df.loc[1:3, 'A'] = 5 With .loc, you are guaranteed to modify the original DataFrame. It also allows you to slice columns (df.loc[:, 'C':'F']), select a single row (df.loc[5]), and select a list of rows (df.loc[[1, 2, 5]]). Also note that these two were not included in the API at the same time. .loc was added much later as a more powerful and explicit indexer. See unutbu's answer for more detail. Note: Getting columns with [] vs . is a completely different topic. . is only there for convenience. 
It only allows accessing columns whose names are valid Python identifiers (i.e. they cannot contain spaces, they cannot be composed of numbers...). It cannot be used when the names conflict with Series/DataFrame methods. It also cannot be used for non-existing columns (i.e. the assignment df.a = 1 won't work if there is no column a). Other than that, . and [] are the same."} +{"question_id": 6862250, "score": 185, "creation_date": 1311869979, "tags": ["python", "html", "django", "django-forms"], "instruction": "Change a Django form field to a hidden field\n\nI have a Django form with a RegexField, which is very similar to a normal text input field. In my view, under certain conditions I want to hide it from the user, and trying to keep the form as similar as possible. What's the best way to turn this field into a HiddenInput field? I know I can set attributes on the field with: form['fieldname'].field.widget.attr['readonly'] = 'readonly' And I can set the desired initial value with: form.initial['fieldname'] = 'mydesiredvalue' However, that won't change the form of the widget. What's the best / most \"django-y\" / least \"hacky\" way to make this field a <input type=\"hidden\"> field?", "output": "If you have a custom template and view you may exclude the field and use {{ modelform.instance.field }} to get the value. also you may prefer to use in the view: field = form.fields['field_name'] field.widget = field.hidden_widget() but I'm not sure it will protect save method on post. edit: field with multiple values don't supports HiddenInput as input type, so use default hidden input widget for this field instead."} +{"question_id": 13269890, "score": 185, "creation_date": 1352291592, "tags": ["python", "pandas", "merge", "cartesian-product", "cross-join"], "instruction": "cartesian product in pandas\n\nI have two pandas dataframes: from pandas import DataFrame df1 = DataFrame({'col1':[1,2],'col2':[3,4]}) df2 = DataFrame({'col3':[5,6]}) What is the best practice to get their cartesian product (of course without writing it explicitly like me)? #df1, df2 cartesian product df_cartesian = DataFrame({'col1':[1,2,1,2],'col2':[3,4,3,4],'col3':[5,5,6,6]})", "output": "In recent versions of Pandas (>= 1.2) this is built into merge so you can do: from pandas import DataFrame df1 = DataFrame({'col1':[1,2],'col2':[3,4]}) df2 = DataFrame({'col3':[5,6]}) df1.merge(df2, how='cross') This is equivalent to the previous pandas < 1.2 answer but is easier to read. For pandas < 1.2: If you have a key that is repeated for each row, then you can produce a cartesian product using merge (like you would in SQL). from pandas import DataFrame, merge df1 = DataFrame({'key':[1,1], 'col1':[1,2],'col2':[3,4]}) df2 = DataFrame({'key':[1,1], 'col3':[5,6]}) merge(df1, df2,on='key')[['col1', 'col2', 'col3']] Output: col1 col2 col3 0 1 3 5 1 1 3 6 2 2 4 5 3 2 4 6 See here for the documentation: http://pandas.pydata.org/pandas-docs/stable/merging.html"} +{"question_id": 11458239, "score": 185, "creation_date": 1342117662, "tags": ["python", "tuples"], "instruction": "How to change values in a tuple?\n\nI have a tuple called values which contains the following: ('275', '54000', '0.0', '5000.0', '0.0') I want to change the first value (i.e., 275) in this tuple but I understand that tuples are immutable so values[0] = 200 will not work. 
How can I achieve this?", "output": "It's possible via: t = ('275', '54000', '0.0', '5000.0', '0.0') lst = list(t) lst[0] = '300' t = tuple(lst) But if you're going to need to change things, you probably are better off keeping it as a list"} +{"question_id": 6628476, "score": 185, "creation_date": 1310148135, "tags": ["python", "ubuntu", "virtualenv", "pip"], "instruction": "Renaming a virtualenv folder without breaking it\n\nI've created folder and initialized a virtualenv instance in it. $ mkdir myproject $ cd myproject $ virtualenv env When I run (env)$ pip freeze, it shows the installed packages as it should. Now I want to rename myproject/ to project/. $ mv myproject/ project/ However, now when I run $ . env/bin/activate (env)$ pip freeze it says pip is not installed. How do I rename the project folder without breaking the environment?", "output": "This answer uses an obsolete flag that is not available on virtualenv after v16.7.11 (2021-07-20) You need to adjust your install to use relative paths. virtualenv provides for this with the experimental --relocatable option. From the docs: Normally environments are tied to a specific path. That means that you cannot move an environment around or copy it to another computer. You can fix up an environment to make it relocatable with the command: $ virtualenv --relocatable ENV NOTE: ENV is the name of the virtual environment and you must run this from outside the ENV directory. This will make some of the files created by setuptools or distribute use relative paths, and will change all the scripts to use activate_this.py instead of using the location of the Python interpreter to select the environment. Note: you must run this after you've installed any packages into the environment. If you make an environment relocatable, then install a new package, you must run virtualenv --relocatable again."} +{"question_id": 11597901, "score": 185, "creation_date": 1342933890, "tags": ["python", "list", "syntax"], "instruction": "Why are trailing commas allowed in a list?\n\nI am curious why in Python a trailing comma in a list is valid syntax, and it seems that Python simply ignores it: >>> ['a','b',] ['a', 'b'] It makes sense when its a tuple since ('a') and ('a',) are two different things, but in lists?", "output": "The main advantages are that it makes multi-line lists easier to edit and that it reduces clutter in diffs. Changing: s = ['manny', 'mo', 'jack', ] to: s = ['manny', 'mo', 'jack', 'roger', ] involves only a one-line change in the diff: s = ['manny', 'mo', 'jack', + 'roger', ] This beats the more confusing multi-line diff when the trailing comma was omitted: s = ['manny', 'mo', - 'jack' + 'jack', + 'roger' ] The latter diff makes it harder to see that only one line was added and that the other line didn't change content. It also reduces the risk of doing this: s = ['manny', 'mo', 'jack' 'roger' # Added this line, but forgot to add a comma on the previous line ] and triggering implicit string literal concatenation, producing s = ['manny', 'mo', 'jackroger'] instead of the intended result."} +{"question_id": 12791997, "score": 185, "creation_date": 1349749127, "tags": ["python", "chmod"], "instruction": "How do you do a simple \"chmod +x\" from within python?\n\nI want to create a file from within a python script that is executable. import os import stat os.chmod('somefile', stat.S_IEXEC) it appears os.chmod doesn't 'add' permissions the way unix chmod does. 
With the last line commented out, the file has the filemode -rw-r--r--, with it not commented out, the file mode is ---x------. How can I just add the u+x flag while keeping the rest of the modes intact?", "output": "Use os.stat() to get the current permissions, use | to OR the bits together, and use os.chmod() to set the updated permissions. Example: import os import stat st = os.stat('somefile') os.chmod('somefile', st.st_mode | stat.S_IEXEC)"} +{"question_id": 1499019, "score": 185, "creation_date": 1254325907, "tags": ["python", "operating-system", "environment-variables"], "instruction": "How to get the PATH environment-variable separator in Python?\n\nWhen multiple directories need to be concatenated, as in an executable search path, there is an os-dependent separator character. For Windows it's ';', for Linux it's ':'. Is there a way in Python to get which character to split on? In the discussions to this question How do I find out my python path using python? , it is suggested that os.sep will do it. That answer is wrong, since it is the separator for components of a directory or filename and equates to '\\\\' or '/'.", "output": "os.pathsep The character conventionally used by the operating system to separate search path components (as in PATH), such as ':' for POSIX or ';' for Windows. Also available via os.path."} +{"question_id": 75898276, "score": 185, "creation_date": 1680263884, "tags": ["python", "prompt", "openai-api", "completion", "chatgpt-api"], "instruction": "OpenAI API error 429: \"You exceeded your current quota, please check your plan and billing details\"\n\nI'm making a Python script to use OpenAI via its API. However, I'm getting this error: openai.error.RateLimitError: You exceeded your current quota, please check your plan and billing details My script is the following: #!/usr/bin/env python3.8 # -*- coding: utf-8 -*- import openai openai.api_key = \"<My PAI Key>\" completion = openai.ChatCompletion.create( model=\"gpt-3.5-turbo\", messages=[ {\"role\": \"user\", \"content\": \"Tell the world about the ChatGPT API in the style of a pirate.\"} ] ) print(completion.choices[0].message.content) I'm declaring the shebang python3.8, because I'm using pyenv. I think it should work, since I did 0 API requests, so I'm assuming there's an error in my code.", "output": "TL;DR: You need to upgrade to a paid plan. Set up a paid account, add a credit or debit card, and generate a new API key if your old one was generated before the upgrade. It might take 10 minutes or so after you upgrade to a paid plan before the paid account becomes active and the error disappears. Problem As stated in the official OpenAI documentation: TYPE OVERVIEW RateLimitError Cause: You have hit your assigned rate limit. Solution: Pace your requests. Read more in our rate limit guide. Also, read more about Error Code 429 - You exceeded your current quota, please check your plan and billing details: This (i.e., 429) error message indicates that you have hit your maximum monthly spend (hard limit) for the API. This means that you have consumed all the credits or units allocated to your plan and have reached the limit of your billing cycle. This could happen for several reasons, such as: You are using a high-volume or complex service that consumes a lot of credits or units per request. You are using a large or diverse data set that requires a lot of requests to process. Your limit is set too low for your organization\u2019s usage. Did you sign up some time ago? 
You're getting error 429 because either you used all your free tokens or 3 months have passed since you signed up. As stated in the official OpenAI article: To explore and experiment with the API, all new users get $5 worth of free tokens. These tokens expire after 3 months. After the quota has passed you can choose to enter billing information to upgrade to a paid plan and continue your use of the API on pay-as-you-go basis. If no billing information is entered you will still have login access, but will be unable to make any further API requests. Please see the pricing page for the latest information on pay-as-you-go pricing. Note: If you signed up earlier (e.g., in December 2022), you got $18 worth of free tokens. Check your API usage in the usage dashboard. For example, my free trial expires tomorrow and this is what I see right now in the usage dashboard: This is how my dashboard looks after expiration: If I run a simple script after my free trial has expired, I get the following error: openai.error.RateLimitError: You exceeded your current quota, please check your plan and billing details. Did you create your second OpenAI account? You're getting error 429 because you created a second OpenAI account with the same phone number. It seems like free credit is given based on phone numbers. As explained on the official OpenAI forum by @SapphireFelineBytes: I created an Open AI account in November and my $18 credits expired on March 1st. So, like many of you here, I tried creating a new account with a different email address, but same number. They gave me $0 credits. I tried now with a different phone number and email. This time I got $5 credits. It's confirmed that free credit is given based on phone numbers, as explained on the official OpenAI forum by @logankilpatrick: Also note, you only get free credits for the first account associated with your phone number. Subsequent accounts are not granted free credits. Solution Try to do the following: Set up paid account. Add a credit or debit card. Generate a new API key if your old API key was generated before you upgraded to the paid plan. When you upgrade to a paid plan, don't expect the error to disappear immediately, as @dcferreira mentioned in the comment above. It might take a few minutes to more than an hour after the upgrade before the error disappears. In the comment below, @JoeMornin confirmed that it took 10 minutes for his paid account to become active. In the meantime, he was getting the following error: You've reached your usage limit. See your usage dashboard and billing settings for more details. If you have further questions, please contact us through our help center at help.openai.com."} +{"question_id": 35340921, "score": 184, "creation_date": 1455198031, "tags": ["python", "amazon-web-services", "aws-lambda"], "instruction": "AWS error from Python: No module named lambda_function\n\nI am creating a AWS Lambda python deployment package. I am using one external dependency requests. I installed the external dependency using the AWS documentation. Below is my Python code. 
import requests print('Loading function') s3 = boto3.client('s3') def lambda_handler(event, context): #print(\"Received event: \" + json.dumps(event, indent=2)) # Get the object from the event and show its content type bucket = event['Records'][0]['s3']['bucket']['name'] key = urllib.unquote_plus(event['Records'][0]['s3']['object']['key']).decode('utf8') try: response = s3.get_object(Bucket=bucket, Key=key) s3.download_file(bucket,key, '/tmp/data.txt') lines = [line.rstrip('\\n') for line in open('/tmp/data.txt')] for line in lines: col=line.split(',') print(col[5],col[6]) print(\"CONTENT TYPE: \" + response['ContentType']) return response['ContentType'] except Exception as e: print(e) print('Error getting object {} from bucket {}. Make sure they exist and your bucket is in the same region as this function.'.format(key, bucket)) raise e Created the Zip the content of the project-dir directory and uploaded to the lambda(Zip the directory content, not the directory). When I am execute the function I am getting the below mentioned error. START RequestId: 9e64e2c7-d0c3-11e5-b34e-75c7fb49d058 Version: $LATEST **Unable to import module 'lambda_function': No module named lambda_function** END RequestId: 9e64e2c7-d0c3-11e5-b34e-75c7fb49d058 REPORT RequestId: 9e64e2c7-d0c3-11e5-b34e-75c7fb49d058 Duration: 19.63 ms Billed Duration: 100 ms Memory Size: 128 MB Max Memory Used: 9 MB", "output": "Error was due to file name of the lambda function. The console no longer asks you what handler name you want - it just sets it to \"lambda_function.lambda_handler\". So, you need to make sure your filename is \"lambda_function.py\" and the function name in there is \"lambda_handler\". The other alternative is explicitly setting the name of the handler after creating the function."} +{"question_id": 15562446, "score": 184, "creation_date": 1363924525, "tags": ["python", "flask"], "instruction": "How to stop flask application without using ctrl-c\n\nI want to implement a command which can stop flask application by using flask-script. I have searched the solution for a while. Because the framework doesn't provide app.stop() API, I am curious about how to code this. I am working on Ubuntu 12.10 and Python 2.7.3.", "output": "If you are just running the server on your desktop, you can expose an endpoint to kill the server (read more at Shutdown The Simple Server): from flask import request def shutdown_server(): func = request.environ.get('werkzeug.server.shutdown') if func is None: raise RuntimeError('Not running with the Werkzeug Server') func() @app.get('/shutdown') def shutdown(): shutdown_server() return 'Server shutting down...' Here is another approach that is more contained: from multiprocessing import Process server = Process(target=app.run) server.start() # ... server.terminate() server.join() Let me know if this helps."} +{"question_id": 8238360, "score": 184, "creation_date": 1322031630, "tags": ["python", "python-2.7", "exception"], "instruction": "How to save traceback / sys.exc_info() values in a variable?\n\nI want to save the name of the error and the traceback details into a variable. Here's is my attempt. import sys try: try: print x except Exception, ex: raise NameError except Exception, er: print \"0\", sys.exc_info()[0] print \"1\", sys.exc_info()[1] print \"2\", sys.exc_info()[2] Output: 0 <type 'exceptions.NameError'> 1 2 <traceback object at 0xbd5fc8> Desired Output: 0 NameError 1 2 Traceback (most recent call last): File \"exception.py\", line 6, in <module> raise NameError P.S. 
I know this can be done easily using the traceback module, but I want to know usage of sys.exc_info()[2] object here.", "output": "This is how I do it: >>> import traceback >>> try: ... int('k') ... except: ... var = traceback.format_exc() ... >>> print(var) Traceback (most recent call last): File \"<stdin>\", line 2, in <module> ValueError: invalid literal for int() with base 10: 'k' You should however take a look at the traceback documentation, as you might find there more suitable methods, depending to how you want to process your variable afterwards..."} +{"question_id": 36399381, "score": 184, "creation_date": 1459763422, "tags": ["python", "geometry"], "instruction": "What's the fastest way of checking if a point is inside a polygon in python\n\nI found two main methods to look if a point belongs inside a polygon. One is using the ray tracing method used here, which is the most recommended answer, the other is using matplotlib path.contains_points (which seems a bit obscure to me). I will have to check lots of points continuously. Does anybody know if any of these two is more recommendable than the other or if there are even better third options? I checked the two methods and matplotlib looks much faster. from time import time import numpy as np import matplotlib.path as mpltPath # regular polygon for testing lenpoly = 100 polygon = [[np.sin(x)+0.5,np.cos(x)+0.5] for x in np.linspace(0,2*np.pi,lenpoly)[:-1]] # random points set of points to test N = 10000 points = np.random.rand(N,2) # Ray tracing def ray_tracing_method(x,y,poly): n = len(poly) inside = False p1x,p1y = poly[0] for i in range(n+1): p2x,p2y = poly[i % n] if y > min(p1y,p2y): if y <= max(p1y,p2y): if x <= max(p1x,p2x): if p1y != p2y: xints = (y-p1y)*(p2x-p1x)/(p2y-p1y)+p1x if p1x == p2x or x <= xints: inside = not inside p1x,p1y = p2x,p2y return inside start_time = time() inside1 = [ray_tracing_method(point[0], point[1], polygon) for point in points] print(\"Ray Tracing Elapsed time: \" + str(time()-start_time)) # Matplotlib mplPath start_time = time() path = mpltPath.Path(polygon) inside2 = path.contains_points(points) print(\"Matplotlib contains_points Elapsed time: \" + str(time()-start_time)) which gives, Ray Tracing Elapsed time: 0.441395998001 Matplotlib contains_points Elapsed time: 0.00994491577148 Same relative difference was obtained one using a triangle instead of the 100 sides polygon. I will also check shapely since it looks a package just devoted to these kind of problems.", "output": "You can consider shapely: from shapely.geometry import Point from shapely.geometry.polygon import Polygon point = Point(0.5, 0.5) polygon = Polygon([(0, 0), (0, 1), (1, 1), (1, 0)]) print(polygon.contains(point)) From the methods you've mentioned I've only used the second, path.contains_points, and it works fine. In any case depending on the precision you need for your test I would suggest creating a numpy bool grid with all nodes inside the polygon to be True (False if not). 
If you are going to make a test for a lot of points this might be faster (although notice this relies you are making a test within a \"pixel\" tolerance): from matplotlib import path import matplotlib.pyplot as plt import numpy as np first = -3 size = (3-first)/100 xv,yv = np.meshgrid(np.linspace(-3,3,100),np.linspace(-3,3,100)) p = path.Path([(0,0), (0, 1), (1, 1), (1, 0)]) # square with legs length 1 and bottom left corner at the origin flags = p.contains_points(np.hstack((xv.flatten()[:,np.newaxis],yv.flatten()[:,np.newaxis]))) grid = np.zeros((101,101),dtype='bool') grid[((xv.flatten()-first)/size).astype('int'),((yv.flatten()-first)/size).astype('int')] = flags xi,yi = np.random.randint(-300,300,100)/100,np.random.randint(-300,300,100)/100 vflag = grid[((xi-first)/size).astype('int'),((yi-first)/size).astype('int')] plt.imshow(grid.T,origin='lower',interpolation='nearest',cmap='binary') plt.scatter(((xi-first)/size).astype('int'),((yi-first)/size).astype('int'),c=vflag,cmap='Greens',s=90) plt.show() The result is this:"} +{"question_id": 47533787, "score": 184, "creation_date": 1511878764, "tags": ["python", "python-typing"], "instruction": "Type hinting tuples in Python\n\nWhen I want to typehint a tuple in Python like: def func(var: tuple[int, int]): # do something func((1, 2)) # would be fine func((1, 2, 3)) # would throw an error It is required to give the exact number of items in the tuple. That's different from list typehinting: def func(var: list[int]): # do something func([1]) # would be fine func([1, 2]) # would also be fine func([1, 2, 3]) # would also be fine That's consequentially, in a way, because of the type of tuples. Because they are designed not to be changed, you have to hardcode the amount of items in it. So my question is, is there a way to make the number of items in a tuple type hint flexible? I tried something like that but it didn't work: def func(var: tuple[*int]):", "output": "Yes, you can make the number of items in a tuple type hint flexible. From Python 3.9 and on you can do: def func(var: tuple[int, ...]): pass Before Python 3.9 you would do: from typing import Tuple def func(var: Tuple[int, ...]): pass From the docs: https://docs.python.org/3/library/stdtypes.html#tuple To specify a variable-length tuple of homogeneous type, use literal ellipsis, e.g. tuple[int, ...]. A plain tuple is equivalent to tuple[Any, ...], and in turn to tuple."} +{"question_id": 12876177, "score": 184, "creation_date": 1350156079, "tags": ["python", "tuples"], "instruction": "How to create a \"singleton\" tuple with only one element\n\nIn the below example, I would expect all the elements to be tuples. Why is a tuple converted to a string when it only contains a single string? >>> a = [('a'), ('b'), ('c', 'd')] >>> a ['a', 'b', ('c', 'd')] >>> >>> for elem in a: ... print type(elem) ... <type 'str'> <type 'str'> <type 'tuple'>", "output": "why is a tuple converted to a string when it only contains a single string? a = [('a'), ('b'), ('c', 'd')] Because those first two elements aren't tuples; they're just strings. The parenthesis don't automatically make them tuples. You have to add a comma after the string to indicate to python that it should be a tuple. >>> type( ('a') ) <type 'str'> >>> type( ('a',) ) <type 'tuple'> To fix your example code, add commas here: >>> a = [('a',), ('b',), ('c', 'd')] ^ ^ From the Python Docs: A special problem is the construction of tuples containing 0 or 1 items: the syntax has some extra quirks to accommodate these. 
Empty tuples are constructed by an empty pair of parentheses; a tuple with one item is constructed by following a value with a comma (it is not sufficient to enclose a single value in parentheses). Ugly, but effective. If you truly hate the trailing comma syntax, a workaround is to pass a list to the tuple() function: x = tuple(['a'])"} +{"question_id": 42708389, "score": 183, "creation_date": 1489108263, "tags": ["python", "django", "environment-variables"], "instruction": "How to set environment variables in PyCharm?\n\nI have started to work on a Django project, and I would like to set some environment variables without setting them manually or having a bash file to source. I want to set the following variables: export DATABASE_URL=postgres://127.0.0.1:5432/my_db_name export DEBUG=1 # there are other variables, but they contain personal information I have read this, but that does not solve what I want. In addition, I have tried setting the environment variables in Preferences-> Build, Execution, Deployment->Console->Python Console/Django Console, but it sets the variables for the interpreter.", "output": "You can set environmental variables in Pycharm's run configurations menu. Open the Run Configuration selector in the top-right and click Edit Configurations... Select the correct file from the menu, find Environmental variables and click ... Add or change variables, then click OK You can access your environmental variables with os.environ import os print(os.environ['SOME_VAR'])"} +{"question_id": 76187256, "score": 183, "creation_date": 1683349902, "tags": ["python", "openai-api", "urllib3"], "instruction": "ImportError: urllib3 v2.0 only supports OpenSSL 1.1.1+, currently the 'ssl' module is compiled with LibreSSL 2.8.3\n\nAfter pip install openai, when I try to import openai, it shows this error: the 'ssl' module of urllib3 is compile with LibreSSL not OpenSSL I just followed a tutorial on a project about using API of OpenAI. But when I get to the first step which is the install and import OpenAI, I got stuck. And I tried to find the solution for this error but I found nothing. Here is the message after I try to import OpenAI: Python 3.9.6 (default, Mar 10 2023, 20:16:38) [Clang 14.0.3 (clang-1403.0.22.14.1)] on darwin Type \"help\", \"copyright\", \"credits\" or \"license\" for more information. >>> import openai Traceback (most recent call last): File \"<stdin>\", line 1, in <module> File \"/Users/yule/Library/Python/3.9/lib/python/site-packages/openai/__init__.py\", line 19, in <module> from openai.api_resources import ( File \"/Users/mic/Library/Python/3.9/lib/python/site-packages/openai/api_resources/__init__.py\", line 1, in <module> from openai.api_resources.audio import Audio # noqa: F401 File \"/Users/mic/Library/Python/3.9/lib/python/site-packages/openai/api_resources/audio.py\", line 4, in <module> from openai import api_requestor, util File \"/Users/mic/Library/Python/3.9/lib/python/site-packages/openai/api_requestor.py\", line 22, in <module> import requests File \"/Users/mic/Library/Python/3.9/lib/python/site-packages/requests/__init__.py\", line 43, in <module> import urllib3 File \"/Users/mic/Library/Python/3.9/lib/python/site-packages/urllib3/__init__.py\", line 38, in <module> raise ImportError( ImportError: urllib3 v2.0 only supports OpenSSL 1.1.1+, currently the 'ssl' module is compiled with LibreSSL 2.8.3. See: https://github.com/urllib3/urllib3/issues/2168 I tried to --upgrade the urllib3, but it is still not working. 
The result is: pip3 install --upgrade urllib3 Defaulting to user installation because normal site-packages is not writeable Requirement already satisfied: urllib3 in ./Library/Python/3.9/lib/python/site-packages (2.0.2)", "output": "The reason why the error message mentioned OpenSSL 1.1.1+ and LibreSSL 2.8.3 is that urllib3 v2.0 (the version you've installed) requires OpenSSL 1.1.1+ to work properly, as it relies on some new features of OpenSSL 1.1.1. The issue is that the version of the 'ssl' module that is currently installed in your environment is compiled with LibreSSL 2.8.3, which is not compatible with urllib3 v2.0. To use urllib3 v2.0, you need an 'ssl' module compiled with OpenSSL 1.1.1 or later, which you can try to get with: brew install openssl@1.1 Or you could use an older version of urllib3 that is compatible, such as urllib3 v1.26.6, which does not have a strict OpenSSL version requirement. You can force that version by installing it with this command: pip install urllib3==1.26.6"} +{"question_id": 8187082, "score": 183, "creation_date": 1321640304, "tags": ["python"], "instruction": "How can you set class attributes from variable arguments (kwargs) in python\n\nSuppose I have a class with a constructor (or other function) that takes a variable number of arguments and then sets them as class attributes conditionally. I could set them manually, but it seems that variable parameters are common enough in python that there should be a common idiom for doing this. But I'm not sure how to do this dynamically. I have an example using eval, but that's hardly safe. I want to know the proper way to do this -- maybe with lambda? class Foo: def setAllManually(self, a=None, b=None, c=None): if a!=None: self.a = a if b!=None: self.b = b if c!=None: self.c = c def setAllWithEval(self, **kwargs): for key in **kwargs: if kwargs[param] != None eval(\"self.\" + key + \"=\" + kwargs[param])", "output": "You could update the __dict__ attribute (which represents the instance attributes in the form of a dictionary) with the keyword arguments: class Bar(object): def __init__(self, **kwargs): self.__dict__.update(kwargs) then you can: >>> bar = Bar(a=1, b=2) >>> bar.a 1 and with something like: allowed_keys = {'a', 'b', 'c'} self.__dict__.update((k, v) for k, v in kwargs.items() if k in allowed_keys) you could filter the keys beforehand (use iteritems instead of items if you\u2019re still using Python 2.x)."} +{"question_id": 283645, "score": 183, "creation_date": 1226488712, "tags": ["python", "sql"], "instruction": "Python list in SQL query as parameter\n\nI have a Python list, say l = [1,5,8] I want to write a SQL query to get the data for all the elements of the list, say select name from students where id = |IN THE LIST l| How do I accomplish this?", "output": "Answers so far have been templating the values into a plain SQL string. That's absolutely fine for integers, but if we wanted to do it for strings we get the escaping issue. Here's a variant using a parameterised query that would work for both: placeholder= '?' # For SQLite. See DBAPI paramstyle. 
placeholders= ', '.join(placeholder for unused in l) query= 'SELECT name FROM students WHERE id IN (%s)' % placeholders cursor.execute(query, l)"} +{"question_id": 3749512, "score": 183, "creation_date": 1284969001, "tags": ["python", "group-by"], "instruction": "Python group by\n\nAssume that I have a set of data pair where index 0 is the value and index 1 is the type: input = [ ('11013331', 'KAT'), ('9085267', 'NOT'), ('5238761', 'ETH'), ('5349618', 'ETH'), ('11788544', 'NOT'), ('962142', 'ETH'), ('7795297', 'ETH'), ('7341464', 'ETH'), ('9843236', 'KAT'), ('5594916', 'ETH'), ('1550003', 'ETH') ] I want to group them by their type (by the 1st indexed string) as such: result = [ { 'type': 'KAT', 'items': ['11013331', '9843236'] }, { 'type': 'NOT', 'items': ['9085267', '11788544'] }, { 'type': 'ETH', 'items': ['5238761', '962142', '7795297', '7341464', '5594916', '1550003'] } ] How can I achieve this in an efficient way?", "output": "Do it in 2 steps. First, create a dictionary. >>> input = [('11013331', 'KAT'), ('9085267', 'NOT'), ('5238761', 'ETH'), ('5349618', 'ETH'), ('11788544', 'NOT'), ('962142', 'ETH'), ('7795297', 'ETH'), ('7341464', 'ETH'), ('9843236', 'KAT'), ('5594916', 'ETH'), ('1550003', 'ETH')] >>> from collections import defaultdict >>> res = defaultdict(list) >>> for v, k in input: res[k].append(v) ... Then, convert that dictionary into the expected format. >>> [{'type':k, 'items':v} for k,v in res.items()] [{'items': ['9085267', '11788544'], 'type': 'NOT'}, {'items': ['5238761', '5349618', '962142', '7795297', '7341464', '5594916', '1550003'], 'type': 'ETH'}, {'items': ['11013331', '9843236'], 'type': 'KAT'}] It is also possible with itertools.groupby but it requires the input to be sorted first. >>> sorted_input = sorted(input, key=itemgetter(1)) >>> groups = groupby(sorted_input, key=itemgetter(1)) >>> [{'type':k, 'items':[x[0] for x in v]} for k, v in groups] [{'items': ['5238761', '5349618', '962142', '7795297', '7341464', '5594916', '1550003'], 'type': 'ETH'}, {'items': ['11013331', '9843236'], 'type': 'KAT'}, {'items': ['9085267', '11788544'], 'type': 'NOT'}] Note: before python 3.7, both of these do not respect the original order of the keys. You need an OrderedDict if you need to keep the order. >>> from collections import OrderedDict >>> res = OrderedDict() >>> for v, k in input: ... if k in res: res[k].append(v) ... else: res[k] = [v] ... >>> [{'type':k, 'items':v} for k,v in res.items()] [{'items': ['11013331', '9843236'], 'type': 'KAT'}, {'items': ['9085267', '11788544'], 'type': 'NOT'}, {'items': ['5238761', '5349618', '962142', '7795297', '7341464', '5594916', '1550003'], 'type': 'ETH'}] On or after python 3.7, a regular dict keeps insertion order."} +{"question_id": 38601, "score": 183, "creation_date": 1220311375, "tags": ["python", "django"], "instruction": "Using Django time/date widgets in custom form\n\nHow can I use the nifty JavaScript date and time widgets that the default admin uses with my custom view? I have looked through the Django forms documentation, and it briefly mentions django.contrib.admin.widgets, but I don't know how to use it? Here is my template that I want it applied on. <form action=\".\" method=\"POST\"> <table> {% for f in form %} <tr> <td> {{ f.name }}</td> <td>{{ f }}</td> </tr> {% endfor %} </table> <input type=\"submit\" name=\"submit\" value=\"Add Product\"> </form> Also, I think it should be noted that I haven't really written a view up myself for this form, I am using a generic view. 
Here is the entry from the url.py: (r'^admin/products/add/$', create_object, {'model': Product, 'post_save_redirect': ''}), And I am relevantly new to the whole Django/MVC/MTV thing, so please go easy...", "output": "The growing complexity of this answer over time, and the many hacks required, probably ought to caution you against doing this at all. It's relying on undocumented internal implementation details of the admin, is likely to break again in future versions of Django, and is no easier to implement than just finding another JS calendar widget and using that. That said, here's what you have to do if you're determined to make this work: Define your own ModelForm subclass for your model (best to put it in forms.py in your app), and tell it to use the AdminDateWidget / AdminTimeWidget / AdminSplitDateTime (replace 'mydate' etc with the proper field names from your model): from django import forms from my_app.models import Product from django.contrib.admin import widgets class ProductForm(forms.ModelForm): class Meta: model = Product def __init__(self, *args, **kwargs): super(ProductForm, self).__init__(*args, **kwargs) self.fields['mydate'].widget = widgets.AdminDateWidget() self.fields['mytime'].widget = widgets.AdminTimeWidget() self.fields['mydatetime'].widget = widgets.AdminSplitDateTime() Change your URLconf to pass 'form_class': ProductForm instead of 'model': Product to the generic create_object view (that'll mean from my_app.forms import ProductForm instead of from my_app.models import Product, of course). In the head of your template, include {{ form.media }} to output the links to the Javascript files. And the hacky part: the admin date/time widgets presume that the i18n JS stuff has been loaded, and also require core.js, but don't provide either one automatically. So in your template above {{ form.media }} you'll need: <script type=\"text/javascript\" src=\"/my_admin/jsi18n/\"></script> <script type=\"text/javascript\" src=\"/media/admin/js/core.js\"></script> You may also wish to use the following admin CSS (thanks Alex for mentioning this): <link rel=\"stylesheet\" type=\"text/css\" href=\"/media/admin/css/forms.css\"/> <link rel=\"stylesheet\" type=\"text/css\" href=\"/media/admin/css/base.css\"/> <link rel=\"stylesheet\" type=\"text/css\" href=\"/media/admin/css/global.css\"/> <link rel=\"stylesheet\" type=\"text/css\" href=\"/media/admin/css/widgets.css\"/> This implies that Django's admin media (ADMIN_MEDIA_PREFIX) is at /media/admin/ - you can change that for your setup. Ideally you'd use a context processor to pass this values to your template instead of hardcoding it, but that's beyond the scope of this question. This also requires that the URL /my_admin/jsi18n/ be manually wired up to the django.views.i18n.javascript_catalog view (or null_javascript_catalog if you aren't using I18N). You have to do this yourself instead of going through the admin application so it's accessible regardless of whether you're logged into the admin (thanks Jeremy for pointing this out). Sample code for your URLconf: (r'^my_admin/jsi18n', 'django.views.i18n.javascript_catalog'), Lastly, if you are using Django 1.2 or later, you need some additional code in your template to help the widgets find their media: {% load adminmedia %} /* At the top of the template. */ /* In the head section of the template. 
*/ <script type=\"text/javascript\"> window.__admin_media_prefix__ = \"{% filter escapejs %}{% admin_media_prefix %}{% endfilter %}\"; </script> Thanks lupefiasco for this addition."} +{"question_id": 71673404, "score": 183, "creation_date": 1648626375, "tags": ["python", "python-black"], "instruction": "ImportError: cannot import name '_unicodefun' from 'click'\n\nWhen running our lint checks with the Python Black package, an error comes up: ImportError: cannot import name '_unicodefun' from 'click' (/Users/robot/.cache/pre-commit/repo3u71ccm2/py_env-python3.9/lib/python3.9/site-packages/click/init.py)` In researching this, I found the following related issues: ImportError: cannot import name '_unicodefun' from 'click' #2976 ImportError: cannot import name '_unicodefun' from 'click' #6013 How can I solve this problem? Is this a false positive from the linter? Do I need to modify my code?", "output": "This has been fixed by Black 22.3.0. Versions before that won't work with click 8.1.0. Incompatible with click 8.1.0 (ImportError: cannot import name '_unicodefun' from 'click') #2964 E.g.: black.yml python-version: 3.8 - name: install black run: | - pip install black==20.8b1 + pip install black==22.3.0 - name: run black run: | black . --check --line-length 100 https://github.com/Clinical-Genomics/cgbeacon2/pull/221/files As a workaround, pin click to the last version via pip install --upgrade click==8.0.2."} +{"question_id": 6593979, "score": 183, "creation_date": 1309943317, "tags": ["python", "list", "set", "python-2.6"], "instruction": "How to convert a set to a list in python?\n\nI am trying to convert a set to a list in Python 2.6. I'm using this syntax: first_list = [1,2,3,4] my_set=set(first_list) my_list = list(my_set) However, I get the following stack trace: Traceback (most recent call last): File \"<console>\", line 1, in <module> TypeError: 'set' object is not callable How can I fix this?", "output": "It is already a list: >>> type(my_set) <class 'list'> Do you want something like: >>> my_set = set([1, 2, 3, 4]) >>> my_list = list(my_set) >>> print(my_list) [1, 2, 3, 4] EDIT: Output of your last comment: >>> my_list = [1,2,3,4] >>> my_set = set(my_list) >>> my_new_list = list(my_set) >>> print(my_new_list) [1, 2, 3, 4] I'm wondering if you did something like this: >>> set = set() >>> set([1, 2]) Traceback (most recent call last): File \"<stdin>\", line 1, in <module> TypeError: 'set' object is not callable"} +{"question_id": 70837397, "score": 183, "creation_date": 1643042804, "tags": ["python", "pandas", "dataframe", "data-wrangling", "data-munging"], "instruction": "Good alternative to Pandas .append() method, now that it has been deprecated?\n\nI use the following method a lot to append a single row to a dataframe. However, it has been deprecated. One thing I really like about it is that it allows you to append a simple dict object. For example: # Creating an empty dataframe df = pd.DataFrame(columns=['a', 'b']) # Appending a row df = df.append({ 'a': 1, 'b': 2 }, ignore_index=True) Again, what I like most about this is that the code is very clean and requires very few lines. Now I suppose the recommended alternative is: # Create the new row as its own dataframe df_new_row = pd.DataFrame({ 'a': [1], 'b': [2] }) df = pd.concat([df, df_new_row]) So what was one line of code before is now two lines with a throwaway variable and extra cruft where I create the new dataframe. 
:( Is there a good way to do this that just uses a dict like I have in the past (that is not deprecated)?", "output": "Create a list with your dictionaries, if they are needed, and then create a new dataframe with df = pd.DataFrame.from_records(your_list). List's \"append\" method is very efficient and won't ever be deprecated. Dataframes, on the other hand, frequently have to be recreated and all data copied over on appends, due to their design - that is why they deprecated the method."} +{"question_id": 24886625, "score": 182, "creation_date": 1406029832, "tags": ["python", "matplotlib", "pycharm"], "instruction": "Pycharm does not show plot\n\nPycharm does not show plot from the following code: import pandas as pd import numpy as np import matplotlib as plt ts = pd.Series(np.random.randn(1000), index=pd.date_range('1/1/2000', periods=1000)) ts = ts.cumsum() ts.plot() What happens is that a window appears for less than a second, and then disappears again. Using the Pyzo IEP IDE (using same interpreter) on the same code the plot shows as expected. ...So the problem must be with some setting on Pycharm. I've tried using both python.exe and pythonw.exe as interpreter both with same results. This is my sys_info: C:\\pyzo2014a\\pythonw.exe -u C:\\Program Files (x86)\\JetBrains\\PyCharm Community Edition 3.4.1\\helpers\\pydev\\pydevconsole.py 57315 57316 PyDev console: using IPython 2.1.0import sys; print('Python %s on %s' % (sys.version, sys.platform)) Python 3.4.1 |Continuum Analytics, Inc.| (default, May 19 2014, 13:02:30) [MSC v.1600 64 bit (AMD64)] on win32 sys.path.extend(['C:\\\\Users\\\\Rasmus\\\\PycharmProjects\\\\untitled2']) In[3]: import IPython print(IPython.sys_info()) {'commit_hash': '681fd77', 'commit_source': 'installation', 'default_encoding': 'UTF-8', 'ipython_path': 'C:\\\\pyzo2014a\\\\lib\\\\site-packages\\\\IPython', 'ipython_version': '2.1.0', 'os_name': 'nt', 'platform': 'Windows-8-6.2.9200', 'sys_executable': 'C:\\\\pyzo2014a\\\\pythonw.exe', 'sys_platform': 'win32', 'sys_version': '3.4.1 |Continuum Analytics, Inc.| (default, May 19 2014, ' '13:02:30) [MSC v.1600 64 bit (AMD64)]'}", "output": "Just use import matplotlib.pyplot as plt plt.show() This command tells the system to draw the plot in Pycharm. Example: plt.imshow(img.reshape((28, 28))) plt.show()"} +{"question_id": 17531684, "score": 182, "creation_date": 1373301331, "tags": ["python", "string", "nltk", "n-gram"], "instruction": "n-grams in python, four, five, six grams?\n\nI'm looking for a way to split a text into n-grams. Normally I would do something like: import nltk from nltk import bigrams string = \"I really like python, it's pretty awesome.\" string_bigrams = bigrams(string) print string_bigrams I am aware that nltk only offers bigrams and trigrams, but is there a way to split my text in four-grams, five-grams or even hundred-grams? Thanks!", "output": "Great native python based answers given by other users. But here's the nltk approach (just in case the OP gets penalized for reinventing what already exists in the nltk library). There is an ngram module that people seldom use in nltk. It's not because it's hard to read ngrams, but training a model based on ngrams where n > 3 will result in much data sparsity.
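(Editor's note, not part of the original answer: the "native python" approach the answer alludes to can be sketched with zip over offset slices, assuming simple whitespace tokenization; the function name is hypothetical. The nltk snippet from the original answer follows below.)
def simple_ngrams(tokens, n):
    # zip together n shifted views of the token list; each tuple is one n-gram
    return list(zip(*(tokens[i:] for i in range(n))))

print(simple_ngrams('this is a foo bar sentences and I want to ngramize it'.split(), 6))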
from nltk import ngrams sentence = 'this is a foo bar sentences and I want to ngramize it' n = 6 sixgrams = ngrams(sentence.split(), n) for grams in sixgrams: print(grams)"} +{"question_id": 28439701, "score": 182, "creation_date": 1423594938, "tags": ["python", "arrays", "numpy"], "instruction": "How to save and load numpy.array() data properly?\n\nI wonder, how to save and load numpy.array data properly. Currently I'm using the numpy.savetxt() method. For example, if I got an array markers, which looks like this: I try to save it by the use of: numpy.savetxt('markers.txt', markers) In other script I try to open previously saved file: markers = np.fromfile(\"markers.txt\") And that's what I get... Saved data first looks like this: 0.000000000000000000e+00 0.000000000000000000e+00 0.000000000000000000e+00 0.000000000000000000e+00 0.000000000000000000e+00 0.000000000000000000e+00 0.000000000000000000e+00 0.000000000000000000e+00 0.000000000000000000e+00 0.000000000000000000e+00 But when I save just loaded data by the use of the same method, ie. numpy.savetxt() it looks like this: 1.398043286095131769e-76 1.398043286095288860e-76 1.396426376485745879e-76 1.398043286055061908e-76 1.398043286095288860e-76 1.182950697433698368e-76 1.398043275797188953e-76 1.398043286095288860e-76 1.210894289234927752e-99 1.398040649781712473e-76 What am I doing wrong? PS there are no other \"backstage\" operation which I perform. Just saving and loading, and that's what I get. Thank you in advance.", "output": "The most reliable way I have found to do this is to use np.savetxt with np.loadtxt and not np.fromfile which is better suited to binary files written with tofile. The np.fromfile and np.tofile methods write and read binary files whereas np.savetxt writes a text file. So, for example: a = np.array([1, 2, 3, 4]) np.savetxt('test1.txt', a, fmt='%d') b = np.loadtxt('test1.txt', dtype=int) a == b # array([ True, True, True, True], dtype=bool) Or: a.tofile('test2.dat') c = np.fromfile('test2.dat', dtype=int) c == a # array([ True, True, True, True], dtype=bool) I use the former method even if it is slower and creates bigger files (sometimes): the binary format can be platform dependent (for example, the file format depends on the endianness of your system). There is a platform independent format for NumPy arrays, which can be saved and read with np.save and np.load: np.save('test3.npy', a) # .npy extension is added if not given d = np.load('test3.npy') a == d # array([ True, True, True, True], dtype=bool)"} +{"question_id": 2451386, "score": 182, "creation_date": 1268698901, "tags": ["python", "operators", "caret"], "instruction": "What does the caret (^) operator do?\n\nI ran across the caret operator in python today and trying it out, I got the following output: >>> 8^3 11 >>> 8^4 12 >>> 8^1 9 >>> 8^0 8 >>> 7^1 6 >>> 7^2 5 >>> 7^7 0 >>> 7^8 15 >>> 9^1 8 >>> 16^1 17 >>> 15^1 14 >>> It seems to be based on 8, so I'm guessing some sort of byte operation? I can't seem to find much about this searching sites other than it behaves oddly for floats, does anybody have a link to what this operator does or can you explain it here?", "output": "It's a bitwise XOR (exclusive OR). It evaluates to True if and only if its arguments differ (one is True, the other is False). 
To demonstrate: >>> 0^0 0 >>> 1^1 0 >>> 1^0 1 >>> 0^1 1 To explain one of your own examples: >>> 8^3 11 Think about it this way: 1000 # 8 (binary) 0011 # 3 (binary) ---- # APPLY XOR ('vertically') 1011 # result = 11 (binary)"} +{"question_id": 42370977, "score": 182, "creation_date": 1487689624, "tags": ["python", "pandas", "openpyxl", "xlsxwriter"], "instruction": "How to save a new sheet in an existing excel file, using Pandas?\n\nI want to use excel files to store data elaborated with python. My problem is that I can't add sheets to an existing excel file. Here I suggest a sample code to work with in order to reach this issue import pandas as pd import numpy as np path = r\"C:\\Users\\fedel\\Desktop\\excelData\\PhD_data.xlsx\" x1 = np.random.randn(100, 2) df1 = pd.DataFrame(x1) x2 = np.random.randn(100, 2) df2 = pd.DataFrame(x2) writer = pd.ExcelWriter(path, engine = 'xlsxwriter') df1.to_excel(writer, sheet_name = 'x1') df2.to_excel(writer, sheet_name = 'x2') writer.save() writer.close() This code saves two DataFrames to two sheets, named \"x1\" and \"x2\" respectively. If I create two new DataFrames and try to use the same code to add two new sheets, 'x3' and 'x4', the original data is lost. import pandas as pd import numpy as np path = r\"C:\\Users\\fedel\\Desktop\\excelData\\PhD_data.xlsx\" x3 = np.random.randn(100, 2) df3 = pd.DataFrame(x3) x4 = np.random.randn(100, 2) df4 = pd.DataFrame(x4) writer = pd.ExcelWriter(path, engine = 'xlsxwriter') df3.to_excel(writer, sheet_name = 'x3') df4.to_excel(writer, sheet_name = 'x4') writer.save() writer.close() I want an excel file with four sheets: 'x1', 'x2', 'x3', 'x4'. I know that 'xlsxwriter' is not the only \"engine\", there is 'openpyxl'. I also saw there are already other people that have written about this issue, but still I can't understand how to do that. Here is a code taken from this link import pandas from openpyxl import load_workbook book = load_workbook('Masterfile.xlsx') writer = pandas.ExcelWriter('Masterfile.xlsx', engine='openpyxl') writer.book = book writer.sheets = dict((ws.title, ws) for ws in book.worksheets) data_filtered.to_excel(writer, \"Main\", cols=['Diff1', 'Diff2']) writer.save() They say that it works, but it is hard to figure out how. I don't understand what \"ws.title\", \"ws\", and \"dict\" are in this context. Which is the best way to save \"x1\" and \"x2\", then close the file, open it again and add \"x3\" and \"x4\"?", "output": "Thank you. I believe that a complete example could be good for anyone else who has the same issue: import pandas as pd import numpy as np path = r\"C:\\Users\\fedel\\Desktop\\excelData\\PhD_data.xlsx\" x1 = np.random.randn(100, 2) df1 = pd.DataFrame(x1) x2 = np.random.randn(100, 2) df2 = pd.DataFrame(x2) writer = pd.ExcelWriter(path, engine = 'xlsxwriter') df1.to_excel(writer, sheet_name = 'x1') df2.to_excel(writer, sheet_name = 'x2') writer.close() Here I generate an excel file; from my understanding, it does not really matter whether it is generated via the \"xlsxwriter\" or the \"openpyxl\" engine.
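(Editor's note, not part of the original answer: on newer pandas versions, roughly 0.24 and later, ExcelWriter accepts mode='a' with the openpyxl engine, which avoids the manual workbook-loading shown next. A minimal sketch, assuming the workbook at the hypothetical path already exists:)
import pandas as pd
import numpy as np

path = "PhD_data.xlsx"  # assumed to be an existing workbook
df3 = pd.DataFrame(np.random.randn(100, 2))
# open the existing file in append mode and add a new sheet
with pd.ExcelWriter(path, engine="openpyxl", mode="a") as writer:
    df3.to_excel(writer, sheet_name="x3")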
When I want to write without losing the original data, then: import pandas as pd import numpy as np from openpyxl import load_workbook path = r\"C:\\Users\\fedel\\Desktop\\excelData\\PhD_data.xlsx\" book = load_workbook(path) writer = pd.ExcelWriter(path, engine = 'openpyxl') writer.book = book x3 = np.random.randn(100, 2) df3 = pd.DataFrame(x3) x4 = np.random.randn(100, 2) df4 = pd.DataFrame(x4) df3.to_excel(writer, sheet_name = 'x3') df4.to_excel(writer, sheet_name = 'x4') writer.close() this code does the job!"} +{"question_id": 10517765, "score": 181, "creation_date": 1336572343, "tags": ["python", "django", "django-debug-toolbar"], "instruction": "django-debug-toolbar not showing up\n\nI looked at other questions and can't figure it out... I did the following to install django-debug-toolbar: pip install django-debug-toolbar added to middleware classes: MIDDLEWARE_CLASSES = ( 'django.middleware.common.CommonMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', # Uncomment the next line for simple clickjacking protection: # 'django.middleware.clickjacking.XFrameOptionsMiddleware', 'debug_toolbar.middleware.DebugToolbarMiddleware', ) 3 Added INTERNAL_IPS: INTERNAL_IPS = ('174.121.34.187',) 4 Added debug_toolbar to installed apps I am not getting any errors or anything, and the toolbar doesn't show up on any page, not even admin. I even added the directory of the debug_toolbar templates to my TEMPLATE_DIRS", "output": "What is DEBUG set to? It won't load unless it's True. If it's still not working, try adding '127.0.0.1' to INTERNAL_IPS as well. UPDATE This is a last-ditch-effort move, you shouldn't have to do this, but it will clearly show if there's merely some configuration issue or whether there's some larger issue. Add the following to settings.py: def show_toolbar(request): return True SHOW_TOOLBAR_CALLBACK = show_toolbar That will effectively remove all checks by debug toolbar to determine if it should or should not load itself; it will always just load. Only leave that in for testing purposes; if you forget and launch with it, all your visitors will get to see your debug toolbar too. For explicit configuration, also see the official install docs here. EDIT(6/17/2015): Apparently the syntax for the nuclear option has changed. It's now in its own dictionary: def show_toolbar(request): return True DEBUG_TOOLBAR_CONFIG = { \"SHOW_TOOLBAR_CALLBACK\" : show_toolbar, } Their tests use this dictionary."} +{"question_id": 18267749, "score": 181, "creation_date": 1376637057, "tags": ["python", "google-app-engine", "google-translate", "google-api-python-client", "google-translation-api"], "instruction": "Why is my Python App Engine app using the Translate API getting an error of ImportError: No module named apiclient.discovery?\n\nI got this error in Google App Engine's Python while using the Google Translate API, but I don't know how to fix it: <module> from apiclient.discovery import build ImportError: No module named apiclient.discovery I tried to set the environment to point to the Google App Engine SDK and uploaded to Google App Engine again, but I always get the error: Error: Server Error The server encountered an error and could not complete your request. If the problem persists, please report your problem and mention this error message and the query that caused it.
What is wrong, and how do I fix it?", "output": "You should be able to get these dependencies with this simple install: pip install --upgrade google-api-python-client This is described on the quick start page for python."} +{"question_id": 7300321, "score": 181, "creation_date": 1315150956, "tags": ["python", "download", "pip", "zip"], "instruction": "How to use Python's pip to download and keep the zipped files for a package?\n\nIf I want to use the pip command to download a package (and its dependencies), but keep all of the zipped files that get downloaded (say, django-socialregistration.tar.gz) - is there a way to do that? I've tried various command-line options, but it always seems to unpack and delete the zipfile - or it gets the zipfile, but only for the original package, not the dependencies.", "output": "pip install --download is deprecated. Starting from version 8.0.0 you should use pip download command: pip download <package-name>"} +{"question_id": 32688688, "score": 181, "creation_date": 1442817006, "tags": ["python", "django", "git", "packaging", "setuptools"], "instruction": "How to write setup.py to include a Git repository as a dependency\n\nI am trying to write setup.py for my package. My package needs to specify a dependency on another Git repository. This is what I have so far: from setuptools import setup, find_packages setup( name='abc', packages=find_packages(), url='https://github.abc.com/abc/myabc', description='This is a description for abc', long_description=open('README.md').read(), install_requires=[ \"requests==2.7.0\", \"SomePrivateLib>=0.1.0\", ], dependency_links = [ \"git+git://github.abc.com/abc/SomePrivateLib.git#egg=SomePrivateLib\", ], include_package_data=True, ) When I run: pip install -e https://github.abc.com/abc/myabc.git#egg=analyse I get Could not find a version that satisfies the requirement SomePrivateLib>=0.1.0 (from analyse) (from versions: ) No matching distribution found for SomePrivateLib>=0.1.0 (from analyse) What am I doing wrong?", "output": "Note: this answer is now outdated. Have a look at this answer for up-to-date instructions: https://stackoverflow.com/a/54701434/212774 You can find the right way to do it here. dependency_links=['http://github.com/user/repo/tarball/master#egg=package-1.0'] The key is not to give a link to a Git repository, but a link to a tarball. GitHub creates a tarball of the master branch for you if you append /tarball/master as shown above."} +{"question_id": 6506578, "score": 181, "creation_date": 1309265030, "tags": ["python", "sqlalchemy"], "instruction": "How to create a new database using SQLAlchemy?\n\nUsing SQLAlchemy, an Engine object is created like this: from sqlalchemy import create_engine engine = create_engine(\"postgresql://localhost/mydb\") Accessing engine fails if the database specified in the argument to create_engine (in this case, mydb) does not exist. Is it possible to tell SQLAlchemy to create a new database if the specified database doesn't exist?", "output": "On postgres, three databases are normally present by default. If you are able to connect as a superuser (eg, the postgres role), then you can connect to the postgres or template1 databases. The default pg_hba.conf permits only the unix user named postgres to use the postgres role, so the simplest thing is to just become that user. 
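(Editor's aside, not part of the original answer: if you would rather not manage the commit dance by hand, the separate sqlalchemy-utils package wraps this up; a minimal sketch, assuming that package is installed:)
from sqlalchemy import create_engine
from sqlalchemy_utils import database_exists, create_database

engine = create_engine("postgresql://localhost/mydb")
# create the database only if it is not already there
if not database_exists(engine.url):
    create_database(engine.url)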
At any rate, create an engine as usual with a user that has the permissions to create a database: >>> engine = sqlalchemy.create_engine(\"postgresql://postgres@/postgres\") You cannot use engine.execute() however, because postgres does not allow you to create databases inside transactions, and sqlalchemy always tries to run queries in a transaction. To get around this, get the underlying connection from the engine: >>> conn = engine.connect() But the connection will still be inside a transaction, so you have to end the open transaction with a commit: >>> conn.execute(\"commit\") And you can then proceed to create the database using the proper PostgreSQL command for it. >>> conn.execute(\"create database test\") >>> conn.close()"} +{"question_id": 25540259, "score": 181, "creation_date": 1409196551, "tags": ["python", "matplotlib"], "instruction": "Remove or adapt border of frame of legend using matplotlib\n\nWhen plotting a plot using matplotlib: How to remove the box of the legend? How to change the color of the border of the legend box? How to remove only the border of the box of the legend?", "output": "When plotting a plot using matplotlib: How to remove the box of the legend? plt.legend(frameon=False) How to change the color of the border of the legend box? leg = plt.legend() leg.get_frame().set_edgecolor('b') How to remove only the border of the box of the legend? leg = plt.legend() leg.get_frame().set_linewidth(0.0) For the matplotlib object oriented approach: axes.legend(frameon=False) leg = axes.legend() leg.get_frame().set_edgecolor('b') leg.get_frame().set_linewidth(0.0)"} +{"question_id": 15455048, "score": 181, "creation_date": 1363470299, "tags": ["python", "memory-management"], "instruction": "Releasing memory in Python\n\nI have a few related questions regarding memory usage in the following example. If I run in the interpreter, foo = ['bar' for _ in xrange(10000000)] the real memory used on my machine goes up to 80.9mb. I then, del foo real memory goes down, but only to 30.4mb. The interpreter uses 4.4mb baseline so what is the advantage in not releasing 26mb of memory to the OS? Is it because Python is \"planning ahead\", thinking that you may use that much memory again? Why does it release 50.5mb in particular - what is the amount that is released based on? Is there a way to force Python to release all the memory that was used (if you know you won't be using that much memory again)? NOTE This question is different from How can I explicitly free memory in Python? because this question primarily deals with the increase of memory usage from baseline even after the interpreter has freed objects via garbage collection (with use of gc.collect or not).", "output": "Memory allocated on the heap can be subject to high-water marks. This is complicated by Python's internal optimizations for allocating small objects (PyObject_Malloc) in 4 KiB pools, classed for allocation sizes at multiples of 8 bytes -- up to 256 bytes (512 bytes in 3.3). The pools themselves are in 256 KiB arenas, so if just one block in one pool is used, the entire 256 KiB arena will not be released. In Python 3.3 the small object allocator was switched to using anonymous memory maps instead of the heap, so it should perform better at releasing memory. Additionally, the built-in types maintain freelists of previously allocated objects that may or may not use the small object allocator. The int type maintains a freelist with its own allocated memory, and clearing it requires calling PyInt_ClearFreeList(). 
This can be called indirectly by doing a full gc.collect. Try it like this, and tell me what you get. Here's the link for psutil.Process.memory_info. import os import gc import psutil proc = psutil.Process(os.getpid()) gc.collect() mem0 = proc.memory_info().rss # create approx. 10**7 int objects and pointers foo = ['abc' for x in range(10**7)] mem1 = proc.memory_info().rss # unreference, including x == 9999999 del foo, x mem2 = proc.memory_info().rss # collect() calls PyInt_ClearFreeList() # or use ctypes: pythonapi.PyInt_ClearFreeList() gc.collect() mem3 = proc.memory_info().rss pd = lambda x2, x1: 100.0 * (x2 - x1) / mem0 print \"Allocation: %0.2f%%\" % pd(mem1, mem0) print \"Unreference: %0.2f%%\" % pd(mem2, mem1) print \"Collect: %0.2f%%\" % pd(mem3, mem2) print \"Overall: %0.2f%%\" % pd(mem3, mem0) Output: Allocation: 3034.36% Unreference: -752.39% Collect: -2279.74% Overall: 2.23% Edit: I switched to measuring relative to the process VM size to eliminate the effects of other processes in the system. The C runtime (e.g. glibc, msvcrt) shrinks the heap when contiguous free space at the top reaches a constant, dynamic, or configurable threshold. With glibc you can tune this with mallopt (M_TRIM_THRESHOLD). Given this, it isn't surprising if the heap shrinks by more -- even a lot more -- than the block that you free. In 3.x range doesn't create a list, so the test above won't create 10 million int objects. Even if it did, the int type in 3.x is basically a 2.x long, which doesn't implement a freelist."} +{"question_id": 4391697, "score": 181, "creation_date": 1291838435, "tags": ["python"], "instruction": "Find the index of a dict within a list, by matching the dict's value\n\nI have a list of dicts: list = [{'id':'1234','name':'Jason'}, {'id':'2345','name':'Tom'}, {'id':'3456','name':'Art'}] How can I efficiently find the index position [0],[1], or [2] by matching on name = 'Tom'? If this were a one-dimensional list I could do list.index() but I'm not sure how to proceed by searching the values of the dicts within the list.", "output": "lst = [{'id':'1234','name':'Jason'}, {'id':'2345','name':'Tom'}, {'id':'3456','name':'Art'}] tom_index = next((index for (index, d) in enumerate(lst) if d[\"name\"] == \"Tom\"), None) # 1 If you need to fetch repeatedly from name, you should index them by name (using a dictionary), this way get operations would be O(1) time. An idea: def build_dict(seq, key): return dict((d[key], dict(d, index=index)) for (index, d) in enumerate(seq)) people_by_name = build_dict(lst, key=\"name\") tom_info = people_by_name.get(\"Tom\") # {'index': 1, 'id': '2345', 'name': 'Tom'}"} +{"question_id": 36172913, "score": 181, "creation_date": 1458721073, "tags": ["python", "opencv", "stereo-3d", "disparity-mapping"], "instruction": "OpenCV \u2013 Depth map from Uncalibrated Stereo System\n\nI'm trying to get a depth map with an uncalibrated method. I can obtain the fundamental matrix by finding correspondent points with SIFT and then using cv2.findFundamentalMat. I then use cv2.stereoRectifyUncalibrated to get the homography matrices for each image. Finally I use cv2.warpPerspective to rectify and compute the disparity, but this doesn't create a good depth map. The values are very high so I'm wondering if I have to use warpPerspective or if I have to calculate a rotation matrix from the homography matrices I got with stereoRectifyUncalibrated. I'm not sure of the projective matrix with the case of homography matrix obtained with the stereoRectifyUncalibrated to rectify. 
A part of the code: #Obtainment of the correspondent point with SIFT sift = cv2.SIFT() ###find the keypoints and descriptors with SIFT kp1, des1 = sift.detectAndCompute(dst1,None) kp2, des2 = sift.detectAndCompute(dst2,None) ###FLANN parameters FLANN_INDEX_KDTREE = 0 index_params = dict(algorithm = FLANN_INDEX_KDTREE, trees = 5) search_params = dict(checks=50) flann = cv2.FlannBasedMatcher(index_params,search_params) matches = flann.knnMatch(des1,des2,k=2) good = [] pts1 = [] pts2 = [] ###ratio test as per Lowe's paper for i,(m,n) in enumerate(matches): if m.distance < 0.8*n.distance: good.append(m) pts2.append(kp2[m.trainIdx].pt) pts1.append(kp1[m.queryIdx].pt) pts1 = np.array(pts1) pts2 = np.array(pts2) #Computation of the fundamental matrix F,mask= cv2.findFundamentalMat(pts1,pts2,cv2.FM_LMEDS) # Obtainment of the rectification matrix and use of the warpPerspective to transform them... pts1 = pts1[:,:][mask.ravel()==1] pts2 = pts2[:,:][mask.ravel()==1] pts1 = np.int32(pts1) pts2 = np.int32(pts2) p1fNew = pts1.reshape((pts1.shape[0] * 2, 1)) p2fNew = pts2.reshape((pts2.shape[0] * 2, 1)) retBool ,rectmat1, rectmat2 = cv2.stereoRectifyUncalibrated(p1fNew,p2fNew,F,(2048,2048)) dst11 = cv2.warpPerspective(dst1,rectmat1,(2048,2048)) dst22 = cv2.warpPerspective(dst2,rectmat2,(2048,2048)) #calculation of the disparity stereo = cv2.StereoBM(cv2.STEREO_BM_BASIC_PRESET,ndisparities=16*10, SADWindowSize=9) disp = stereo.compute(dst22.astype(uint8), dst11.astype(uint8)).astype(np.float32) plt.imshow(disp);plt.colorbar();plt.clim(0,400)#;plt.show() plt.savefig(\"0gauche.png\") #plot depth by using disparity focal length `C1[0,0]` from stereo calibration and `T[0]` the distance between cameras plt.imshow(C1[0,0]*T[0]/(disp),cmap='hot');plt.clim(-0,500);plt.colorbar();plt.show() Here are the rectified pictures with the uncalibrated method (and warpPerspective): Here are the rectified pictures with the calibrated method: I don't know how the difference is so important between the two kind of pictures. And for the calibrated method, it doesn't seem aligned. The disparity map using the uncalibrated method: The depths are calculated with : C1[0,0]*T[0]/(disp) with T from the stereoCalibrate. The values are very high. ------------ EDIT LATER ------------ I tried to \"mount\" the reconstruction matrix ([Devernay97], [Garcia01]) with the homography matrix obtained with \"stereoRectifyUncalibrated\", but the result is still not good. Am I doing this correctly? Y=np.arange(0,2048) X=np.arange(0,2048) (XX_field,YY_field)=np.meshgrid(X,Y) #I mount the X, Y and disparity in a same 3D array stock = np.concatenate((np.expand_dims(XX_field,2),np.expand_dims(YY_field,2)),axis=2) XY_disp = np.concatenate((stock,np.expand_dims(disp,2)),axis=2) XY_disp_reshape = XY_disp.reshape(XY_disp.shape[0]*XY_disp.shape[1],3) Ts = np.hstack((np.zeros((3,3)),T_0)) #i use only the translations obtained with the rectified calibration...Is it correct? 
# I establish the projective matrix with the homography matrix P11 = np.dot(rectmat1,C1) P1 = np.vstack((np.hstack((P11,np.zeros((3,1)))),np.zeros((1,4)))) P1[3,3] = 1 # P1 = np.dot(C1,np.hstack((np.identity(3),np.zeros((3,1))))) P22 = np.dot(np.dot(rectmat2,C2),Ts) P2 = np.vstack((P22,np.zeros((1,4)))) P2[3,3] = 1 lambda_t = cv2.norm(P1[0,:].T)/cv2.norm(P2[0,:].T) #I define the reconstruction matrix Q = np.zeros((4,4)) Q[0,:] = P1[0,:].T Q[1,:] = P1[1,:].T Q[2,:] = lambda_t*P2[1,:].T - P1[1,:].T Q[3,:] = P1[2,:].T #I do the calculation to get my 3D coordinates test = [] for i in range(0,XY_disp_reshape.shape[0]): a = np.dot(inv(Q),np.expand_dims(np.concatenate((XY_disp_reshape[i,:],np.ones((1))),axis=0),axis=1)) test.append(a) test = np.asarray(test) XYZ = test[:,:,0].reshape(XY_disp.shape[0],XY_disp.shape[1],4)", "output": "TLDR; Use StereoSGBM (Semi Global Block Matching) and use some post filtering if you want it even smoother OP didn't provide original images, so I'm using Tsukuba from the Middlebury data set. Result with regular StereoBM Result with StereoSGBM (tuned) Best result I could find in literature See the publication here for details. Example of post filtering (see link below) Theory/Other considerations from OP's question The large black areas of your calibrated rectified images would lead me to believe that for those, calibration was not done very well. There's a variety of reasons that could be at play, maybe the physical setup, maybe lighting when you did calibration, etc., but there are plenty of camera calibration tutorials out there for that and my understanding is that you are asking for a way to get a better depth map from an uncalibrated setup (this isn't 100% clear, but the title seems to support this and I think that's what people will come here to try to find). Your basic approach is correct, but the results can definitely be improved. This form of depth mapping is not among those that produce the highest quality maps (especially being uncalibrated). The biggest improvement will likely come from using a different stereo matching algorithm. The lighting may also be having a significant effect. The right image (at least to my naked eye) appears to be less well lit which could interfere with the reconstruction. You could first try brightening it to the same level as the other, or gather new images if that is possible. From here out, I'll assume you have no access to the original cameras, so I'll consider gathering new images, altering the setup, or performing calibration to be out of scope. (If you do have access to the setup and cameras, then I would suggest checking calibration and using a calibrated method as this will work better). You used StereoBM for calculating your disparity (depth map) which does work, but StereoSGBM is much better suited for this application (it handles smoother edges better). You can see the difference below. This article explains the differences in more depth: Block matching focuses on high texture images (think a picture of a tree) and semi-global block matching will focus on sub pixel level matching and pictures with more smooth textures (think a picture of a hallway). 
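(Editor's sketch, not from the original answer; the answer's full pipeline appears further down. This only shows the two matchers side by side, and the parameter values are assumptions you would tune for your scene:)
import cv2

# StereoBM: plain block matching, works best on highly textured scenes
bm = cv2.StereoBM_create(numDisparities=16, blockSize=15)
# StereoSGBM: semi-global matching, smoother disparities in low-texture regions
sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=5)
# disparity = sgbm.compute(left_gray, right_gray)  # left_gray / right_gray: rectified grayscale images (assumed)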
Without any explicit intrinsic camera parameters, specifics about the camera setup (like focal distance, distance between the cameras, distance to the subject, etc.), a known dimension in the image, or motion (to use structure from motion), you can only obtain 3D reconstruction up to a projective transform; you won't have a sense of scale or necessarily rotation either, but you can still generate a relative depth map. You will likely suffer from some barrel and other distortions which could be removed with proper camera calibration, but you can get reasonable results without it as long as the cameras aren\u2019t terrible (lens system isn't too distorted) and are set up pretty close to canonical configuration (which basically means they are oriented such that their optical axes are as close to parallel as possible, and their fields of view overlap sufficiently). This doesn't however appear to be the OPs issue as he did manage to get alright rectified images with the uncalibrated method. Basic Procedure Find at least 5 well-matched points in both images you can use to calculate the Fundamental Matrix (you can use any detector and matcher you like, I kept FLANN but used ORB to do detection as SIFT isn't in the main version of OpenCV for 4.2.0) Calculate the Fundamental Matrix, F, with findFundamentalMat Undistort your images with stereoRectifyUncalibrated and warpPerspective Calculate Disparity (Depth Map) with StereoSGBM The results are much better: Matches with ORB and FLANN Undistorted images (left, then right) Disparity StereoBM This result looks similar to the OPs problems (speckling, gaps, wrong depths in some areas). StereoSGBM (tuned) This result looks much better and uses roughly the same method as the OP, minus the final disparity calculation, making me think the OP would see similar improvements on his images, had they been provided. Post filtering There's a good article about this in the OpenCV docs. I'd recommend looking at it if you need really smooth maps. The example photos above are frame 1 from the scene ambush_2 in the MPI Sintel Dataset. Full code (Tested on OpenCV 4.2.0): import cv2 import numpy as np import matplotlib.pyplot as plt imgL = cv2.imread(\"tsukuba_l.png\", cv2.IMREAD_GRAYSCALE) # left image imgR = cv2.imread(\"tsukuba_r.png\", cv2.IMREAD_GRAYSCALE) # right image def get_keypoints_and_descriptors(imgL, imgR): \"\"\"Use ORB detector and FLANN matcher to get keypoints, descritpors, and corresponding matches that will be good for computing homography. \"\"\" orb = cv2.ORB_create() kp1, des1 = orb.detectAndCompute(imgL, None) kp2, des2 = orb.detectAndCompute(imgR, None) ############## Using FLANN matcher ############## # Each keypoint of the first image is matched with a number of # keypoints from the second image. k=2 means keep the 2 best matches # for each keypoint (best matches = the ones with the smallest # distance measurement). FLANN_INDEX_LSH = 6 index_params = dict( algorithm=FLANN_INDEX_LSH, table_number=6, # 12 key_size=12, # 20 multi_probe_level=1, ) # 2 search_params = dict(checks=50) # or pass empty dictionary flann = cv2.FlannBasedMatcher(index_params, search_params) flann_match_pairs = flann.knnMatch(des1, des2, k=2) return kp1, des1, kp2, des2, flann_match_pairs def lowes_ratio_test(matches, ratio_threshold=0.6): \"\"\"Filter matches using the Lowe's ratio test. The ratio test checks if matches are ambiguous and should be removed by checking that the two distances are sufficiently different. 
If they are not, then the match at that keypoint is ignored. https://stackoverflow.com/questions/51197091/how-does-the-lowes-ratio-test-work \"\"\" filtered_matches = [] for m, n in matches: if m.distance < ratio_threshold * n.distance: filtered_matches.append(m) return filtered_matches def draw_matches(imgL, imgR, kp1, des1, kp2, des2, flann_match_pairs): \"\"\"Draw the first 8 mathces between the left and right images.\"\"\" # https://docs.opencv.org/4.2.0/d4/d5d/group__features2d__draw.html # https://docs.opencv.org/2.4/modules/features2d/doc/common_interfaces_of_descriptor_matchers.html img = cv2.drawMatches( imgL, kp1, imgR, kp2, flann_match_pairs[:8], None, flags=cv2.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS, ) cv2.imshow(\"Matches\", img) cv2.imwrite(\"ORB_FLANN_Matches.png\", img) cv2.waitKey(0) def compute_fundamental_matrix(matches, kp1, kp2, method=cv2.FM_RANSAC): \"\"\"Use the set of good mathces to estimate the Fundamental Matrix. See https://en.wikipedia.org/wiki/Eight-point_algorithm#The_normalized_eight-point_algorithm for more info. \"\"\" pts1, pts2 = [], [] fundamental_matrix, inliers = None, None for m in matches[:8]: pts1.append(kp1[m.queryIdx].pt) pts2.append(kp2[m.trainIdx].pt) if pts1 and pts2: # You can play with the Threshold and confidence values here # until you get something that gives you reasonable results. I # used the defaults fundamental_matrix, inliers = cv2.findFundamentalMat( np.float32(pts1), np.float32(pts2), method=method, # ransacReprojThreshold=3, # confidence=0.99, ) return fundamental_matrix, inliers, pts1, pts2 ############## Find good keypoints to use ############## kp1, des1, kp2, des2, flann_match_pairs = get_keypoints_and_descriptors(imgL, imgR) good_matches = lowes_ratio_test(flann_match_pairs, 0.2) draw_matches(imgL, imgR, kp1, des1, kp2, des2, good_matches) ############## Compute Fundamental Matrix ############## F, I, points1, points2 = compute_fundamental_matrix(good_matches, kp1, kp2) ############## Stereo rectify uncalibrated ############## h1, w1 = imgL.shape h2, w2 = imgR.shape thresh = 0 _, H1, H2 = cv2.stereoRectifyUncalibrated( np.float32(points1), np.float32(points2), F, imgSize=(w1, h1), threshold=thresh, ) ############## Undistort (Rectify) ############## imgL_undistorted = cv2.warpPerspective(imgL, H1, (w1, h1)) imgR_undistorted = cv2.warpPerspective(imgR, H2, (w2, h2)) cv2.imwrite(\"undistorted_L.png\", imgL_undistorted) cv2.imwrite(\"undistorted_R.png\", imgR_undistorted) ############## Calculate Disparity (Depth Map) ############## # Using StereoBM stereo = cv2.StereoBM_create(numDisparities=16, blockSize=15) disparity_BM = stereo.compute(imgL_undistorted, imgR_undistorted) plt.imshow(disparity_BM, \"gray\") plt.colorbar() plt.show() # Using StereoSGBM # Set disparity parameters. Note: disparity range is tuned according to # specific parameters obtained through trial and error. win_size = 2 min_disp = -4 max_disp = 9 num_disp = max_disp - min_disp # Needs to be divisible by 16 stereo = cv2.StereoSGBM_create( minDisparity=min_disp, numDisparities=num_disp, blockSize=5, uniquenessRatio=5, speckleWindowSize=5, speckleRange=5, disp12MaxDiff=2, P1=8 * 3 * win_size ** 2, P2=32 * 3 * win_size ** 2, ) disparity_SGBM = stereo.compute(imgL_undistorted, imgR_undistorted) plt.imshow(disparity_SGBM, \"gray\") plt.colorbar() plt.show()"} +{"question_id": 16348815, "score": 180, "creation_date": 1367533878, "tags": ["python", "list", "difference", "multiple-assignment"], "instruction": "Python assigning multiple variables to same value? 
list behavior\n\nI tried to use multiple assignment as shown below to initialize variables, but I got confused by the behavior, I expect to reassign the values list separately, I mean b[0] and c[0] equal 0 as before: a = b = c = [0, 3, 5] a[0] = 1 print(a) # [1, 3, 5] print(b) # [1, 3, 5] print(c) # [1, 3, 5] Is that correct? What should I use for multiple assignment? What is different from this? d = e = f = 3 e = 4 print(d) # 3 print(e) # 4 print(f) # 3", "output": "If you're coming to Python from a language in the C/Java/etc. family, it may help you to stop thinking about a as a \"variable\", and start thinking of it as a \"name\". a, b, and c aren't different variables with equal values; they're different names for the same identical value. Variables have types, identities, addresses, and all kinds of stuff like that. Names don't have any of that. Values do, of course, and you can have lots of names for the same value. If you give Notorious B.I.G. a hot dog,* Biggie Smalls and Chris Wallace have a hot dog. If you change the first element of a to 1, the first elements of b and c are 1. If you want to know if two names are naming the same object, use the is operator: >>> a=b=c=[0,3,5] >>> a is b True You then ask: what is different from this? d=e=f=3 e=4 print('f:',f) print('e:',e) Here, you're rebinding the name e to the value 4. That doesn't affect the names d and f in any way. In your previous version, you were assigning to a[0], not to a. So, from the point of view of a[0], you're rebinding a[0], but from the point of view of a, you're changing it in-place. You can use the id function, which gives you some unique number representing the identity of an object, to see exactly which object is which even when is can't help: >>> a=b=c=[0,3,5] >>> id(a) 4473392520 >>> id(b) 4473392520 >>> id(a[0]) 4297261120 >>> id(b[0]) 4297261120 >>> a[0] = 1 >>> id(a) 4473392520 >>> id(b) 4473392520 >>> id(a[0]) 4297261216 >>> id(b[0]) 4297261216 Notice that a[0] has changed from 4297261120 to 4297261216\u2014it's now a name for a different value. And b[0] is also now a name for that same new value. That's because a and b are still naming the same object. Under the covers, a[0]=1 is actually calling a method on the list object. (It's equivalent to a.__setitem__(0, 1).) So, it's not really rebinding anything at all. It's like calling my_object.set_something(1). Sure, likely the object is rebinding an instance attribute in order to implement this method, but that's not what's important; what's important is that you're not assigning anything, you're just mutating the object. And it's the same with a[0]=1. * Warning: Do not give Notorious B.I.G. a hot dog. Gangsta rap zombies should never be fed after midnight."} +{"question_id": 22231592, "score": 180, "creation_date": 1394126663, "tags": ["python", "pandas", "series"], "instruction": "Pandas: change data type of Series to String\n\nI use Pandas 'ver 0.12.0' with Python 2.7 and have a dataframe as below: df = pd.DataFrame({'id' : [123,512,'zhub1', 12354.3, 129, 753, 295, 610], 'colour': ['black', 'white','white','white', 'black', 'black', 'white', 'white'], 'shape': ['round', 'triangular', 'triangular','triangular','square', 'triangular','round','triangular'] }, columns= ['id','colour', 'shape']) The id Series consists of some integers and strings. Its dtype by default is object. I want to convert all contents of id to strings. I tried astype(str), which produces the output below. 
df['id'].astype(str) 0 1 1 5 2 z 3 1 4 1 5 7 6 2 7 6 1) How can I convert all elements of id to String? 2) I will eventually use id for indexing for dataframes. Would having String indices in a dataframe slow things down, compared to having an integer index?", "output": "A new answer to reflect the most current practices: as of now (v1.2.4), neither astype('str') nor astype(str) work. As per the documentation, a Series can be converted to the string datatype in the following ways: df['id'] = df['id'].astype(\"string\") df['id'] = pandas.Series(df['id'], dtype=\"string\") df['id'] = pandas.Series(df['id'], dtype=pandas.StringDtype) End to end example: import pandas as pd # Create a sample DataFrame data = { 'Name': ['John', 'Alice', 'Bob', 'John', 'Alice'], 'Age': [25, 30, 35, 25, 30], 'City': ['New York', 'London', 'Paris', 'New York', 'London'], 'Salary': [50000, 60000, 70000, 50000, 60000], 'Category': ['A', 'B', 'C', 'A', 'B'] } df = pd.DataFrame(data) # Print the DataFrame print(\"Original DataFrame:\") print(df) print(\"\\nData types:\") print(df.dtypes) cat_cols_ = None # Apply the code to change data types if not cat_cols_: # Get the columns with object data type object_columns = df.select_dtypes(include=['object']).columns.tolist() if len(object_columns) > 0: print(f\"\\nObject columns found, converting to string: {object_columns}\") # Convert object columns to string type df[object_columns] = df[object_columns].astype('string') # Get the categorical columns (including string and category data types) cat_cols_ = df.select_dtypes(include=['category', 'string']).columns.tolist() # Print the updated DataFrame and data types print(\"\\nUpdated DataFrame:\") print(df) print(\"\\nUpdated data types:\") print(df.dtypes) print(f\"\\nCategorical columns (cat_cols_): {cat_cols_}\") Original DataFrame: Name Age City Salary Category 0 John 25 New York 50000 A 1 Alice 30 London 60000 B 2 Bob 35 Paris 70000 C 3 John 25 New York 50000 A 4 Alice 30 London 60000 B Data types: Name object Age int64 City object Salary int64 Category object dtype: object Object columns found, converting to string: ['Name', 'City', 'Category'] Updated DataFrame: Name Age City Salary Category 0 John 25 New York 50000 A 1 Alice 30 London 60000 B 2 Bob 35 Paris 70000 C 3 John 25 New York 50000 A 4 Alice 30 London 60000 B Updated data types: Name string[python] Age int64 City string[python] Salary int64 Category string[python] dtype: object Categorical columns (cat_cols_): ['Name', 'City', 'Category']"} +{"question_id": 25504149, "score": 180, "creation_date": 1409050695, "tags": ["python", "flask"], "instruction": "Why does running the Flask dev server run itself twice?\n\nI'm using Flask for developing a website and while in development I run flask using the following file: #!/usr/bin/env python from datetime import datetime from app import app import config if __name__ == '__main__': print('################### Restarting @', datetime.utcnow(), '###################') app.run(port=4004, debug=config.DEBUG, host='0.0.0.0') When I start the server, or when it auto-restarts because files have been updated, it always shows the print line twice: ################### Restarting @ 2014-08-26 10:51:49.167062 ################### ################### Restarting @ 2014-08-26 10:51:49.607096 ################### Although it is not really a problem (the rest works as expected), I simply wonder why it behaves like this? 
Any ideas?", "output": "The Werkzeug reloader spawns a child process so that it can restart that process each time your code changes. Werkzeug is the library that supplies Flask with the development server when you call app.run(). See the restart_with_reloader() function code; your script is run again with subprocess.call(). If you set use_reloader to False you'll see the behaviour go away, but then you also lose the reloading functionality: app.run(port=4004, debug=config.DEBUG, host='0.0.0.0', use_reloader=False) You can disable the reloader when using the flask run command too: FLASK_DEBUG=1 flask run --no-reload You can use the werkzeug.serving.is_running_from_reloader function if you wanted to detect when you are in the reloading child process: from werkzeug.serving import is_running_from_reloader if is_running_from_reloader(): print(f\"################### Restarting @ {datetime.utcnow()} ###################\") However, if you need to set up module globals, then you should instead use the @app.before_first_request decorator on a function and have that function set up such globals. It'll be called just once after every reload when the first request comes in: @app.before_first_request def before_first_request(): print(f\"########### Restarted, first request @ {datetime.utcnow()} ############\") Do take into account that if you run this in a full-scale WSGI server that uses forking or new subprocesses to handle requests, that before_first_request handlers may be invoked for each new subprocess."} +{"question_id": 35497069, "score": 180, "creation_date": 1455854507, "tags": ["python", "bash", "jupyter-notebook", "ipython", "command-line-arguments"], "instruction": "Passing IPython variables as arguments to bash commands\n\nHow do I execute a bash command from Ipython/Jupyter notebook passing the value of a python variable as an argument like in this example: py_var=\"foo\" !grep py_var bar.txt (obviously I want to grep for foo and not the literal string py_var)", "output": "Simple case Prefix your variable names with a $. For example, say you want to copy a file file1 to a path stored in a python variable named dir_path: dir_path = \"/home/foo/bar\" !cp file1 $dir_path Note: If using Bash, keep in mind that some strings need to be quoted: !cp file1 \"$dir_path\" General case Wrap your variable names in {..}: dir_path = \"/home/foo/bar\" !cp file1 {dir_path} Its behaviour is more predictable and powerful than $... E.g. if you want to concatenate another string sub_dir to your path, $ can't do that, while with {..} you can do: !cp file1 {dir_path + sub_dir} Note: Again for Bash, quotes: !cp file1 \"{dir_path}\" !cp file1 \"{dir_path + sub_dir}\" Raw strings For a related discussion on the use of raw strings (prefixed with r) to pass the variables, see Passing Ipython variables as string arguments to shell command"} +{"question_id": 54334304, "score": 179, "creation_date": 1548271461, "tags": ["python", "python-3.x", "nlp", "spacy"], "instruction": "spaCy: Can't find model 'en_core_web_sm' on windows 10 and Python 3.5.3 :: Anaconda custom (64-bit)\n\nWhat is the difference between spacy.load('en_core_web_sm') and spacy.load('en')? This link explains different model sizes. But I am still not clear how spacy.load('en_core_web_sm') and spacy.load('en') differ spacy.load('en') runs fine for me. But the spacy.load('en_core_web_sm') throws error I have installed spacyas below. 
when I go to Jupyter notebook and run command nlp = spacy.load('en_core_web_sm') I get the below error --------------------------------------------------------------------------- OSError Traceback (most recent call last) <ipython-input-4-b472bef03043> in <module>() 1 # Import spaCy and load the language library 2 import spacy ----> 3 nlp = spacy.load('en_core_web_sm') 4 5 # Create a Doc object C:\\Users\\nikhizzz\\AppData\\Local\\conda\\conda\\envs\\tensorflowspyder\\lib\\site-packages\\spacy\\__init__.py in load(name, **overrides) 13 if depr_path not in (True, False, None): 14 deprecation_warning(Warnings.W001.format(path=depr_path)) ---> 15 return util.load_model(name, **overrides) 16 17 C:\\Users\\nikhizzz\\AppData\\Local\\conda\\conda\\envs\\tensorflowspyder\\lib\\site-packages\\spacy\\util.py in load_model(name, **overrides) 117 elif hasattr(name, 'exists'): # Path or Path-like to model data 118 return load_model_from_path(name, **overrides) --> 119 raise IOError(Errors.E050.format(name=name)) 120 121 OSError: [E050] Can't find model 'en_core_web_sm'. It doesn't seem to be a shortcut link, a Python package or a valid path to a data directory. how I installed Spacy --- (C:\\Users\\nikhizzz\\AppData\\Local\\conda\\conda\\envs\\tensorflowspyder) C:\\Users\\nikhizzz>conda install -c conda-forge spacy Fetching package metadata ............. Solving package specifications: . Package plan for installation in environment C:\\Users\\nikhizzz\\AppData\\Local\\conda\\conda\\envs\\tensorflowspyder: The following NEW packages will be INSTALLED: blas: 1.0-mkl cymem: 1.31.2-py35h6538335_0 conda-forge dill: 0.2.8.2-py35_0 conda-forge msgpack-numpy: 0.4.4.2-py_0 conda-forge murmurhash: 0.28.0-py35h6538335_1000 conda-forge plac: 0.9.6-py_1 conda-forge preshed: 1.0.0-py35h6538335_0 conda-forge pyreadline: 2.1-py35_1000 conda-forge regex: 2017.11.09-py35_0 conda-forge spacy: 2.0.12-py35h830ac7b_0 conda-forge termcolor: 1.1.0-py_2 conda-forge thinc: 6.10.3-py35h830ac7b_2 conda-forge tqdm: 4.29.1-py_0 conda-forge ujson: 1.35-py35hfa6e2cd_1001 conda-forge The following packages will be UPDATED: msgpack-python: 0.4.8-py35_0 --> 0.5.6-py35he980bc4_3 conda-forge The following packages will be DOWNGRADED: freetype: 2.7-vc14_2 conda-forge --> 2.5.5-vc14_2 Proceed ([y]/n)? y blas-1.0-mkl.t 100% |###############################| Time: 0:00:00 0.00 B/s cymem-1.31.2-p 100% |###############################| Time: 0:00:00 1.65 MB/s msgpack-python 100% |###############################| Time: 0:00:00 5.37 MB/s murmurhash-0.2 100% |###############################| Time: 0:00:00 1.49 MB/s plac-0.9.6-py_ 100% |###############################| Time: 0:00:00 0.00 B/s pyreadline-2.1 100% |###############################| Time: 0:00:00 4.62 MB/s regex-2017.11. 100% |###############################| Time: 0:00:00 3.31 MB/s termcolor-1.1. 
100% |###############################| Time: 0:00:00 187.81 kB/s tqdm-4.29.1-py 100% |###############################| Time: 0:00:00 2.51 MB/s ujson-1.35-py3 100% |###############################| Time: 0:00:00 1.66 MB/s dill-0.2.8.2-p 100% |###############################| Time: 0:00:00 4.34 MB/s msgpack-numpy- 100% |###############################| Time: 0:00:00 0.00 B/s preshed-1.0.0- 100% |###############################| Time: 0:00:00 0.00 B/s thinc-6.10.3-p 100% |###############################| Time: 0:00:00 5.49 MB/s spacy-2.0.12-p 100% |###############################| Time: 0:00:10 7.42 MB/s (C:\\Users\\nikhizzz\\AppData\\Local\\conda\\conda\\envs\\tensorflowspyder) C:\\Users\\nikhizzz>python -V Python 3.5.3 :: Anaconda custom (64-bit) (C:\\Users\\nikhizzz\\AppData\\Local\\conda\\conda\\envs\\tensorflowspyder) C:\\Users\\nikhizzz>python -m spacy download en Collecting en_core_web_sm==2.0.0 from https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.0.0/en_core_web_sm-2.0.0.tar.gz#egg=en_core_web_sm==2.0.0 Downloading https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.0.0/en_core_web_sm-2.0.0.tar.gz (37.4MB) 100% |################################| 37.4MB ... Installing collected packages: en-core-web-sm Running setup.py install for en-core-web-sm ... done Successfully installed en-core-web-sm-2.0.0 Linking successful C:\\Users\\nikhizzz\\AppData\\Local\\conda\\conda\\envs\\tensorflowspyder\\lib\\site-packages\\en_core_web_sm --> C:\\Users\\nikhizzz\\AppData\\Local\\conda\\conda\\envs\\tensorflowspyder\\lib\\site-packages\\spacy\\data\\en You can now load the model via spacy.load('en') (C:\\Users\\nikhizzz\\AppData\\Local\\conda\\conda\\envs\\tensorflowspyder) C:\\Users\\nikhizzz>", "output": "The answer to your misunderstanding is a Unix concept, softlinks which we could say that in Windows are similar to shortcuts. Let's explain this. When you spacy download en, spaCy tries to find the best small model that matches your spaCy distribution. The small model that I am talking about defaults to en_core_web_sm which can be found in different variations which correspond to the different spaCy versions (for example spacy, spacy-nightly have en_core_web_sm of different sizes). When spaCy finds the best model for you, it downloads it and then links the name en to the package it downloaded, e.g. en_core_web_sm. That basically means that whenever you refer to en you will be referring to en_core_web_sm. In other words, en after linking is not a \"real\" package, is just a name for en_core_web_sm. However, it doesn't work the other way. You can't refer directly to en_core_web_sm because your system doesn't know you have it installed. When you did spacy download en you basically did a pip install. So pip knows that you have a package named en installed for your python distribution, but knows nothing about the package en_core_web_sm. This package is just replacing package en when you import it, which means that package en is just a softlink to en_core_web_sm. Of course, you can directly download en_core_web_sm, using the command: python -m spacy download en_core_web_sm, or you can even link the name en to other models as well. For example, you could do python -m spacy download en_core_web_lg and then python -m spacy link en_core_web_lg en. 
That would make en a name for en_core_web_lg, which is a large spaCy model for the English language."} +{"question_id": 3764291, "score": 179, "creation_date": 1285101555, "tags": ["python", "network-programming"], "instruction": "How can I see if there's an available and active network connection in Python?\n\nI want to see if I can access an online API, but for that, I need to have Internet access. How can I see if there's a connection available and active using Python?", "output": "Perhaps you could use something like this: from urllib import request def internet_on(): try: request.urlopen('http://216.58.192.142', timeout=1) return True except request.URLError as err: return False For Python 2.x replace the import statement by import urllib2 as request: Currently, 8.8.8.8 is one of the IP addresses from Google. Change http://8.8.8.8 to whatever site can be expected to respond quickly. This fixed IP will not map to google.com forever. So this code is not robust -- it will need constant maintenance to keep it working. The reason why the code above uses a fixed IP address instead of fully qualified domain name (FQDN) is because a FQDN would require a DNS lookup. When the machine does not have a working internet connection, the DNS lookup itself may block the call to urllib_request.urlopen for more than a second. Thanks to @rzetterberg for pointing this out. If the fixed IP address above is not working, you can find a current IP address for google.com (on unix) by running % dig google.com +trace ... google.com. 300 IN A 216.58.192.142"} +{"question_id": 38642557, "score": 179, "creation_date": 1469725182, "tags": ["python", "templates", "jinja2"], "instruction": "How to load jinja template directly from filesystem\n\nThe jinja API document at pocoo.org states: The simplest way to configure Jinja2 to load templates for your application looks roughly like this: from jinja2 import Environment, PackageLoader env = Environment(loader=PackageLoader('yourapplication', templates')) This will create a template environment with the default settings and a loader that looks up the templates in the templates folder inside the yourapplication python package. As it turns out, this isn't so simple because you have to make/install a python package with your templates in it, which introduces a lot of needless complexity, especially if you have no intention of distributing your code. I found these related questions about doing so, but the answers are vague and unsatisfying: need to package jinja2 template for python How to make a python package containing only jinja templates How can I load the template directly from the filesystem, not as a resource in a package?", "output": "Use a FileSystemLoader instead of a PackageLoader. 
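(Editor's note, hedged: FileSystemLoader also accepts a list of directories if your templates live in more than one place; a minimal sketch, assuming hypothetical ./templates and ./shared directories exist. The full single-directory example from the original answer follows.)
import jinja2

# search both directories, in order, when resolving template names
env = jinja2.Environment(loader=jinja2.FileSystemLoader(["./templates", "./shared"]))
template = env.get_template("template.html")
print(template.render())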
Suppose there is a python file in the same directory as the template: ./index.py ./template.html This index.py will find the template and render it: #!/usr/bin/python import jinja2 templateLoader = jinja2.FileSystemLoader(searchpath=\"./\") templateEnv = jinja2.Environment(loader=templateLoader) TEMPLATE_FILE = \"template.html\" template = templateEnv.get_template(TEMPLATE_FILE) outputText = template.render() # this is where to put args to the template renderer print(outputText) In the introduction, the PackageLoader approach seems to be presented as the default, \"simplest\" method; however, there is also a section which discusses all the built-in loaders."} +{"question_id": 13998492, "score": 179, "creation_date": 1356132444, "tags": ["python", "python-3.x"], "instruction": "When should iteritems() be used instead of items()?\n\nIs it legitimate to use items() instead of iteritems() in all places? Why was iteritems() removed from Python 3? Seems like a terrific and useful method. What's the reasoning behind it? Edit: To clarify, I want to know what is the correct idiom for iterating over a dictionary in a generator-like way (one item at a time, not all into memory) in a way that is compatible with both Python 2 and Python 3?", "output": "In Python 2.x - .items() returned a list of (key, value) pairs. In Python 3.x, .items() is now an itemview object, which behaves differently - so it has to be iterated over, or materialised... So, list(dict.items()) is required for what was dict.items() in Python 2.x. Python 2.7 also has a bit of a back-port for key handling, in that you have viewkeys, viewitems and viewvalues methods, the most useful being viewkeys which behaves more like a set (which you'd expect from a dict). Simple example: common_keys = list(dict_a.viewkeys() & dict_b.viewkeys()) Will give you a list of the common keys, but again, in Python 3.x - just use .keys() instead. Python 3.x has generally been made to be more \"lazy\" - i.e. map is now effectively itertools.imap, zip is itertools.izip, etc."} +{"question_id": 4971269, "score": 179, "creation_date": 1297440348, "tags": ["python", "matplotlib", "colormap"], "instruction": "How to pick a new color for each plotted line within a figure\n\nI'd like to NOT specify a color for each plotted line, and have each line get a distinct color. But if I run: from matplotlib import pyplot as plt for i in range(20): plt.plot([0, 1], [i, i]) plt.show() then I get this output: If you look at the image above, you can see that matplotlib attempts to pick colors for each line that are different, but eventually it re-uses colors - the top ten lines use the same colors as the bottom ten. I just want to stop it from repeating already used colors AND/OR feed it a list of colors to use.", "output": "matplotlib 1.5+ You can use axes.set_prop_cycle (example). matplotlib 1.0-1.4 You can use axes.set_color_cycle (example). matplotlib 0.x You can use Axes.set_default_color_cycle."} +{"question_id": 636561, "score": 179, "creation_date": 1236808902, "tags": ["python", "asynchronous", "subprocess", "scheduler"], "instruction": "How can I run an external command asynchronously from Python?\n\nI need to run a shell command asynchronously from a Python script. By this I mean that I want my Python script to continue running while the external command goes off and does whatever it needs to do. 
I read this post: Calling an external command in Python I then went off and did some testing, and it looks like os.system() will do the job provided that I use & at the end of the command so that I don't have to wait for it to return. What I am wondering is if this is the proper way to accomplish such a thing? I tried commands.call() but it will not work for me because it blocks on the external command. Please let me know if using os.system() for this is advisable or if I should try some other route.", "output": "subprocess.Popen does exactly what you want. from subprocess import Popen p = Popen(['watch', 'ls']) # something long running # ... do other stuff while subprocess is running p.terminate() (Edit to complete the answer from comments) The Popen instance can do various other things like you can poll() it to see if it is still running, and you can communicate() with it to send it data on stdin, and wait for it to terminate."} +{"question_id": 14261903, "score": 179, "creation_date": 1357833348, "tags": ["python", "matplotlib", "jupyter-notebook"], "instruction": "How can I open the interactive matplotlib window in IPython notebook?\n\nI am using IPython with --pylab=inline and would sometimes like to quickly switch to the interactive, zoomable matplotlib GUI for viewing plots (the one that pops up when you plot something in a terminal Python console). How could I do that? Preferably without leaving or restarting my notebook. The problem with inline plots in IPy notebook is that they are of a limited resolution and I can't zoom into them to see some smaller parts. With the matplotlib GUI that starts from a terminal, I can select a rectangle of the graph that I want to zoom into and the axes adjust accordingly. I tried experimenting with from matplotlib import interactive interactive(True) and interactive(False) but that didn't do anything. I couldn't find any hint online either.", "output": "According to the documentation, you should be able to switch back and forth like this: In [2]: %matplotlib inline In [3]: plot(...) In [4]: %matplotlib qt # wx, gtk, osx, tk, empty uses default In [5]: plot(...) and that will pop up a regular plot window (a restart of the notebook may be necessary)."} +{"question_id": 26483254, "score": 179, "creation_date": 1413883585, "tags": ["python", "list", "pandas", "insert", "dataframe"], "instruction": "Python pandas insert list into a cell\n\nI have a list 'abc' and a dataframe 'df': abc = ['foo', 'bar'] df = A B 0 12 NaN 1 23 NaN I want to insert the list into cell 1B, so I want this result: A B 0 12 NaN 1 23 ['foo', 'bar'] How can I do that? 1) If I use this: df.ix[1,'B'] = abc I get the following error message: ValueError: Must have equal len keys and value when setting with an iterable because it tries to insert the list (that has two elements) into a row / column but not into a cell. 2) If I use this: df.ix[1,'B'] = [abc] then it inserts a list that has only one element that is the 'abc' list ( [['foo', 'bar']] ). 3) If I use this: df.ix[1,'B'] = ', '.join(abc) then it inserts a string: ( foo, bar ) but not a list. 4) If I use this: df.ix[1,'B'] = [', '.join(abc)] then it inserts a list but it has only one element ( ['foo, bar'] ) but not two as I want ( ['foo', 'bar'] ). Thanks for help!
EDIT My new dataframe and the old list: abc = ['foo', 'bar'] df2 = A B C 0 12 NaN 'bla' 1 23 NaN 'bla bla' Another dataframe: df3 = A B C D 0 12 NaN 'bla' ['item1', 'item2'] 1 23 NaN 'bla bla' [11, 12, 13] I want to insert the 'abc' list into df2.loc[1,'B'] and/or df3.loc[1,'B']. If the dataframe has columns only with integer values and/or NaN values and/or list values then inserting a list into a cell works perfectly. If the dataframe has columns only with string values and/or NaN values and/or list values then inserting a list into a cell works perfectly. But if the dataframe has columns with integer and string values and other columns then the error message appears if I use this: df2.loc[1,'B'] = abc or df3.loc[1,'B'] = abc. Another dataframe: df4 = A B 0 'bla' NaN 1 'bla bla' NaN These inserts work perfectly: df.loc[1,'B'] = abc or df4.loc[1,'B'] = abc.", "output": "df3.set_value(1, 'B', abc) works for any dataframe. Take care of the data type of column 'B'. For example, a list can not be inserted into a float column; in that case df['B'] = df['B'].astype(object) can help. (Note that set_value was deprecated in pandas 0.21 and removed in pandas 1.0; on current versions the usual replacement is df3.at[1, 'B'] = abc, again with the column cast to object first.)"} +{"question_id": 12645547, "score": 179, "creation_date": 1348857418, "tags": ["python", "numpy", "math", "scipy", "pi"], "instruction": "Is there a difference between scipy.pi, numpy.pi, or math.pi?\n\nIn a project using SciPy and NumPy, when should one use scipy.pi vs numpy.pi vs just math.pi? Is there a difference between these values?", "output": ">>> import math >>> import numpy as np >>> import scipy >>> math.pi == np.pi == scipy.pi True So it doesn't matter, they are all the same value. The only reason all three modules provide a pi value is so if you are using just one of the three modules, you can conveniently have access to pi without having to import another module. They're not providing different values for pi."} +{"question_id": 31172719, "score": 178, "creation_date": 1435789569, "tags": ["python", "windows", "pip", "access-denied"], "instruction": "pip install access denied on Windows\n\nI am trying to run pip install mitmproxy on Windows, but I keep getting access denied, even with cmd and PowerShell using the Run as Administrator option. WindowsError: [Error 5] Access is denied: 'c:\\\\users\\\\bruno\\\\appdata\\\\local\\\\temp\\\\easy_install-0fme6u\\\\cryptography-0.9.1\\\\.eggs\\\\cffi-1.1.2-py2.7-win-amd64.egg\\\\_cffi_backend.pyd' How can I make this work?", "output": "For Windows, in Command Prompt (Admin) try to run pip install using the Python executable: python -m pip install mitmproxy This should work, at least it worked for me for other package installations."} +{"question_id": 9012008, "score": 178, "creation_date": 1327534325, "tags": ["python", "regex"], "instruction": "python's re: return True if string contains regex pattern\n\nI have a regular expression like this: regexp = u'ba[r|z|d]' Function must return True if word contains bar, baz or bad. In short, I need regexp analog for Python's 'any-string' in 'text' How can I realize it? Thanks!", "output": "import re word = 'fubar' regexp = re.compile(r'ba[rzd]') if regexp.search(word): print('matched')"} +{"question_id": 43634618, "score": 178, "creation_date": 1493210710, "tags": ["python", "enums"], "instruction": "How do I test if int value exists in Python Enum without using try/catch?\n\nUsing the Python Enum class, is there a way to test if an Enum contains a specific int value without using try/catch?
With the following class: from enum import Enum class Fruit(Enum): Apple = 4 Orange = 5 Pear = 6 How can I test for the value 6 (returning true), or the value 7 (returning false)?", "output": "UPDATE my answer is outdated. for newer python versions see Filip Poplewski's answer. test for values variant 1 note that an Enum has a member called _value2member_map_ (which is undocumented and may be changed/removed in future python versions): print(Fruit._value2member_map_) # {4: <Fruit.Apple: 4>, 5: <Fruit.Orange: 5>, 6: <Fruit.Pear: 6>} you can test if a value is in your Enum against this map: 5 in Fruit._value2member_map_ # True 7 in Fruit._value2member_map_ # False variant 2 if you do not want to rely on this feature this is an alternative: values = [item.value for item in Fruit] # [4, 5, 6] or (probably better): use a set; the in operator will be more efficient: values = set(item.value for item in Fruit) # {4, 5, 6} then test with 5 in values # True 7 in values # False add has_value to your class you could then add this as a method to your class: class Fruit(Enum): Apple = 4 Orange = 5 Pear = 6 @classmethod def has_value(cls, value): return value in cls._value2member_map_ print(Fruit.has_value(5)) # True print(Fruit.has_value(7)) # False python also offers IntEnum (available since the enum module was added in python 3.4). with these you could do this: from enum import IntEnum class Fruit(IntEnum): Apple = 4 Orange = 5 Pear = 6 print(6 in iter(Fruit)) # True note there is no need to create a list; just iterating over iter(Fruit) will do. again, if this is needed repeatedly it may be worth creating a set as above: values = set(Fruit) print(5 in values) # True test for keys if you want to test for the names (and not the values) i would use _member_names_: 'Apple' in Fruit._member_names_ # True 'Mango' in Fruit._member_names_ # False"} +{"question_id": 29290359, "score": 178, "creation_date": 1427410606, "tags": ["python", "python-3.x", "mutable", "namedtuple"], "instruction": "Existence of mutable named tuple in Python?\n\nCan anyone amend namedtuple or provide an alternative class so that it works for mutable objects? Primarily for readability, I would like something similar to namedtuple that does this: from Camelot import namedgroup Point = namedgroup('Point', ['x', 'y']) p = Point(0, 0) p.x = 10 >>> p Point(x=10, y=0) >>> p.x *= 10 Point(x=100, y=0) It must be possible to pickle the resulting object. And per the characteristics of named tuple, the ordering of the output when represented must match the order of the parameter list when constructing the object.", "output": "There is a mutable alternative to collections.namedtuple \u2013 recordclass. It can be installed from PyPI: pip3 install recordclass It has the same API and memory footprint as namedtuple and it supports assignments (it should be faster as well). For example: from recordclass import recordclass Point = recordclass('Point', 'x y') >>> p = Point(1, 2) >>> p Point(x=1, y=2) >>> print(p.x, p.y) 1 2 >>> p.x += 2; p.y += 3; print(p) Point(x=3, y=5) recordclass (since 0.5) supports type hints: from recordclass import recordclass, RecordClass class Point(RecordClass): x: int y: int >>> Point.__annotations__ {'x':int, 'y':int} >>> p = Point(1, 2) >>> p Point(x=1, y=2) >>> print(p.x, p.y) 1 2 >>> p.x += 2; p.y += 3; print(p) Point(x=3, y=5) There is a more complete example (it also includes performance comparisons). Recordclass library now provides another variant -- recordclass.make_dataclass factory function.
It supports a dataclasses-like API (there are module-level functions update, make, replace instead of self._update, self._replace, self._asdict, cls._make methods). from recordclass import dataobject, make_dataclass Point = make_dataclass('Point', [('x', int), ('y',int)]) Point = make_dataclass('Point', {'x':int, 'y':int}) class Point(dataobject): x: int y: int >>> p = Point(1, 2) >>> p Point(x=1, y=2) >>> p.x = 10; p.y += 3; print(p) Point(x=10, y=5) recordclass and make_dataclass can produce classes whose instances occupy less memory than __slots__-based instances. This can be important for instances whose attribute values are not intended to have reference cycles. It may help reduce memory usage if you need to create millions of instances. Here is an illustrative example."} +{"question_id": 34578168, "score": 178, "creation_date": 1451834810, "tags": ["python", "pip"], "instruction": "Where is pip cache folder?\n\nWhere is the Python pip cache folder? I had an error during installation and now want to reinstall packages using cache files. Where is that directory? I want to take a backup of them for installation in the future. Is it possible? For example, I have this one Using cached cssselect-0.9.1.tar.gz I searched google for this directory but all I saw was how to install from a folder; I want to find the default cache directory. And another question: Will these cache files stay in that directory, or will they be removed soon?", "output": "It depends on the operating system. With pip 20.1 or later, you can find it with: pip cache dir For example with macOS: $ pip cache dir /Users/hugo/Library/Caches/pip Docs: https://pip.pypa.io/en/stable/cli/pip_cache/ https://pip.pypa.io/en/stable/cli/pip_install/#caching"} +{"question_id": 6583877, "score": 178, "creation_date": 1309874379, "tags": ["python", "django", "django-templates", "django-admin", "extend"], "instruction": "How to override and extend basic Django admin templates?\n\nHow do I override an admin template (e.g. admin/index.html) while at the same time extending it (see https://docs.djangoproject.com/en/dev/ref/contrib/admin/#overriding-vs-replacing-an-admin-template)? First - I know that this question has been asked and answered before (see Django: Overriding AND extending an app template) but as the answer says it isn't directly applicable if you're using the app_directories template loader (which is most of the time). My current workaround is to make copies and extend from them instead of extending directly from the admin templates. This works great but it's really confusing and adds extra work when the admin templates change. I could think of some custom extend-tag for the templates but I don't want to reinvent the wheel if there already exists a solution. On a side note: Does anybody know if this problem will be addressed by Django itself?", "output": "Update: Read the Docs for your version of Django, e.g. the latest version or old LTS versions: 3.2, 2.2, 1.11 Original answer from 2011: I had the same issue about a year and a half ago and I found a nice template loader on djangosnippets.org that makes this easy. It allows you to extend a template in a specific app, giving you the ability to create your own admin/index.html that extends the admin/index.html template from the admin app.
Like this: {% extends \"admin:admin/index.html\" %} {% block sidebar %} {{block.super}} <div> <h1>Extra links</h1> <a href=\"/admin/extra/\">My extra link</a> </div> {% endblock %} I've given a full example on how to use this template loader in a blog post on my website."} +{"question_id": 45147100, "score": 178, "creation_date": 1500302292, "tags": ["python", "pandas", "dataframe", "in-place"], "instruction": "Pandas: drop columns with all NaN's\n\nI have this DataFrame: 0 1 2 3 4 5 6 7 0 #0915-8 NaN NaN NaN NaN NaN NaN NaN 1 NaN NaN NaN LIVE WGT NaN AMOUNT NaN TOTAL 2 GBW COD NaN NaN 2,280 NaN $0.60 NaN $1,368.00 3 POLLOCK NaN NaN 1,611 NaN $0.01 NaN $16.11 4 WHAKE NaN NaN 441 NaN $0.70 NaN $308.70 5 GBE HADDOCK NaN NaN 2,788 NaN $0.01 NaN $27.88 6 GBW HADDOCK NaN NaN 16,667 NaN $0.01 NaN $166.67 7 REDFISH NaN NaN 932 NaN $0.01 NaN $9.32 8 GB WINTER FLOUNDER NaN NaN 145 NaN $0.25 NaN $36.25 9 GOM WINTER FLOUNDER NaN NaN 25,070 NaN $0.35 NaN $8,774.50 10 GB YELLOWTAIL NaN NaN 26 NaN $1.75 NaN $45.50 I want to drop all NaNs as well as any columns with more than 3 NaNs (either one, or both, should work I think). I tried this code: fish_frame.dropna() fish_frame.dropna(thresh=len(fish_frame) - 3, axis=1) but it seems not to have any effect on the DataFrame - I see the same results afterward. What is wrong with the code, and how do I fix it?", "output": "From the dropna docstring: Drop the columns where all elements are NaN: df.dropna(axis=1, how='all') A B D 0 NaN 2.0 0 1 3.0 4.0 1 2 NaN NaN 5"} +{"question_id": 5334531, "score": 178, "creation_date": 1300332172, "tags": ["python", "documentation", "javadoc", "docstring"], "instruction": "Using javadoc for Python documentation\n\nI am currently beginning with Python and I have a strong PHP background and in PHP I have took the habit of using javadoc as a documentation template. I was wondering if javadoc has its place as docstring documentation in Python. What are the established conventions and/or official guildelines here? E.g. is something like this too elaborate to fit in the Python mindset or should I try to be as concise as possible? \"\"\" replaces template place holder with values @param string timestamp formatted date to display @param string priority priority number @param string priority_name priority name @param string message message to display @return string formatted string \"\"\" And if I am a bit too exhaustive should I go with something like this instead (where most of the documentation doesn't get printed through the __doc__ method)? # replaces template place holder with values # # @param string timestamp formatted date to display # @param string priority priority number # @param string priority_name priority name # @param string message message to display # # @return string formatted string def format(self, timestamp = '', priority = '', priority_name = '', message = ''): \"\"\" replaces template place holder with values \"\"\" values = {'%timestamp%' : timestamp, '%priorityName%' : priority_name, '%priority%' : priority, '%message%' : message} return self.__pattern.format(**values)", "output": "Have a look at the reStructuredText (also known as \"reST\") format, which is a plaintext/docstring markup format, and probably the most popular in the Python world. And you should certainly look at Sphinx, a tool to generate documentation from reStructuredText (used for eg. the Python documentation itself). 
Sphinx includes the possibility to extract documentation from the docstrings in your code (see sphinx.ext.autodoc), and recognizes reST field lists following certain conventions. This has probably become (or is becoming) the most popular way to do it. Your example could look as follows: \"\"\"Replace template placeholder with values. :param timestamp: formatted date to display :param priority: priority number :param priority_name: priority name :param message: message to display :returns: formatted string \"\"\" Or extended with type information: \"\"\"Replace template placeholder with values. :param timestamp: formatted date to display :type timestamp: str or unicode :param priority: priority number :type priority: str or unicode :param priority_name: priority name :type priority_name: str or unicode :param message: message to display :type message: str or unicode :returns: formatted string :rtype: str or unicode \"\"\""} +{"question_id": 54747253, "score": 178, "creation_date": 1550492622, "tags": ["python", "python-typing", "mypy"], "instruction": "Type hint for a tuple of variable length (variadic tuple)?\n\nI have a function that takes a tuple of different lengths as an argument: from typing import Tuple def process_tuple(t: Tuple[str]): # Do nasty tuple stuff process_tuple((\"a\",)) process_tuple((\"a\", \"b\")) process_tuple((\"a\", \"b\", \"c\")) When I annotate function like mentioned above, I get these error messages fool.py:9: error: Argument 1 to \"process_tuple\" has incompatible type \"Tuple[str, str]\"; expected \"Tuple[str]\" fool.py:10: error: Argument 1 to \"process_tuple\" has incompatible type \"Tuple[str, str, str]\"; expected \"Tuple[str]\" process_tuple really works with tuples and I use them as immutable lists of variable length. I haven't found any consensus on this topic on the internet, so I wonder how should I annotate this kind of input.", "output": "We can annotate variable-length homogeneous tuples using the ... literal (aka Ellipsis) like this: def process_tuple(t: Tuple[str, ...]): ... or for Python3.9+ def process_tuple(t: tuple[str, ...]): ... After that, the errors should go away. From the docs: To specify a variable-length tuple of homogeneous type, use literal ellipsis, e.g. Tuple[int, ...]. A plain Tuple is equivalent to Tuple[Any, ...], and in turn to tuple. More info about annotating tuples can be found at \"Annotating tuples\" section of the docs."} +{"question_id": 2281850, "score": 177, "creation_date": 1266420108, "tags": ["python"], "instruction": "Timeout function if it takes too long to finish\n\nI have a shell script that loops through a text file containing URL:s that I want to visit and take screenshots of. All this is done and simple. The script initializes a class that when run creates a screenshot of each site in the list. Some sites take a very, very long time to load, and some might not be loaded at all. So I want to wrap the screengrabber-function in a timeout script, making the function return False if it couldn't finish within 10 seconds. I'm content with the simplest solution possible, maybe setting a asynchronous timer that will return False after 10 seconds no matter what actually happens inside the function?", "output": "The process for timing out an operations is described in the documentation for signal. The basic idea is to use signal handlers to set an alarm for some time interval and raise an exception once that timer expires. 
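A minimal sketch of that idea, before wrapping it in a decorator (slow_call here is just a stand-in for your own long-running function): import signal def _handle_timeout(signum, frame): raise TimeoutError() signal.signal(signal.SIGALRM, _handle_timeout) signal.alarm(10) # deliver SIGALRM after 10 seconds try: slow_call() finally: signal.alarm(0) # always cancel the pending alarm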
Note that this will only work on UNIX, and the timer duration must be an integer (and so it can not be < 1s). Here's an implementation that creates a decorator (save the following code as timeout.py). import errno import os import signal import functools class TimeoutError(Exception): pass def timeout(seconds=10, error_message=os.strerror(errno.ETIME)): def decorator(func): def _handle_timeout(signum, frame): raise TimeoutError(error_message) @functools.wraps(func) def wrapper(*args, **kwargs): signal.signal(signal.SIGALRM, _handle_timeout) signal.alarm(seconds) try: result = func(*args, **kwargs) finally: signal.alarm(0) return result return wrapper return decorator This creates a decorator called @timeout that can be applied to any long running functions. So, in your application code, you can use the decorator like so: from timeout import timeout # Timeout a long running function with the default expiry of 10 seconds. @timeout def long_running_function1(): ... # Timeout after 5 seconds @timeout(5) def long_running_function2(): ... # Timeout after 30 seconds, with the error \"Connection timed out\" @timeout(30, os.strerror(errno.ETIMEDOUT)) def long_running_function3(): ..."} +{"question_id": 2197451, "score": 177, "creation_date": 1265260492, "tags": ["python", "string", "split"], "instruction": "Why are empty strings returned in split() results?\n\nWhat is the point of '/segment/segment/'.split('/') returning ['', 'segment', 'segment', '']? Notice the empty elements. If you're splitting on a delimiter that happens to be at position one and at the very end of a string, what extra value does it give you to have the empty string returned from each end?", "output": "str.split complements str.join, so \"/\".join(['', 'segment', 'segment', '']) gets you back the original string. If the empty strings were not there, the first and last '/' would be missing after the join()."} +{"question_id": 5616822, "score": 177, "creation_date": 1302496910, "tags": ["python", "regex"], "instruction": "How to use regex to find all overlapping matches\n\nI'm trying to find every 10 digit series of numbers within a larger series of numbers using re in Python 2.6. I'm easily able to grab no overlapping matches, but I want every match in the number series. Eg. in \"123456789123456789\" I should get the following list: [1234567891,2345678912,3456789123,4567891234,5678912345,6789123456,7891234567,8912345678,9123456789] I've found references to a \"lookahead\", but the examples I've seen only show pairs of numbers rather than larger groupings and I haven't been able to convert them beyond the two digits.", "output": "Use a capturing group inside a lookahead. The lookahead captures the text you're interested in, but the actual match is technically the zero-width substring before the lookahead, so the matches are technically non-overlapping: import re s = \"123456789123456789\" matches = re.finditer(r'(?=(\\d{10}))', s) results = [int(match.group(1)) for match in matches] # results: # [1234567891, # 2345678912, # 3456789123, # 4567891234, # 5678912345, # 6789123456, # 7891234567, # 8912345678, # 9123456789]"} +{"question_id": 7526625, "score": 177, "creation_date": 1316768713, "tags": ["python", "matplotlib", "legend", "title", "subplot"], "instruction": "Global legend and title aside subplots\n\nI've started with matplot and managed some basic plots, but now I find it hard to discover how to do some stuff I need now :( My actual question is how to place a global title and global legend on a figure with subplots. 
I'm doing 2x3 subplots where I have a lot of different graphs in various colors (about 200). To distinguish (most) of them I wrote something like def style(i, total): return dict(color=jet(i/total), linestyle=[\"-\", \"--\", \"-.\", \":\"][i%4], marker=[\"+\", \"*\", \"1\", \"2\", \"3\", \"4\", \"s\"][i%7]) fig=plt.figure() p0=fig.add_subplot(321) for i, y in enumerate(data): p0.plot(x, trans0(y), \"-\", label=i, **style(i, total)) # and more subplots with other transN functions (any thoughts on this? :)) Each subplot has the same style function. Now I'm trying to get a global title for all subplots and also a global legend which explains all styles. Also I need to make the font tiny to fit all 200 styles on there (I don't need completely unique styles, but at least some attempt) Can someone help me solve this task?", "output": "Global title: In newer releases of matplotlib one can use Figure.suptitle() method of Figure: import matplotlib.pyplot as plt fig = plt.gcf() fig.suptitle(\"Title centered above all subplots\", fontsize=14) Alternatively (based on @Steven C. Howell's comment below (thank you!)), use the matplotlib.pyplot.suptitle() function: import matplotlib.pyplot as plt # plot stuff # ... plt.suptitle(\"Title centered above all subplots\", fontsize=14)"} +{"question_id": 1052148, "score": 177, "creation_date": 1246079443, "tags": ["python", "group-by", "count", "sqlalchemy"], "instruction": "Group by & count function in sqlalchemy\n\nI want a \"group by and count\" command in sqlalchemy. How can I do this?", "output": "The documentation on counting says that for group_by queries it is better to use func.count(): from sqlalchemy import func session.query(Table.column, func.count(Table.column)).group_by(Table.column).all()"} +{"question_id": 19414060, "score": 177, "creation_date": 1381958205, "tags": ["python", "argparse"], "instruction": "Argparse: Required argument 'y' if 'x' is present\n\nI have a requirement as follows: ./xyifier --prox --lport lport --rport rport for the argument prox , I use action='store_true' to check if it is present or not. I do not require any of the arguments. But, if --prox is set I require rport and lport as well. Is there an easy way of doing this with argparse without writing custom conditional coding. More Code: non_int.add_argument('--prox', action='store_true', help='Flag to turn on proxy') non_int.add_argument('--lport', type=int, help='Listen Port.') non_int.add_argument('--rport', type=int, help='Proxy port.')", "output": "No, there isn't any option in argparse to make mutually inclusive sets of options. The simplest way to deal with this would be: if args.prox and (args.lport is None or args.rport is None): parser.error(\"--prox requires --lport and --rport.\") Actually there's already an open PR with an enhancement proposal : https://github.com/python/cpython/issues/55797"} +{"question_id": 1018886, "score": 177, "creation_date": 1245429191, "tags": ["python", "django", "django-models", "backend", "django-queryset"], "instruction": ".filter() vs .get() for single object? (Django)\n\nI was having a debate on this with some colleagues. Is there a preferred way to retrieve an object in Django when you're expecting only one? The two obvious ways are: try: obj = MyModel.objects.get(id=1) except MyModel.DoesNotExist: # We have no object! Do something... pass And: objs = MyModel.objects.filter(id=1) if len(objs) == 1: obj = objs[0] else: # We have no object! Do something... 
pass The first method seems behaviorally more correct, but uses exceptions in control flow which may introduce some overhead. The second is more roundabout but won't ever raise an exception. Any thoughts on which of these is preferable? Which is more efficient?", "output": "get() is provided specifically for this case. Use it. Option 2 is almost precisely how the get() method is actually implemented in Django, so there should be no \"performance\" difference (and the fact that you're thinking about it indicates you're violating one of the cardinal rules of programming, namely trying to optimize code before it's even been written and profiled -- until you have the code and can run it, you don't know how it will perform, and trying to optimize before then is a path of pain)."} +{"question_id": 4012340, "score": 177, "creation_date": 1287988973, "tags": ["python"], "instruction": "Colon (:) in Python list index\n\nI'm new to Python. I see : used in list indices especially when it's associated with function calls. Python 2.7 documentation suggests that lists.append translates to a[len(a):] = [x]. Why does one need to suffix len(a) with a colon? I understand that : is used to identify keys in dictionary.", "output": ": is the delimiter of the slice syntax to 'slice out' sub-parts in sequences , [start:end] [1:5] is equivalent to \"from 1 to 5\" (5 not included) [1:] is equivalent to \"1 to end\" [len(a):] is equivalent to \"from length of a to end\" Watch https://youtu.be/tKTZoB2Vjuk?t=41m40s at around 40:00 he starts explaining that. Works with tuples and strings, too."} +{"question_id": 35490148, "score": 177, "creation_date": 1455821932, "tags": ["python", "filenames", "python-os", "pathlib"], "instruction": "How to get folder name, in which given file resides, from pathlib.path?\n\nIs there something similar to os.path.dirname(path), but in pathlib?", "output": "It looks like there is a parents element that contains all the parent directories of a given path. E.g., if you start with: >>> import pathlib >>> p = pathlib.Path('/path/to/my/file') Then p.parents[0] is the directory containing file: >>> p.parents[0] PosixPath('/path/to/my') ...and p.parents[1] will be the next directory up: >>> p.parents[1] PosixPath('/path/to') Etc. p.parent is another way to ask for p.parents[0]. You can convert a Path into a string and get pretty much what you would expect: >>> str(p.parent) '/path/to/my' And also on any Path you can use the .absolute() method to get an absolute path: >>> os.chdir('/etc') >>> p = pathlib.Path('../relative/path') >>> str(p.parent) '../relative' >>> str(p.parent.absolute()) '/etc/../relative'"} +{"question_id": 38199008, "score": 177, "creation_date": 1467708492, "tags": ["python", "python-unittest", "magicmock"], "instruction": "Python returns MagicMock object instead of return_value\n\nI have a python file a.py which contains two classes A and B. class A(object): def method_a(self): return \"Class A method a\" class B(object): def method_b(self): a = A() print a.method_a() I would like to unittest method_b in class B by mocking A. Here is the content of the file testa.py for this purpose: import unittest import mock import a class TestB(unittest.TestCase): @mock.patch('a.A') def test_method_b(self, mock_a): mock_a.method_a.return_value = 'Mocked A' b = a.B() b.method_b() if __name__ == '__main__': unittest.main() I expect to get Mocked A in the output. 
But what I get is: <MagicMock name='A().method_a()' id='4326621392'> Where am I doing wrong?", "output": "When you @mock.patch('a.A'), you are replacing the class A in the code under test with mock_a. In B.method_b you then set a = A(), which is now a = mock_a() - i.e. a is the return_value of mock_a. As you haven't specified this value, it's a regular MagicMock; this isn't configured either, so you get the default response (yet another MagicMock) when calling methods on it. Instead, you want to configure the return_value of mock_a to have the appropriate method, which you can do as either: mock_a().method_a.return_value = 'Mocked A' # ^ note parentheses or, perhaps more explicitly (and, as mentioned in the comments, avoiding adding an entry to mock_a's call list): mock_a.return_value.method_a.return_value = 'Mocked A' Your code would have worked in the case a = A (assigning the class, not creating an instance), as then a.method_a() would have triggered your mock method."} +{"question_id": 22252397, "score": 176, "creation_date": 1394201536, "tags": ["python", "mysql", "flask", "sqlalchemy", "mysql-python"], "instruction": "ImportError: No module named MySQLdb\n\nI am referring the following tutorial to make a login page for my web application. http://code.tutsplus.com/tutorials/intro-to-flask-signing-in-and-out--net-29982 I am having issue with the database. I am getting an ImportError: No module named MySQLdb when I execute http://127.0.0.1:5000/testdb I have tried all possible ways to install python mysql, the one mentioned in the tutorial, easy_install, sudo apt-get install. I have installed mysql in my virtual env. My directory structure is just the same as whats explained in the tutorial. The module is sucessfully installed in my system and still I am getting this error. Please help. What could be causing this.", "output": "If you're having issues compiling the binary extension, or on a platform where you can't, you can try using the pure Python PyMySQL bindings. Simply pip install pymysql and switch your SQLAlchemy URI to start like this: SQLALCHEMY_DATABASE_URI = 'mysql+pymysql://.....' There are some other drivers you could also try."} +{"question_id": 4628122, "score": 176, "creation_date": 1294419828, "tags": ["python", "datetime", "timedelta"], "instruction": "How to construct a timedelta object from a simple string\n\nI'm writing a function that needs to parse string to a timedelta. The user must enter something like \"32m\" or \"2h32m\", or even \"4:13\" or \"5hr34m56s\"... 
Is there a library or something that has this sort of thing already implemented?", "output": "For the first format (5hr34m56s), you should parse using regular expressions Here is re-based solution: import re from datetime import timedelta regex = re.compile(r'((?P<hours>\\d+?)hr)?((?P<minutes>\\d+?)m)?((?P<seconds>\\d+?)s)?') def parse_time(time_str): parts = regex.match(time_str) if not parts: return parts = parts.groupdict() time_params = {} for name, param in parts.items(): if param: time_params[name] = int(param) return timedelta(**time_params) >>> from parse_time import parse_time >>> parse_time('12hr') datetime.timedelta(0, 43200) >>> parse_time('12hr5m10s') datetime.timedelta(0, 43510) >>> parse_time('12hr10s') datetime.timedelta(0, 43210) >>> parse_time('10s') datetime.timedelta(0, 10) >>>"} +{"question_id": 3838329, "score": 176, "creation_date": 1285928681, "tags": ["python", "math", "geometry"], "instruction": "How can I check if two segments intersect?\n\nHow can I check if 2 segments intersect? I've the following data: Segment1 [ {x1,y1}, {x2,y2} ] Segment2 [ {x1,y1}, {x2,y2} ] I need to write a small algorithm in Python to detect if the 2 lines are intersecting.", "output": "The equation of a line is: f(x) = A*x + b = y For a segment, it is exactly the same, except that x is included on an interval I. If you have two segments, defined as follow: Segment1 = {(X1, Y1), (X2, Y2)} Segment2 = {(X3, Y3), (X4, Y4)} The abscissa Xa of the potential point of intersection (Xa,Ya) must be contained in both interval I1 and I2, defined as follow : I1 = [min(X1,X2), max(X1,X2)] I2 = [min(X3,X4), max(X3,X4)] And we could say that Xa is included into : Ia = [max( min(X1,X2), min(X3,X4) ), min( max(X1,X2), max(X3,X4) )] Now, we need to check that this interval Ia exists : if (max(X1,X2) < min(X3,X4)): return False # There is no mutual abscissae So, we have two line formula, and a mutual interval. Your line formulas are: f1(x) = A1*x + b1 = y f2(x) = A2*x + b2 = y As we got two points by segment, we are able to determine A1, A2, b1 and b2: A1 = (Y1-Y2)/(X1-X2) # Pay attention to not dividing by zero A2 = (Y3-Y4)/(X3-X4) # Pay attention to not dividing by zero b1 = Y1-A1*X1 = Y2-A1*X2 b2 = Y3-A2*X3 = Y4-A2*X4 If the segments are parallel, then A1 == A2 : if (A1 == A2): return False # Parallel segments A point (Xa,Ya) standing on both line must verify both formulas f1 and f2: Ya = A1 * Xa + b1 Ya = A2 * Xa + b2 A1 * Xa + b1 = A2 * Xa + b2 Xa = (b2 - b1) / (A1 - A2) # Once again, pay attention to not dividing by zero The last thing to do is check that Xa is included into Ia: if ( (Xa < max( min(X1,X2), min(X3,X4) )) or (Xa > min( max(X1,X2), max(X3,X4) )) ): return False # intersection is out of bound else: return True In addition to this, you may check at startup that two of the four provided points are not equals to avoid all that testing."} +{"question_id": 1233539, "score": 176, "creation_date": 1249481646, "tags": ["python", "dictionary", "url-parameters"], "instruction": "Python Dictionary to URL Parameters\n\nI am trying to convert a Python dictionary to a string for use as URL parameters. I am sure that there is a better, more Pythonic way of doing this. What is it? x = \"\" for key, val in {'a':'A', 'b':'B'}.items(): x += \"%s=%s&\" %(key,val) x = x[:-1]", "output": "Here is the correct way of using it in Python 3. 
from urllib.parse import urlencode params = {'a':'A', 'b':'B'} print(urlencode(params))"} +{"question_id": 42973223, "score": 176, "creation_date": 1490264447, "tags": ["python", "matplotlib", "axis"], "instruction": "How to share x axes of two subplots after they have been created\n\nI'm trying to share two subplots axes, but I need to share the x axis after the figure was created. E.g. I create this figure: import numpy as np import matplotlib.pyplot as plt t = np.arange(1000)/100. x = np.sin(2*np.pi*10*t) y = np.cos(2*np.pi*10*t) fig = plt.figure() ax1 = plt.subplot(211) plt.plot(t,x) ax2 = plt.subplot(212) plt.plot(t,y) # some code to share both x axes plt.show() Instead of the comment I want to insert some code to share both x axes. How do I do this? There are some relevant sounding attributes _shared_x_axes and _shared_x_axes when I check to figure axis (fig.get_axes()) but I don't know how to link them.", "output": "The usual way to share axes is to create the shared properties at creation. Either fig=plt.figure() ax1 = plt.subplot(211) ax2 = plt.subplot(212, sharex = ax1) or fig, (ax1, ax2) = plt.subplots(nrows=2, sharex=True) Sharing the axes after they have been created should therefore not be necessary. However if for any reason, you need to share axes after they have been created (actually, using a different library which creates some subplots, like here might be a reason), there would still be a solution: Using ax2.sharex(ax1) creates a link between the two axes, ax1 and ax2. In contrast to the sharing at creation time, you will have to set the xticklabels off manually for one of the axes (in case that is wanted). A complete example: import numpy as np import matplotlib.pyplot as plt t= np.arange(1000)/100. x = np.sin(2*np.pi*10*t) y = np.cos(2*np.pi*10*t) fig=plt.figure() ax1 = plt.subplot(211) ax2 = plt.subplot(212) ax1.plot(t,x) ax2.plot(t,y) ax2.sharex(ax1) ax1.set_xticklabels([]) # ax2.autoscale() ## call autoscale if needed plt.show() For a list of axes you would do: for ax in axes[1:]: ax.sharex(axes[0])"} +{"question_id": 3694371, "score": 176, "creation_date": 1284284630, "tags": ["python", "oop"], "instruction": "How do I initialize the base (super) class?\n\nIn Python, consider I have the following code: class SuperClass(object): def __init__(self, x): self.x = x class SubClass(SuperClass): def __init__(self, y): self.y = y # how do I initialize the SuperClass __init__ here? How do I initialize the SuperClass __init__ in the subclass? I am following the Python tutorial and it doesn't cover that. When I searched on Google, I found more than one way of doing. What is the standard way of handling this?", "output": "Python (until version 3) supports \"old-style\" and new-style classes. New-style classes are derived from object and are what you are using, and invoke their base class through super(), e.g. class X(object): def __init__(self, x): pass def doit(self, bar): pass class Y(X): def __init__(self): super(Y, self).__init__(123) def doit(self, foo): return super(Y, self).doit(foo) Because python knows about old- and new-style classes, there are different ways to invoke a base method, which is why you've found multiple ways of doing so. For completeness sake, old-style classes call base methods explicitly using the base class, i.e. def doit(self, foo): return X.doit(self, foo) But since you shouldn't be using old-style anymore, I wouldn't care about this too much. 
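As a side note, on Python 3 the super() call can be written without arguments; a small sketch reusing the classes above: class Y(X): def __init__(self): super().__init__(123)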
Python 3 only knows about new-style classes (no matter if you derive from object or not)."} +{"question_id": 9845292, "score": 176, "creation_date": 1332530529, "tags": ["python", "matlab", "numpy", "matplotlib", "scipy"], "instruction": "A tool to convert MATLAB code to Python\n\nI have a bunch of MATLAB code from my MS thesis which I now want to convert to Python (using numpy/scipy and matplotlib) and distribute as open-source. I know the similarity between MATLAB and Python scientific libraries, and converting them manually will be not more than a fortnight (provided that I work towards it every day for some time). I was wondering if there was already any tool available which can do the conversion.", "output": "There are several tools for converting Matlab to Python code. The only one that's seen recent activity (last commit from June 2018) is Small Matlab to Python compiler (also developed here: SMOP@chiselapp). Other options include: LiberMate: translate from Matlab to Python and SciPy (Requires Python 2, last update 4 years ago). OMPC: Matlab to Python (a bit outdated). Mat2py: Matlab to Python (Requires Python 2). Also, for those interested in an interface between the two languages and not conversion: pymatlab: communicate from Python by sending data to the MATLAB workspace, operating on them with scripts and pulling back the resulting data. Python-Matlab wormholes: both directions of interaction supported. Python-Matlab bridge: use Matlab from within Python, offers matlab_magic for iPython, to execute normal matlab code from within ipython. PyMat: Control Matlab session from Python. pymat2: continuation of the seemingly abandoned PyMat. mlabwrap, mlabwrap-purepy: make Matlab look like Python library (based on PyMat). oct2py (repository): run GNU Octave commands from within Python. pymex: Embeds the Python Interpreter in Matlab, also on File Exchange. matpy: Access MATLAB in various ways: create variables, access .mat files, direct interface to MATLAB engine (requires MATLAB be installed). MatPy: Python package for numerical linear algebra and plotting with a MatLab-like interface. Btw, it might be helpful to look here for other migration tips: http://bci2000.org/downloads/BCPy2000/Migration.html On a different note, for people who might find it useful there is: matlab2fortran"} +{"question_id": 14452145, "score": 175, "creation_date": 1358832700, "tags": ["python", "time", "profiling", "measure"], "instruction": "How to measure time taken between lines of code in python?\n\nSo in Java, we can do How to measure time taken by a function to execute But how is it done in python? To measure the time start and end time between lines of code? Something that does this: import some_time_library starttime = some_time_library.some_module() code_tobe_measured() endtime = some_time_library.some_module() time_taken = endtime - starttime", "output": "If you want to measure CPU time, you can use time.process_time() for Python 3.3 and above: import time start = time.process_time() # your code here print(time.process_time() - start) First call turns the timer on, and second call tells you how many seconds have elapsed. There is also a function time.clock(), but it has been deprecated since Python 3.3 and was removed in Python 3.8. There are better profiling tools like timeit and profile, however time.process_time() will measure the CPU time and this is what you are asking about.
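As an aside, if the goal is to benchmark a small snippet rather than instrument a running program, timeit can be used roughly like this (the statement string is just an illustrative example): import timeit print(timeit.timeit('sum(range(1000))', number=10000)) # total seconds for 10000 runs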
If you want to measure wall clock time instead, use time.time()."} +{"question_id": 12773763, "score": 175, "creation_date": 1349652961, "tags": ["python", "django", "reload", "gunicorn"], "instruction": "gunicorn autoreload on source change\n\nFinally I migrated my development env from runserver to gunicorn/nginx. It'd be convenient to replicate the autoreload feature of runserver to gunicorn, so the server automatically restarts when source changes. Otherwise I have to restart the server manually with kill -HUP. Any way to avoid the manual restart?", "output": "While this is old question you need to know that ever since version 19.0 gunicorn has had the --reload option. So now no third party tools are needed."} +{"question_id": 58068818, "score": 175, "creation_date": 1569266203, "tags": ["python", "jupyter-notebook", "environment-variables", "jupyter", "conda"], "instruction": "How to use Jupyter notebooks in a conda environment?\n\nTypically one runs jupyter notebook or jupyter-notebook or ipython notebook in a terminal to start a Jupyter notebook webserver locally (and open the URL in the browser). When using conda and conda environments, what is the best way to run a Jupyter notebook which allows to import Python modules installed in the conda environment? As it seems, this is not quite straight forward and many users have similar troubles. Most common error message seems to be: after installing a package XYZ in a conda environment my-env one can run import XYZ in a python console started in my-env, but running the same code in the Jupyter notebook will lead to an ImportError. This question has been asked many times, but there is no good place to answer it, most Q&A's and Github tickets are quite messy so let's start a new Q&A here.", "output": "Disclaimer: ATM tested only in Ubuntu and Windows (see comments to this answer). Jupyter runs the user's code in a separate process called kernel. The kernel can be a different Python installation (in a different conda environment or virtualenv or Python 2 instead of Python 3) or even an interpreter for a different language (e.g. Julia or R). Kernels are configured by specifying the interpreter and a name and some other parameters (see Jupyter documentation) and configuration can be stored system-wide, for the active environment (or virtualenv) or per user. If nb_conda_kernels is used, additional to statically configured kernels, a separate kernel for each conda environment with ipykernel installed will be available in Jupyter notebooks. In short, there are three options how to use a conda environment and Jupyter: Option 1: Run Jupyter server and kernel inside the conda environment Do something like: conda create -n my-conda-env # creates new virtual env conda activate my-conda-env # activate environment in terminal conda install jupyter # install jupyter + notebook jupyter notebook # start server + kernel inside my-conda-env Jupyter will be completely installed in the conda environment. Different versions of Jupyter can be used for different conda environments, but this option might be a bit of overkill. It is enough to include the kernel in the environment, which is the component wrapping Python which runs the code. The rest of Jupyter notebook can be considered as editor or viewer and it is not necessary to install this separately for every environment and include it in every env.yml file. Therefore one of the next two options might be preferable, but this one is the simplest one and definitely fine. 
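A quick sanity check for this option: run import sys; print(sys.executable) in a notebook cell; if everything is wired up correctly, the printed path should point into my-conda-env (the Troubleshooting section below shows the same check in more detail).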
Option 2: Create special kernel for the conda environment Do something like: conda create -n my-conda-env # creates new virtual env conda activate my-conda-env # activate environment in terminal conda install ipykernel # install Python kernel in new conda env ipython kernel install --user --name=my-conda-env-kernel # configure Jupyter to use Python kernel Then run jupyter from the system installation or a different conda environment: conda deactivate # this step can be omitted by using a different terminal window than before conda install jupyter # optional, might be installed already in system e.g. by 'apt install jupyter' on debian-based systems jupyter notebook # run jupyter from system Name of the kernel and the conda environment are independent from each other, but it might make sense to use a similar name. Only the Python kernel will be run inside the conda environment, Jupyter from system or a different conda environment will be used - it is not installed in the conda environment. By calling ipython kernel install the jupyter is configured to use the conda environment as kernel, see Jupyter documentation and IPython documentation for more information. In most Linux installations this configuration is a *.json file in ~/.local/share/jupyter/kernels/my-conda-env-kernel/kernel.json: { \"argv\": [ \"/opt/miniconda3/envs/my-conda-env/bin/python\", \"-m\", \"ipykernel_launcher\", \"-f\", \"{connection_file}\" ], \"display_name\": \"my-conda-env-kernel\", \"language\": \"python\" } Option 3: Use nb_conda_kernels to use a kernel in the conda environment When the package nb_conda_kernels is installed, a separate kernel is available automatically for each conda environment containing the conda package ipykernel or a different kernel (R, Julia, ...). conda activate my-conda-env # this is the environment for your project and code conda install ipykernel conda deactivate conda activate base # could be also some other environment conda install nb_conda_kernels jupyter notebook You should be able to choose the Kernel Python [conda env:my-conda-env]. Note that nb_conda_kernels seems to be available only via conda and not via pip or other package managers like apt. Troubleshooting Using Linux/Mac the command which on the command line will tell you which jupyter is used, if you are using option 1 (running Jupyter from inside the conda environment), it should be an executable from your conda environment: $ which jupyter /opt/miniconda3/envs/my-conda-env/bin/jupyter $ which jupyter-notebook # this might be different than 'which jupyter'! (see below) /opt/miniconda3/envs/my-conda-env/bin/jupyter-notebook Inside the notebook you should see that Python uses Python paths from the conda environment: [1] !which python /opt/miniconda3/envs/my-conda-env/bin/python [2] import sys; sys.executable '/opt/miniconda3/envs/my-conda-env/bin/python' ['/home/my_user', '/opt/miniconda3/envs/my-conda-env/lib/python37.zip', '/opt/miniconda3/envs/my-conda-env/lib/python3.7', '/opt/miniconda3/envs/my-conda-env/lib/python3.7/lib-dynload', '', '/opt/miniconda3/envs/my-conda-env/lib/python3.7/site-packages', '/opt/miniconda3/envs/my-conda-env/lib/python3.7/site-packages/IPython/extensions', '/home/my_user/.ipython'] Jupyter provides the command jupyter-troubleshoot or in a Jupyter notebook: !jupyter-troubleshoot This will print a lot of helpful information about including the outputs mentioned above as well as installed libraries and others. 
When asking for help regarding Jupyter installation questions, it might be a good idea to provide this information in bug reports or questions. To list all configured Jupyter kernels run: jupyter kernelspec list Common errors and traps Jupyter notebook not installed in conda environment Note: symptoms are not unique to the issue described here. Symptoms: ImportError in Jupyter notebooks for modules installed in the conda environment (but not installed system wide), but no error when importing in a Python terminal Explanation: You tried to run jupyter notebook from inside your conda environment (option 1, see above), there is no configuration for a kernel for this conda environment (this would be option 2) and nb_conda_kernels is not installed (option 3), but jupyter notebook is not (fully) installed in the conda environment, even if which jupyter might make you believe it was. In GNU/Linux you can type which jupyter to check which executable of Jupyter is run. If the path points outside the environment, as below, the system's Jupyter is used, probably because Jupyter is not installed in the environment: (my-conda-env) $ which jupyter-notebook /usr/bin/jupyter If the path points to a file in your conda environment, Jupyter is run from inside the environment: (my-conda-env) $ which jupyter-notebook /opt/miniconda3/envs/my-conda-env/bin/jupyter-notebook Note that when the conda package ipykernel is installed, an executable jupyter is shipped, but no executable jupyter-notebook. This means that which jupyter will return a path to the conda environment but jupyter notebook will start the system's jupyter-notebook (see also here): $ conda create -n my-conda-env $ conda activate my-conda-env $ conda install ipykernel $ which jupyter # this looks good, but is misleading! /opt/miniconda3/envs/my-conda-env/bin/jupyter $ which jupyter-notebook # jupyter simply runs jupyter-notebook from system... /usr/bin/jupyter-notebook This happens because jupyter notebook searches for jupyter-notebook, finds /usr/bin/jupyter-notebook and calls it, starting a new Python process. The shebang in /usr/bin/jupyter-notebook is #!/usr/bin/python3 and not a dynamic #!/usr/bin/env python. Therefore Python manages to break out of the conda environment. I guess jupyter could call python /usr/bin/jupyter-notebook instead to overrule the shebang, but mixing the system's bin files and the environment's python path can't work well anyway. Solution: Install jupyter notebook inside the conda environment: conda activate my-conda-env conda install jupyter jupyter notebook Wrong kernel configuration: Kernel is configured to use system Python Note: symptoms are not unique to the issue described here. Symptoms: ImportError in Jupyter notebooks for modules installed in the conda environment (but not installed system wide), but no error when importing in a Python terminal Explanation: Typically the system provides a kernel called python3 (display name \"Python 3\") configured to use /usr/bin/python3, see e.g. /usr/share/jupyter/kernels/python3/kernel.json. This is usually overridden by a kernel in the conda environment, which points to the environment's python binary /opt/miniconda3/envs/my-conda-env/bin/python. Both are generated by the package ipykernel (see here and here). A user kernel specification in ~/.local/share/jupyter/kernels/python3/kernel.json might override the system-wide and environment kernel. If the environment kernel is missing or the user kernel points to a python installation outside the environment, option 1 (installation of jupyter in the environment) will fail.
For occurrences and discussions of this problem and variants see here, here, here and also here, here and here. Solution: Use jupyter kernelspec list to list the location active kernel locations. $ conda activate my-conda-env $ jupyter kernelspec list Available kernels: python3 /opt/miniconda3/envs/my-conda-env/share/jupyter/kernels/python3 If the kernel in the environment is missing, you can try creating it manually using ipython kernel install --sys-prefix in the activated environment, but it is probably better to check your installation, because conda install ipykernel should have created the environment (maybe try re-crate the environment and re-install all packages?). If a user kernel specification is blocking the environment kernel specification, you can either remove it or use a relative python path which will use $PATH to figure out which python to use. So something like this, should be totally fine: $ cat ~/.local/share/jupyter/kernels/python3/kernel.json { \"argv\": [ \"python\", \"-m\", \"ipykernel_launcher\", \"-f\", \"{connection_file}\" ], \"display_name\": \"Python 3\", \"language\": \"python\" } Correct conda environment not activated Symptoms: ImportError for modules installed in the conda environment (but not installed system wide) in Jupyter notebooks and Python terminals Explanation: Each terminal has a set of environment variables, which are lost when the terminal is closed. In order to use a conda environment certain environment variables need to be set, which is done by activating it using conda activate my-conda-env. If you attempted to run Jupyter notebook from inside the conda environment (option 1), but did not activate the conda environment before running it, it might run the system's jupyter. Solution: Activate conda environment before running Jupyter. conda activate my-conda-env jupyter notebook Broken kernel configuration Symptoms: Strange things happening. Maybe similar symptoms as above, e.g. ImportError Explanation: If you attempted to use option 2, i.e. running Jupyter from system and the Jupyter kernel inside the conda environment by using an explicit configuration for the kernel, but it does not behave as you expect, the configuration might be corrupted in some way. Solution: Check configuration in ~/.local/share/jupyter/kernels/my-kernel-name/kernel.json and fix mistakes manually or remove the entire directory and re-create it using the command provided above for option 2. If you can't find the kernel configuration there run jupyter kernelspec list. Python 2 vs 3 Symptoms: ImportError due to wrong Python version of the Jupyter kernel or other problems with Python 2/3 Explanation: The kernel configuration can have all sorts of confusing and misleading effects. For example the default Python 3 kernel configuration will allow me to launch a Jupyter notebook running on Python 2: conda create -n my-conda-env conda activate my-conda-env conda install python=2 conda install jupyter jupyter notebook The default Python 3 kernel: $ cat ~/.local/share/jupyter/kernels/python3/kernel.json { \"argv\": [ \"python\", \"-m\", \"ipykernel_launcher\", \"-f\", \"{connection_file}\" ], \"display_name\": \"Python 3\", \"language\": \"python\" } After creating a new Jupyter Notebook with the Python 3 kernel, Python 2 from the conda environment will be used even if \"Python 3\" is displayed by Jupyter. 
Solution: Don't use Python 2 ;-)"} +{"question_id": 30610675, "score": 175, "creation_date": 1433304522, "tags": ["javascript", "python"], "instruction": "Python Pandas equivalent in JavaScript\n\nWith this CSV example: Source,col1,col2,col3 foo,1,2,3 bar,3,4,5 The standard method I use in Pandas is this: Parse CSV Select columns into a data frame (col1 and col3) Process the column (e.g. average the values of col1 and col3) Is there a JavaScript library that does that like Pandas?", "output": "This wiki will summarize and compare many pandas-like JavaScript libraries. In general, you should check out the d3 JavaScript library. d3 is a very useful \"Swiss army knife\" for handling data in JavaScript, just like pandas is helpful for Python. You may see d3 used frequently like pandas, even if d3 is not exactly a DataFrame/Pandas replacement (i.e. d3 doesn't have the same API; d3 does not have Series / DataFrame classes with methods that match the pandas behavior). Ahmed's answer explains how d3 can be used to achieve some DataFrame functionality, and some of the libraries below were inspired by things like LearnJsData which uses d3 and lodash. As for DataFrame-style data transformation (splitting, joining, group by etc), here is a quick list of some of the JavaScript libraries. Note the libraries are written in different languages, including... browser-compatible aka client-side JavaScript Node.js aka Server-side JavaScript Typescript Some even use CPython transpiled to WebAssembly (but work with Node.js and/or browsers) ...so use the option that's right for you: Pyodide (browser-support AND Nodejs-support) A new but very strong contender from Ahmed Fasih's answer WebAssembly transpilation of CPython and a large portion of the numeric Python ecosystem (\"including NumPy, pandas, SciPy, Matplotlib, and scikit-learn\") \"for the browser and Node.js \" made by folks tightly connected to Project Jupyter see its list of included packages danfo-js (browser-support AND NodeJS-support) From Vignesh's answer danfo (which is often imported and aliased as dfd) has a basic DataFrame-type data structure, with the ability to plot directly Built by the team at Tensorflow: \"One of the main goals of Danfo.js is to bring data processing, machine learning and AI tools to JavaScript developers. ... Open-source libraries like Numpy and Pandas...\" pandas is built on top of numpy; likewise danfo-js is built on tensorflow-js please note danfo may not (yet?) support multi-column indexes pandas-js UPDATE The pandas-js repo has not been updated in a while From STEEL and Feras' answers \"pandas.js is an open source (experimental) library mimicking the Python pandas library. It relies on Immutable.js as the NumPy logical equivalent.
The main data objects in pandas.js are, like in Python pandas, the Series and the DataFrame.\" dataframe-js \"DataFrame-js provides an immutable data structure for javascript and datascience, the DataFrame, which allows to work on rows and columns with a sql and functional programming inspired api.\" data-forge Seen in Ashley Davis' answer \"JavaScript data transformation and analysis toolkit inspired by Pandas and LINQ.\" Note the old data-forge JS repository is no longer maintained; now a new repository uses Typescript jsdataframe \"Jsdataframe is a JavaScript data wrangling library inspired by data frame functionality in R and Python Pandas.\" dataframe \"explore data by grouping and reducing.\" SQL Frames \"DataFrames meet SQL, in the Browser\" \"SQL Frames is a low code data management framework that can be directly embedded in the browser to provide rich data visualization and UX. Complex DataFrames can be composed using familiar SQL constructs. With its powerful built-in analytics engine, data sources can come in any shape, form and frequency and they can be analyzed directly within the browser. It allows scaling to big data backends by transpiling the composed DataFrame logic to SQL.\" Jandas (browser- AND NodeJS-support; a new TypeScript library developed in 2023) Indexing and query very similar to Pandas Negative number and range indexing Support DataFrame with zero rows/columns Support index with duplicated values iterrows, sort_values, groupby and element-wise operation Then after coming to this question, checking other answers here and doing more searching, I found options like: Apache Arrow in JS Thanks to user Back2Basics suggestion: \"Apache Arrow is a columnar memory layout specification for encoding vectors and table-like containers of flat and nested data. Apache Arrow is the emerging standard for large in-memory columnar data (Spark, Pandas, Drill, Graphistry, ...)\" polars Polars is a blazingly fast DataFrames library implemented in Rust using Apache Arrow Columnar Format as memory model. Observable At first glance, seems like a JS alternative to the IPython/Jupyter \"notebooks\" Observable's page promises: \"Reactive programming\", a \"Community\", on a \"Web Platform\" See 5 minute intro here portal.js (formerly recline; from Rufus' answer) MAY BE OUTDATED: Does not use a \"DataFrame\" API MAY BE OUTDATED: Instead emphasizes its \"Multiview\" (the UI) API, (similar to jQuery/DOM model) which doesn't require jQuery but does require a browser! More examples MAY BE OUTDATED: Also emphasizes its MVC-ish architecture; including back-end stuff (i.e. database connections) js-data Really more of an ORM! Most of its modules correspond to different data storage questions (js-data-mongodb, js-data-redis, js-data-cloud-datastore), sorting, filtering, etc. On plus-side does work on Node.js as a first-priority; \"Works in Node.js and in the Browser.\" miso (another suggestion from Rufus) Impressive backers like Guardian and bocoup. AlaSQL \"AlaSQL\" is an open source SQL database for Javascript with a strong focus on query speed and data source flexibility for both relational data and schemaless data. It works in your browser, Node.js, and Cordova.\" Some thought experiments: \"Scaling a DataFrame in Javascript\" - Gary Sieling Here are the criteria we used to consider the above choices General Criteria Language (NodeJS vs browser JS vs Typescript) Dependencies (i.e. 
if it uses an underlying library / AJAX/remote API's) Actively supported (active user-base, active source repository, etc) Size/speed of JS library Pandas' criteria in its R comparison Performance Functionality/flexibility Ease-of-use Similarity to Pandas / Dataframe API's Specifically hits on their main features Data-science emphasis Built-in visualization functions Demonstrated integration in combination with other tools like Jupyter (interactive notebooks), etc"} +{"question_id": 4160770, "score": 175, "creation_date": 1289523964, "tags": ["python", "django", "lazy-evaluation", "gettext", "django-i18n"], "instruction": "\"gettext()\" vs \"gettext_lazy()\" in Django\n\nI have a question about using ugettext and gettext_lazy() for translations. I learned that in models I should use gettext_lazy(), while in views ugettext. But are there any other places, where I should use gettext_lazy() too? What about form definitions? Are there any performance differences between them? Edit: And one more thing. Sometimes, instead of gettext_lazy(), gettext_noop() is used. As documentation says, gettext_noop() strings are only marked for translation and translated at the latest possible moment before displaying them to the user, but I'm a little confused here, isn't that similar to what gettext_lazy() does? It's still hard for me to decide, which should I use in my models and forms.", "output": "gettext() vs. gettext_lazy() In definitions like forms or models you should use gettext_lazy because the code of these definitions is only executed once (mostly on django's startup); gettext_lazy translates the strings in a lazy fashion, which means, e.g., every time you access the name of an attribute on a model the string will be newly translated, which totally makes sense because you might be looking at this model in different languages since django was started! In views and similar function calls you can use gettext without problems, because every time the view is called gettext will be newly executed, so you will always get the right translation fitting the request! Regarding gettext_noop() As Bryce pointed out in his answer, this function marks a string as extractable for translation but does return the untranslated string. This is useful for using the string in two places \u2013 translated and untranslated. See the following example: import logging from django.http import HttpResponse from django.utils.translation import gettext as _, gettext_noop as _noop def view(request): msg = _noop(\"An error has occurred\") logging.error(msg) return HttpResponse(_(msg))"} +{"question_id": 551038, "score": 174, "creation_date": 1234711780, "tags": ["python", "access-modifiers"], "instruction": "\"Private\" (implementation) class in Python\n\nI am coding a small Python module composed of two parts: some functions defining a public interface, an implementation class used by the above functions, but which is not meaningful outside the module. At first, I decided to \"hide\" this implementation class by defining it inside the function using it, but this hampers readability and cannot be used if multiple functions reuse the same class. So, in addition to comments and docstrings, is there a mechanism to mark a class as \"private\" or \"internal\"? I am aware of the underscore mechanism, but as I understand it, it only applies to variable, function and method names.", "output": "Use a single underscore prefix: class _Internal: ...
This is the official Python convention for 'internal' symbols; from module import * does not import underscore-prefixed objects. Reference to the single-underscore convention in the Python 2 and Python 3 documentation."} +{"question_id": 10593100, "score": 174, "creation_date": 1337045459, "tags": ["python", "numpy", "logarithm", "natural-logarithm"], "instruction": "How do you do natural logs (e.g. \"ln()\") with numpy in Python?\n\nUsing numpy, how can I do the following: ln(x) Is it equivalent to: np.log(x) I apologise for such a seemingly trivial question, but my understanding of the difference between log and ln is that ln is logspace e?", "output": "np.log is ln, whereas np.log10 is your standard base 10 log."} +{"question_id": 3038033, "score": 174, "creation_date": 1276525551, "tags": ["python", "function", "python-typing"], "instruction": "What are good uses for Python3's \"Function Annotations\"?\n\nFunction Annotations: PEP-3107 I ran across a snippet of code demonstrating Python3's function annotations. The concept is simple but I can't think of why these were implemented in Python3 or any good uses for them. Perhaps SO can enlighten me? How it works: def foo(a: 'x', b: 5 + 6, c: list) -> max(2, 9): ... function body ... Everything following the colon after an argument is an 'annotation', and the information following the -> is an annotation for the function's return value. foo.func_annotations would return a dictionary: {'a': 'x', 'b': 11, 'c': list, 'return': 9} What's the significance of having this available?", "output": "I think this is actually great. Coming from an academic background, I can tell you that annotations have proved themselves invaluable for enabling smart static analyzers for languages like Java. For instance, you could define semantics like state restrictions, threads that are allowed to access, architecture limitations, etc., and there are quite a few tools that can then read these and process them to provide assurances beyond what you get from the compilers. You could even write things that check preconditions/postconditions. I feel something like this is especially needed in Python because of its weaker typing, but there were really no constructs that made this straightforward and part of the official syntax. There are other uses for annotations beyond assurance. I can see how I could apply my Java-based tools to Python. For instance, I have a tool that lets you assign special warnings to methods, and gives you indications when you call them that you should read their documentation (E.g., imagine you have a method that must not be invoked with a negative value, but it's not intuitive from the name). With annotations, I could technically write something like this for Python. Similarly, a tool that organizes methods in a large class based on tags can be written if there is an official syntax."} +{"question_id": 1507082, "score": 174, "creation_date": 1254440858, "tags": ["python", "exception"], "instruction": "Python: Is it bad form to raise exceptions within __init__?\n\nIs it considered bad form to raise exceptions within __init__? If so, then what is the accepted method of throwing an error when certain class variables are initialized as None or of an incorrect type?", "output": "Raising exceptions within __init__() is absolutely fine. There's no other good way to indicate an error condition within an initializer, and there are many hundreds of examples in the standard library where initializing an object can raise an exception. 
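As an illustration of raising an exception from an initializer (a minimal sketch; the class and the validation rule are invented for the example, not taken from the original answer):

class Account:
    def __init__(self, balance):
        # reject invalid state as early as possible
        if balance < 0:
            raise ValueError('balance must be non-negative, got %r' % (balance,))
        self.balance = balance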
The error class to raise, of course, is up to you. ValueError is best if the initializer was passed an invalid parameter."} +{"question_id": 19021765, "score": 174, "creation_date": 1380179872, "tags": ["python", "redis"], "instruction": "redis-py : What's the difference between StrictRedis() and Redis()?\n\nI want to use redis-py for caching some data, but I can't find a suitable explanation of the difference between redis.StrictRedis() and redis.Redis(). Are they equivalent? In addition, I can't find any clear documentation about redis.StrictRedis()'s arguments in Redis Python Docs. Any idea?", "output": "EDIT: They are now equivalent: redis-py 3.0 drops support for the legacy \"Redis\" client class. \"StrictRedis\" has been renamed to \"Redis\" and an alias named \"StrictRedis\" is provided so that users previously using \"StrictRedis\" can continue to run unchanged. Original answer: This seems pretty clear: redis-py exposes two client classes that implement these commands The StrictRedis class attempts to adhere to the official command syntax. and In addition to the changes above, the Redis class, a subclass of StrictRedis, overrides several other commands to provide backwards compatibility with older versions of redis-py Do you need backwards compatibility? Use Redis. Don't care? Use StrictRedis. 2017-03-31 Here are the specifics of the backwards compatibility, from the github.com link cited: In addition to the changes above, the Redis class, a subclass of StrictRedis, overrides several other commands to provide backwards compatibility with older versions of redis-py: LREM: Order of 'num' and 'value' arguments reversed such that 'num' can provide a default value of zero. ZADD: Redis specifies the 'score' argument before 'value'. These were swapped accidentally when being implemented and not discovered until after people were already using it. The Redis class expects *args in the form of: name1, score1, name2, score2, ... SETEX: Order of 'time' and 'value' arguments reversed."} +{"question_id": 4326658, "score": 173, "creation_date": 1291221616, "tags": ["python", "dictionary", "indexing"], "instruction": "How to index into a dictionary?\n\nI have a Dictionary below: colors = { \"blue\" : \"5\", \"red\" : \"6\", \"yellow\" : \"8\", } How do I index the first entry in the dictionary? colors[0] will return a KeyError for obvious reasons.", "output": "Dictionaries are unordered in Python versions up to and including Python 3.6. If you do not care about the order of the entries and want to access the keys or values by index anyway, you can create a list of keys for a dictionary d using keys = list(d), and then access keys in the list by index keys[i], and the associated values with d[keys[i]]. If you do care about the order of the entries, starting with Python 2.7 you can use collections.OrderedDict. Or use a list of pairs l = [(\"blue\", \"5\"), (\"red\", \"6\"), (\"yellow\", \"8\")] if you don't need access by key. (Why are your numbers strings by the way?) In Python 3.7, normal dictionaries are ordered, so you don't need to use OrderedDict anymore (but you still can \u2013 it's basically the same type). 
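For reference, a minimal sketch of index-style access along the lines described above, reusing the colors dict from the question (on Python 3.7+ the listing order is the insertion order):

colors = {'blue': '5', 'red': '6', 'yellow': '8'}
keys = list(colors)               # ['blue', 'red', 'yellow']
print(keys[0], colors[keys[0]])   # blue 5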
The CPython implementation of Python 3.6 already included that change, but since it's not part of the language specification, you can't rely on it in Python 3.6."} +{"question_id": 6169217, "score": 173, "creation_date": 1306689917, "tags": ["python"], "instruction": "Replace console output in Python\n\nI'm wondering how I could create one of those nifty console counters in Python as in certain C/C++-programs. I've got a loop doing things and the current output is along the lines of: Doing thing 0 Doing thing 1 Doing thing 2 ... what would be neater would be to just have the last line update; X things done. I've seen this in a number of console programs and am wondering if/how I'd do this in Python.", "output": "An easy solution is just writing \"\\r\" before the string and not adding a newline; if the string never gets shorter this is sufficient... sys.stdout.write(\"\\rDoing thing %i\" % i) sys.stdout.flush() Slightly more sophisticated is a progress bar... this is something I am using: def start_progress(title): global progress_x sys.stdout.write(title + \": [\" + \"-\"*40 + \"]\" + chr(8)*41) sys.stdout.flush() progress_x = 0 def progress(x): global progress_x x = int(x * 40 // 100) sys.stdout.write(\"#\" * (x - progress_x)) sys.stdout.flush() progress_x = x def end_progress(): sys.stdout.write(\"#\" * (40 - progress_x) + \"]\\n\") sys.stdout.flush() You call start_progress passing the description of the operation, then progress(x) where x is the percentage and finally end_progress()"} +{"question_id": 2673385, "score": 173, "creation_date": 1271749099, "tags": ["python", "random"], "instruction": "How to generate a random number with a specific amount of digits?\n\nLet's say I need a 3-digit number, so it would be something like: >>> random(3) 563 or >>> random(5) 26748 >> random(2) 56", "output": "You can use either of random.randint or random.randrange. So to get a random 3-digit number: from random import randint, randrange randint(100, 999) # randint is inclusive at both ends randrange(100, 1000) # randrange is exclusive at the stop * Assuming you really meant three digits, rather than \"up to three digits\". To use an arbitrary number of digits: from random import randint def random_with_N_digits(n): range_start = 10**(n-1) range_end = (10**n)-1 return randint(range_start, range_end) print random_with_N_digits(2) print random_with_N_digits(3) print random_with_N_digits(4) Output: 33 124 5127"} +{"question_id": 8482588, "score": 173, "creation_date": 1323733734, "tags": ["python", "matplotlib", "plot", "seaborn", "plot-annotations"], "instruction": "Putting text in top left corner of matplotlib plot\n\nHow can I put text in the top left (or top right) corner of a matplotlib figure, e.g. where a top left legend would be, or on top of the plot but in the top left corner? E.g. if it's a plt.scatter(), then something that would be within the square of the scatter, put in the top left most corner. I'd like to do this without ideally knowing the scale of the scatterplot being plotted for example, since it will change from dataset to data set. I just want it the text to be roughly in the upper left, or roughly in the upper right. With legend type positioning it should not overlap with any scatter plot points anyway.", "output": "You can use text. 
plt.text(x, y, s, fontsize=12) text coordinates can be given relative to the axis, so the position of your text will be independent of the size of the plot: The default transform specifies that text is in data coords, alternatively, you can specify text in axis coords (0,0 is lower-left and 1,1 is upper-right). The example below places text in the center of the axes: plt.text(0.5, 0.5, 'matplotlib', horizontalalignment='center', verticalalignment='center', transform = ax.transAxes) Preventing the text from interfering with any point of your scatter is more difficult, AFAIK. The easiest method is to set the y-axis limit (ymax in ylim((ymin,ymax))) to a value a bit higher than the max y-coordinate of your points. In this way you will always have this free space for the text. EDIT: here you have an example: from matplotlib import pyplot as plt f, ax = plt.subplots() plt.scatter([3,5,2,6,8],[5,3,2,1,5]) plt.text(.01, .99, 'matplotlib', ha='left', va='top', transform=ax.transAxes) f.tight_layout() The ha and va parameters set the alignment of your text relative to the insertion point. i.e. ha='left' is a good setting to prevent a long text from going out of the left axis when the frame is reduced (made narrower) manually."} +{"question_id": 20130227, "score": 173, "creation_date": 1385062299, "tags": ["python", "matplotlib", "scatter-plot"], "instruction": "How to connect scatterplot points with line using matplotlib\n\nI have two lists, dates and values. I want to plot them using matplotlib. The following creates a scatter plot of my data. import matplotlib.pyplot as plt plt.scatter(dates,values) plt.show() plt.plot(dates, values) creates a line graph. But what I really want is a scatterplot where the points are connected by a line. Similar to in R: plot(dates, values) lines(dates, value, type=\"l\") which gives me a scatterplot of points overlaid with a line connecting the points. How do I do this in python?", "output": "I think @Evert has the right answer: plt.scatter(dates,values) plt.plot(dates, values) plt.show() Which is pretty much the same as plt.plot(dates, values, '-o') plt.show() You can replace -o with another suitable format string as described in the documentation. You can also split the choices of line and marker styles using the linestyle= and marker= keyword arguments."} +{"question_id": 2736255, "score": 172, "creation_date": 1272534477, "tags": ["python", "oop", "scala", "abstract-class"], "instruction": "Abstract attributes in Python\n\nWhat is the shortest / most elegant way to implement the following Scala code with an abstract attribute in Python? abstract class Controller { val path: String } A subclass of Controller is enforced to define \"path\" by the Scala compiler. A subclass would look like this: class MyController extends Controller { override val path = \"/home\" }", "output": "Python has a built-in exception for this, though you won't encounter the exception until runtime. class Base(object): @property def path(self): raise NotImplementedError class SubClass(Base): path = 'blah'"} +{"question_id": 14134892, "score": 172, "creation_date": 1357199043, "tags": ["python", "image-processing", "opencv", "python-imaging-library"], "instruction": "Convert image from PIL to openCV format\n\nI'm trying to convert image from PIL to OpenCV format. I'm using OpenCV 2.4.3. Here is what I've attempted till now.
>>> from PIL import Image >>> import cv2 as cv >>> pimg = Image.open('D:\\\\traffic.jpg') #PIL Image >>> cimg = cv.cv.CreateImageHeader(pimg.size,cv.IPL_DEPTH_8U,3) #CV Image >>> cv.cv.SetData(cimg,pimg.tostring()) >>> cv.cv.NamedWindow('cimg') >>> cv.cv.ShowImage('cimg',cimg) >>> cv.cv.WaitKey() But I think the image is not getting converted to CV format. The Window shows me a large brown image. Where am I going wrong in Converting image from PIL to CV format? Also, why do I need to type cv.cv to access functions?", "output": "Use this: pil_image = PIL.Image.open('Image.jpg').convert('RGB') open_cv_image = numpy.array(pil_image) # Convert RGB to BGR open_cv_image = open_cv_image[:, :, ::-1].copy()"} +{"question_id": 55929472, "score": 172, "creation_date": 1556663506, "tags": ["python", "django", "django-templates", "django-3.0"], "instruction": "Django TemplateSyntaxError - 'staticfiles' is not a registered tag library\n\nAfter upgrading to Django 3.0, I get the following TemplateSyntaxError: In template /Users/alasdair//myproject/myapp/templates/index.html, error at line 1 'staticfiles' is not a registered tag library. Must be one of: admin_list admin_modify admin_urls cache i18n l10n log static tz Here is my template {% load staticfiles %} <img src=\"{% static 'my_image.html' %}\">", "output": "If you have any of the following tags in your template: {% load staticfiles %} {% load static from staticfiles %} {% load admin_static %} Then replace it with: {% load static %} You have to make this change because {% load staticfiles %} and {% load admin_static %} were deprecated in Django 2.1, and removed in Django 3.0."} +{"question_id": 16790375, "score": 172, "creation_date": 1369738851, "tags": ["python", "django", "json", "serialization", "django-class-based-views"], "instruction": "<Django object > is not JSON serializable\n\nI have the following code for serializing the queryset: def render_to_response(self, context, **response_kwargs): return HttpResponse(json.simplejson.dumps(list(self.get_queryset())), mimetype=\"application/json\") And following is my get_quersety() [{'product': <Product: hederello ()>, u'_id': u'9802', u'_source': {u'code': u'23981', u'facilities': [{u'facility': {u'name': {u'fr': u'G\\xe9n\\xe9ral', u'en': u'General'}, u'value': {u'fr': [u'bar', u'r\\xe9ception ouverte 24h/24', u'chambres non-fumeurs', u'chambres familiales',.........]}] Which I need to serialize. But it says not able to serialize the <Product: hederello ()>. Because the list is composed of both django objects and dicts. Any ideas?", "output": "simplejson and json don't work with django objects well. Django's built-in serializers can only serialize querysets filled with django objects: data = serializers.serialize('json', self.get_queryset()) return HttpResponse(data, content_type=\"application/json\") In your case, self.get_queryset() contains a mix of django objects and dicts inside. 
One option is to get rid of model instances in the self.get_queryset() and replace them with dicts using model_to_dict: from django.forms.models import model_to_dict data = self.get_queryset() for item in data: item['product'] = model_to_dict(item['product']) return HttpResponse(json.simplejson.dumps(data), mimetype=\"application/json\")"} +{"question_id": 22923775, "score": 172, "creation_date": 1396907370, "tags": ["python", "pandas", "datetime", "python-datetime", "timedelta"], "instruction": "Calculate Time Difference Between Two Pandas Columns in Hours and Minutes\n\nI have two columns, fromdate and todate, in a dataframe. import pandas as pd data = {'todate': [pd.Timestamp('2014-01-24 13:03:12.050000'), pd.Timestamp('2014-01-27 11:57:18.240000'), pd.Timestamp('2014-01-23 10:07:47.660000')], 'fromdate': [pd.Timestamp('2014-01-26 23:41:21.870000'), pd.Timestamp('2014-01-27 15:38:22.540000'), pd.Timestamp('2014-01-23 18:50:41.420000')]} df = pd.DataFrame(data) I add a new column, diff, to find the difference between the two dates using df['diff'] = df['fromdate'] - df['todate'] I get the diff column, but it contains days, when there's more than 24 hours. todate fromdate diff 0 2014-01-24 13:03:12.050 2014-01-26 23:41:21.870 2 days 10:38:09.820000 1 2014-01-27 11:57:18.240 2014-01-27 15:38:22.540 0 days 03:41:04.300000 2 2014-01-23 10:07:47.660 2014-01-23 18:50:41.420 0 days 08:42:53.760000 How do I convert my results to only hours and minutes (i.e. days are converted to hours)?", "output": "Pandas timestamp differences return a datetime.timedelta object. This can easily be converted into hours by using the astype method, like so import pandas as pd df = pd.DataFrame(columns=['to', 'fr', 'ans']) df.to = [pd.Timestamp('2014-01-24 13:03:12.050000'), pd.Timestamp('2014-01-27 11:57:18.240000'), pd.Timestamp('2014-01-23 10:07:47.660000')] df.fr = [pd.Timestamp('2014-01-26 23:41:21.870000'), pd.Timestamp('2014-01-27 15:38:22.540000'), pd.Timestamp('2014-01-23 18:50:41.420000')] (df.fr - df.to).astype('timedelta64[h]') to yield, 0 58 1 3 2 8 dtype: float64"} +{"question_id": 8389639, "score": 172, "creation_date": 1323107651, "tags": ["python", "unit-testing"], "instruction": "Unittest setUp/tearDown for several tests\n\nIs there a function that is fired at the beginning/end of a scenario of tests? The functions setUp and tearDown are fired before/after every single test. I typically would like to have this: class TestSequenceFunctions(unittest.TestCase): def setUpScenario(self): start() #launched at the beginning, once def test_choice(self): element = random.choice(self.seq) self.assertTrue(element in self.seq) def test_sample(self): with self.assertRaises(ValueError): random.sample(self.seq, 20) for element in random.sample(self.seq, 5): self.assertTrue(element in self.seq) def tearDownScenario(self): end() #launched at the end, once For now, these setUp and tearDown are unit tests and spread in all my scenarios (containing many tests), one is the first test, the other is the last test.", "output": "As of 2.7 (per the documentation) you get setUpClass and tearDownClass which execute before and after the tests in a given class are run, respectively. Alternatively, if you have a group of them in one file, you can use setUpModule and tearDownModule (documentation). EDIT: Note that setUpClass and tearDownClass must be declared using @classmethod. Another answer to this question includes sample code.
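For reference, a minimal sketch of that shape, simplified from the question's scenario (the expensive start()/end() work is replaced by a plain list so the snippet runs on its own):

import unittest

class TestSequenceFunctions(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # runs once, before any test in this class (this is where start() would go)
        cls.seq = list(range(10))

    @classmethod
    def tearDownClass(cls):
        # runs once, after all tests in this class (this is where end() would go)
        cls.seq = None

    def test_choice(self):
        self.assertIn(3, self.seq)

if __name__ == '__main__':
    unittest.main()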
Otherwise your best bet is probably going to be to create your own derived TestSuite and override run(). All other calls would be handled by the parent, and run would call your setup and teardown code around a call up to the parent's run method."} +{"question_id": 10443400, "score": 172, "creation_date": 1336110994, "tags": ["python", "string", "trim"], "instruction": "How to remove leading and trailing spaces from a string?\n\nI'm having a hard time trying to use .strip with the following line of code: f.write(re.split(\"Tech ID:|Name:|Account #:\",line)[-1])", "output": "You can use the strip() method to remove trailing and leading spaces: >>> s = ' abd cde ' >>> s.strip() 'abd cde' Note: the internal spaces are preserved."} +{"question_id": 38537905, "score": 172, "creation_date": 1469245109, "tags": ["python", "logging", "python-logging"], "instruction": "Set logging levels\n\nI'm trying to use the standard library to debug my code: This works fine: import logging logging.basicConfig(level=logging.INFO) logger = logging.getLogger(__name__) logger.info('message') I can't make the logger work for the lower levels: logging.basicConfig(level=logging.DEBUG) logger = logging.getLogger(__name__) logger.info('message') logging.basicConfig(level=logging.DEBUG) logger = logging.getLogger(__name__) logger.debug('message') I don't get any response for either of those.", "output": "What Python version? That works for me in 3.4. But note that basicConfig() won't affect the root handler if it's already set up: This function does nothing if the root logger already has handlers configured for it. To set the level on root explicitly do logging.getLogger().setLevel(logging.DEBUG). But ensure you've called basicConfig() beforehand so the root logger initially has some setup. I.e.: import logging logging.basicConfig() logging.getLogger().setLevel(logging.DEBUG) logging.getLogger('foo').debug('bah') logging.getLogger().setLevel(logging.INFO) logging.getLogger('foo').debug('bah') Also note that \"Loggers\" and their \"Handlers\" both have distinct independent log levels. So if you've previously explicitly loaded some complex logger config in your Python script, and that has messed with the root logger's handler(s), then this can have an effect, and just changing the logger's log level with logging.getLogger().setLevel(..) may not work. This is because the attached handler may have a log level set independently. This is unlikely to be the case and not something you'd normally have to worry about."} +{"question_id": 68152730, "score": 172, "creation_date": 1624808189, "tags": ["python", "list", "indexing", "swap"], "instruction": "Understand Python swapping: why is a, b = b, a not always equivalent to b, a = a, b?\n\nAs we all know, the pythonic way to swap the values of two items a and b is a, b = b, a and it should be equivalent to b, a = a, b However, today when I was working on some code, I accidentally found that the following two swaps give different results: nums = [1, 2, 4, 3] i = 2 nums[i], nums[nums[i]-1] = nums[nums[i]-1], nums[i] print(nums) # [1, 2, 4, 3] nums = [1, 2, 4, 3] i = 2 nums[nums[i]-1], nums[i] = nums[i], nums[nums[i]-1] print(nums) # [1, 2, 3, 4] What is happening here? I thought in a Python swap the two assignments happen simultaneously and independently. See also Multiple assignment and evaluation order in Python regarding the basic semantics of this kind of assignment.
See also Multiple assignment semantics regarding the effect and purpose of parentheses on the left-hand side of a multiple assignment.", "output": "From python.org Assignment of an object to a target list, optionally enclosed in parentheses or square brackets, is recursively defined as follows. ... Else: The object must be an iterable with the same number of items as there are targets in the target list, and the items are assigned, from left to right, to the corresponding targets. So I interpret that to mean that your assignment nums[i], nums[nums[i]-1] = nums[nums[i]-1], nums[i] is roughly equivalent to tmp = nums[nums[i]-1], nums[i] nums[i] = tmp[0] nums[nums[i] - 1] = tmp[1] (with better error-checking, of course) whereas the other nums[nums[i]-1], nums[i] = nums[i], nums[nums[i]-1] is like tmp = nums[i], nums[nums[i]-1] nums[nums[i] - 1] = tmp[0] nums[i] = tmp[1] So the right-hand side is evaluated first in both cases. But then the two pieces of the left-hand side are evaluated in order, and the assignments are done immediately after evaluation. Crucially, this means that the second term on the left-hand side is only evaluated after the first assignment is already done. So if you update nums[i] first, then the nums[nums[i] - 1] refers to a different index than if you update nums[i] second."} +{"question_id": 15085864, "score": 171, "creation_date": 1361872073, "tags": ["python", "amazon-web-services", "amazon-s3", "boto"], "instruction": "How to upload a file to directory in S3 bucket using boto\n\nI want to copy a file in s3 bucket using python. Ex : I have bucket name = test. And in the bucket, I have 2 folders name \"dump\" & \"input\". Now I want to copy a file from local directory to S3 \"dump\" folder using python... Can anyone help me?", "output": "NOTE: This answer uses boto. See the other answer that uses boto3, which is newer. Try this... import boto import boto.s3 import sys from boto.s3.key import Key AWS_ACCESS_KEY_ID = '' AWS_SECRET_ACCESS_KEY = '' bucket_name = AWS_ACCESS_KEY_ID.lower() + '-dump' conn = boto.connect_s3(AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY) bucket = conn.create_bucket(bucket_name, location=boto.s3.connection.Location.DEFAULT) testfile = \"replace this with an actual filename\" print 'Uploading %s to Amazon S3 bucket %s' % \\ (testfile, bucket_name) def percent_cb(complete, total): sys.stdout.write('.') sys.stdout.flush() k = Key(bucket) k.key = 'my test file' k.set_contents_from_filename(testfile, cb=percent_cb, num_cb=10) [UPDATE] I am not a pythonist, so thanks for the heads up about the import statements. Also, I'd not recommend placing credentials inside your own source code. 
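For local development, one hedged alternative is to read the keys from environment variables instead of hard-coding them; a minimal sketch using the same boto client as above (the variable names are the conventional AWS ones, adjust to whatever you use):

import os
import boto

conn = boto.connect_s3(
    aws_access_key_id=os.environ['AWS_ACCESS_KEY_ID'],
    aws_secret_access_key=os.environ['AWS_SECRET_ACCESS_KEY'],
)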
If you are running this inside AWS use IAM Credentials with Instance Profiles (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2_instance-profiles.html), and to keep the same behaviour in your Dev/Test environment, use something like Hologram from AdRoll (https://github.com/AdRoll/hologram)"} +{"question_id": 8223939, "score": 171, "creation_date": 1321950779, "tags": ["python", "url"], "instruction": "How to join absolute and relative urls?\n\nI have two urls: url1 = \"http://127.0.0.1/test1/test2/test3/test5.xml\" url2 = \"../../test4/test6.xml\" How can I get an absolute url for url2?", "output": "You should use urlparse.urljoin : >>> import urlparse >>> urlparse.urljoin(url1, url2) 'http://127.0.0.1/test1/test4/test6.xml' With Python 3 (where urlparse is renamed to urllib.parse) you could use it as follows: >>> import urllib.parse >>> urllib.parse.urljoin(url1, url2) 'http://127.0.0.1/test1/test4/test6.xml'"} +{"question_id": 11147667, "score": 171, "creation_date": 1340317389, "tags": ["python", "pip", "requirements.txt"], "instruction": "Is there a way to list pip dependencies/requirements?\n\nWithout going through with the installation, I want to quickly see all the packages that pip install would install.", "output": "This was tested with pip versions 8.1.2, 9.0.1, 10.0.1, and 18.1. To get the output without cluttering your current directory on Linux use pip download [package] -d /tmp --no-binary :all: -v -d tells pip the directory that download should put files in. Better, just use this script with the argument being the package name to get only the dependencies as output: #!/bin/sh PACKAGE=$1 pip download $PACKAGE -d /tmp --no-binary :all:-v 2>&1 \\ | grep Collecting \\ | cut -d' ' -f2 \\ | grep -Ev \"$PACKAGE(~|=|\\!|>|<|$)\" Also available here."} +{"question_id": 22071987, "score": 171, "creation_date": 1393512623, "tags": ["python", "arrays", "random", "numpy"], "instruction": "Generate random array of floats between a range\n\nI haven't been able to find a function to generate an array of random floats of a given length between a certain range. I've looked at Random sampling but no function seems to do what I need. random.uniform comes close but it only returns a single element, not a specific number. This is what I'm after: ran_floats = some_function(low=0.5, high=13.3, size=50) which would return an array of 50 random non-unique floats (ie: repetitions are allowed) uniformly distributed in the range [0.5, 13.3]. Is there such a function?", "output": "np.random.uniform fits your use case: sampl = np.random.uniform(low=0.5, high=13.3, size=(50,)) Update Oct 2019: While the syntax is still supported, it looks like the API changed with NumPy 1.17 to support greater control over the random number generator. Going forward the API has changed and you should look at https://docs.scipy.org/doc/numpy/reference/random/generated/numpy.random.Generator.uniform.html The enhancement proposal is here: https://numpy.org/neps/nep-0019-rng-policy.html"} +{"question_id": 43658999, "score": 171, "creation_date": 1493298938, "tags": ["python", "python-2.7", "generator", "python-typing"], "instruction": "What is the return type hint of a generator function?\n\nI'm trying to write a :rtype: type hint for a generator function. What is the type it returns? For example, say I have this functions which yields strings: def read_text_file(fn): \"\"\" Yields the lines of the text file one by one. :param fn: Path of text file to read. :type fn: str :rtype: ???????????????? 
<======================= what goes here? \"\"\" with open(fn, 'rt') as text_file: for line in text_file: yield line The return type isn't just a string, it's some kind of iterable of strings? So I can't just write :rtype: str. What's the right hint?", "output": "Generator Generator[str, None, None] or Iterator[str]"} +{"question_id": 207000, "score": 171, "creation_date": 1224116621, "tags": ["python", "attributes", "member-variables"], "instruction": "What is the difference between class and instance attributes?\n\nIs there any meaningful distinction between: class A(object): foo = 5 # some default value vs. class B(object): def __init__(self, foo=5): self.foo = foo If you're creating a lot of instances, is there any difference in performance or space requirements for the two styles? When you read the code, do you consider the meaning of the two styles to be significantly different?", "output": "There is a significant semantic difference (beyond performance considerations): when the attribute is defined on the instance (which is what we usually do), there can be multiple objects referred to. Each gets a totally separate version of that attribute. when the attribute is defined on the class, there is only one underlying object referred to, so if operations on different instances of that class both attempt to set/(append/extend/insert/etc.) the attribute, then: if the attribute is a builtin type (like int, float, boolean, string), operations on one object will overwrite (clobber) the value if the attribute is a mutable type (like a list or a dict), we will get unwanted leakage. For example: >>> class A: foo = [] >>> a, b = A(), A() >>> a.foo.append(5) >>> b.foo [5] >>> class A: ... def __init__(self): self.foo = [] >>> a, b = A(), A() >>> a.foo.append(5) >>> b.foo []"} +{"question_id": 29280470, "score": 171, "creation_date": 1427378548, "tags": ["python", "ipython", "timeit", "magic-function", "magic-command"], "instruction": "What is %timeit in Python?\n\nI always read the code to calculate the time like this way: %timeit function() What does % mean here? I think, % is always used to replace something in a string, like %s means replace a string, %d replace a data, but I have no idea about this case.", "output": "%timeit is an IPython magic function, which can be used to time a particular piece of code (a single execution statement, or a single method). From the documentation: %timeit Time execution of a Python statement or expression Usage, in line mode: %timeit [-n<N> -r<R> [-t|-c] -q -p<P> -o] statement To use it, for example if we want to find out whether using xrange is any faster than using range, you can simply do: In [1]: %timeit for _ in range(1000): True 10000 loops, best of 3: 37.8 \u00b5s per loop In [2]: %timeit for _ in xrange(1000): True 10000 loops, best of 3: 29.6 \u00b5s per loop And you will get the timings for them. The major advantages of %timeit are: You don't have to import timeit.timeit from the standard library, and run the code multiple times to figure out which is the better approach. It will automatically calculate number of runs required for your code based on a total of 2 seconds execution window. 
You can make use of current console variables implicitly, whereas timeit.timeit requires them to be provided explicitly."} +{"question_id": 71898644, "score": 171, "creation_date": 1650158620, "tags": ["python", "python-typing"], "instruction": "How to use typing.Annotated\n\nI'm having a hard time understanding what typing.Annotated is good for from the documentation and an even harder time finding explanations/examples outside the documentation. In what context would you use Annotated? Does it depend on third-party libraries?", "output": "Annotated in python allows developers to declare the type of a reference and provide additional information related to it. name: Annotated[str, \"first letter is capital\"] This tells that name is of type str and that name[0] is a capital letter. On its own Annotated does not do anything other than assigning extra information (metadata) to a reference. It is up to another code, which can be a library, framework or your own code, to interpret the metadata and make use of it. For example, FastAPI uses Annotated for data validation: def read_items(q: Annotated[str, Query(max_length=50)]) Here the parameter q is of type str with a maximum length of 50. This information was communicated to FastAPI (or any other underlying library) using the Annotated keyword."} +{"question_id": 39913847, "score": 171, "creation_date": 1475832260, "tags": ["python", "build"], "instruction": "Is there a way to compile a python application into static binary?\n\nWhat I'm trying to do is ship my code to a remote server, that may have different python version installed and/or may not have packages my app requires. Right now to achieve such portability I have to build relocatable virtualenv with interpreter and code. That approach has some issues (for example, you have to manually copy a bunch of libraries into your virtualenv, since --always-copy doesn't work as expected) and generally slow. There's (in theory) a way to build python itself statically. I wonder if I could pack interpreter with my code into one binary and run my application as module. Something like that: ./mypython -m myapp run or ./mypython -m gunicorn -c ./gunicorn.conf myapp.wsgi:application.", "output": "There are two ways you could go about to solve your problem Use a static builder, like freeze, or pyinstaller, or py2exe Compile using cython and link against a statically-compiled version of CPython This answer explains how you can go about doing it using the second approach, since the first method is not cross platform and version, and has been explained in other answers. Also, using programs like pyinstaller typically results in huge file sizes, while using cython will result in a file that's much smaller First, install cython. sudo -H pip3 install cython Then, you can use cython to generate a C file out of the Python .py file (in reference to https://stackoverflow.com/a/22040484/5714445) cython example_file.py --embed Use GCC to compile it after getting your statically-compiled python version (Note: The below assumes you are trying to compile it to Python3) gcc -Os $(python3-config --includes) example_file.c -o output_bin_file $(python3-config --ldflags --embed) You will now have a binary file output_bin_file, which is what you are looking for Other things to note: Change example_file.py to whatever file you are actually trying to compile. Note: Cython may not approve of filenames containing dashes (-). 
Cython is used to use C-Type Variable definitions for static memory allocation to speed up Python programs. In your case however, you will still be using traditional Python definitions. If you are using additional libraries (like opencv, for example), you might have to provide the directory to them using -L and then specify the name of the library using -l in the GCC Flags. For more information on this, please refer to GCC flags The above method might not work for anaconda python, as you will likely have to install a version of gcc that is compatible with your conda-python. If your Python version is not statically-compiled, this will result in a dynamic binary"} +{"question_id": 1939058, "score": 171, "creation_date": 1261387027, "tags": ["python", "serialization", "pickle", "magic-methods"], "instruction": "Simple example of use of __setstate__ and __getstate__\n\nI don't know what the __setstate__ and __getstate__ methods do, so help me with a simple example.", "output": "Here's a very simple example for Python that should supplement the pickle docs. class Foo(object): def __init__(self, val=2): self.val = val def __getstate__(self): print(\"I'm being pickled\") self.val *= 2 return self.__dict__ def __setstate__(self, d): print(\"I'm being unpickled with these values: \" + repr(d)) self.__dict__ = d self.val *= 3 import pickle f = Foo() f_data = pickle.dumps(f) f_new = pickle.loads(f_data) Output: I'm being pickled I'm being unpickled with these values: {'val': 4}"} +{"question_id": 15837729, "score": 171, "creation_date": 1365175563, "tags": ["python", "list", "random", "set"], "instruction": "random.choice from set?\n\nI'm working on an AI portion of a guessing game. I want the AI to select a random letter from this list. I'm doing it as a set so I can easily remove letters from the list as they are guessed in the game and are therefore no longer available to be guessed again. it says set object isn't indexable. How can I work around this? import random aiTurn = True while aiTurn == True: allLetters = set(list('abcdefghijklmnopqrstuvwxyz')) aiGuess = random.choice(allLetters) print(aiGuess)", "output": "Note (Oct. 2020): as of v3.9, Python has officially deprecated random.sample() working on sets, with the official guidance being to explicitly convert the set to a list or tuple before passing it in, though this doesn't solve the efficiency problems. >>> random.sample(set('abcdefghijklmnopqrstuvwxyz'), 1) ['f'] Documentation: https://docs.python.org/3/library/random.html#random.sample Note that choosing random elements from a set is extremely inefficient no matter how you do it - it takes time proportional to the size of the set, or worse if the set's underlying hash table is sparse due to removed elements. Instead, you should probably use a different data structure that supports this operation efficiently."} +{"question_id": 40950310, "score": 170, "creation_date": 1480784935, "tags": ["python", "regex", "pandas", "dataframe", "trim"], "instruction": "Strip / trim all strings of a dataframe\n\nCleaning the values of a multitype data frame in python/pandas, I want to trim the strings. I am currently doing it in two instructions : import pandas as pd df = pd.DataFrame([[' a ', 10], [' c ', 5]]) df.replace('^\\s+', '', regex=True, inplace=True) #front df.replace('\\s+$', '', regex=True, inplace=True) #end df.values This is quite slow, what could I improve ?", "output": "You can use DataFrame.select_dtypes to select string columns and then apply function str.strip. 
Notice: Values cannot be types like dicts or lists, because their dtypes is object. df_obj = df.select_dtypes('object') #if need also processing string categories #df_obj = df.select_dtypes(['object', 'category']) print (df_obj) 0 a 1 c df[df_obj.columns] = df_obj.apply(lambda x: x.str.strip()) print (df) 0 1 0 a 10 1 c 5 But if there are only a few columns use str.strip: df[0] = df[0].str.strip()"} +{"question_id": 32011359, "score": 170, "creation_date": 1439559110, "tags": ["python", "pandas"], "instruction": "Convert categorical data in pandas dataframe\n\nI have a dataframe with this type of data (too many columns): col1 int64 col2 int64 col3 category col4 category col5 category Columns look like this: Name: col3, dtype: category Categories (8, object): [B, C, E, G, H, N, S, W] I want to convert all the values in each column to integer like this: [1, 2, 3, 4, 5, 6, 7, 8] I solved this for one column by this: dataframe['c'] = pandas.Categorical.from_array(dataframe.col3).codes Now I have two columns in my dataframe - old col3 and new c and need to drop old columns. That's bad practice. It works but in my dataframe there are too many columns and I don't want do it manually. How can I do this more cleverly?", "output": "First, to convert a Categorical column to its numerical codes, you can do this easier with: dataframe['c'].cat.codes. Further, it is possible to select automatically all columns with a certain dtype in a dataframe using select_dtypes. This way, you can apply above operation on multiple and automatically selected columns. First making an example dataframe: In [75]: df = pd.DataFrame({'col1':[1,2,3,4,5], 'col2':list('abcab'), 'col3':list('ababb')}) In [76]: df['col2'] = df['col2'].astype('category') In [77]: df['col3'] = df['col3'].astype('category') In [78]: df.dtypes Out[78]: col1 int64 col2 category col3 category dtype: object Then by using select_dtypes to select the columns, and then applying .cat.codes on each of these columns, you can get the following result: In [80]: cat_columns = df.select_dtypes(['category']).columns In [81]: cat_columns Out[81]: Index([u'col2', u'col3'], dtype='object') In [83]: df[cat_columns] = df[cat_columns].apply(lambda x: x.cat.codes) In [84]: df Out[84]: col1 col2 col3 0 1 0 0 1 2 1 1 2 3 2 0 3 4 0 1 4 5 1 1 Note: NaN becomes -1 This method is fast because the relationship between code and category is readily available and do not need to be computed."} +{"question_id": 29550414, "score": 170, "creation_date": 1428619838, "tags": ["python", "numpy", "pandas", "dataframe", "tuples"], "instruction": "How can I split a column of tuples in a Pandas dataframe?\n\nI have a Pandas dataframe (this is only a little piece) >>> d1 y norm test y norm train len(y_train) len(y_test) \\ 0 64.904368 116.151232 1645 549 1 70.852681 112.639876 1645 549 SVR RBF \\ 0 (35.652207342877873, 22.95533537448393) 1 (39.563683797747622, 27.382483096332511) LCV \\ 0 (19.365430594452338, 13.880062435173587) 1 (19.099614489458364, 14.018867136617146) RIDGE CV \\ 0 (4.2907610988480362, 12.416745648065584) 1 (4.18864306788194, 12.980833914392477) RF \\ 0 (9.9484841581029428, 16.46902345373697) 1 (10.139848213735391, 16.282141345406522) GB \\ 0 (0.012816232716538605, 15.950164822266007) 1 (0.012814519804493328, 15.305745202851712) ET DATA 0 (0.00034337162272515505, 16.284800366214057) j2m 1 (0.00024811554516431878, 15.556506191784194) j2m >>> I want to split all the columns that contain tuples. For example, I want to replace the column LCV with the columns LCV-a and LCV-b. 
How can I do that?", "output": "You can do this by doing pd.DataFrame(col.tolist()) on that column: In [2]: df = pd.DataFrame({'a':[1,2], 'b':[(1,2), (3,4)]}) In [3]: df Out[3]: a b 0 1 (1, 2) 1 2 (3, 4) In [4]: df['b'].tolist() Out[4]: [(1, 2), (3, 4)] In [5]: pd.DataFrame(df['b'].tolist(), index=df.index) Out[5]: 0 1 0 1 2 1 3 4 In [6]: df[['b1', 'b2']] = pd.DataFrame(df['b'].tolist(), index=df.index) In [7]: df Out[7]: a b b1 b2 0 1 (1, 2) 1 2 1 2 (3, 4) 3 4 Note: in an earlier version, this answer recommended to use df['b'].apply(pd.Series) instead of pd.DataFrame(df['b'].tolist(), index=df.index). That works as well (because it makes a Series of each tuple, which is then seen as a row of a dataframe), but it is slower / uses more memory than the tolist version, as noted by the other answers here (thanks to denfromufa)."} +{"question_id": 4465959, "score": 170, "creation_date": 1292538264, "tags": ["python", "linux", "sockets", "connection", "errno"], "instruction": "Python [Errno 98] Address already in use\n\nIn my Python socket program, I sometimes need to interrupt it with Ctrl-C. When I do this, it does close the connection using socket.close(). However, when I try to reopen it I have to wait what seems like a minute before I can connect again. How does one correctly close a socket? Or is this intended?", "output": "Yes, this is intended. Here you can read a detailed explanation. It is possible to override this behavior by setting the SO_REUSEADDR option on a socket. For example: sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)"} +{"question_id": 44689546, "score": 170, "creation_date": 1498101436, "tags": ["python"], "instruction": "How to print out a dictionary nicely in Python?\n\nI've just started to learn python and I'm building a text game. I want an inventory system, but I can't seem to print out the dictionary without it looking ugly. This is what I have so far: def inventory(): for numberofitems in len(inventory_content.keys()): inventory_things = list(inventory_content.keys()) inventory_amounts = list(inventory_content.values()) print(inventory_things[numberofitems])", "output": "I like the pprint module (Pretty Print) included in Python. It can be used to either print the object, or format a nice string version of it. import pprint # Prints the nicely formatted dictionary pprint.pprint(dictionary) # Sets 'pretty_dict_str' to the formatted string value pretty_dict_str = pprint.pformat(dictionary) But it sounds like you are printing out an inventory, which users will likely want shown as something more like the following: def print_inventory(dct): print(\"Items held:\") for item, amount in dct.items(): # dct.iteritems() in Python 2 print(\"{} ({})\".format(item, amount)) inventory = { \"shovels\": 3, \"sticks\": 2, \"dogs\": 1, } print_inventory(inventory) which prints: Items held: shovels (3) sticks (2) dogs (1)"} +{"question_id": 15479928, "score": 170, "creation_date": 1363618746, "tags": ["python", "dictionary", "set", "python-internals"], "instruction": "Why is the order in dictionaries and sets arbitrary?\n\nI don't understand how looping over a dictionary or set in python is done by 'arbitrary' order. I mean, it's a programming language so everything in the language must be 100% determined, correct? Python must have some kind of algorithm that decides which part of the dictionary or set is chosen, 1st, second and so on. What am I missing?", "output": "Note: This answer was written before the implementation of the dict type changed, in Python 3.6. 
Most of the implementation details in this answer still apply, but the listing order of keys in dictionaries is no longer determined by hash values. The set implementation remains unchanged. The order is not arbitrary, but depends on the insertion and deletion history of the dictionary or set, as well as on the specific Python implementation. For the remainder of this answer, for 'dictionary', you can also read 'set'; sets are implemented as dictionaries with just keys and no values. Keys are hashed, and hash values are assigned to slots in a dynamic table (it can grow or shrink based on needs). And that mapping process can lead to collisions, meaning that a key will have to be slotted in a next slot based on what is already there. Listing the contents loops over the slots, and so keys are listed in the order they currently reside in the table. Take the keys 'foo' and 'bar', for example, and lets assume the table size is 8 slots. In Python 2.7, hash('foo') is -4177197833195190597, hash('bar') is 327024216814240868. Modulo 8, that means these two keys are slotted in slots 3 and 4 then: >>> hash('foo') -4177197833195190597 >>> hash('foo') % 8 3 >>> hash('bar') 327024216814240868 >>> hash('bar') % 8 4 This informs their listing order: >>> {'bar': None, 'foo': None} {'foo': None, 'bar': None} All slots except 3 and 4 are empty, looping over the table first lists slot 3, then slot 4, so 'foo' is listed before 'bar'. bar and baz, however, have hash values that are exactly 8 apart and thus map to the exact same slot, 4: >>> hash('bar') 327024216814240868 >>> hash('baz') 327024216814240876 >>> hash('bar') % 8 4 >>> hash('baz') % 8 4 Their order now depends on which key was slotted first; the second key will have to be moved to a next slot: >>> {'baz': None, 'bar': None} {'bar': None, 'baz': None} >>> {'bar': None, 'baz': None} {'baz': None, 'bar': None} The table order differs here, because one or the other key was slotted first. The technical name for the underlying structure used by CPython (the most commonly used Python implemenation) is a hash table, one that uses open addressing. If you are curious, and understand C well enough, take a look at the C implementation for all the (well documented) details. You could also watch this Pycon 2010 presentation by Brandon Rhodes about how CPython dict works, or pick up a copy of Beautiful Code, which includes a chapter on the implementation written by Andrew Kuchling. Note that as of Python 3.3, a random hash seed is used as well, making hash collisions unpredictable to prevent certain types of denial of service (where an attacker renders a Python server unresponsive by causing mass hash collisions). This means that the order of a given dictionary or set is then also dependent on the random hash seed for the current Python invocation. Other implementations are free to use a different structure for dictionaries, as long as they satisfy the documented Python interface for them, but I believe that all implementations so far use a variation of the hash table. CPython 3.6 introduces a new dict implementation that maintains insertion order, and is faster and more memory efficient to boot. Rather than keep a large sparse table where each row references the stored hash value, and the key and value objects, the new implementation adds a smaller hash array that only references indices in a separate 'dense' table (one that only contains as many rows as there are actual key-value pairs), and it is the dense table that happens to list the contained items in order. 
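A quick way to see the effect on Python 3.7 or newer, where keys come back in insertion order rather than hash/slot order, reusing the keys from the examples above:

>>> d = {}
>>> d['bar'] = None
>>> d['foo'] = None
>>> list(d)
['bar', 'foo']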
See the proposal to Python-Dev for more details. Note that in Python 3.6 this is considered an implementation detail, Python-the-language does not specify that other implementations have to retain order. This changed in Python 3.7, where this detail was elevated to be a language specification; for any implementation to be properly compatible with Python 3.7 or newer it must copy this order-preserving behaviour. And to be explicit: this change doesn't apply to sets, as sets already have a 'small' hash structure. Python 2.7 and newer also provides an OrderedDict class, a subclass of dict that adds an additional data structure to record key order. At the price of some speed and extra memory, this class remembers in what order you inserted keys; listing keys, values or items will then do so in that order. It uses a doubly-linked list stored in an additional dictionary to keep the order up-to-date efficiently. See the post by Raymond Hettinger outlining the idea. OrderedDict objects have other advantages, such as being re-orderable. If you wanted an ordered set, you can install the oset package; it works on Python 2.5 and up."} +{"question_id": 25011078, "score": 170, "creation_date": 1406622370, "tags": ["python"], "instruction": "What does Pythonic mean?\n\nOn many websites I often see comments that code isn't Pythonic, or that there is a more Pythonic way to achieve the same goal. What does Pythonic mean in this context? For example, why is while i < someValue: do_something(myList[i]) i += 1 not Pythonic while for x in myList: doSomething(x) is Pythonic?", "output": "Exploiting the features of the Python language to produce code that is clear, concise and maintainable. Pythonic means code that doesn't just get the syntax right, but that follows the conventions of the Python community and uses the language in the way it is intended to be used. This is maybe easiest to explain by negative example, as in the linked article from the other answers. Examples of un-Pythonic code often come from users of other languages, who instead of learning a Python programming patterns such as list comprehensions or generator expressions, attempt to crowbar in patterns more commonly used in C or Java. Loops are particularly common examples of this. For example, in Java I might use for (int index = 0; index < items.length; index++) { items[index].performAction(); } In Python we can try and replicate this using while loops, but it would be cleaner to use: for item in items: item.perform_action() Or, even a generator expression (item.some_attribute for item in items) So essentially when someone says something is un-Pythonic, they are saying that the code could be rewritten in a way that is a better fit for Python's coding style. Typing import this at the command line gives a summary of Python principles. Less well known is that the source code for import this is decidedly, and by design, un-Pythonic! Take a look at it for an example of what not to do."} +{"question_id": 12118695, "score": 169, "creation_date": 1345862111, "tags": ["python", "dictionary"], "instruction": "Efficient way to remove keys with empty strings from a dict\n\nI have a dict and would like to remove all the keys for which there are empty value strings. 
metadata = {u'Composite:PreviewImage': u'(Binary data 101973 bytes)', u'EXIF:CFAPattern2': u''} What is the best way to do this?", "output": "Python 2.X dict((k, v) for k, v in metadata.iteritems() if v) Python 2.7 - 3.X {k: v for k, v in metadata.items() if v} Note that all of your keys have values. It's just that some of those values are the empty string. There's no such thing as a key in a dict without a value; if it didn't have a value, it wouldn't be in the dict."} +{"question_id": 13197574, "score": 169, "creation_date": 1351867805, "tags": ["python", "openpyxl"], "instruction": "openpyxl - adjust column width size\n\nI have following script which is converting a CSV file to an XLSX file, but my column size is very narrow. Each time I have to drag them with mouse to read data. Does anybody know how to set column width in openpyxl? Here is the code I am using. #!/usr/bin/python2.6 import csv from openpyxl import Workbook from openpyxl.cell import get_column_letter f = open('users_info_cvs.txt', \"rU\") csv.register_dialect('colons', delimiter=':') reader = csv.reader(f, dialect='colons') wb = Workbook() dest_filename = r\"account_info.xlsx\" ws = wb.worksheets[0] ws.title = \"Users Account Information\" for row_index, row in enumerate(reader): for column_index, cell in enumerate(row): column_letter = get_column_letter((column_index + 1)) ws.cell('%s%s'%(column_letter, (row_index + 1))).value = cell wb.save(filename = dest_filename)", "output": "You could estimate (or use a mono width font) to achieve this. Let's assume data is a nested array like [['a1','a2'],['b1','b2']] We can get the max characters in each column. Then set the width to that. Width is exactly the width of a monospace font (if not changing other styles at least). Even if you use a variable width font it is a decent estimation. This will not work with formulas. from openpyxl.utils import get_column_letter column_widths = [] for row in data: for i, cell in enumerate(row): if len(column_widths) > i: if len(cell) > column_widths[i]: column_widths[i] = len(cell) else: column_widths += [len(cell)] for i, column_width in enumerate(column_widths,1): # ,1 to start at 1 worksheet.column_dimensions[get_column_letter(i)].width = column_width A bit of a hack but your reports will be more readable."} +{"question_id": 8991506, "score": 169, "creation_date": 1327427042, "tags": ["python", "iterator"], "instruction": "Iterate an iterator by chunks (of n) in Python?\n\nCan you think of a nice way (maybe with itertools) to split an iterator into chunks of given size? 
Therefore l=[1,2,3,4,5,6,7] with chunks(l,3) becomes an iterator [1,2,3], [4,5,6], [7] I can think of a small program to do that but not a nice way with maybe itertools.", "output": "The grouper() recipe from the itertools documentation's recipes comes close to what you want: def grouper(iterable, n, *, incomplete='fill', fillvalue=None): \"Collect data into non-overlapping fixed-length chunks or blocks\" # grouper('ABCDEFG', 3, fillvalue='x') --> ABC DEF Gxx # grouper('ABCDEFG', 3, incomplete='strict') --> ABC DEF ValueError # grouper('ABCDEFG', 3, incomplete='ignore') --> ABC DEF args = [iter(iterable)] * n if incomplete == 'fill': return zip_longest(*args, fillvalue=fillvalue) if incomplete == 'strict': return zip(*args, strict=True) if incomplete == 'ignore': return zip(*args) else: raise ValueError('Expected fill, strict, or ignore') This won't work well when the last chunk is incomplete though, as, depending on the incomplete mode, it will either fill up the last chunk with a fill value, raise an exception, or silently drop the incomplete chunk. In more recent versions of the recipes they added the batched recipe that does exactly what you want: def batched(iterable, n): \"Batch data into tuples of length n. The last batch may be shorter.\" # batched('ABCDEFG', 3) --> ABC DEF G if n < 1: raise ValueError('n must be at least one') it = iter(iterable) while (batch := tuple(islice(it, n))): yield batch Finally, a less general solution that only works on sequences but does handle the last chunk as desired and preserves the type of the original sequence is: (my_list[i:i + chunk_size] for i in range(0, len(my_list), chunk_size)) Since python 3.12, you can also just use itertools.batched. From docs: itertools.batched(iterable, n) Batch data from the iterable into tuples of length n. The last batch may be shorter than n."} +{"question_id": 22963263, "score": 169, "creation_date": 1397048190, "tags": ["python", "pandas", "dataframe"], "instruction": "Creating a zero-filled pandas data frame\n\nWhat is the best way to create a zero-filled pandas data frame of a given size? I have used: zero_data = np.zeros(shape=(len(data),len(feature_list))) d = pd.DataFrame(zero_data, columns=feature_list) Is there a better way to do it?", "output": "Create and fill a pandas dataframe with zeros feature_list = [\"foo\", \"bar\", 37] df = pd.DataFrame(0, index=np.arange(7), columns=feature_list) print(df) which prints: foo bar 37 0 0 0 0 1 0 0 0 2 0 0 0 3 0 0 0 4 0 0 0 5 0 0 0 6 0 0 0"} +{"question_id": 13566200, "score": 169, "creation_date": 1353937793, "tags": ["python", "git", "pip"], "instruction": "How can I install from a git subdirectory with pip?\n\nI have a git repository with many folders, one of them being a python module installable with pip, like this: repo.git/ repo.git/folder1/ repo.git/folder2/ repo.git/mymodule/ repo.git/mymodule/__init__.py repo.git/mymodule/setup.py repo.git/mymodule/... Right now I have to do the following to install: git clone http://server/repo.git cd repo pip install mymodule cd .. rm -rf repo Is it possible to install the module directly with pip without explicitly cloning ? I tried: pip install git+https://server/repo.git/mymodule/ pip install git+https://server/repo.git:mymodule/ But I get: IOError: [Errno 2] No such file or directory: '/tmp/pip-88tlLm-build/setup.py'", "output": "There is a pull request regarding this feature, and it seems to have been merged to develop branch a month ago. 
The syntax is the following: pip install -e \"git+https://git.repo/some_repo.git#egg=$NAME_OF_PACKAGE&subdirectory=$SUBDIR_IN_REPO\" # install a python package from a repo subdirectory We probably have to wait for a while until it gets merged to master and is distributed. UPDATE: This is now available and documented at https://pip.pypa.io/en/stable/cli/pip_install/#vcs-support as follows: For projects where setup.py is not in the root of project, \"subdirectory\" component is used. Value of \"subdirectory\" component should be a path starting from root of the project to where setup.py is located. So if your repository layout is: - pkg_dir/ - setup.py # setup.py for package ``pkg`` - some_module.py - other_dir/ - some_file - some_other_file You'll need to use pip install -e \"vcs+protocol://repo_url/#egg=pkg&subdirectory=pkg_dir\" Note: Make sure to surround it with quotes otherwise subdirectory won't work."} +{"question_id": 587345, "score": 169, "creation_date": 1235588449, "tags": ["python", "regex", "multiline"], "instruction": "Regular expression matching a multiline block of text\n\nI'm having a bit of trouble getting a Python regex to work when matching against text that spans multiple lines. The example text is (\\n is a newline) some Varying TEXT\\n \\n DSJFKDAFJKDAFJDSAKFJADSFLKDLAFKDSAF\\n [more of the above, ending with a newline]\\n [yep, there is a variable number of lines here]\\n \\n (repeat the above a few hundred times). I'd like to capture two things: the some Varying TEXT part all lines of uppercase text that come two lines below it in one capture (I can strip out the newline characters later). I've tried a few approaches: re.compile(r\"^>(\\w+)$$([.$]+)^$\", re.MULTILINE) # try to capture both parts re.compile(r\"(^[^>][\\w\\s]+)$\", re.MULTILINE|re.DOTALL) # just textlines ...and a lot of variations hereof with no luck. The last one seems to match the lines of text one by one, which is not what I really want. I can catch the first part, no problem, but I can't seem to catch the 4-5 lines of uppercase text. I'd like match.group(1) to be some Varying Text and group(2) to be line1+line2+line3+etc until the empty line is encountered. If anyone's curious, it's supposed to be a sequence of amino acids that make up a protein.", "output": "Try this: re.compile(r\"^(.+)\\n((?:\\n.+)+)\", re.MULTILINE) I think your biggest problem is that you're expecting the ^ and $ anchors to match linefeeds, but they don't. In multiline mode, ^ matches the position immediately following a newline and $ matches the position immediately preceding a newline. Be aware, too, that a newline can consist of a linefeed (\\n), a carriage-return (\\r), or a carriage-return+linefeed (\\r\\n). 
If you aren't certain that your target text uses only linefeeds, you should use this more inclusive version of the regex: re.compile(r\"^(.+)(?:\\n|\\r\\n?)((?:(?:\\n|\\r\\n?).+)+)\", re.MULTILINE) BTW, you don't want to use the DOTALL modifier here; you're relying on the fact that the dot matches everything except newlines."} +{"question_id": 8381735, "score": 168, "creation_date": 1323066615, "tags": ["python"], "instruction": "How to toggle a value?\n\nWhat is the most efficient way to toggle between 0 and 1?", "output": "Solution using NOT If the values are boolean, the fastest approach is to use the not operator: >>> x = True >>> x = not x # toggle >>> x False >>> x = not x # toggle >>> x True >>> x = not x # toggle >>> x False Solution using subtraction If the values are numerical, then subtraction from the total is a simple and fast way to toggle values: >>> A = 5 >>> B = 3 >>> total = A + B >>> x = A >>> x = total - x # toggle >>> x 3 >>> x = total - x # toggle >>> x 5 >>> x = total - x # toggle >>> x 3 Solution using XOR If the value toggles between 0 and 1, you can use a bitwise exclusive-or: >>> x = 1 >>> x ^= 1 >>> x 0 >>> x ^= 1 >>> x 1 The technique generalizes to any pair of integers. The xor-by-one step is replaced with a xor-by-precomputed-constant: >>> A = 205 >>> B = -117 >>> t = A ^ B # precomputed toggle constant >>> x = A >>> x ^= t # toggle >>> x -117 >>> x ^= t # toggle >>> x 205 >>> x ^= t # toggle >>> x -117 (This idea was submitted by Nick Coghlan and later generalized by @zxxc.) Solution using a dictionary If the values are hashable, you can use a dictionary: >>> A = 'xyz' >>> B = 'pdq' >>> d = {A:B, B:A} >>> x = A >>> x = d[x] # toggle >>> x 'pdq' >>> x = d[x] # toggle >>> x 'xyz' >>> x = d[x] # toggle >>> x 'pdq' Solution using a conditional expression The slowest way is to use a conditional expression: >>> A = [1,2,3] >>> B = [4,5,6] >>> x = A >>> x = B if x == A else A >>> x [4, 5, 6] >>> x = B if x == A else A >>> x [1, 2, 3] >>> x = B if x == A else A >>> x [4, 5, 6] Solution using itertools If you have more than two values, the itertools.cycle() function provides a generic fast way to toggle between successive values: >>> import itertools >>> toggle = itertools.cycle(['red', 'green', 'blue']).__next__ >>> toggle() 'red' >>> toggle() 'green' >>> toggle() 'blue' >>> toggle() 'red' >>> toggle() 'green' >>> toggle() 'blue'"} +{"question_id": 58608361, "score": 168, "creation_date": 1572355615, "tags": ["python", "python-3.x", "string", "enums"], "instruction": "String-based enum in Python\n\nTo encapsulate a list of states I am using enum module: from enum import Enum class MyEnum(Enum): state1='state1' state2 = 'state2' state = MyEnum.state1 MyEnum['state1'] == state # here it works 'state1' == state # here it does not throw but returns False (fail!) 
However, the issue is that I need to seamlessly use the values as strings in many contexts in my script, like: select_query1 = select(...).where(Process.status == str(MyEnum.state1)) # works but ugly select_query2 = select(...).where(Process.status == MyEnum.state1) # throws exeption How to do it avoiding calling additional type conversion (str(state) above) or the underlying value (state.value)?", "output": "It seems that it is enough to inherit from str class at the same time as Enum: from enum import Enum class MyEnum(str, Enum): state1 = 'state1' state2 = 'state2' The tricky part is that the order of classes in the inheritance chain is important as this: class MyEnum(Enum, str): state1 = 'state1' state2 = 'state2' throws: TypeError: new enumerations should be created as `EnumName([mixin_type, ...] [data_type,] enum_type)` With the correct class the following operations on MyEnum are fine: print('This is the state value: ' + state) As a side note, it seems that the special inheritance trick is not needed for formatted strings which work even for Enum inheritance only: msg = f'This is the state value: {state}' # works without inheriting from str"} +{"question_id": 21214270, "score": 168, "creation_date": 1390116708, "tags": ["python", "asynchronous", "flask", "cron", "scheduled-tasks"], "instruction": "How to schedule a function to run every hour on Flask?\n\nI have a Flask web hosting with no access to cron command. How can I execute some Python function every hour?", "output": "You can use BackgroundScheduler() from APScheduler package (v3.5.3): import time import atexit from apscheduler.schedulers.background import BackgroundScheduler def print_date_time(): print(time.strftime(\"%A, %d. %B %Y %I:%M:%S %p\")) scheduler = BackgroundScheduler() scheduler.add_job(func=print_date_time, trigger=\"interval\", seconds=60) scheduler.start() # Shut down the scheduler when exiting the app atexit.register(lambda: scheduler.shutdown()) Note that two of these schedulers will be launched when Flask is in debug mode. For more information, check out this question."} +{"question_id": 5284646, "score": 168, "creation_date": 1299955931, "tags": ["python", "sorting", "numpy"], "instruction": "Rank items in an array using Python/NumPy, without sorting array twice\n\nI have an array of numbers and I'd like to create another array that represents the rank of each item in the first array. I'm using Python and NumPy. For example: array = [4,2,7,1] ranks = [2,1,3,0] Here's the best method I've come up with: array = numpy.array([4,2,7,1]) temp = array.argsort() ranks = numpy.arange(len(array))[temp.argsort()] Are there any better/faster methods that avoid sorting the array twice?", "output": "Use advanced indexing on the left-hand side in the last step: array = numpy.array([4,2,7,1]) temp = array.argsort() ranks = numpy.empty_like(temp) ranks[temp] = numpy.arange(len(array)) This avoids sorting twice by inverting the permutation in the last step."} +{"question_id": 3172929, "score": 168, "creation_date": 1278192402, "tags": ["python", "django", "database", "sqlite", "locked"], "instruction": "OperationalError: database is locked\n\nI have made some repetitive operations in my application (testing it), and suddenly I\u2019m getting a weird error: OperationalError: database is locked I've restarted the server, but the error persists. What can it be all about?", "output": "From Django's documentation: SQLite is meant to be a lightweight database, and thus can't support a high level of concurrency. 
OperationalError: database is locked errors indicate that your application is experiencing more concurrency than sqlite can handle in default configuration. This error means that one thread or process has an exclusive lock on the database connection and another thread timed out waiting for the lock to be released. Python's SQLite wrapper has a default timeout value that determines how long the second thread is allowed to wait on the lock before it times out and raises the OperationalError: database is locked error. If you're getting this error, you can solve it by: Switching to another database backend. At a certain point SQLite becomes too \"lite\" for real-world applications, and these sorts of concurrency errors indicate you've reached that point. Rewriting your code to reduce concurrency and ensure that database transactions are short-lived. Increasing the default timeout value by setting the timeout database option: \"OPTIONS\": { # ... \"timeout\": 20, # ... } This will make SQLite wait a bit longer before throwing \"database is locked\" errors; it won't really do anything to solve them."} +{"question_id": 862412, "score": 168, "creation_date": 1242293494, "tags": ["python", "python-2.x"], "instruction": "Is it possible to have multiple statements in a python lambda expression?\n\nI have a list of lists: lst = [[567, 345, 234], [253, 465, 756, 2345], [333, 777, 111, 555]] I want to map lst into another list containing only the second smallest number from each sublist. So the result should be [345, 465, 333]. If I were just interested in the smallest number, I could write this as map(lambda x: min(x), lst). To get the second smallest numbers, I thought of sorting and then indexing the results, like map(lambda x: sort(x)[1], lst); however, sort returns None rather than chaining. If it were possible to use multiple statements in a lambda, I could write map(lambda x: sort(x); x[1], lst), but this is not allowed. Can I use map to solve the problem without defining a named function? How?", "output": "There are several different answers I can give here, from your specific question to more general concerns. So from most specific to most general: Q. Can you put multiple statements in a lambda? A. No. But you don't actually need to use a lambda. You can put the statements in a def instead. i.e.: def second_lowest(l): l.sort() return l[1] map(second_lowest, lst) Q. Can you get the second lowest item from a lambda by sorting the list? A. Yes. As alex's answer points out, sorted() is a version of sort that creates a new list, rather than sorting in-place, and can be chained. Note that this is probably what you should be using - it's bad practice for your map to have side effects on the original list. Q. How should I get the second lowest item from each list in a sequence of lists? A. sorted(l)[1] is not actually the best way for this. It has O(n log n) complexity, while an O(n) solution exists. This can be found in the heapq module. >>> import heapq >>> l = [5,2,6,8,3,5] >>> heapq.nsmallest(2, l) [2, 3] So just use: map(lambda x: heapq.nsmallest(2, x)[1], list_of_lists) It's also usually considered clearer to use a list comprehension, which avoids the lambda altogether: [heapq.nsmallest(2, x)[1] for x in list_of_lists]"} +{"question_id": 3108285, "score": 168, "creation_date": 1277367955, "tags": ["python", "linux", "unix", "environment-variables"], "instruction": "In Python script, how do I set PYTHONPATH?\n\nI know how to set it in my /etc/profile and in my environment variables. 
But what if I want to set it during a script? Is it import os, sys? How do I do it?", "output": "You don't set PYTHONPATH, you add entries to sys.path. It's a list of directories that should be searched for Python packages, so you can just append your directories to that list. import sys sys.path.append('/path/to/whatever') In fact, sys.path is initialized by splitting the value of PYTHONPATH on the path separator character (: on Linux-like systems, ; on Windows). You can also add directories using site.addsitedir, and that method will also take into account .pth files existing within the directories you pass. (That would not be the case with directories you specify in PYTHONPATH.)"} +{"question_id": 40059654, "score": 168, "creation_date": 1476538455, "tags": ["python", "json"], "instruction": "Convert a bytes array into JSON format\n\nI want to parse a bytes string in JSON format to convert it into python objects. This is the source I have: my_bytes_value = b'[{\\'Date\\': \\'2016-05-21T21:35:40Z\\', \\'CreationDate\\': \\'2012-05-05\\', \\'LogoType\\': \\'png\\', \\'Ref\\': 164611595, \\'Classe\\': [\\'Email addresses\\', \\'Passwords\\'],\\'Link\\':\\'http://some_link.com\\'}]' And this is the desired outcome I want to have: [{ \"Date\": \"2016-05-21T21:35:40Z\", \"CreationDate\": \"2012-05-05\", \"LogoType\": \"png\", \"Ref\": 164611595, \"Classes\": [ \"Email addresses\", \"Passwords\" ], \"Link\": \"http://some_link.com\"}] First, I converted the bytes to string: my_new_string_value = my_bytes_value.decode(\"utf-8\") but when I try to invoke loads to parse it as JSON: my_json = json.loads(my_new_string_value) I get this error: json.decoder.JSONDecodeError: Expecting value: line 1 column 174 (char 173)", "output": "Your bytes object is almost JSON, but it's using single quotes instead of double quotes, and it needs to be a string. So one way to fix it is to decode the bytes to str and replace the quotes. Another option is to use ast.literal_eval; see below for details. If you want to print the result or save it to a file as valid JSON you can load the JSON to a Python list and then dump it out. Eg, import json my_bytes_value = b'[{\\'Date\\': \\'2016-05-21T21:35:40Z\\', \\'CreationDate\\': \\'2012-05-05\\', \\'LogoType\\': \\'png\\', \\'Ref\\': 164611595, \\'Classe\\': [\\'Email addresses\\', \\'Passwords\\'],\\'Link\\':\\'http://some_link.com\\'}]' # Decode UTF-8 bytes to Unicode, and convert single quotes # to double quotes to make it valid JSON my_json = my_bytes_value.decode('utf8').replace(\"'\", '\"') print(my_json) print('- ' * 20) # Load the JSON to a Python list & dump it back out as formatted JSON data = json.loads(my_json) s = json.dumps(data, indent=4, sort_keys=True) print(s) output [{\"Date\": \"2016-05-21T21:35:40Z\", \"CreationDate\": \"2012-05-05\", \"LogoType\": \"png\", \"Ref\": 164611595, \"Classe\": [\"Email addresses\", \"Passwords\"],\"Link\":\"http://some_link.com\"}] - - - - - - - - - - - - - - - - - - - - [ { \"Classe\": [ \"Email addresses\", \"Passwords\" ], \"CreationDate\": \"2012-05-05\", \"Date\": \"2016-05-21T21:35:40Z\", \"Link\": \"http://some_link.com\", \"LogoType\": \"png\", \"Ref\": 164611595 } ] As Antti Haapala mentions in the comments, we can use ast.literal_eval to convert my_bytes_value to a Python list, once we've decoded it to a string. 
from ast import literal_eval import json my_bytes_value = b'[{\\'Date\\': \\'2016-05-21T21:35:40Z\\', \\'CreationDate\\': \\'2012-05-05\\', \\'LogoType\\': \\'png\\', \\'Ref\\': 164611595, \\'Classe\\': [\\'Email addresses\\', \\'Passwords\\'],\\'Link\\':\\'http://some_link.com\\'}]' data = literal_eval(my_bytes_value.decode('utf8')) print(data) print('- ' * 20) s = json.dumps(data, indent=4, sort_keys=True) print(s) Generally, this problem arises because someone has saved data by printing its Python repr instead of using the json module to create proper JSON data. If it's possible, it's better to fix that problem so that proper JSON data is created in the first place."} +{"question_id": 20375561, "score": 168, "creation_date": 1386160548, "tags": ["python", "pandas", "dataframe", "join", "pandas-merge"], "instruction": "Joining pandas DataFrames by Column names\n\nI have two DataFrames with the following column names: frame_1: event_id, date, time, county_ID frame_2: countyid, state I would like to get a DataFrame with the following columns by left-joining on county_ID = countyid: joined_dataframe: event_id, date, time, county, state I cannot figure out how to do it if the columns on which I want to join are not the index.", "output": "You can use the left_on and right_on options of pd.merge as follows: pd.merge(frame_1, frame_2, left_on='county_ID', right_on='countyid') Or equivalently with DataFrame.merge: frame_1.merge(frame_2, left_on='county_ID', right_on='countyid') I was not sure from the question if you only wanted to merge if the key was in the left hand DataFrame. If that is the case then the following will do that (the above will in effect do a many to many merge) pd.merge(frame_1, frame_2, how='left', left_on='county_ID', right_on='countyid') Or frame_1.merge(frame_2, how='left', left_on='county_ID', right_on='countyid')"} +{"question_id": 33246771, "score": 168, "creation_date": 1445375148, "tags": ["python", "pandas", "dataframe", "series"], "instruction": "How to convert single-row pandas data frame to series?\n\nI'm somewhat new to pandas. I have a pandas data frame that is 1 row by 23 columns. I want to convert this into a series. I'm wondering what the most pythonic way to do this is? I've tried pd.Series(myResults) but it complains ValueError: cannot copy sequence with size 23 to array axis with dimension 1. It's not smart enough to realize it's still a \"vector\" in math terms.", "output": "If you have a one column dataframe df, you can convert it to a series: df.iloc[:,0] # pandas Series Since you have a one row dataframe df, you can transpose it so you're in the previous case: df.T.iloc[:,0]"} +{"question_id": 41499857, "score": 168, "creation_date": 1483682434, "tags": ["python", "seaborn"], "instruction": "Why import seaborn as sns?\n\nWhy do you always import seaborn as sns and not with the letters of the name as sbn? Is sns an acronym for something? Or is it some kind of joke? As cel commented, someone put this as an issue in github. There, they do not answer the question but say that it is a joke related to the original name of the library.", "output": "Samuel Norman \"Sam\" Seaborn is a fictional character portrayed by Rob Lowe on the television serial drama The West Wing. So, it's a joked initialism. 
(source: seaborn FAQ)"} +{"question_id": 32639074, "score": 167, "creation_date": 1442520442, "tags": ["python", "windows-7", "pip"], "instruction": "Why am I getting ImportError: No module named pip ' right after installing pip?\n\nI have installed pip and ez setup. I also checked the system path and I can see the module in the folder structure. Still when I try to run the pip command, I get an Import error saying no module named pip. I am running 32-bit Python on a Windows 7 machine.", "output": "Just be sure that you have included the Python executable in the Windows Environment Variables, System PATH variable, and then run: python -m ensurepip. or if you have permission issue, run: python -m ensurepip --user"} +{"question_id": 9319317, "score": 167, "creation_date": 1329427481, "tags": ["python", "openfiledialog"], "instruction": "Quick and easy file dialog in Python?\n\nI have a simple script which parses a file and loads it's contents to a database. I don't need a UI, but right now I'm prompting the user for the file to parse using raw_input which is most unfriendly, especially because the user can't copy/paste the path. I would like a quick and easy way to present a file selection dialog to the user, they can select the file, and then it's loaded to the database. (In my use case, if they happened to chose the wrong file, it would fail parsing, and wouldn't be a problem even if it was loaded to the database.) import tkFileDialog file_path_string = tkFileDialog.askopenfilename() This code is close to what I want, but it leaves an annoying empty frame open (which isn't able to be closed, probably because I haven't registered a close event handler). I don't have to use tkInter, but since it's in the Python standard library it's a good candidate for quickest and easiest solution. Whats a quick and easy way to prompt for a file or filename in a script without any other UI?", "output": "Tkinter is the easiest way if you don't want to have any other dependencies. To show only the dialog without any other GUI elements, you have to hide the root window using the withdraw method: import tkinter as tk from tkinter import filedialog root = tk.Tk() root.withdraw() file_path = filedialog.askopenfilename() Python 2 variant: import Tkinter, tkFileDialog root = Tkinter.Tk() root.withdraw() file_path = tkFileDialog.askopenfilename()"} +{"question_id": 48608894, "score": 167, "creation_date": 1517753259, "tags": ["python", "django", "django-urls"], "instruction": "ImproperlyConfiguredError about app_name when using namespace in include()\n\nI am currently trying out Django. I use the namespace argument in one of my include()s in urls.py. When I run the server and try to browse, I get this error. File \"C:\\Users\\User\\AppData\\Local\\Programs\\Python\\Python36-32\\lib\\site-packages\\django\\urls\\conf.py\", line 39, in include 'Specifying a namespace in include() without providing an app_name ' django.core.exceptions.ImproperlyConfigured: Specifying a namespace in include() without providing an app_name is not supported. Set the app_name attribute in the included module, or pass a 2-tuple containing the list of patterns and app_name instead. These are my urls.py files: #project/urls.py from django.conf.urls import include, url from django.contrib import admin urlpatterns = [ url(r'^reviews/', include('reviews.urls', namespace='reviews')), url(r'^admin/', include(admin.site.urls)), ] and #app/urls.py from django.conf.urls import url from . 
import views urlpatterns = [ # ex: / url(r'^$', views.review_list, name='review_list'), # ex: /review/5/ url(r'^review/(?P<review_id>[0-9]+)/$', views.review_detail, name='review_detail'), # ex: /wine/ url(r'^wine$', views.wine_list, name='wine_list'), # ex: /wine/5/ url(r'^wine/(?P<wine_id>[0-9]+)/$', views.wine_detail, name='wine_detail'), ] What do I pass the app_name as stated in the error message?", "output": "Check the docs for include here. What you've done is not an acceptable way of passing parameters to include. You could do: url(r'^reviews/', include(('reviews.urls', 'reviews'), namespace='reviews')),"} +{"question_id": 58636087, "score": 167, "creation_date": 1572488283, "tags": ["python", "tensorflow", "keras", "lstm"], "instruction": "Tensorflow - ValueError: Failed to convert a NumPy array to a Tensor (Unsupported object type float)\n\nContinuation from previous question: Tensorflow - TypeError: 'int' object is not iterable My training data is a list of lists each comprised of 1000 floats. For example, x_train[0] = [0.0, 0.0, 0.1, 0.25, 0.5, ...] Here is my model: model = Sequential() model.add(LSTM(128, activation='relu', input_shape=(1000, 1), return_sequences=True)) model.add(Dropout(0.2)) model.add(LSTM(128, activation='relu')) model.add(Dropout(0.2)) model.add(Dense(32, activation='relu')) model.add(Dropout(0.2)) model.add(Dense(1, activation='sigmoid')) opt = tf.keras.optimizers.Adam(lr=1e-3, decay=1e-5) model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy']) model.fit(x_train, y_train, epochs=3, validation_data=(x_test, y_test)) Here is the error I'm getting: Traceback (most recent call last): File \"C:\\Users\\bencu\\Desktop\\ProjectFiles\\Code\\Program.py\", line 88, in FitModel model.fit(x_train, y_train, epochs=3, validation_data=(x_test, y_test)) File \"C:\\Users\\bencu\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\tensorflow_core\\python\\keras\\engine\\training.py\", line 728, in fit use_multiprocessing=use_multiprocessing) File \"C:\\Users\\bencu\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\tensorflow_core\\python\\keras\\engine\\training_v2.py\", line 224, in fit distribution_strategy=strategy) File \"C:\\Users\\bencu\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\tensorflow_core\\python\\keras\\engine\\training_v2.py\", line 547, in _process_training_inputs use_multiprocessing=use_multiprocessing) File \"C:\\Users\\bencu\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\tensorflow_core\\python\\keras\\engine\\training_v2.py\", line 606, in _process_inputs use_multiprocessing=use_multiprocessing) File \"C:\\Users\\bencu\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\tensorflow_core\\python\\keras\\engine\\data_adapter.py\", line 479, in __init__ batch_size=batch_size, shuffle=shuffle, **kwargs) File \"C:\\Users\\bencu\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\tensorflow_core\\python\\keras\\engine\\data_adapter.py\", line 321, in __init__ dataset_ops.DatasetV2.from_tensors(inputs).repeat() File \"C:\\Users\\bencu\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\tensorflow_core\\python\\data\\ops\\dataset_ops.py\", line 414, in from_tensors return TensorDataset(tensors) File \"C:\\Users\\bencu\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\tensorflow_core\\python\\data\\ops\\dataset_ops.py\", line 2335, in __init__ element = structure.normalize_element(element) File 
\"C:\\Users\\bencu\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\tensorflow_core\\python\\data\\util\\structure.py\", line 111, in normalize_element ops.convert_to_tensor(t, name=\"component_%d\" % i)) File \"C:\\Users\\bencu\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\tensorflow_core\\python\\framework\\ops.py\", line 1184, in convert_to_tensor return convert_to_tensor_v2(value, dtype, preferred_dtype, name) File \"C:\\Users\\bencu\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\tensorflow_core\\python\\framework\\ops.py\", line 1242, in convert_to_tensor_v2 as_ref=False) File \"C:\\Users\\bencu\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\tensorflow_core\\python\\framework\\ops.py\", line 1296, in internal_convert_to_tensor ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref) File \"C:\\Users\\bencu\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\tensorflow_core\\python\\framework\\tensor_conversion_registry.py\", line 52, in _default_conversion_function return constant_op.constant(value, dtype, name=name) File \"C:\\Users\\bencu\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\tensorflow_core\\python\\framework\\constant_op.py\", line 227, in constant allow_broadcast=True) File \"C:\\Users\\bencu\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\tensorflow_core\\python\\framework\\constant_op.py\", line 235, in _constant_impl t = convert_to_eager_tensor(value, ctx, dtype) File \"C:\\Users\\bencu\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\tensorflow_core\\python\\framework\\constant_op.py\", line 96, in convert_to_eager_tensor return ops.EagerTensor(value, ctx.device_name, dtype) ValueError: Failed to convert a NumPy array to a Tensor (Unsupported object type float). I've tried googling the error myself, I found something about using the tf.convert_to_tensor function. I tried passing my training and testing lists through this but the function won't take them.", "output": "TL;DR Several possible errors, most fixed with x = np.asarray(x).astype('float32'). Others may be faulty data preprocessing; ensure everything is properly formatted (categoricals, nans, strings, etc). Below shows what the model expects: [print(i.shape, i.dtype) for i in model.inputs] [print(o.shape, o.dtype) for o in model.outputs] [print(l.name, l.input_shape, l.dtype) for l in model.layers] The problem's rooted in using lists as inputs, as opposed to Numpy arrays; Keras/TF doesn't support former. A simple conversion is: x_array = np.asarray(x_list). The next step's to ensure data is fed in expected format; for LSTM, that'd be a 3D tensor with dimensions (batch_size, timesteps, features) - or equivalently, (num_samples, timesteps, channels). Lastly, as a debug pro-tip, print ALL the shapes for your data. Code accomplishing all of the above, below: Sequences = np.asarray(Sequences) Targets = np.asarray(Targets) show_shapes() Sequences = np.expand_dims(Sequences, -1) Targets = np.expand_dims(Targets, -1) show_shapes() # OUTPUTS Expected: (num_samples, timesteps, channels) Sequences: (200, 1000) Targets: (200,) Expected: (num_samples, timesteps, channels) Sequences: (200, 1000, 1) Targets: (200, 1) As a bonus tip, I notice you're running via main(), so your IDE probably lacks a Jupyter-like cell-based execution; I strongly recommend the Spyder IDE. 
It's as simple as adding # In[], and pressing Ctrl + Enter below: Function used: def show_shapes(): # can make yours to take inputs; this'll use local variable values print(\"Expected: (num_samples, timesteps, channels)\") print(\"Sequences: {}\".format(Sequences.shape)) print(\"Targets: {}\".format(Targets.shape))"} +{"question_id": 43102442, "score": 167, "creation_date": 1490815432, "tags": ["python", "mysql", "database", "mysql-python", "mysql-connector"], "instruction": "What's the difference between MySQLdb, mysqlclient and MySQL connector/Python?\n\nSo I've been trying to do some database update with python and while setting up the whole dev environment, I came across these three things which made me dizzy. There's MySQLdb There's mysqlclient And then there's a mysql connector python What's each of them, the difference and where to use them? Thanks", "output": "MySQLdb is a thin python wrapper around C module which implements API for MySQL database. There was MySQLDb1 version of wrapper used some time ago and now it is considered to be a legacy. As MySQLDb1 started evolving to MySQLDb2 with bug fixes and Python3 support, a MySQLDb1 was forked and here is how mysqlclient appeared, with bugfixes and Python3 support. Sum up, so now we have MySQLDb2 which is not ready for production use, MySQLDb1 as an outdated driver and a community supported mysqlclient with bug fixes and Python3 support. Now, to solve that mess, MySQL provides their own version of MySQL adapter - mysql connector, an all-in python module that uses MySQL API with no C modules dependencies and only standard python modules used. So now the question comes down to: mysqlclient vs mysql connector. As for me, I would go with officially supported library, however mysqlclient should be a good choice as well. Both of them are being actively updated with fixes and new features which you can see by active commits in last days. Note: I did not have much experience with them, so there might be cases when one or another does not suite your needs. Both libraries follow PEP-249 standard which means you should be fine with at least base functionality everywhere. Installation and Dependencies mysqlclient As a fork of C wrapper it requires C modules to work with MySQL which adds python header files to build these extensions (read python-dev). Installation depends on the system you use, just make sure you aware of package names and can install them. mysql connector Main documentation is pretty clear, however you should be aware of Protobuf C++ dependency (for mysql connector versions >= 2.2.3 < 2.2.6)."} +{"question_id": 32032697, "score": 167, "creation_date": 1439708877, "tags": ["python", "python-2.7"], "instruction": "Why does using \"from __future__ import print_function\" break Python2-style print?\n\nI tried this code in Python 2.7: from __future__ import print_function import sys, os, time for x in range(0,10): print x, sep=' ', end='' time.sleep(1) But I get an error that says: $ python2 xy.py File \"xy.py\", line 5 print x, sep=' ', end='' ^ SyntaxError: invalid syntax $ I thought that using the __future__ import should make it possible to use sep and end in a print statement, but now it apparently doesn't work at all. Why not?", "output": "The whole point of from __future__ import print_function is to bring the print function from Python 3 into Python 2.6+. 
Thus, it must be used like a function here: from __future__ import print_function import sys, os, time for x in range(0,10): print(x, sep=' ', end='') # No need for sep here, but okay :) time.sleep(1) __future__ statements change fundamental things about the language. From the documentation: A future statement is recognized and treated specially at compile time: Changes to the semantics of core constructs are often implemented by generating different code. It may even be the case that a new feature introduces new incompatible syntax (such as a new reserved word), in which case the compiler may need to parse the module differently. Such decisions cannot be pushed off until runtime. (For the same reason, they must also appear first in the source code, before any other imports. The only things that can precede a __future__ statement are the module docstring, comments, blank lines, and other future statements.)"} +{"question_id": 31866796, "score": 167, "creation_date": 1438899884, "tags": ["python", "asynchronous", "flask"], "instruction": "Making an asynchronous task in Flask\n\nI am writing an application in Flask, which works really well except that WSGI is synchronous and blocking. I have one task in particular which calls out to a third party API and that task can take several minutes to complete. I would like to make that call (it's actually a series of calls) and let it run. while control is returned to Flask. My view looks like: @app.route('/render/<id>', methods=['POST']) def render_script(id=None): ... data = json.loads(request.data) text_list = data.get('text_list') final_file = audio_class.render_audio(data=text_list) # do stuff return Response( mimetype='application/json', status=200 ) Now, what I want to do is have the line final_file = audio_class.render_audio() run and provide a callback to be executed when the method returns, whilst Flask can continue to process requests. This is the only task which I need Flask to run asynchronously, and I would like some advice on how best to implement this. I have looked at Twisted and Klein, but I'm not sure they are overkill, as maybe Threading would suffice. Or maybe Celery is a good choice for this?", "output": "I would use Celery to handle the asynchronous task for you. You'll need to install a broker to serve as your task queue (RabbitMQ and Redis are recommended). app.py: from flask import Flask from celery import Celery broker_url = 'amqp://guest@localhost' # Broker URL for RabbitMQ task queue app = Flask(__name__) celery = Celery(app.name, broker=broker_url) celery.config_from_object('celeryconfig') # Your celery configurations in a celeryconfig.py @celery.task(bind=True) def some_long_task(self, x, y): # Do some long task ... @app.route('/render/<id>', methods=['POST']) def render_script(id=None): ... data = json.loads(request.data) text_list = data.get('text_list') final_file = audio_class.render_audio(data=text_list) some_long_task.delay(x, y) # Call your async task and pass whatever necessary variables return Response( mimetype='application/json', status=200 ) Run your Flask app, and start another process to run your celery worker. 
$ celery worker -A app.celery --loglevel=debug I would also refer to Miguel Gringberg's write up for a more in depth guide to using Celery with Flask."} +{"question_id": 39475978, "score": 167, "creation_date": 1473788385, "tags": ["python", "pandas", "dataframe", "function", "apply"], "instruction": "Apply function to each cell in DataFrame\n\nI have a dataframe that may look like this: A B C foo bar foo bar bar foo foo bar I want to look through every element of each row (or every element of each column) and apply the following function to get the subsequent dataframe: def foo_bar(x): return x.replace('foo', 'wow') After applying the function, my dataframe will look like this: A B C wow bar wow bar bar wow wow bar Is there a simple one-liner that can apply a function to each cell? This is a simplistic example so there may be an easier way to execute this specific example other than applying a function, but what I am really asking about is how to apply a function in every cell within a dataframe.", "output": "You can use map() which is concise for your case. df.map(foo_bar) # A B C #0 wow bar wow bar #1 bar wow wow bar Another option is to vectorize your function and then use apply method: import numpy as np df.apply(np.vectorize(foo_bar)) # A B C #0 wow bar wow bar #1 bar wow wow bar"} +{"question_id": 37053379, "score": 167, "creation_date": 1462458916, "tags": ["python", "python-3.x", "performance", "bit-shift", "integer-arithmetic"], "instruction": "Times-two faster than bit-shift, for Python 3.x integers?\n\nI was looking at the source of sorted_containers and was surprised to see this line: self._load, self._twice, self._half = load, load * 2, load >> 1 Here load is an integer. Why use bit shift in one place, and multiplication in another? It seems reasonable that bit shifting may be faster than integral division by 2, but why not replace the multiplication by a shift as well? I benchmarked the the following cases: (times, divide) (shift, shift) (times, shift) (shift, divide) and found that #3 is consistently faster than other alternatives: # self._load, self._twice, self._half = load, load * 2, load >> 1 import random import timeit import pandas as pd x = random.randint(10 ** 3, 10 ** 6) def test_naive(): a, b, c = x, 2 * x, x // 2 def test_shift(): a, b, c = x, x << 1, x >> 1 def test_mixed(): a, b, c = x, x * 2, x >> 1 def test_mixed_swapped(): a, b, c = x, x << 1, x // 2 def observe(k): print(k) return { 'naive': timeit.timeit(test_naive), 'shift': timeit.timeit(test_shift), 'mixed': timeit.timeit(test_mixed), 'mixed_swapped': timeit.timeit(test_mixed_swapped), } def get_observations(): return pd.DataFrame([observe(k) for k in range(100)]) The question: Is my test valid? If so, why is (multiply, shift) faster than (shift, shift)? I run Python 3.5 on Ubuntu 14.04. Edit Above is the original statement of the question. Dan Getz provides an excellent explanation in his answer. For the sake of completeness, here are sample illustrations for larger x when multiplication optimizations do not apply.", "output": "This seems to be because multiplication of small numbers is optimized in CPython 3.5, in a way that left shifts by small numbers are not. Positive left shifts always create a larger integer object to store the result, as part of the calculation, while for multiplications of the sort you used in your test, a special optimization avoids this and creates an integer object of the correct size. This can be seen in the source code of Python's integer implementation. 
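If you want to sanity-check the effect on your own interpreter, a rough timing sketch along these lines should do (assuming Python 3.5+ for timeit's globals argument; the absolute numbers, and even the ranking, depend on the CPython version and build):
import timeit
x = 123456  # a small integer that stays on the single-'digit' fast path (see below)
for expr in ('x * 2', 'x << 1', 'x >> 1', 'x // 2'):
    print(expr, timeit.timeit(expr, globals={'x': x}))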
Because integers in Python are arbitrary-precision, they are stored as arrays of integer \"digits\", with a limit on the number of bits per integer digit. So in the general case, operations involving integers are not single operations, but instead need to handle the case of multiple \"digits\". In pyport.h, this bit limit is defined as 30 bits on 64-bit platform, or 15 bits otherwise. (I'll just call this 30 from here on to keep the explanation simple. But note that if you were using Python compiled for 32-bit, your benchmark's result would depend on if x were less than 32,768 or not.) When an operation's inputs and outputs stay within this 30-bit limit, the operation can be handled in an optimized way instead of the general way. The beginning of the integer multiplication implementation is as follows: static PyObject * long_mul(PyLongObject *a, PyLongObject *b) { PyLongObject *z; CHECK_BINOP(a, b); /* fast path for single-digit multiplication */ if (Py_ABS(Py_SIZE(a)) <= 1 && Py_ABS(Py_SIZE(b)) <= 1) { stwodigits v = (stwodigits)(MEDIUM_VALUE(a)) * MEDIUM_VALUE(b); #ifdef HAVE_LONG_LONG return PyLong_FromLongLong((PY_LONG_LONG)v); #else /* if we don't have long long then we're almost certainly using 15-bit digits, so v will fit in a long. In the unlikely event that we're using 30-bit digits on a platform without long long, a large v will just cause us to fall through to the general multiplication code below. */ if (v >= LONG_MIN && v <= LONG_MAX) return PyLong_FromLong((long)v); #endif } So when multiplying two integers where each fits in a 30-bit digit, this is done as a direct multiplication by the CPython interpreter, instead of working with the integers as arrays. (MEDIUM_VALUE() called on a positive integer object simply gets its first 30-bit digit.) If the result fits in a single 30-bit digit, PyLong_FromLongLong() will notice this in a relatively small number of operations, and create a single-digit integer object to store it. In contrast, left shifts are not optimized this way, and every left shift deals with the integer being shifted as an array. In particular, if you look at the source code for long_lshift(), in the case of a small but positive left shift, a 2-digit integer object is always created, if only to have its length truncated to 1 later: (my comments in /*** ***/) static PyObject * long_lshift(PyObject *v, PyObject *w) { /*** ... ***/ wordshift = shiftby / PyLong_SHIFT; /*** zero for small w ***/ remshift = shiftby - wordshift * PyLong_SHIFT; /*** w for small w ***/ oldsize = Py_ABS(Py_SIZE(a)); /*** 1 for small v > 0 ***/ newsize = oldsize + wordshift; if (remshift) ++newsize; /*** here newsize becomes at least 2 for w > 0, v > 0 ***/ z = _PyLong_New(newsize); /*** ... ***/ } Integer division You didn't ask about the worse performance of integer floor division compared to right shifts, because that fit your (and my) expectations. But dividing a small positive number by another small positive number is not as optimized as small multiplications, either. Every // computes both the quotient and the remainder using the function long_divrem(). This remainder is computed for a small divisor with a multiplication, and is stored in a newly-allocated integer object, which in this situation is immediately discarded. Or at least, that was the case when this question was originally asked. 
In CPython 3.6, a fast path for small int // was added, so // now beats >> for small ints too."} +{"question_id": 25749621, "score": 167, "creation_date": 1410279884, "tags": ["python", "pip"], "instruction": "What's the difference between \"pip install\" and \"python -m pip install\"?\n\nI have a local version of Python 3.4.1 and I can run python -m pip install, but I'm unable to find the pip binary to run pip install. What's the difference between these two?", "output": "They do exactly the same thing, assuming pip is using the same version of Python as the python executable. The docs for distributing Python modules were just updated to suggest using python -m pip instead of the pip executable, because it allows you to be explicit about which version of Python to use. In systems with more than one version of Python installed, it's not always clear which one pip is linked to. Here's some more concrete \"proof\" that both commands should do the same thing, beyond just trusting my word and the bug report I linked :) If you take a look at the pip executable script, it's just doing this: from pkg_resources import load_entry_point <snip> load_entry_point('pip==1.5.4', 'console_scripts', 'pip')() It's calling load_entry_point, which returns a function, and then executing that function. The entry point it's using is called 'console_scripts'. If you look at the entry_points.txt file for pip (/usr/lib/python2.7/dist-packages/pip-1.5.4.egg-info/entry_points.txt on my Ubuntu machine), you'll see this: [console_scripts] pip = pip:main pip2.7 = pip:main pip2 = pip:main So the entry point returned is the main function in the pip module. When you run python -m pip, you're executing the __main__.py script inside the pip package. That looks like this: import sys from .runner import run if __name__ == '__main__': exit = run() if exit: sys.exit(exit) And the runner.run function looks like this: def run(): base = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) ## FIXME: this is kind of crude; if we could create a fake pip ## module, then exec into it and update pip.__path__ properly, we ## wouldn't have to update sys.path: sys.path.insert(0, base) import pip return pip.main() As you can see, it's just calling the pip.main function, too. So both commands end up calling the same main function in pip/__init__.py."} +{"question_id": 4348733, "score": 167, "creation_date": 1291401671, "tags": ["python", "matplotlib"], "instruction": "Saving interactive Matplotlib figures\n\nIs there a way to save a Matplotlib figure such that it can be re-opened and have typical interaction restored? (Like the .fig format in MATLAB?) I find myself running the same scripts many times to generate these interactive figures. Or I'm sending my colleagues multiple static PNG files to show different aspects of a plot. I'd rather send the figure object and have them interact with it themselves.", "output": "Matplotlib since version 1.2 allowed you to pickle figures. As the release notes state, it is an experimental feature and does not support saving a figure in one matplotlib version and opening in another. It's also generally unsecure to restore a pickle from an untrusted source. 
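For completeness, a minimal sketch of that pickling round-trip (assuming matplotlib 1.2+; the file name is just an example, and the version-compatibility and untrusted-pickle caveats above still apply):
import pickle
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([1, 2, 3], [4, 5, 6])

with open('figure.pickle', 'wb') as f:  # save the live Figure object
    pickle.dump(fig, f)

with open('figure.pickle', 'rb') as f:  # later, possibly in another session
    restored_fig = pickle.load(f)
plt.show()  # the restored figure is interactive again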
For sharing/later editing plots (that require significant data processing first and may need to be tweaked months later say during peer review for a scientific publication), I still recommend the workflow of (1) have a data processing script that before generating a plot saves the processed data (that goes into your plot) into a file, and (2) have a separate plot generation script (that you adjust as necessary) to recreate the plot. This way for each plot you can quickly run a script and re-generate it (and quickly copy over your plot settings with new data). That said, pickling a figure could be convenient for short term/interactive/exploratory data analysis. If you are using a really old version that doesn't have support yet, I'd suggest either (a) separate processing the data from generating the figure (which saves data with a unique name) and write a figure generating script (loading a specified file of the saved data) and editing as you see fit or (b) save as PDF/SVG/PostScript format and edit in some fancy figure editor like Adobe Illustrator (or Inkscape)."} +{"question_id": 19261833, "score": 166, "creation_date": 1381284466, "tags": ["python", "flask"], "instruction": "What is an 'endpoint' in Flask?\n\nThe Flask documentation shows: add_url_rule(*args, **kwargs) Connects a URL rule. Works exactly like the route() decorator. If a view_func is provided it will be registered with the endpoint. endpoint \u2013 the endpoint for the registered URL rule. Flask itself assumes the name of the view function as endpoint What exactly is meant by an \"endpoint\"?", "output": "How Flask Routing Works The entire idea of Flask (and the underlying Werkzeug library) is to map URL paths to some logic that you will run (typically, the \"view function\"). Your basic view is defined like this: @app.route('/greeting/<name>') def give_greeting(name): return 'Hello, {0}!'.format(name) Note that the function you referred to (add_url_rule) achieves the same goal, just without using the decorator notation. Therefore, the following is the same: # No \"route\" decorator here. We will add routing using a different method below. def give_greeting(name): return 'Hello, {0}!'.format(name) app.add_url_rule('/greeting/<name>', 'give_greeting', give_greeting) Let's say your website is located at 'www.example.org' and uses the above view. The user enters the following URL into their browser: http://www.example.org/greeting/Mark The job of Flask is to take this URL, figure out what the user wants to do, and pass it on to one of your many python functions for handling. It takes the path: /greeting/Mark ...and matches it to the list of routes. In our case, we defined this path to go to the give_greeting function. However, while this is the typical way that you might go about creating a view, it actually abstracts some extra info from you. Behind the scenes, Flask did not make the leap directly from URL to the view function that should handle this request. It does not simply say... URL (http://www.example.org/greeting/Mark) should be handled by View Function (the function \"give_greeting\") Actually, it there is another step, where it maps the URL to an endpoint: URL (http://www.example.org/greeting/Mark) should be handled by Endpoint \"give_greeting\". Requests to Endpoint \"give_greeting\" should be handled by View Function \"give_greeting\" Basically, the \"endpoint\" is an identifier that is used in determining what logical unit of your code should handle the request. 
Normally, an endpoint is just the name of a view function. However, you can actually change the endpoint, as is done in the following example. @app.route('/greeting/<name>', endpoint='say_hello') def give_greeting(name): return 'Hello, {0}!'.format(name) Now, when Flask routes the request, the logic looks like this: URL (http://www.example.org/greeting/Mark) should be handled by Endpoint \"say_hello\". Endpoint \"say_hello\" should be handled by View Function \"give_greeting\" How You Use the Endpoint The endpoint is commonly used for the \"reverse lookup\". For example, in one view of your Flask application, you want to reference another view (perhaps when you are linking from one area of the site to another). Rather than hard-code the URL, you can use url_for(). Assume the following: @app.route('/') def index(): print url_for('give_greeting', name='Mark') # This will print '/greeting/Mark' @app.route('/greeting/<name>') def give_greeting(name): return 'Hello, {0}!'.format(name) This is advantageous, as now we can change the URLs of our application without needing to change the line where we reference that resource. Why not just always use the name of the view function? One question that might come up is the following: \"Why do we need this extra layer?\" Why map a path to an endpoint, then an endpoint to a view function? Why not just skip that middle step? The reason is that it is more powerful this way. For example, Flask Blueprints allow you to split your application into various parts. I might have all of my admin-side resources in a blueprint called \"admin\", and all of my user-level resources in a blueprint called \"user\". Blueprints allow you to separate these into namespaces. For example... main.py: from flask import Flask, Blueprint from admin import admin from user import user app = Flask(__name__) app.register_blueprint(admin, url_prefix='/admin') app.register_blueprint(user, url_prefix='/user') admin.py: admin = Blueprint('admin', __name__) @admin.route('/greeting') def greeting(): return 'Hello, administrative user!' user.py: user = Blueprint('user', __name__) @user.route('/greeting') def greeting(): return 'Hello, lowly normal user!' Note that in both blueprints, the '/greeting' route is a function called \"greeting\". If I wanted to refer to the admin \"greeting\" function, I couldn't just say \"greeting\" because there is also a user \"greeting\" function. Endpoints allow for a sort of namespacing by having you specify the name of the blueprint as part of the endpoint. So, I could do the following... print url_for('admin.greeting') # Prints '/admin/greeting' print url_for('user.greeting') # Prints '/user/greeting'"} +{"question_id": 38640109, "score": 166, "creation_date": 1469718133, "tags": ["python", "python-3.x", "scikit-learn", "logistic-regression"], "instruction": "Logistic regression python solvers' definitions\n\nI am using the logistic regression function from sklearn, and was wondering what each of the solvers is actually doing behind the scenes to solve the optimization problem. Can someone briefly describe what \"newton-cg\", \"sag\", \"lbfgs\" and \"liblinear\" are doing?", "output": "Well, I hope I'm not too late for the party! Let me first try to establish some intuition before digging into loads of information (warning: this is not a brief comparison, TL;DR) Introduction A hypothesis h(x) takes an input and gives us the estimated output value. This hypothesis can be as simple as a one-variable linear equation, ..
up to a very complicated and long multivariate equation, depending on the type of algorithm we\u2019re using (e.g. linear regression, logistic regression..etc). Our task is to find the best Parameters (a.k.a Thetas or Weights) that give us the least error in predicting the output. We call the function that calculates this error a Cost or Loss Function, and of course, our goal is to minimize that error in order to get the best-predicted output! One more thing to recall: the relation between a parameter value and its effect on the cost function (i.e. the error) looks like a parabola (i.e. Quadratic; recall this because it\u2019s important). So if we start at any point in that curve and keep taking the derivative (i.e. tangent line) of each point we stop at (assuming it's a univariate problem, otherwise, if we have multiple features, we take the partial derivative), we will end up at what is called the Global Optima as shown in this image: If we take the partial derivative at the minimum cost point (i.e. global optima) we find the slope of the tangent line = 0 (then we know that we reached our target). That\u2019s valid only if we have a Convex Cost Function, but if we don\u2019t, we may end up stuck at what is called Local Optima; consider this non-convex function: Now you should have some intuition about the relationship between what we are doing and the terms: Derivative, Tangent Line, Cost Function, Hypothesis ..etc. Side Note: The above-mentioned intuition is also related to the Gradient Descent Algorithm (see later). Background Linear Approximation: Given a function, f(x), we can find its tangent at x=a. The equation of the tangent line L(x) is: L(x)=f(a)+f\u2032(a)(x\u2212a). Take a look at the following graph of a function and its tangent line: From this graph we can see that near x=a, the tangent line and the function have nearly the same graph. On occasion, we will use the tangent line, L(x), as an approximation to the function, f(x), near x=a. In these cases, we call the tangent line the \"Linear Approximation\" to the function at x=a. Quadratic Approximation: Same as a linear approximation, yet this time we are dealing with a curve for which the tangent line alone is not a good enough approximation near the point of interest. Instead, we use a parabola, as shown in the following graph: In order to fit a good parabola, both the parabola and the function being approximated should have the same value, the same first derivative, AND the same second derivative at that point. The formula will be (just out of curiosity): Qa(x) = f(a) + f'(a)(x-a) + f''(a)(x-a)^2/2 Now we should be ready to do the comparison in detail. Comparison between the methods 1. Newton\u2019s Method Recall the motivation for the gradient descent step at x: we minimize the quadratic function (i.e. Cost Function). Newton\u2019s method uses, in a sense, a better quadratic function minimisation. It's better because it uses the quadratic approximation (i.e. first AND second partial derivatives). You can imagine it as a twisted Gradient Descent with the Hessian (the Hessian is a square matrix of second-order partial derivatives of order n X n). Moreover, the geometric interpretation of Newton's method is that at each iteration one approximates f(x) by a quadratic function around xn, and then takes a step towards the maximum/minimum of that quadratic function (in higher dimensions, this may also be a saddle point). Note that if f(x) happens to be a quadratic function, then the exact extremum is found in one step.
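To make the contrast with gradient descent concrete, here is a tiny sketch of my own (not taken from scikit-learn) comparing one gradient descent update with one Newton update on a simple convex quadratic cost, using NumPy; as stated above, for a quadratic the Newton step lands exactly on the minimiser:
import numpy as np
A = np.array([[3.0, 1.0], [1.0, 2.0]])  # symmetric positive definite -> convex quadratic cost
b = np.array([1.0, 1.0])
def grad(x):   # first derivatives of 0.5 * x.T A x - b.T x
    return A @ x - b
def hess(x):   # second derivatives (constant for a quadratic)
    return A
x0 = np.zeros(2)
gd_step = x0 - 0.1 * grad(x0)                           # gradient descent: a small step downhill
newton_step = x0 - np.linalg.solve(hess(x0), grad(x0))  # Newton: rescales the step using curvature
print(gd_step)      # [0.1 0.1] -> still far from the optimum
print(newton_step)  # [0.2 0.4] -> the exact minimiser A^-1 b, reached in one step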
Drawbacks: It\u2019s computationally expensive because of the Hessian Matrix (i.e. second partial derivatives calculations). It is attracted to Saddle Points, which are common in multivariable optimization (i.e. points where the partial derivatives disagree over whether the input should be a maximum or a minimum!). 2. Limited-memory Broyden\u2013Fletcher\u2013Goldfarb\u2013Shanno Algorithm: In a nutshell, it is an analogue of Newton\u2019s Method, yet here the Hessian matrix is approximated using updates specified by gradient evaluations (or approximate gradient evaluations). In other words, it uses an estimate of the inverse Hessian matrix. The term Limited-memory simply means it stores only a few vectors that represent the approximation implicitly. I dare say that when the dataset is small, L-BFGS performs the best compared to other methods, especially because it saves a lot of memory; however, there are some \u201cserious\u201d drawbacks such that, if it is unsafeguarded, it may not converge to anything. Side note: This solver has become the default solver in sklearn LogisticRegression since version 0.22, replacing LIBLINEAR. 3. A Library for Large Linear Classification: It\u2019s a linear classifier that supports logistic regression and linear support vector machines. The solver uses a Coordinate Descent (CD) algorithm that solves optimization problems by successively performing approximate minimization along coordinate directions or coordinate hyperplanes. LIBLINEAR is the winner of the ICML 2008 large-scale learning challenge. It applies automatic parameter selection (a.k.a L1 Regularization) and it\u2019s recommended when you have a high-dimensional dataset (recommended for solving large-scale classification problems) Drawbacks: It may get stuck at a non-stationary point (i.e. non-optima) if the level curves of a function are not smooth. Also, it cannot run in parallel. It cannot learn a true multinomial (multiclass) model; instead, the optimization problem is decomposed in a \u201cone-vs-rest\u201d fashion, so separate binary classifiers are trained for all classes. Side note: According to Scikit Documentation: The \u201cliblinear\u201d solver was the one used by default for historical reasons before version 0.22. Since then, the default is the Limited-memory Broyden\u2013Fletcher\u2013Goldfarb\u2013Shanno Algorithm. 4. Stochastic Average Gradient: The SAG method optimizes the sum of a finite number of smooth convex functions. Like stochastic gradient (SG) methods, the SAG method's iteration cost is independent of the number of terms in the sum. However, by incorporating a memory of previous gradient values, the SAG method achieves a faster convergence rate than black-box SG methods. It is faster than other solvers for large datasets when both the number of samples and the number of features are large. Drawbacks: It only supports L2 penalization. This is not really a drawback, but more like a comparison: although SAG is suitable for large datasets, with a memory cost of O(N), it can be less practical for very large N (as the most recent gradient evaluation for each function needs to be maintained in the memory). This is usually not a problem, but a better option would be SVRG 1, 2 which is unfortunately not implemented in scikit-learn! 5. SAGA: The SAGA solver is a variant of SAG that also supports the non-smooth penalty L1 option (i.e. L1 Regularization). This is therefore the solver of choice for sparse multinomial logistic regression.
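In scikit-learn all of this boils down to the solver argument of LogisticRegression; here is a minimal sketch (the data is random and purely illustrative, and which penalty/solver combinations are accepted depends on your scikit-learn version):
import numpy as np
from sklearn.linear_model import LogisticRegression
rng = np.random.RandomState(0)
X = rng.randn(200, 20)
y = (X[:, 0] + 0.5 * rng.randn(200) > 0).astype(int)  # a toy binary target
clf_lbfgs = LogisticRegression(solver='lbfgs', penalty='l2').fit(X, y)               # current default, L2 only
clf_liblinear = LogisticRegression(solver='liblinear', penalty='l1').fit(X, y)       # coordinate descent, one-vs-rest, L1/L2
clf_saga = LogisticRegression(solver='saga', penalty='l1', max_iter=5000).fit(X, y)  # L1-capable, scales to large data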
Compared to SAG, SAGA also has better theoretical convergence. Drawbacks: This is not really a drawback, but more like a comparison: SAGA is similar to SAG with regard to memory cost. That is, it's suitable for large datasets, yet in edge cases where the dataset is very large, SVRG 1, 2 would be a better option (unfortunately not implemented in scikit-learn)! Side note: According to Scikit Documentation: The SAGA solver is often the best choice. Please note that the attributes \"Large\" and \"Small\" used in Scikit-Learn and in this comparison are relative. AFAIK, there is no universal, unanimous and accurate definition of the dataset boundaries to be considered as \"Large\", \"Too Large\", \"Small\", \"Too Small\"...etc! Summary The following table is taken from Scikit Documentation Updated Table from the same link above (accessed 02/11/2021):"} +{"question_id": 2069662, "score": 166, "creation_date": 1263532854, "tags": ["python", "control-flow"], "instruction": "How to exit an if clause\n\nWhat sorts of methods exist for prematurely exiting an if clause? There are times when I'm writing code and want to put a break statement inside of an if clause, only to remember that those can only be used for loops. Let's take the following code as an example: if some_condition: ... if condition_a: # do something # and then exit the outer if block ... if condition_b: # do something # and then exit the outer if block # more code here I can think of one way to do this: assuming the exit cases happen within nested if statements, wrap the remaining code in a big else block. Example: if some_condition: ... if condition_a: # do something # and then exit the outer if block else: ... if condition_b: # do something # and then exit the outer if block else: # more code here The problem with this is that more exit locations mean more nesting/indented code. Alternatively, I could write my code to have the if clauses be as small as possible and not require any exits. Does anyone know of a good/better way to exit an if clause? If there are any associated else-if and else clauses, I figure that exiting would skip over them.", "output": "This method works for ifs, multiple nested loops, and other constructs that you can't break from easily. Wrap the code in its own function. Instead of break, use return. Example: def some_function(): if condition_a: # do something and return early ... return ... if condition_b: # do something else and return early ... return ... return if outer_condition: ... some_function() ..."} +{"question_id": 45808140, "score": 166, "creation_date": 1503369840, "tags": ["python", "python-3.x", "progress-bar", "tqdm"], "instruction": "Using tqdm progress bar in a while loop\n\nI am making a code that simulates a pawn going around a monopoly board a million times. I would like to have a tqdm progress bar that is updated every time a turn around the board is achieved. Below is my current code. I am using a while loop which stops when the number of turns around the board surpasses the desired number. import os from openpyxl import Workbook from monopolyfct import * def main(runs, fileOutput): ### EXCEL SETUP ### theWorkbook = Workbook() # Creates the workbook interface. defaultSheet = theWorkbook.active # Creates the used worksheet. currentData = [\"Current Table Turn\", \"Current Tile\"] # Makes EXCEL column titles. defaultSheet.append(currentData) # Appends column titles. ### CONTENT SETUP ### currentData = [1, 0] # Sets starting position. defaultSheet.append(currentData) # Appends starting position.
while currentData[0] <= runs: ### ROLLING THE DICES PROCESS ### dices = twinDiceRoll() currentData[1] += dices[2] # Updating the current tile ### SURPASSING THE NUMBER OF TILES ONBOARD ### if currentData[1] > 37: # If more than a table turn is achieved, currentData[0] += 1 # One more turn is registered currentData[1] -= 38 # Update the tile to one coresponding to a board tile. else: pass ### APPENDING AQUIRED DATA ### defaultSheet.append(currentData) ### MANAGIING SPECIAL TILES ### if currentData[1] == 2 or 15 or 31: # Community chess pass #TODO: Make a mechanic simulating the community chest card draw and it's related action. elif currentData[1] == 5 or 20 or 34: # Chance pass #TODO: Make a mechanic simulating the chance card draw and it's related action. elif currentData[1] == 28: # Go to Jail pass #TODO: Make a mechanic simulating the entire jail process ### TWIN DICE ROLL EXCEPTION ### if dices[3] is True: # If the dices roll a double, pass #TODO: Make a mechanic considering that three doubles sends one to Jail. ### STORING THE ACCUMULATED DATA ### theWorkbook.save(fileOutput) # Compiles the data in a .xlxs file. if __name__ == \"__main__\": terminalWidth = os.get_terminal_size().columns # Gets current terminal width. space(3) print(\"Python Monopoly Statistics Renderer\".upper().center(terminalWidth)) # Prints the title. print(\"(PMSR)\".center(terminalWidth)) # Prints the acronym. space(2) runs = int(request(\"For how many table turns do you want the simulation to run?\")) # Prompts for the desired run ammount #runs = 1000 fileOutput = request(\"What should be the name of the file in which statistics are stored?\") # Prompts for the desired store filename #fileOutput = \"test\" fileOutput += \".xlsx\" # Adds file extension to filename main(runs, fileOutput)", "output": "You can use manual control in tqdm by specifying a total argument in the constructor. Verbatim from the manual: with tqdm(total=100) as pbar: for i in range(10): sleep(0.1) pbar.update(10) UPDATE To manually control the tqdm without the context manager (aka with statement), you will need to close the progress bar after you are done using it. Here is another example from the manual: pbar = tqdm(total=100) for i in range(10): sleep(0.1) pbar.update(10) pbar.close() For this to work you need to know the total number of expected runs. In your code it could look something like ... pbar = tqdm(total = runs+1) while currentData[0] <= runs: ### ROLLING THE DICES PROCESS ### dices = twinDiceRoll() currentData[1] += dices[2] # Updating the current tile ### SURPASSING THE NUMBER OF TILES ONBOARD ### if currentData[1] > 37: # If more than a table turn is achieved, currentData[0] += 1 # One more turn is registered currentData[1] -= 38 # Update the tile to one coresponding to a board tile. pbar.update(1) else: pass ... pbar.close() However, this code isn't perfect: consider if the currentData[1] is always less than 37 -- the progress bar will just stop and not update. If you try to update it in the else:... part, you might violate the total upper bound. This is a start tho :)"} +{"question_id": 14688306, "score": 166, "creation_date": 1359986358, "tags": ["python", "pandas"], "instruction": "Adding meta-information/metadata to pandas DataFrame\n\nIs it possible to add some meta-information/metadata to a pandas DataFrame? For example, the instrument's name used to measure the data, the instrument responsible, etc. 
One workaround would be to create a column with that information, but it seems wasteful to store a single piece of information in every row!", "output": "Sure, like most Python objects, you can attach new attributes to a pandas.DataFrame: import pandas as pd df = pd.DataFrame([]) df.instrument_name = 'Binky' Note, however, that while you can attach attributes to a DataFrame, operations performed on the DataFrame (such as groupby, pivot, join, assign or loc to name just a few) may return a new DataFrame without the metadata attached. Pandas does not yet have a robust method of propagating metadata attached to DataFrames. Preserving the metadata in a file is possible. You can find an example of how to store metadata in an HDF5 file here."} +{"question_id": 3329773, "score": 166, "creation_date": 1280072086, "tags": ["python", "django", "uninstallation", "django-apps", "drop-table"], "instruction": "How to completely uninstall a Django app?\n\nWhat is the procedure for completely uninstalling a Django app, complete with database removal?", "output": "Django < 1.7 has a handy management command that will give you the necessary SQL to drop all the tables for an app. See the sqlclear docs for more information. Basically, running ./manage.py sqlclear my_app_name gets you the SQL statements that should be executed to get rid of all traces of the app in your DB. You still need to copy and paste (or pipe) those statements into your SQL client. For Django 1.7 and up, use ./manage.py migrate my_app_name zero (see the migrate docs), which runs the database cleaning automatically. To remove the app from your project, all you need to do is remove it from INSTALLED_APPS in your project's settings.py. Django will no longer load the app. If you no longer want the app's files hanging around, delete the app directory from your project directory or other location on your PYTHONPATH where it resides. (optional) If the app stored media files, cache files, or other temporary files somewhere, you may want to delete those as well. Also be wary of lingering session data that might be left over from the app. (optional) I would also remove any stale content types. Like so: from django.contrib.contenttypes.models import ContentType for c in ContentType.objects.all(): if not c.model_class(): print \"deleting %s\"%c # print(f\"deleting {c}\") # for Python 3.6+ c.delete()"} +{"question_id": 54028199, "score": 166, "creation_date": 1546541660, "tags": ["python", "pandas", "iteration", "vectorization", "list-comprehension"], "instruction": "Are for-loops in pandas really bad? When should I care?\n\nAre for loops really \"bad\"? If not, in what situation(s) would they be better than using a more conventional \"vectorized\" approach?1 I am familiar with the concept of \"vectorization\", and how pandas employs vectorized techniques to speed up computation. Vectorized functions broadcast operations over the entire series or DataFrame to achieve speedups much greater than conventionally iterating over the data. However, I am quite surprised to see a lot of code (including from answers on Stack Overflow) offering solutions to problems that involve looping through data using for loops and list comprehensions. The documentation and API say that loops are \"bad\", and that one should \"never\" iterate over arrays, series, or DataFrames. So, how come I sometimes see users suggesting loop-based solutions?
1 - While it is true that the question sounds somewhat broad, the truth is that there are very specific situations when for loops are usually better than conventionally iterating over data. This post aims to capture this for posterity.", "output": "TLDR; No, for loops are not blanket \"bad\", at least, not always. It is probably more accurate to say that some vectorized operations are slower than iterating, versus saying that iteration is faster than some vectorized operations. Knowing when and why is key to getting the most performance out of your code. In a nutshell, these are the situations where it is worth considering an alternative to vectorized pandas functions: when your data is small (...depending on what you're doing), when dealing with object/mixed dtypes, and when using the str/regex accessor functions. Let's examine these situations individually. Iteration v/s Vectorization on Small Data Pandas follows a \"Convention Over Configuration\" approach in its API design. This means that the same API has been fitted to cater to a broad range of data and use cases. When a pandas function is called, the following things (among others) must internally be handled by the function to ensure things work: index/axis alignment, handling mixed datatypes, and handling missing data. Almost every function will have to deal with these to varying extents, and this presents an overhead. The overhead is less for numeric functions (for example, Series.add), while it is more pronounced for string functions (for example, Series.str.replace). for loops, on the other hand, are faster than you think. What's even better is that list comprehensions (which create lists through for loops) are faster still, as they are optimized iterative mechanisms for list creation. List comprehensions follow the pattern [f(x) for x in seq] Where seq is a pandas series or DataFrame column. Or, when operating over multiple columns, [f(x, y) for x, y in zip(seq1, seq2)] Where seq1 and seq2 are columns. Numeric Comparison Consider a simple boolean indexing operation. The list comprehension method has been timed against Series.ne (!=) and query. Here are the functions: # Boolean indexing with Numeric value comparison. df[df.A != df.B] # vectorized != df.query('A != B') # query (numexpr) df[[x != y for x, y in zip(df.A, df.B)]] # list comp For simplicity, I have used the perfplot package to run all the timeit tests in this post. The timings for the operations above are below: The list comprehension outperforms query for moderately sized N, and even outperforms the vectorized not equals comparison for tiny N. Unfortunately, the list comprehension scales linearly, so it does not offer much performance gain for larger N. Note It is worth mentioning that much of the benefit of list comprehension comes from not having to worry about the index alignment, but this means that if your code is dependent on indexing alignment, this will break. In some cases, vectorised operations over the underlying NumPy arrays can be considered as bringing in the \"best of both worlds\", allowing for vectorisation without all the unneeded overhead of the pandas functions. This means that you can rewrite the operation above as df[df.A.values != df.B.values] Which outperforms both the pandas and list comprehension equivalents: NumPy vectorization is out of the scope of this post, but it is definitely worth considering if performance matters. Value Counts Taking another example - this time, with another vanilla python construct that is faster than a for loop - collections.Counter.
A common requirement is to compute the value counts and return the result as a dictionary. This is done with value_counts, np.unique, and Counter: # Value Counts comparison. ser.value_counts(sort=False).to_dict() # value_counts dict(zip(*np.unique(ser, return_counts=True))) # np.unique Counter(ser) # Counter The results are more pronounced, Counter wins out over both vectorized methods for a larger range of small N (~3500). Note More trivia (courtesy @user2357112). The Counter is implemented with a C accelerator, so while it still has to work with python objects instead of the underlying C datatypes, it is still faster than a for loop. Python power! Of course, the take away from here is that the performance depends on your data and use case. The point of these examples is to convince you not to rule out these solutions as legitimate options. If these still don't give you the performance you need, there is always cython and numba. Let's add this test into the mix. from numba import njit, prange @njit(parallel=True) def get_mask(x, y): result = [False] * len(x) for i in prange(len(x)): result[i] = x[i] != y[i] return np.array(result) df[get_mask(df.A.values, df.B.values)] # numba Numba offers JIT compilation of loopy python code to very powerful vectorized code. Understanding how to make numba work involves a learning curve. Operations with Mixed/object dtypes String-based Comparison Revisiting the filtering example from the first section, what if the columns being compared are strings? Consider the same 3 functions above, but with the input DataFrame cast to string. # Boolean indexing with string value comparison. df[df.A != df.B] # vectorized != df.query('A != B') # query (numexpr) df[[x != y for x, y in zip(df.A, df.B)]] # list comp So, what changed? The thing to note here is that string operations are inherently difficult to vectorize. Pandas treats strings as objects, and all operations on objects fall back to a slow, loopy implementation. Now, because this loopy implementation is surrounded by all the overhead mentioned above, there is a constant magnitude difference between these solutions, even though they scale the same. When it comes to operations on mutable/complex objects, there is no comparison. List comprehension outperforms all operations involving dicts and lists. Accessing Dictionary Value(s) by Key Here are timings for two operations that extract a value from a column of dictionaries: map and the list comprehension. The setup is in the Appendix, under the heading \"Code Snippets\". # Dictionary value extraction. ser.map(operator.itemgetter('value')) # map pd.Series([x.get('value') for x in ser]) # list comprehension Positional List Indexing Timings for 3 operations that extract the 0th element from a list of columns (handling exceptions), map, str.get accessor method, and the list comprehension: # List positional indexing. def get_0th(lst): try: return lst[0] # Handle empty lists and NaNs gracefully. except (IndexError, TypeError): return np.nan ser.map(get_0th) # map ser.str[0] # str accessor pd.Series([x[0] if len(x) > 0 else np.nan for x in ser]) # list comp pd.Series([get_0th(x) for x in ser]) # list comp safe Note If the index matters, you would want to do: pd.Series([...], index=ser.index) When reconstructing the series. List Flattening A final example is flattening lists. This is another common problem, and demonstrates just how powerful pure python is here. # Nested list flattening. 
pd.DataFrame(ser.tolist()).stack().reset_index(drop=True) # stack pd.Series(list(chain.from_iterable(ser.tolist()))) # itertools.chain pd.Series([y for x in ser for y in x]) # nested list comp Both itertools.chain.from_iterable and the nested list comprehension are pure python constructs, and scale much better than the stack solution. These timings are a strong indication of the fact that pandas is not equipped to work with mixed dtypes, and that you should probably refrain from using it to do so. Wherever possible, data should be present as scalar values (ints/floats/strings) in separate columns. Lastly, the applicability of these solutions depends largely on your data. So, the best thing to do would be to test these operations on your data before deciding what to go with. Notice how I have not timed apply on these solutions, because it would skew the graph (yes, it's that slow). Regex Operations, and .str Accessor Methods Pandas can apply regex operations such as str.contains, str.extract, and str.extractall, as well as other \"vectorized\" string operations (such as str.split, str.find, str.translate, and so on) on string columns. These functions are slower than list comprehensions, and are meant to be more convenience functions than anything else. It is usually much faster to pre-compile a regex pattern and iterate over your data with re.compile (also see Is it worth using Python's re.compile?). The list comp equivalent to str.contains looks something like this: p = re.compile(...) ser2 = pd.Series([x for x in ser if p.search(x)]) Or, ser2 = ser[[bool(p.search(x)) for x in ser]] If you need to handle NaNs, you can do something like ser[[bool(p.search(x)) if pd.notnull(x) else False for x in ser]] The list comp equivalent to str.extract (without groups) will look something like: df['col2'] = [p.search(x).group(0) for x in df['col']] If you need to handle no-matches and NaNs, you can use a custom function (still faster!): def matcher(x): m = p.search(str(x)) if m: return m.group(0) return np.nan df['col2'] = [matcher(x) for x in df['col']] The matcher function is very extensible. It can be fitted to return a list for each capture group, as needed. Just query the group or groups attribute of the match object. For str.extractall, change p.search to p.findall. String Extraction Consider a simple filtering operation. The idea is to extract 4 digits if they are preceded by an upper case letter. # Extracting strings. p = re.compile(r'(?<=[A-Z])(\\d{4})') def matcher(x): m = p.search(x) if m: return m.group(0) return np.nan ser.str.extract(r'(?<=[A-Z])(\\d{4})', expand=False) # str.extract pd.Series([matcher(x) for x in ser]) # list comprehension More Examples Full disclosure - I am the author (in part or whole) of these posts listed below. Fast punctuation removal with pandas String concatenation of two pandas columns Remove unwanted parts from strings in a column Replace all but the last occurrence of a character in a dataframe Conclusion As shown from the examples above, iteration shines when working with small DataFrames, mixed datatypes, and regular expressions. The speedup you get depends on your data and your problem, so your mileage may vary. The best thing to do is to carefully run tests and see if the payout is worth the effort. The \"vectorized\" functions shine in their simplicity and readability, so if performance is not critical, you should definitely prefer those. Another side note: certain string operations deal with constraints that favour the use of NumPy.
Here are two examples where careful NumPy vectorization outperforms python: Create new column with incremental values in a faster and efficient way - Answer by Divakar Fast punctuation removal with pandas - Answer by Paul Panzer Additionally, sometimes just operating on the underlying arrays via .values as opposed to on the Series or DataFrames can offer a healthy enough speedup for most usual scenarios (see the Note in the Numeric Comparison section above). So, for example df[df.A.values != df.B.values] would show instant performance boosts over df[df.A != df.B]. Using .values may not be appropriate in every situation, but it is a useful hack to know. As mentioned above, it's up to you to decide whether these solutions are worth the trouble of implementing. Appendix: Code Snippets import perfplot import operator import pandas as pd import numpy as np import re from collections import Counter from itertools import chain # Boolean indexing with Numeric value comparison. perfplot.show( setup=lambda n: pd.DataFrame(np.random.choice(1000, (n, 2)), columns=['A','B']), kernels=[ lambda df: df[df.A != df.B], lambda df: df.query('A != B'), lambda df: df[[x != y for x, y in zip(df.A, df.B)]], lambda df: df[get_mask(df.A.values, df.B.values)] ], labels=['vectorized !=', 'query (numexpr)', 'list comp', 'numba'], n_range=[2**k for k in range(0, 15)], xlabel='N' ) # Value Counts comparison. perfplot.show( setup=lambda n: pd.Series(np.random.choice(1000, n)), kernels=[ lambda ser: ser.value_counts(sort=False).to_dict(), lambda ser: dict(zip(*np.unique(ser, return_counts=True))), lambda ser: Counter(ser), ], labels=['value_counts', 'np.unique', 'Counter'], n_range=[2**k for k in range(0, 15)], xlabel='N', equality_check=lambda x, y: dict(x) == dict(y) ) # Boolean indexing with string value comparison. perfplot.show( setup=lambda n: pd.DataFrame(np.random.choice(1000, (n, 2)), columns=['A','B'], dtype=str), kernels=[ lambda df: df[df.A != df.B], lambda df: df.query('A != B'), lambda df: df[[x != y for x, y in zip(df.A, df.B)]], ], labels=['vectorized !=', 'query (numexpr)', 'list comp'], n_range=[2**k for k in range(0, 15)], xlabel='N', equality_check=None ) # Dictionary value extraction. ser1 = pd.Series([{'key': 'abc', 'value': 123}, {'key': 'xyz', 'value': 456}]) perfplot.show( setup=lambda n: pd.concat([ser1] * n, ignore_index=True), kernels=[ lambda ser: ser.map(operator.itemgetter('value')), lambda ser: pd.Series([x.get('value') for x in ser]), ], labels=['map', 'list comprehension'], n_range=[2**k for k in range(0, 15)], xlabel='N', equality_check=None ) # List positional indexing. ser2 = pd.Series([['a', 'b', 'c'], [1, 2], []]) perfplot.show( setup=lambda n: pd.concat([ser2] * n, ignore_index=True), kernels=[ lambda ser: ser.map(get_0th), lambda ser: ser.str[0], lambda ser: pd.Series([x[0] if len(x) > 0 else np.nan for x in ser]), lambda ser: pd.Series([get_0th(x) for x in ser]), ], labels=['map', 'str accessor', 'list comprehension', 'list comp safe'], n_range=[2**k for k in range(0, 15)], xlabel='N', equality_check=None ) # Nested list flattening. 
ser3 = pd.Series([['a', 'b', 'c'], ['d', 'e'], ['f', 'g']]) perfplot.show( setup=lambda n: pd.concat([ser3] * n, ignore_index=True), kernels=[ lambda ser: pd.DataFrame(ser.tolist()).stack().reset_index(drop=True), lambda ser: pd.Series(list(chain.from_iterable(ser.tolist()))), lambda ser: pd.Series([y for x in ser for y in x]), ], labels=['stack', 'itertools.chain', 'nested list comp'], n_range=[2**k for k in range(0, 15)], xlabel='N', equality_check=None ) # Extracting strings. ser4 = pd.Series(['foo xyz', 'test A1234', 'D3345 xtz']) perfplot.show( setup=lambda n: pd.concat([ser4] * n, ignore_index=True), kernels=[ lambda ser: ser.str.extract(r'(?<=[A-Z])(\\d{4})', expand=False), lambda ser: pd.Series([matcher(x) for x in ser]) ], labels=['str.extract', 'list comprehension'], n_range=[2**k for k in range(0, 15)], xlabel='N', equality_check=None )"} +{"question_id": 55084977, "score": 166, "creation_date": 1552198287, "tags": ["python", "python-3.x", "python-import", "importerror"], "instruction": "Attempted relative import with no known parent package\n\nfrom ..box_utils import decode, nms This line is giving ImportError: attempted relative import with no known parent package What is this error and how to resolve it?", "output": "Apparently, box_utils.py isn't part of a package. You still can import functions defined in this file, but only if the python script that tries to import these functions lives in the same directory as box_utils.py; see this answer. Nota bene: In my case, I stumbled upon this error with an import statement with one period, like this: from .foo import foo. This syntax, however, tells Python that foo.py is part of a package, which wasn't the case. The error disappeared when I removed the period."} +{"question_id": 7762948, "score": 165, "creation_date": 1318565587, "tags": ["python", "image", "opencv", "numpy"], "instruction": "how to convert an RGB image to numpy array?\n\nI have an RGB image. I want to convert it to numpy array. I did the following im = cv.LoadImage(\"abc.tiff\") a = numpy.asarray(im) It creates an array with no shape. I assume it is a iplimage object.", "output": "You can use the newer OpenCV Python interface (if I'm not mistaken it is available since OpenCV 2.2). It natively uses numpy arrays: import cv2 im = cv2.imread(\"abc.tiff\") # cv2.imread returns the pixels as a numpy array in BGR channel order im = cv2.cvtColor(im, cv2.COLOR_BGR2RGB) # convert to RGB ordering if that is what you need print(type(im)) result: <class 'numpy.ndarray'>"} +{"question_id": 64807163, "score": 165, "creation_date": 1605196086, "tags": ["python", "django", "importerror", "python-module", "circular-dependency"], "instruction": "ImportError: cannot import name '...' from partially initialized module '...'
(most likely due to a circular import)\n\nI'm upgrading an application from Django 1.11.25 (Python 2.6) to Django 3.1.3 (Python 3.8.5) and, when I run manage.py makemigrations, I receive this message: File \"/home/eduardo/projdevs/upgrade-intra/corporate/models/section.py\", line 9, in <module> from authentication.models import get_sentinel** ImportError: cannot import name 'get_sentinel' from partially initialized module 'authentication.models' (most likely due to a circular import) (/home/eduardo/projdevs/upgrade-intra/authentication/models.py)** My models are: authentication / models.py from django.conf import settings from django.contrib.auth.models import AbstractUser, UserManager from django.db import models from django.db.models.signals import post_save from django.utils import timezone from corporate.constants import GROUP_SUPPORT from corporate.models import Phone, Room, Section from library.exceptions import ErrorMessage from library.model import update_through_dict from .constants import INTERNAL_USER, EXTERNAL_USER, SENTINEL_USERNAME, SPECIAL_USER, USER_TYPES_DICT class UserProfile(models.Model): user = models.OneToOneField( 'User', on_delete=models.CASCADE, unique=True, db_index=True ) ... phone = models.ForeignKey('corporate.Phone', on_delete=models.SET_NULL, ...) room = models.ForeignKey('corporate.Room', on_delete=models.SET_NULL, ...) section = models.ForeignKey('corporate.Section', on_delete=models.SET_NULL, ...) objects = models.Manager() ... class CustomUserManager(UserManager): def __init__(self, type=None): super(CustomUserManager, self).__init__() self.type = type def get_queryset(self): qs = super(CustomUserManager, self).get_queryset() if self.type: qs = qs.filter(type=self.type).order_by('first_name', 'last_name') return qs def get_this_types(self, types): qs = super(CustomUserManager, self).get_queryset() qs = qs.filter(type__in=types).order_by('first_name', 'last_name') return qs def get_all_excluding(self, types): qs = super(CustomUserManager, self).get_queryset() qs = qs.filter(~models.Q(type__in=types)).order_by('first_name', 'last_name') return qs class User(AbstractUser): type = models.PositiveIntegerField('...', default=SPECIAL_USER) username = models.CharField('...', max_length=256, unique=True) first_name = models.CharField('...', max_length=40, blank=True) last_name = models.CharField('...', max_length=80, blank=True) date_joined = models.DateTimeField('...', default=timezone.now) previous_login = models.DateTimeField('...', default=timezone.now) objects = CustomUserManager() ... def get_profile(self): if self.type == INTERNAL_USER: ... return None def get_or_create_profile(self): profile = self.get_profile() if not profile and self.type == INTERNAL_USER: ... return profile def update(self, changes): ... class ExternalUserProxy(User): objects = CustomUserManager(type=EXTERNAL_USER) class Meta: proxy = True verbose_name = '...' verbose_name_plural = '...' class InternalUserProxy(User): objects = CustomUserManager(type=INTERNAL_USER) class Meta: proxy = True verbose_name = '...' verbose_name_plural = '...' 
def create_profile(sender, instance, created, **kwargs): if created and instance.type == INTERNAL_USER: try: profile = UserProfile() profile.user = instance profile.save() except: pass post_save.connect(create_profile, sender=User) def get_sentinel(): try: sentinel = User.objects.get(username__exact=SENTINEL_USERNAME) except User.DoesNotExist: settings.LOGGER.error(\"...\") from django.contrib.auth.models import Group sentinel = User() sentinel.username = SENTINEL_USERNAME sentinel.first_name = \"...\" sentinel.last_name = \"...\" sentinel.set_unusable_password() sentinel.save() technical = Group.objects.get(name=GROUP_SUPPORT) sentinel = User.objects.get(username__exact=SENTINEL_USERNAME) sentinel.groups.add(technical) sentinel.save() return sentinel corporate / models / __init__.py ... from .section import Section ... corporate / models / section.py from django.conf import settings from authentication.models import get_sentinel from .room import Room class Section(models.Model): ... boss = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.SET(get_sentinel), ...) surrogate = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.SET(get_sentinel), ...) room = models.ForeignKey(Room, on_delete=models.SET_NULL, ...) is_subordinate_to = models.ForeignKey('self', on_delete=models.SET_NULL, ...) ... What am I doing wrong?", "output": "You have a circular import. authentication/models imports corporate/models, which imports corporate/models/section, which imports authentication/models. You can't do that. Rewrite and/or rearrange your modules so that circular imports aren't needed. One strategy to do this is to organize your modules into a hierarchy, and make sure that a module only imports other modules that are lower in the hierarchy. (This hierarchy can be an actual directory structure, but it doesn't have to be; it can just be a mental note in the programmer's mind.)"} +{"question_id": 25648393, "score": 165, "creation_date": 1409758596, "tags": ["python", "mysql", "database", "django", "schema-migration"], "instruction": "How to move a model between two Django apps (Django 1.7)\n\nSo about a year ago I started a project and like all new developers I didn't really focus too much on the structure, however now I am further along with Django it has started to appear that my project layout mainly my models are horrible in structure. I have models mainly held in a single app and really most of these models should be in their own individual apps, I did try and resolve this and move them with south however I found it tricky and really difficult due to foreign keys ect. However due to Django 1.7 and built in support for migrations is there a better way to do this now?", "output": "I am removing the old answer, as it may result in data loss. As ozan mentioned, we can create 2 migrations, one in each app. The comments below this post refer to my old answer. First migration to remove the model from the 1st app. $ python manage.py makemigrations old_app --empty Edit the migration file to include these operations. class Migration(migrations.Migration): database_operations = [migrations.AlterModelTable('TheModel', 'newapp_themodel')] state_operations = [migrations.DeleteModel('TheModel')] operations = [ migrations.SeparateDatabaseAndState( database_operations=database_operations, state_operations=state_operations) ] Second migration, which depends on the first migration and creates the new table in the 2nd app.
After moving the model code to the 2nd app, run $ python manage.py makemigrations new_app and edit the migration file to something like this. class Migration(migrations.Migration): dependencies = [ ('old_app', 'above_migration') ] state_operations = [ migrations.CreateModel( name='TheModel', fields=[ ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)), ], options={ 'db_table': 'newapp_themodel', }, bases=(models.Model,), ) ] operations = [ migrations.SeparateDatabaseAndState(state_operations=state_operations) ]"} +{"question_id": 4703390, "score": 165, "creation_date": 1295143899, "tags": ["python", "regex", "floating-point", "data-extraction"], "instruction": "How to extract a floating number from a string\n\nI have a number of strings similar to Current Level: 13.4 db. and I would like to extract just the floating point number. I say floating and not decimal as it's sometimes whole. Can RegEx do this or is there a better way?", "output": "If your float is always expressed in decimal notation, something like >>> import re >>> re.findall(\"\\d+\\.\\d+\", \"Current Level: 13.4db.\") ['13.4'] may suffice. A more robust version would be: >>> re.findall(r\"[-+]?(?:\\d*\\.*\\d+)\", \"Current Level: -13.2db or 14.2 or 3\") ['-13.2', '14.2', '3'] If you want to validate user input, you could alternatively also check for a float by stepping to it directly: user_input = \"Current Level: 1e100 db\" for token in user_input.split(): try: # if this succeeds, you have your (first) float print(float(token), \"is a float\") except ValueError: print(token, \"is something else\") # => Would print ... # # Current is something else # Level: is something else # 1e+100 is a float # db is something else"} +{"question_id": 11904981, "score": 165, "creation_date": 1344613151, "tags": ["python", "python-3.x"], "instruction": "Local (?) variable referenced before assignment\n\ntest1 = 0 def test_func(): test1 += 1 test_func() I am receiving the following error: UnboundLocalError: local variable 'test1' referenced before assignment. Error says that 'test1' is local variable but i thought that this variable is global So is it global or local and how to solve this error without passing global test1 as argument to test_func?", "output": "In order for you to modify test1 while inside a function, you will need to define test1 as a global variable, for example: test1 = 0 def test_func(): global test1 test1 += 1 test_func() However, if you only need to read the global variable, you can print it without using the keyword global, like so: test1 = 0 def test_func(): print(test1) test_func() But whenever you need to modify a global variable you must use the keyword global."} +{"question_id": 7456807, "score": 165, "creation_date": 1316282869, "tags": ["python", "naming-conventions"], "instruction": "Should I use name mangling in Python?\n\nIn other languages, a general guideline that helps produce better code is always make everything as hidden as possible. If in doubt about whether a variable should be private or protected, it's better to go with private. Does the same hold true for Python? Should I use two leading underscores on everything at first, and only make them less hidden (only one underscore) as I need them? If the convention is to use only one underscore, I'd also like to know the rationale. Here's a comment I left on JBernardo's answer.
It explains why I asked this question and also why I'd like to know why Python is different from the other languages: I come from languages that train you to think everything should be only as public as needed and no more. The reasoning is that this will reduce dependencies and make the code safer to alter. The Python way of doing things in reverse -- starting from public and going towards hidden -- is odd to me.", "output": "When in doubt, leave it \"public\" - I mean, do not add anything to obscure the name of your attribute. If you have a class with some internal value, do not bother about it. Instead of writing: class Stack(object): def __init__(self): self.__storage = [] # Too uptight def push(self, value): self.__storage.append(value) write this by default: class Stack(object): def __init__(self): self.storage = [] # No mangling def push(self, value): self.storage.append(value) This is for sure a controversial way of doing things. Python newbies hate it, and even some old Python guys despise this default - but it is the default anyway, so I recommend you to follow it, even if you feel uncomfortable. If you really want to send the message \"Can't touch this!\" to your users, the usual way is to precede the variable with one underscore. This is just a convention, but people understand it and take double care when dealing with such stuff: class Stack(object): def __init__(self): self._storage = [] # This is ok, but Pythonistas use it to be relaxed about it def push(self, value): self._storage.append(value) This can be useful, too, for avoiding conflict between property names and attribute names: class Person(object): def __init__(self, name, age): self.name = name self._age = age if age >= 0 else 0 @property def age(self): return self._age @age.setter def age(self, age): if age >= 0: self._age = age else: self._age = 0 What about the double underscore? Well, we use the double underscore magic mainly to avoid accidental overloading of methods and name conflicts with superclasses' attributes. It can be pretty valuable if you write a class to be extended many times. If you want to use it for other purposes, you can, but it is neither usual nor recommended. EDIT: Why is this so? Well, the usual Python style does not emphasize making things private - on the contrary! There are many reasons for that - most of them controversial... Let us see some of them. Python has properties Today, most OO languages use the opposite approach: what should not be used should not be visible, so attributes should be private. Theoretically, this would yield more manageable, less coupled classes because no one would change the objects' values recklessly. However, it is not so simple. For example, Java classes have many getters that only get the values and setters that only set the values. You need, let us say, seven lines of code to declare a single attribute - which a Python programmer would say is needlessly complex. Also, you write a lot of code to get one public field since you can change its value using the getters and setters in practice. So why follow this private-by-default policy? Just make your attributes public by default. 
Of course, this is problematic in Java because if you decide to add some validation to your attribute, it would require you to change all: person.age = age; in your code to, let us say, person.setAge(age); setAge() being: public void setAge(int age) { if (age >= 0) { this.age = age; } else { this.age = 0; } } So in Java (and other languages), the default is to use getters and setters anyway because they can be annoying to write but can spare you much time if you find yourself in the situation I've described. However, you do not need to do it in Python since Python has properties. If you have this class: class Person(object): def __init__(self, name, age): self.name = name self.age = age ...and then you decide to validate ages, you do not need to change the person.age = age pieces of your code. Just add a property (as shown below) class Person(object): def __init__(self, name, age): self.name = name self._age = age if age >= 0 else 0 @property def age(self): return self._age @age.setter def age(self, age): if age >= 0: self._age = age else: self._age = 0 Suppose you can do it and still use person.age = age, why would you add private fields and getters and setters? (Also, see Python is not Java and this article about the harms of using getters and setters.). Everything is visible anyway - and trying to hide complicates your work Even in languages with private attributes, you can access them through some reflection/introspection library. And people do it a lot, in frameworks and for solving urgent needs. The problem is that introspection libraries are just a complicated way of doing what you could do with public attributes. Since Python is a very dynamic language, adding this burden to your classes is counterproductive. The problem is not being possible to see - it is being required to see For a Pythonista, encapsulation is not the inability to see the internals of classes but the possibility of avoiding looking at it. Encapsulation is the property of a component that the user can use without concerning about the internal details. If you can use a component without bothering yourself about its implementation, then it is encapsulated (in the opinion of a Python programmer). Now, if you wrote a class you can use it without thinking about implementation details, there is no problem if you want to look inside the class for some reason. The point is: your API should be good, and the rest is details. Guido said so Well, this is not controversial: he said so, actually. (Look for \"open kimono.\") This is culture Yes, there are some reasons, but no critical reason. This is primarily a cultural aspect of programming in Python. Frankly, it could be the other way, too - but it is not. Also, you could just as easily ask the other way around: why do some languages use private attributes by default? For the same main reason as for the Python practice: because it is the culture of these languages, and each choice has advantages and disadvantages. Since there already is this culture, you are well-advised to follow it. Otherwise, you will get annoyed by Python programmers telling you to remove the __ from your code when you ask a question in Stack Overflow :)"} +{"question_id": 30483246, "score": 165, "creation_date": 1432731593, "tags": ["python", "python-import"], "instruction": "How can I check if a module has been imported?\n\nHow do I check if I imported a module somewhere in the code? 
if not has_imported(\"somemodule\"): print('you have not imported somemodule') The reason that I would like to check if I already imported a module is because I have a module that I don't want to import, because sometimes it messes up my program.", "output": "Test for the module name in the sys.modules dictionary: import sys modulename = 'datetime' if modulename not in sys.modules: print 'You have not imported the {} module'.format(modulename) From the documenation: This is a dictionary that maps module names to modules which have already been loaded. Note that an import statement does two things: if the module has never been imported before (== not present in sys.modules), then it is loaded and added to sys.modules. Bind 1 or more names in the current namespace that reference the module object or to objects that are members of the module namespace. The expression modulename not in sys.modules tests if step 1 has taken place. Testing for the result of step 2 requires knowing what exact import statement was used as they set different names to reference different objects: import modulename sets modulename = sys.modules['modulename'] import packagename.nestedmodule sets packagename = sys.modules['packagename'] (no matter how many addional levels you add) import modulename as altname sets altname = sys.module['modulename'] import packagename.nestedmodule as altname sets altname = sys.modules['packagename.nestedmodule'] from somemodule import objectname sets objectname = sys.modules['somemodule'].objectname from packagename import nestedmodulename sets nestedmodulename = sys.modules['packagename.nestedmodulename'] (only when there was no object named nestedmodulename in the packagename namespace before this import, an additional name for the nested module is added to the parent package namespace at this point) from somemodule import objectname as altname sets altname = sys.modules['somemodule'].objectname from packagename import nestedmodulename as altname sets altname = sys.modules['packagename.nestedmodulename'] (only when there was no object named nestedmodulename in the packagename namespace before this import, an additional name for the nested module is added to the parent package namespace at this point) You can test if the name to which the imported object was bound exists in a given namespace: # is this name visible in the current scope: 'importedname' in dir() # or, is this a name in the globals of the current module: 'importedname' in globals() # or, does the name exist in the namespace of another module: 'importedname' in globals(sys.modules['somemodule']) This only tells you of the name exists (has been bound), not if it refers to a specific module or object from that module. You could further introspect that object or test if it\u2019s the same object as what\u2019s available in sys.modules, if you need to rule out that the name has been set to something else entirely since then."} +{"question_id": 1984104, "score": 165, "creation_date": 1262238607, "tags": ["python", "self"], "instruction": "How to avoid explicit 'self' in Python?\n\nI have been learning Python by following some pygame tutorials. Therein I found extensive use of the keyword self, and coming from a primarily Java background, I find that I keep forgetting to type self. For example, instead of self.rect.centerx I would type rect.centerx, because, to me, rect is already a member variable of the class. The Java parallel I can think of for this situation is having to prefix all references to member variables with this. 
Am I stuck prefixing all member variables with self, or is there a way to declare them that would allow me to avoid having to do so? Even if what I am suggesting isn't pythonic, I'd still like to know if it is possible. I have taken a look at these related SO questions, but they don't quite answer what I am after: Python - why use \u201cself\u201d in a class? Why do you need explicitly have the \u201cself\u201d argument into a Python method?", "output": "In Java terms: Python doesn't have member functions, all class functions are static, and are called with a reference to the actual class instance as the first argument when invoked as a member function. This means that when your code has a class MyClass and you build an instance m = MyClass(), calling m.do_something() will be executed as MyClass.do_something(m). Also note that this first argument can technically be called anything you want, but the convention is to use self, and you should stick to that convention if you want others (including your future self) to be able to easily read your code. The result is there's never any confusion over what's a member and what's not, even without the full class definition visible. This leads to useful properties, such as: you can't add members which accidentally shadow non-members and thereby break code. One extreme example: you can write a class without any knowledge of what base classes it might have, and always know whether you are accessing a member or not: class A(some_function()): def f(self): self.member = 42 self.method() That's the complete code! (some_function returns the type used as a base.) Another, where the methods of a class are dynamically composed: class B(object): pass print B() # <__main__.B object at 0xb7e4082c> def B_init(self): self.answer = 42 def B_str(self): return \"<The answer is %s.>\" % self.answer # notice these functions require no knowledge of the actual class # how hard are they to read and realize that \"members\" are used? B.__init__ = B_init B.__str__ = B_str print B() # <The answer is 42.> Remember, both of these examples are extreme and you won't see them every day, nor am I suggesting you should often write code like this, but they do clearly show aspects of self being explicitly required."} +{"question_id": 14452824, "score": 165, "creation_date": 1358836228, "tags": ["python", "save", "python-imaging-library"], "instruction": "How can I save an image with PIL?\n\nI have just done some image processing using the Python image library (PIL) using a post I found earlier to perform fourier transforms of images and I can't get the save function to work. The whole code works fine but it just won't save the resulting image: from PIL import Image import numpy as np i = Image.open(\"C:/Users/User/Desktop/mesh.bmp\") i = i.convert(\"L\") a = np.asarray(i) b = np.abs(np.fft.rfft2(a)) j = Image.fromarray(b) j.save(\"C:/Users/User/Desktop/mesh_trans\",\".bmp\") The error I get is the following: save_handler = SAVE[string.upper(format)] # unknown format KeyError: '.BMP' How can I save an image with Python's PIL?", "output": "The error regarding the file extension has been handled: you either use BMP (without the dot) or pass the output name with the extension already. Now, to handle the error, you need to properly modify your data in the frequency domain so it can be saved as an integer image; PIL is telling you that it doesn't accept float data to save as BMP.
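As a minimal sketch of that conversion (reusing b and np from the question's code; assumed here just to get something BMP-compatible, not a nicely scaled spectrum), rescale the float magnitudes into 0-255 and cast to 8-bit before building the image: j = Image.fromarray((b / b.max() * 255).astype(np.uint8)) j.save(\"C:/Users/User/Desktop/mesh_trans.bmp\")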
Here is a suggestion (with other minor modifications, like using fftshift and numpy.array instead of numpy.asarray) for doing the conversion for proper visualization: import sys import numpy from PIL import Image img = Image.open(sys.argv[1]).convert('L') im = numpy.array(img) fft_mag = numpy.abs(numpy.fft.fftshift(numpy.fft.fft2(im))) visual = numpy.log(fft_mag) visual = (visual - visual.min()) / (visual.max() - visual.min()) result = Image.fromarray((visual * 255).astype(numpy.uint8)) result.save('out.bmp')"} +{"question_id": 101128, "score": 164, "creation_date": 1221822567, "tags": ["python", "windows", "interop", "clipboard"], "instruction": "How do I read text from the Windows clipboard in Python?\n\nHow do I read text from the (windows) clipboard with python?", "output": "You can use the module called win32clipboard, which is part of pywin32. Here is an example that first sets the clipboard data then gets it: import win32clipboard # set clipboard data win32clipboard.OpenClipboard() win32clipboard.EmptyClipboard() win32clipboard.SetClipboardText('testing 123') win32clipboard.CloseClipboard() # get clipboard data win32clipboard.OpenClipboard() data = win32clipboard.GetClipboardData() win32clipboard.CloseClipboard() print data An important reminder from the documentation: When the window has finished examining or changing the clipboard, close the clipboard by calling CloseClipboard. This enables other windows to access the clipboard. Do not place an object on the clipboard after calling CloseClipboard."} +{"question_id": 18948491, "score": 164, "creation_date": 1379882108, "tags": ["python", "vim"], "instruction": "Running Python code in Vim\n\nI am writing Python code using Vim, and every time I want to run my code, I type this inside Vim: :w !python This gets frustrating, so I was looking for a quicker method to run Python code inside Vim. Executing Python scripts from a terminal maybe? I am using Linux.", "output": "How about adding an autocmd to your ~/.vimrc-file, creating a mapping: autocmd FileType python map <buffer> <F9> :w<CR>:exec '!python3' shellescape(@%, 1)<CR> autocmd FileType python imap <buffer> <F9> <esc>:w<CR>:exec '!python3' shellescape(@%, 1)<CR> then you could press <F9> to execute the current buffer with python Explanation: autocmd: command that Vim will execute automatically on {event} (here: if you open a python file) [i]map: creates a keyboard shortcut to <F9> in insert/normal mode <buffer>: If multiple buffers/files are open: just use the active one <esc>: leaving insert mode :w<CR>: saves your file !: runs the following command in your shell (try :!ls) %: is replaced by the filename of your active buffer. But since it can contain things like whitespace and other \"bad\" stuff it is better practise not to write :python %, but use: shellescape: escape the special characters. The 1 means with a backslash TL;DR: The first line will work in normal mode and once you press <F9> it first saves your file and then run the file with python. The second does the same thing, but leaves insert mode first"} +{"question_id": 17558552, "score": 164, "creation_date": 1373405195, "tags": ["python", "logging", "python-logging"], "instruction": "How do I add custom field to Python log format string?\n\nMy current format string is: formatter = logging.Formatter('%(asctime)s : %(message)s') and I want to add a new field called app_name which will have a different value in each script that contains this formatter. 
import logging formatter = logging.Formatter('%(asctime)s %(app_name)s : %(message)s') syslog.setFormatter(formatter) logger.addHandler(syslog) But I'm not sure how to pass that app_name value to the logger to interpolate into the format string. I can obviously get it to appear in the log message by passing it each time but this is messy. I've tried: logging.info('Log message', app_name='myapp') logging.info('Log message', {'app_name', 'myapp'}) logging.info('Log message', 'myapp') but none work.", "output": "Python 3 As of Python 3.2 you can now use LogRecordFactory: import logging logging.basicConfig(format=\"%(custom_attribute)s - %(message)s\") old_factory = logging.getLogRecordFactory() def record_factory(*args, **kwargs): record = old_factory(*args, **kwargs) record.custom_attribute = \"my-attr\" return record logging.setLogRecordFactory(record_factory) >>> logging.info(\"hello\") my-attr - hello Of course, record_factory can be customized to be any callable and the value of custom_attribute could be updated if you keep a reference to the factory callable. Why is that better than using Adapters / Filters? You do not need to pass your logger around the application. It actually works with 3rd party libraries that use their own logger (obtained by just calling logger = logging.getLogger(...)): their records now have the same log format. (This is not the case with Filters / Adapters, where you need to be using the same logger object.) You can stack/chain multiple factories."} +{"question_id": 4913349, "score": 164, "creation_date": 1296996105, "tags": ["python", "gps", "distance", "haversine", "bearing"], "instruction": "Haversine formula in Python (bearing and distance between two GPS points)\n\nProblem I would like to know how to get the distance and bearing between two GPS points. I have researched the haversine distance. Someone told me that I could also find the bearing using the same data. Everything is working fine, but the bearing doesn't quite work right yet. The bearing comes out negative, but it should be between 0 and 360 degrees.
The set data should make the horizontal bearing 96.02166666666666 and is: Start point: 53.32055555555556, -1.7297222222222221 Bearing: 96.02166666666666 Distance: 2 km Destination point: 53.31861111111111, -1.6997222222222223 Final bearing: 96.04555555555555 Here is my new code: from math import * Aaltitude = 2000 Oppsite = 20000 lat1 = 53.32055555555556 lat2 = 53.31861111111111 lon1 = -1.7297222222222221 lon2 = -1.6997222222222223 lon1, lat1, lon2, lat2 = map(radians, [lon1, lat1, lon2, lat2]) dlon = lon2 - lon1 dlat = lat2 - lat1 a = sin(dlat/2)**2 + cos(lat1) * cos(lat2) * sin(dlon/2)**2 c = 2 * atan2(sqrt(a), sqrt(1-a)) Base = 6371 * c Bearing = atan2(cos(lat1)*sin(lat2)-sin(lat1)*cos(lat2)*cos(lon2-lon1), sin(lon2-lon1)*cos(lat2)) Bearing = degrees(Bearing) print \"\" print \"\" print \"--------------------\" print \"Horizontal Distance: \" print Base print \"--------------------\" print \"Bearing: \" print Bearing print \"--------------------\" Base2 = Base * 1000 distance = Base * 2 + Oppsite * 2 / 2 Caltitude = Oppsite - Aaltitude a = Oppsite/Base b = atan(a) c = degrees(b) distance = distance / 1000 print \"The degree of vertical angle is: \" print c print \"--------------------\" print \"The distance between the Balloon GPS and the Antenna GPS is: \" print distance print \"--------------------\"", "output": "Here's a Python version: from math import radians, cos, sin, asin, sqrt def haversine(lon1, lat1, lon2, lat2): \"\"\" Calculate the great circle distance in kilometers between two points on the earth (specified in decimal degrees) \"\"\" # convert decimal degrees to radians lon1, lat1, lon2, lat2 = map(radians, [lon1, lat1, lon2, lat2]) # haversine formula dlon = lon2 - lon1 dlat = lat2 - lat1 a = sin(dlat/2)**2 + cos(lat1) * cos(lat2) * sin(dlon/2)**2 c = 2 * asin(sqrt(a)) r = 6371 # Radius of earth in kilometers. Use 3956 for miles. Determines return value units. return c * r"} +{"question_id": 8922060, "score": 164, "creation_date": 1326955518, "tags": ["python", "algorithm", "graph", "breadth-first-search"], "instruction": "How to trace the path in a Breadth-First Search?\n\nHow do you trace the path of a Breadth-First Search, such that in the following example: If searching for key 11, return the shortest list connecting 1 to 11. [1, 4, 7, 11]", "output": "You should have a look at http://en.wikipedia.org/wiki/Breadth-first_search first. Below is a quick implementation, in which I used a list of lists to represent the queue of paths. # graph is in adjacency list representation graph = { '1': ['2', '3', '4'], '2': ['5', '6'], '5': ['9', '10'], '4': ['7', '8'], '7': ['11', '12'] } def bfs(graph, start, end): # maintain a queue of paths queue = [] # push the first path into the queue queue.append([start]) while queue: # get the first path from the queue path = queue.pop(0) # get the last node from the path node = path[-1] # path found if node == end: return path # enumerate all adjacent nodes, construct a # new path and push it into the queue for adjacent in graph.get(node, []): new_path = list(path) new_path.append(adjacent) queue.append(new_path) print bfs(graph, '1', '11') This prints: ['1', '4', '7', '11'] Another approach would be maintaining a mapping from each node to its parent, and when inspecting the adjacent node, record its parent. When the search is done, simply backtrace according to the parent mapping.
graph = { '1': ['2', '3', '4'], '2': ['5', '6'], '5': ['9', '10'], '4': ['7', '8'], '7': ['11', '12'] } def backtrace(parent, start, end): path = [end] while path[-1] != start: path.append(parent[path[-1]]) path.reverse() return path def bfs(graph, start, end): parent = {} queue = [] queue.append(start) while queue: node = queue.pop(0) if node == end: return backtrace(parent, start, end) for adjacent in graph.get(node, []): if adjacent not in parent: parent[adjacent] = node # <<<<< record its parent queue.append(adjacent) print bfs(graph, '1', '11') The above code is based on the assumption that there are no cycles."} +{"question_id": 10999990, "score": 164, "creation_date": 1339515430, "tags": ["python", "flask", "werkzeug"], "instruction": "Get raw POST body in Python Flask regardless of Content-Type header\n\nPreviously, I asked How to get data received in Flask request because request.data was empty. The answer explained that request.data is the raw post body, but will be empty if form data is parsed. How can I get the raw post body unconditionally? @app.route('/', methods=['POST']) def parse_request(): data = request.data # empty in some cases # always need raw data here, not parsed form data", "output": "Use request.get_data() to get the raw data, regardless of content type. The data is cached and you can subsequently access request.data, request.json, request.form at will. If you access request.data first, it will call get_data with an argument to parse form data first. If the request has a form content type (multipart/form-data, application/x-www-form-urlencoded, or application/x-url-encoded) then the raw data will be consumed. request.data and request.json will appear empty in this case."} +{"question_id": 23435782, "score": 164, "creation_date": 1399059852, "tags": ["python", "numpy"], "instruction": "NumPy selecting specific column index per row by using a list of indexes\n\nI'm struggling to select the specific columns per row of a NumPy matrix. Suppose I have the following matrix which I would call X: [1, 2, 3] [4, 5, 6] [7, 8, 9] I also have a list of column indexes per every row which I would call Y: [1, 0, 2] I need to get the values: [2] [4] [9] Instead of a list with indexes Y, I can also produce a matrix with the same shape as X where every column is a bool / int in the range 0-1 value, indicating whether this is the required column. [0, 1, 0] [1, 0, 0] [0, 0, 1] I know this can be done with iterating over the array and selecting the column values I need. However, this will be executed frequently on big arrays of data and that's why it has to run as fast as it can. I was thus wondering if there is a better solution?", "output": "If you've got a boolean array you can do direct selection based on that like so: >>> a = np.array([True, True, True, False, False]) >>> b = np.array([1,2,3,4,5]) >>> b[a] array([1, 2, 3]) To go along with your initial example you could do the following: >>> a = np.array([[1,2,3], [4,5,6], [7,8,9]]) >>> b = np.array([[False,True,False],[True,False,False],[False,False,True]]) >>> a[b] array([2, 4, 9]) You can also add in an arange and do direct selection on that, though depending on how you're generating your boolean array and what your code looks like YMMV.
>>> a = np.array([[1,2,3], [4,5,6], [7,8,9]]) >>> a[np.arange(len(a)), [1,0,2]] array([2, 4, 9])"} +{"question_id": 1994488, "score": 164, "creation_date": 1262513202, "tags": ["python"], "instruction": "Copy file or directories recursively in Python\n\nPython seems to have functions for copying files (e.g. shutil.copy) and functions for copying directories (e.g. shutil.copytree) but I haven't found any function that handles both. Sure, it's trivial to check whether you want to copy a file or a directory, but it seems like a strange omission. Is there really no standard function that works like the unix cp -r command, i.e. supports both directories and files and copies recursively? What would be the most elegant way to work around this problem in Python?", "output": "I suggest you first call shutil.copytree, and if an exception is thrown, then retry with shutil.copy. import shutil, errno def copyanything(src, dst): try: shutil.copytree(src, dst) except OSError as exc: # python >2.5 if exc.errno in (errno.ENOTDIR, errno.EINVAL): shutil.copy(src, dst) else: raise"} +{"question_id": 7984169, "score": 163, "creation_date": 1320252645, "tags": ["python", "list", "strip"], "instruction": "Remove trailing newline from the elements of a string list\n\nI have to take a large list of words in the form: ['this\\n', 'is\\n', 'a\\n', 'list\\n', 'of\\n', 'words\\n'] and then using the strip function, turn it into: ['this', 'is', 'a', 'list', 'of', 'words'] I thought that what I had written would work, but I keep getting an error saying: \"'list' object has no attribute 'strip'\" Here is the code that I tried: strip_list = [] for lengths in range(1,20): strip_list.append(0) #longest word in the text file is 20 characters long for a in lines: strip_list.append(lines[a].strip())", "output": "You can either use a list comprehension my_list = ['this\\n', 'is\\n', 'a\\n', 'list\\n', 'of\\n', 'words\\n'] stripped = [s.strip() for s in my_list] or alternatively use map(): stripped = list(map(str.strip, my_list)) In Python 2, map() directly returned a list, so you didn't need the call to list. In Python 3, the list comprehension is more concise and generally considered more idiomatic."} +{"question_id": 37039685, "score": 163, "creation_date": 1462403806, "tags": ["python", "matplotlib"], "instruction": "Hide tick label values but keep axis labels\n\nI have this image: plt.plot(sim_1['t'],sim_1['V'],'k') plt.ylabel('V') plt.xlabel('t') plt.show() I want to hide the numbers; if I use: plt.axis('off') ...I get this image: It also hide the labels, V and t. How can I keep the labels while hiding the values?", "output": "If you use the matplotlib object-oriented approach, this is a simple task using ax.set_xticklabels() and ax.set_yticklabels(). 
Here we can just set them to an empty list to remove any labels: import matplotlib.pyplot as plt # Create Figure and Axes instances fig,ax = plt.subplots(1) # Make your plot, set your axes labels ax.plot(sim_1['t'],sim_1['V'],'k') ax.set_ylabel('V') ax.set_xlabel('t') # Turn off tick labels ax.set_yticklabels([]) ax.set_xticklabels([]) plt.show() If you also want to remove the tick marks as well as the labels, you can use ax.set_xticks() and ax.set_yticks() and set those to an empty list as well: ax.set_xticks([]) ax.set_yticks([])"} +{"question_id": 13243766, "score": 163, "creation_date": 1352171293, "tags": ["python", "generator"], "instruction": "How to define an empty generator function?\n\nA generator function can be defined by putting the yield keyword in the function\u2019s body: def gen(): for i in range(10): yield i How to define an empty generator function? The following code doesn\u2019t work, since Python cannot know that it is supposed to be a generator function instead of a normal function: def empty(): pass I could do something like this: def empty(): if False: yield But that would be very ugly. Is there a nicer way?", "output": "You can use return once in a generator; it stops iteration without yielding anything, and thus provides an explicit alternative to letting the function run out of scope. So use yield to turn the function into a generator, but precede it with return to terminate the generator before yielding anything. >>> def f(): ... return ... yield ... >>> list(f()) [] I'm not sure it's that much better than what you have -- it just replaces a no-op if statement with a no-op yield statement. But it is more idiomatic. Note that just using yield doesn't work. >>> def f(): ... yield ... >>> list(f()) [None] Why not just use iter(())? This question asks specifically about an empty generator function. For that reason, I take it to be a question about the internal consistency of Python's syntax, rather than a question about the best way to create an empty iterator in general. If question is actually about the best way to create an empty iterator, then you might agree with Zectbumo about using iter(()) instead. However, it's important to observe that iter(()) doesn't return a function! It directly returns an empty iterable. Suppose you're working with an API that expects a callable that returns an iterable each time it's called, just like an ordinary generator function. You'll have to do something like this: def empty(): return iter(()) (Credit should go to Unutbu for giving the first correct version of this answer.) Now, you may find the above clearer, but I can imagine situations in which it would be less clear. Consider this example of a long list of (contrived) generator function definitions: def zeros(): while True: yield 0 def ones(): while True: yield 1 ... At the end of that long list, I'd rather see something with a yield in it, like this: def empty(): return yield or, in Python 3.3 and above (as suggested by DSM), this: def empty(): yield from () The presence of the yield keyword makes it clear at the briefest glance that this is just another generator function, exactly like all the others. It takes a bit more time to see that the iter(()) version is doing the same thing. It's a subtle difference, but I honestly think the yield-based functions are more readable and maintainable. 
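If you want to check for yourself what each variant compiles down to, the standard dis module will disassemble them (a quick sketch; the exact bytecode varies between Python versions): import dis def empty(): return yield dis.dis(empty)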
See also this great answer from user3840170 that uses dis to show another reason why this approach is preferable: it emits the fewest instructions when compiled."} +{"question_id": 1919044, "score": 163, "creation_date": 1261015255, "tags": ["python", "list", "iteration"], "instruction": "Is there a better way to iterate over two lists, getting one element from each list for each iteration?\n\nI have a list of Latitudes and one of Longitudes and need to iterate over the latitude and longitude pairs. Is it better to: A. Assume that the lists are of equal lengths: for i in range(len(Latitudes)): Lat,Long=(Latitudes[i],Longitudes[i]) B. Or: for Lat,Long in [(x,y) for x in Latitudes for y in Longitudes]: (Note that B is incorrect. This gives me all the pairs, equivalent to itertools.product()) Any thoughts on the relative merits of each, or which is more pythonic?", "output": "This is as pythonic as you can get: for lat, long in zip(Latitudes, Longitudes): print(lat, long)"} +{"question_id": 39618943, "score": 163, "creation_date": 1474466841, "tags": ["python", "floating-point", "rounding", "floating-accuracy", "ieee-754"], "instruction": "Why does the floating-point value of 4*0.1 look nice in Python 3 but 3*0.1 doesn't?\n\nI know that most decimals don't have an exact floating point representation (Is floating point math broken?). But I don't see why 4*0.1 is printed nicely as 0.4, but 3*0.1 isn't, when both values actually have ugly decimal representations: >>> 3*0.1 0.30000000000000004 >>> 4*0.1 0.4 >>> from decimal import Decimal >>> Decimal(3*0.1) Decimal('0.3000000000000000444089209850062616169452667236328125') >>> Decimal(4*0.1) Decimal('0.40000000000000002220446049250313080847263336181640625')", "output": "The simple answer is because 3*0.1 != 0.3 due to quantization (roundoff) error (whereas 4*0.1 == 0.4 because multiplying by a power of two is usually an \"exact\" operation). Python tries to find the shortest string that would round to the desired value, so it can display 4*0.1 as 0.4 as these are equal, but it cannot display 3*0.1 as 0.3 because these are not equal. You can use the .hex method in Python to view the internal representation of a number (basically, the exact binary floating point value, rather than the base-10 approximation). This can help to explain what's going on under the hood. >>> (0.1).hex() '0x1.999999999999ap-4' >>> (0.3).hex() '0x1.3333333333333p-2' >>> (0.1*3).hex() '0x1.3333333333334p-2' >>> (0.4).hex() '0x1.999999999999ap-2' >>> (0.1*4).hex() '0x1.999999999999ap-2' 0.1 is 0x1.999999999999a times 2^-4. The \"a\" at the end means the digit 10 - in other words, 0.1 in binary floating point is very slightly larger than the \"exact\" value of 0.1 (because the final 0x0.99 is rounded up to 0x0.a). When you multiply this by 4, a power of two, the exponent shifts up (from 2^-4 to 2^-2) but the number is otherwise unchanged, so 4*0.1 == 0.4. However, when you multiply by 3, the tiny little difference between 0x0.99 and 0x0.a0 (0x0.07) magnifies into a 0x0.15 error, which shows up as a one-digit error in the last position. This causes 0.1*3 to be very slightly larger than the rounded value of 0.3. Python 3's float repr is designed to be round-trippable, that is, the value shown should be exactly convertible into the original value (float(repr(f)) == f for all floats f). Therefore, it cannot display 0.3 and 0.1*3 exactly the same way, or the two different numbers would end up the same after round-tripping. 
Consequently, Python 3's repr engine chooses to display one with a slight apparent error."} +{"question_id": 23482668, "score": 163, "creation_date": 1399327487, "tags": ["python", "sorting", "pandas"], "instruction": "sorting by a custom list in pandas\n\nAfter reading through: http://pandas.pydata.org/pandas-docs/version/0.13.1/generated/pandas.DataFrame.sort.html I still can't seem to figure out how to sort a column by a custom list. Obviously, the default sort is alphabetical. I'll give an example. Here is my (very abridged) dataframe: Player Year Age Tm G 2967 Cedric Hunter 1991 27 CHH 6 5335 Maurice Baker 2004 25 VAN 7 13950 Ratko Varda 2001 22 TOT 60 6141 Ryan Bowen 2009 34 OKC 52 6169 Adrian Caldwell 1997 31 DAL 81 I want to be able to sort by Player, Year and then Tm. The default sort by Player and Year is fine for me, in normal order. However, I do not want Team sorted alphabetically b/c I want TOT always at the top. Here is the list I created: sorter = ['TOT', 'ATL', 'BOS', 'BRK', 'CHA', 'CHH', 'CHI', 'CLE', 'DAL', 'DEN', 'DET', 'GSW', 'HOU', 'IND', 'LAC', 'LAL', 'MEM', 'MIA', 'MIL', 'MIN', 'NJN', 'NOH', 'NOK', 'NOP', 'NYK', 'OKC', 'ORL', 'PHI', 'PHO', 'POR', 'SAC', 'SAS', 'SEA', 'TOR', 'UTA', 'VAN', 'WAS', 'WSB'] After reading through the link above, I thought this would work but it didn't: df.sort(['Player', 'Year', 'Tm'], ascending = [True, True, sorter]) It still has ATL at the top, meaning that it sorted alphabetically and not according to my custom list. Any help would really be greatly appreciated, I just can't figure this out.", "output": "Below is an example that performs lexicographic sort on a dataframe. The idea is to create an numerical index based on the specific sort. Then to perform a numerical sort based on the index. A column is added to the dataframe to do so, and is then removed. 
import pandas as pd # Create DataFrame df = pd.DataFrame( {'id':[2967, 5335, 13950, 6141, 6169], 'Player': ['Cedric Hunter', 'Maurice Baker', 'Ratko Varda' ,'Ryan Bowen' ,'Adrian Caldwell'], 'Year': [1991, 2004, 2001, 2009, 1997], 'Age': [27, 25, 22, 34, 31], 'Tm': ['CHH' ,'VAN' ,'TOT' ,'OKC', 'DAL'], 'G': [6, 7, 60, 52, 81]}) # Define the sorter sorter = ['TOT', 'ATL', 'BOS', 'BRK', 'CHA', 'CHH', 'CHI', 'CLE', 'DAL','DEN', 'DET', 'GSW', 'HOU', 'IND', 'LAC', 'LAL', 'MEM', 'MIA', 'MIL', 'MIN', 'NJN', 'NOH', 'NOK', 'NOP', 'NYK', 'OKC', 'ORL', 'PHI', 'PHO', 'POR', 'SAC', 'SAS', 'SEA', 'TOR', 'UTA', 'VAN', 'WAS', 'WSB'] # Create the dictionary that defines the order for sorting sorterIndex = dict(zip(sorter, range(len(sorter)))) # Generate a rank column that will be used to sort # the dataframe numerically df['Tm_Rank'] = df['Tm'].map(sorterIndex) # Here is the result asked with the lexicographic sort # Result may be hard to analyze, so a second sorting is # proposed next ## NOTE: ## Newer versions of pandas use 'sort_values' instead of 'sort' df.sort_values(['Player', 'Year', 'Tm_Rank'], ascending = [True, True, True], inplace = True) df.drop('Tm_Rank', 1, inplace = True) print(df) # Here is an example where 'Tm' is sorted first, that will # give the first row of the DataFrame df to contain TOT as 'Tm' df['Tm_Rank'] = df['Tm'].map(sorterIndex) ## NOTE: ## Newer versions of pandas use 'sort_values' instead of 'sort' df.sort_values(['Tm_Rank', 'Player', 'Year'], ascending = [True , True, True], inplace = True) df.drop('Tm_Rank', 1, inplace = True) print(df)"} +{"question_id": 2152898, "score": 163, "creation_date": 1264663061, "tags": ["python", "list"], "instruction": "Filtering a list of strings based on contents\n\nGiven the list ['a','ab','abc','bac'], I want to compute a list with strings that have 'ab' in them. I.e. the result is ['ab','abc']. How can this be done in Python?", "output": "This simple filtering can be achieved in many ways with Python. The best approach is to use \"list comprehensions\" as follows: >>> lst = ['a', 'ab', 'abc', 'bac'] >>> [k for k in lst if 'ab' in k] ['ab', 'abc'] Another way is to use the filter function. In Python 2: >>> filter(lambda k: 'ab' in k, lst) ['ab', 'abc'] In Python 3, it returns an iterator instead of a list, but you can cast it: >>> list(filter(lambda k: 'ab' in k, lst)) ['ab', 'abc'] Though it's better practice to use a comprehension."} +{"question_id": 59823283, "score": 163, "creation_date": 1579523213, "tags": ["python", "python-3.x", "tensorflow", "keras", "tensorflow2.0"], "instruction": "Could not load dynamic library 'cudart64_101.dll' on tensorflow CPU-only installation\n\nI just installed the latest version of Tensorflow via pip install tensorflow and whenever I run a program, I get the log message: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cudart64_101.dll'; dlerror: cudart64_101.dll not found Is this bad? How do I fix the error?", "output": "Tensorflow 2.1+ What's going on? With the new Tensorflow 2.1 release, the default tensorflow pip package contains both CPU and GPU versions of TF. In previous TF versions, not finding the CUDA libraries would emit an error and raise an exception, while now the library dynamically searches for the correct CUDA version and, if it doesn't find it, emits the warning (The W in the beginning stands for warnings, errors have an E (or F for fatal errors) and falls back to CPU-only mode. 
In fact, this is also written in the log as an info message right after the warning (do note that if you have a higher minimum log level than the default, you might not see info messages). The full log is (emphasis mine): 2020-01-20 12:27:44.554767: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cudart64_101.dll'; dlerror: cudart64_101.dll not found 2020-01-20 12:27:44.554964: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine. Should I worry? How do I fix it? If you don't have a CUDA-enabled GPU on your machine, or if you don't care about not having GPU acceleration, no need to worry. If, on the other hand, you installed tensorflow and wanted GPU acceleration, check your CUDA installation (TF 2.1 requires CUDA 10.1, not 10.2 or 10.0). If you just want to get rid of the warning, you can adapt TF's logging level to suppress warnings, but that might be overkill, as it will silence all warnings. Tensorflow 1.X or 2.0: Your CUDA setup is broken, ensure you have the correct version installed."} +{"question_id": 22288569, "score": 163, "creation_date": 1394401418, "tags": ["python", "shell", "pycharm", "virtualenv"], "instruction": "How do I activate a virtualenv inside PyCharm's terminal?\n\nI've set up PyCharm, created my virtualenv (either through the virtual env command, or directly in PyCharm) and activated that environment as my Interpreter. Everything is working just fine. However, if I open a terminal using \"Tools, Open Terminal\", the shell prompt supplied is not using the virtual env; I still have to use source ~/envs/someenv/bin/activate within that Terminal to activate it. Another method is to activate the environment in a shell, and run PyCharm from that environment. This is \"workable\" but pretty ugly, and means I have major problems if I switch environments or projects from PyCharm: I'm now using the totally-wrong environment. Is there some other, much-easier way to have \"Tools, Open Terminal\" automatically activate the virtual environment?", "output": "Edit: According to https://www.jetbrains.com/pycharm/whatsnew/#v2016-3-venv-in-terminal, PyCharm 2016.3 (released Nov 2016) has virtualenv support for terminals out of the box. Auto virtualenv is supported for bash, zsh, fish, and Windows cmd. You can customize your shell preference in Settings (Preferences) | Tools | Terminal | check Activate virtualenv. You also need to make sure the path of the virtual environment is included in the content root folder of your project structure. You can go to Settings (Preferences) | Project | Project Structure if your environment is not included in the project directory. ***Old Method:*** Create a file .pycharmrc in your home folder with the following contents: source ~/.bashrc source ~/pycharmvenv/bin/activate Use your virtualenv path as the last parameter. Then set the shell Preferences->Project Settings->Shell path to /bin/bash --rcfile ~/.pycharmrc"} +{"question_id": 23248435, "score": 163, "creation_date": 1398265855, "tags": ["python", "matplotlib"], "instruction": "Fill between two vertical lines\n\nI went through the examples in the matplotlib documentation, but it wasn't clear to me how I can make a plot that fills the area between two specific vertical lines. For example, say I want to create a plot between x=0.2 and x=4 (for the full y range of the plot). Should I use fill_between, fill or fill_betweenx?
Can I use the where condition for this?", "output": "It sounds like you want axvspan, rather than one of the fill between functions. The difference is that axvspan (and axhspan) will fill up the entire y (or x) extent of the plot regardless of how you zoom. For example, let's use axvspan to highlight the x-region between 8 and 14: import matplotlib.pyplot as plt fig, ax = plt.subplots() ax.plot(range(20)) ax.axvspan(8, 14, alpha=0.5, color='red') plt.show() You could use fill_betweenx to do this, but the extents (both x and y) of the rectangle would be in data coordinates. With axvspan, the y-extents of the rectangle default to 0 and 1 and are in axes coordinates (in other words, percentages of the height of the plot). To illustrate this, let's make the rectangle extend from 10% to 90% of the height (instead of taking up the full extent). Try zooming or panning, and notice that the y-extents stay fixed in display space, while the x-extents move with the zoom/pan: import matplotlib.pyplot as plt fig, ax = plt.subplots() ax.plot(range(20)) ax.axvspan(8, 14, ymin=0.1, ymax=0.9, alpha=0.5, color='red') plt.show()"} +{"question_id": 6677332, "score": 163, "creation_date": 1310552109, "tags": ["python", "time", "strftime"], "instruction": "How to use %f with strftime() in Python to get microseconds?\n\nI'm trying to use strftime() to microsecond precision, which seems possible using %f (as stated here). However when I try the following code: import time import strftime from time print strftime(\"%H:%M:%S.%f\") ...I get the hour, the minutes and the seconds, but %f prints as %f, with no sign of the microseconds. I'm running Python 2.6.5 on Ubuntu, so it should be fine and %f should be supported (it's supported for 2.6 and above, as far as I know.)", "output": "You can use datetime's strftime() function to get this. The problem is that time's strftime() accepts a timetuple that does not carry microsecond information. from datetime import datetime datetime.now().strftime(\"%H:%M:%S.%f\") Should do the trick!"} +{"question_id": 56729764, "score": 163, "creation_date": 1561349692, "tags": ["python", "python-3.x", "asynchronous", "python-asyncio", "sleep"], "instruction": "asyncio.sleep() vs time.sleep()\n\nWhen I go to the asyncio page, the first example is a hello world program. When I run it on python 3.73, I can't see any difference from the normal one. Can anyone tell me the difference and give a non-trivial example? In [1]: import asyncio ...: ...: async def main(): ...: print('Hello ...') ...: await asyncio.sleep(5) ...: print('... World!') ...: ...: # Python 3.7+ ...: asyncio.run(main()) Hello ... ... World! In [2]: In [2]: import time ...: ...: def main(): ...: print('Hello ...') ...: time.sleep(5) ...: print('... World!') ...: ...: # Python 3.7+ ...: main() Hello ... ... World! I intentionally increased the time from 1s to 5s, hoping to see something special, but I didn't.", "output": "You aren't seeing anything special because there isn't much asynchronous work in your code. However, the main difference is that time.sleep(5) is blocking, and asyncio.sleep(5) is non-blocking. When time.sleep(5) is called, it will block the entire execution of the script and it will be put on hold, just frozen, doing nothing. But when you call await asyncio.sleep(5), it will ask the event loop to run something else while your await statement finishes its execution. Here's an improved example. import asyncio async def hello(): print('Hello ...') await asyncio.sleep(1) print('... 
World!') async def main(): await asyncio.gather(hello(), hello()) asyncio.run(main()) Will output: ~$ python3.7 async.py Hello ... Hello ... ... World! ... World! You can see that await asyncio.sleep(1) is not blocking the execution of the script. In contrast, replacing the line await asyncio.sleep(1) with time.sleep(1), the output will be Hello ... ... World! Hello ... ... World! because time.sleep is blocking and the first call of hello() has to finish first before the second call of hello() starts running."} +{"question_id": 11515944, "score": 163, "creation_date": 1342498628, "tags": ["python", "multithreading", "multiprocessing"], "instruction": "How to use multiprocessing queue in Python?\n\nI'm having much trouble trying to understand just how the multiprocessing queue works on python and how to implement it. Lets say I have two python modules that access data from a shared file, let's call these two modules a writer and a reader. My plan is to have both the reader and writer put requests into two separate multiprocessing queues, and then have a third process pop these requests in a loop and execute as such. My main problem is that I really don't know how to implement multiprocessing.queue correctly, you cannot really instantiate the object for each process since they will be separate queues, how do you make sure that all processes relate to a shared queue (or in this case, queues)", "output": "Short Summary As of CY2023, the technique described in this answer is quite out of date. These days, use concurrent.futures.ProcessPoolExecutor() instead of multiprocessing, below... This answer describes the benefits and shortcomings of using concurrent.futures.ProcessPoolExecutor(). FYI, multiple python processes are sometimes used instead of threading to get the most benefit from concurrency. That said, python threading works pretty well as long as there is sufficient CPU activity to avoid the GIL (activity such as sending / receiving network traffic). Original Answer My main problem is that I really don't know how to implement multiprocessing.queue correctly, you cannot really instantiate the object for each process since they will be separate queues, how do you make sure that all processes relate to a shared queue (or in this case, queues) This is a simple example of a reader and writer sharing a single queue... The writer sends a bunch of integers to the reader; when the writer runs out of numbers, it sends 'DONE', which lets the reader know to break out of the read loop. You can spawn as many reader processes as you like... from multiprocessing import Process, Queue import time import sys def reader_proc(queue): \"\"\"Read from the queue; this spawns as a separate Process\"\"\" while True: msg = queue.get() # Read from the queue and do nothing if msg == \"DONE\": break def writer(count, num_of_reader_procs, queue): \"\"\"Write integers into the queue. A reader_proc() will read them from the queue\"\"\" for ii in range(0, count): queue.put(ii) # Put 'count' numbers into queue ### Tell all readers to stop... for ii in range(0, num_of_reader_procs): queue.put(\"DONE\") def start_reader_procs(qq, num_of_reader_procs): \"\"\"Start the reader processes and return all in a list to the caller\"\"\" all_reader_procs = list() for ii in range(0, num_of_reader_procs): ### reader_p() reads from qq as a separate process... 
### you can spawn as many reader_p() as you like ### however, there is usually a point of diminishing returns reader_p = Process(target=reader_proc, args=((qq),)) reader_p.daemon = True reader_p.start() # Launch reader_p() as another proc all_reader_procs.append(reader_p) return all_reader_procs if __name__ == \"__main__\": num_of_reader_procs = 2 qq = Queue() # writer() writes to qq from _this_ process for count in [10**4, 10**5, 10**6]: assert 0 < num_of_reader_procs < 4 all_reader_procs = start_reader_procs(qq, num_of_reader_procs) writer(count, len(all_reader_procs), qq) # Queue stuff to all reader_p() print(\"All reader processes are pulling numbers from the queue...\") _start = time.time() for idx, a_reader_proc in enumerate(all_reader_procs): print(\" Waiting for reader_p.join() index %s\" % idx) a_reader_proc.join() # Wait for a_reader_proc() to finish print(\" reader_p() idx:%s is done\" % idx) print( \"Sending {0} integers through Queue() took {1} seconds\".format( count, (time.time() - _start) ) ) print(\"\")"} +{"question_id": 18548370, "score": 163, "creation_date": 1377953652, "tags": ["python", "pandas"], "instruction": "Pandas \"Can only compare identically-labeled DataFrame objects\" error\n\nI'm using Pandas to compare the outputs of two files loaded into two data frames (uat, prod): ... uat = uat[['Customer Number','Product']] prod = prod[['Customer Number','Product']] print uat['Customer Number'] == prod['Customer Number'] print uat['Product'] == prod['Product'] print uat == prod The first two match exactly: 74357 True 74356 True Name: Customer Number, dtype: bool 74357 True 74356 True Name: Product, dtype: bool For the third print, I get an error: Can only compare identically-labeled DataFrame objects. If the first two compared fine, what's wrong with the 3rd? Thanks", "output": "Here's a small example to demonstrate this (which only applied to DataFrames, not Series, until Pandas 0.19 where it applies to both): In [1]: df1 = pd.DataFrame([[1, 2], [3, 4]]) In [2]: df2 = pd.DataFrame([[3, 4], [1, 2]], index=[1, 0]) In [3]: df1 == df2 Exception: Can only compare identically-labeled DataFrame objects One solution is to sort the index first (Note: some functions require sorted indexes): In [4]: df2.sort_index(inplace=True) In [5]: df1 == df2 Out[5]: 0 1 0 True True 1 True True Note: == is also sensitive to the order of columns, so you may have to use sort_index(axis=1): In [11]: df1.sort_index().sort_index(axis=1) == df2.sort_index().sort_index(axis=1) Out[11]: 0 1 0 True True 1 True True Note: This can still raise (if the index/columns aren't identically labelled after sorting)."} +{"question_id": 7784148, "score": 163, "creation_date": 1318766875, "tags": ["python", "repr"], "instruction": "Why does the repr() of a string have two sets of quotes? And why don't I get the object back when I eval() it?\n\nrepr(): evaluatable string representation of an object (you can eval() it, meaning it is a string representation that evaluates to a Python object) In other words: >>> x = 'foo' >>> repr(x) \"'foo'\" Questions: Why do I get the double quotes when I do repr(x)? (I don't get them when I do str(x)) Why do I get 'foo' when I do eval(\"'foo'\") and not x which is the object?", "output": "Consider: >>> x = 'foo' >>> x 'foo' So the name x is attached to 'foo' string. When you call for example repr(x), the interpreter puts 'foo' instead of x and then calls repr('foo'). 
>>> repr(x) \"'foo'\" >>> x.__repr__() \"'foo'\" repr actually calls a magic method, __repr__, of x, which gives the string containing the representation of the value 'foo' assigned to x. So it returns 'foo' inside the string \"\" resulting in \"'foo'\". The idea of repr is to give a string which contains a series of symbols which we can type in the interpreter and get the same value which was sent as an argument to repr. >>> eval(\"'foo'\") 'foo' When we call eval(\"'foo'\"), it's the same as if we type 'foo' in the interpreter. It's as if we directly type the contents of the outer string \"\" in the interpreter. >>> eval('foo') Traceback (most recent call last): File \"<pyshell#5>\", line 1, in <module> eval('foo') File \"<string>\", line 1, in <module> NameError: name 'foo' is not defined If we call eval('foo'), it's the same as if we type foo in the interpreter. But there is no foo variable available and an exception is raised. >>> str(x) 'foo' >>> x.__str__() 'foo' >>> str is just the string representation of the object (remember, x variable refers to 'foo'), so this function returns string. >>> str(5) '5' The string representation of integer 5 is '5'. >>> str('foo') 'foo' And the string representation of string 'foo' is the same string 'foo'."} +{"question_id": 2720319, "score": 162, "creation_date": 1272363624, "tags": ["python", "datetime", "time", "timezone", "utc"], "instruction": "Python: Figure out local timezone\n\nI want to compare UTC timestamps from a log file with local timestamps. When creating the local datetime object, I use something like: >>> local_time=datetime.datetime(2010, 4, 27, 12, 0, 0, 0, tzinfo=pytz.timezone('Israel')) I want to find an automatic tool that would replace thetzinfo=pytz.timezone('Israel') with the current local time zone. Any ideas?", "output": "Try dateutil, which has a tzlocal type that does what you need."} +{"question_id": 1622793, "score": 162, "creation_date": 1256522787, "tags": ["python", "django", "cookies", "browser", "django-cookies"], "instruction": "How to set and get cookies in Django?\n\nI have a web site which shows different content based on a location the visitor chooses. e.g: User enters in 55812 as the zip. I know what city and area lat/long. that is and give them their content pertinent to that area. My question is how can I store this in a cookie so that when they return they are not required to always enter their zip code? I see it as follows: Set persistent cookie based on their area. When they return read cookie, grab zipcode. Return content based on the zip code in their cookie. I can't seem to find any solid information on setting a cookie. Any help is greatly appreciated.", "output": "UPDATE : check Peter's answer below for a builtin solution : This is a helper to set a persistent cookie: import datetime def set_cookie(response, key, value, days_expire=7): if days_expire is None: max_age = 365 * 24 * 60 * 60 # one year else: max_age = days_expire * 24 * 60 * 60 expires = datetime.datetime.strftime( datetime.datetime.utcnow() + datetime.timedelta(seconds=max_age), \"%a, %d-%b-%Y %H:%M:%S GMT\", ) response.set_cookie( key, value, max_age=max_age, expires=expires, domain=settings.SESSION_COOKIE_DOMAIN, secure=settings.SESSION_COOKIE_SECURE or None, ) Use the following code before sending a response. 
def view(request): response = HttpResponse(\"hello\") set_cookie(response, 'name', 'jujule') return response UPDATE : check Peter's answer below for a builtin solution :"} +{"question_id": 8875706, "score": 162, "creation_date": 1326687376, "tags": ["python", "algorithm", "sorting", "dictionary", "containers"], "instruction": "heapq with custom compare predicate\n\nI am trying to build a heap with a custom sort predicate. Since the values going into it are of \"user-defined\" type, I cannot modify their built-in comparison predicate. Is there a way to do something like: h = heapq.heapify([...], key=my_lt_pred) h = heapq.heappush(h, key=my_lt_pred) Or even better, I could wrap the heapq functions in my own container so I don't need to keep passing the predicate.", "output": "According to the heapq documentation, the way to customize the heap order is to have each element on the heap to be a tuple, with the first tuple element being one that accepts normal Python comparisons. The functions in the heapq module are a bit cumbersome (since they are not object-oriented), and always require our heap object (a heapified list) to be explicitly passed as the first parameter. We can kill two birds with one stone by creating a very simple wrapper class that will allow us to specify a key function, and present the heap as an object. The class below keeps an internal list, where each element is a tuple, the first member of which is a key, calculated at element insertion time using the key parameter, passed at Heap instantiation: # -*- coding: utf-8 -*- import heapq class MyHeap(object): def __init__(self, initial=None, key=lambda x:x): self.key = key self.index = 0 if initial: self._data = [(key(item), i, item) for i, item in enumerate(initial)] self.index = len(self._data) heapq.heapify(self._data) else: self._data = [] def push(self, item): heapq.heappush(self._data, (self.key(item), self.index, item)) self.index += 1 def pop(self): return heapq.heappop(self._data)[2] (The extra self.index part is to avoid clashes when the evaluated key value is a draw and the stored value is not directly comparable - otherwise heapq could fail with TypeError)"} +{"question_id": 2451821, "score": 162, "creation_date": 1268707831, "tags": ["python", "string", "syntax"], "instruction": "String formatting named parameters?\n\nI know it's a really simple question, but I have no idea how to google it. how can I do print '<a href=\"%s\">%s</a>' % (my_url) So that my_url is used twice? I assume I have to \"name\" the %s and then use a dict in the params, but I'm not sure of the proper syntax? just FYI, I'm aware I can just use my_url twice in the params, but that's not the point :)", "output": "Solution in Python 3.6+ Python 3.6 introduces literal string formatting, so that you can format the named parameters without any repeating any of your named parameters outside the string: print(f'<a href=\"{my_url:s}\">{my_url:s}</a>') This will evaluate my_url, so if it's not defined you will get a NameError. In fact, instead of my_url, you can write an arbitrary Python expression, as long as it evaluates to a string (because of the :s formatting code). If you want a string representation for the result of an expression that might not be a string, replace :s by !s, just like with regular, pre-literal string formatting. 
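For instance, a quick sketch with a hypothetical non-string value (the !s conversion calls str() on the result of the expression): >>> my_id = 42 >>> f'<a href=\"{my_id!s}\">{my_id!s}</a>' '<a href=\"42\">42</a>'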
For details on literal string formatting, see PEP 498, where it was first introduced, and PEP 701, where permitted syntax was extended (starting with Python 3.12)."} +{"question_id": 11178426, "score": 162, "creation_date": 1340549439, "tags": ["javascript", "python", "flask", "jinja2"], "instruction": "How can I pass data from Flask to JavaScript in a template?\n\nMy app makes a call to an API that returns a dictionary. I want to pass information from this dict to JavaScript in the view. I am using the Google Maps API in the JS, specifically, so I'd like to pass it a list of tuples with the long/lat information. I know that render_template will pass these variables to the view so they can be used in HTML, but how could I pass them to JavaScript in the template? from flask import Flask from flask import render_template app = Flask(__name__) import foo_api api = foo_api.API('API KEY') @app.route('/') def get_data(): events = api.call(get_event, arg0, arg1) geocode = event['latitude'], event['longitude'] return render_template('get_data.html', geocode=geocode)", "output": "You can use {{ variable }} anywhere in your template, not just in the HTML part. So this should work: <html> <head> <script> var someJavaScriptVar = '{{ geocode[1] }}'; </script> </head> <body> <p>Hello World</p> <button onclick=\"alert('Geocode: {{ geocode[0] }} ' + someJavaScriptVar)\" /> </body> </html> Think of it as a two-stage process: First, Jinja (the template engine Flask uses) generates your text output. This gets sent to the user who executes the JavaScript he sees. If you want your Flask variable to be available in JavaScript as an array, you have to generate an array definition in your output: <html> <head> <script> var myGeocode = ['{{ geocode[0] }}', '{{ geocode[1] }}']; </script> </head> <body> <p>Hello World</p> <button onclick=\"alert('Geocode: ' + myGeocode[0] + ' ' + myGeocode[1])\" /> </body> </html> Jinja also offers more advanced constructs from Python, so you can shorten it to: <html> <head> <script> var myGeocode = [{{ ', '.join(geocode) }}]; </script> </head> <body> <p>Hello World</p> <button onclick=\"alert('Geocode: ' + myGeocode[0] + ' ' + myGeocode[1])\" /> </body> </html> You can also use for loops, if statements and many more, see the Jinja2 documentation for more. Also, have a look at Ford's answer who points out the tojson filter which is an addition to Jinja2's standard set of filters. Edit Nov 2018: tojson is now included in Jinja2's standard set of filters."} +{"question_id": 24870953, "score": 162, "creation_date": 1405963157, "tags": ["python", "performance", "pandas", "iteration"], "instruction": "Does pandas iterrows have performance issues?\n\nI have noticed very poor performance when using iterrows from pandas. Is it specific to iterrows and should this function be avoided for data of a certain size (I'm working with 2-3 million rows)? This discussion on GitHub led me to believe it is caused when mixing dtypes in the dataframe, however the simple example below shows it is there even when using one dtype (float64). This takes 36 seconds on my machine: import pandas as pd import numpy as np import time s1 = np.random.randn(2000000) s2 = np.random.randn(2000000) dfa = pd.DataFrame({'s1': s1, 's2': s2}) start = time.time() i=0 for rowindex, row in dfa.iterrows(): i+=1 end = time.time() print end - start Why are vectorized operations like apply so much quicker? I imagine there must be some row by row iteration going on there too. 
I cannot figure out how to not use iterrows in my case (this I'll save for a future question). Therefore I would appreciate hearing if you have consistently been able to avoid this iteration. I'm making calculations based on data in separate dataframes. A simplified version of what I want to run: import pandas as pd import numpy as np #%% Create the original tables t1 = {'letter':['a','b'], 'number1':[50,-10]} t2 = {'letter':['a','a','b','b'], 'number2':[0.2,0.5,0.1,0.4]} table1 = pd.DataFrame(t1) table2 = pd.DataFrame(t2) #%% Create the body of the new table table3 = pd.DataFrame(np.nan, columns=['letter','number2'], index=[0]) #%% Iterate through filtering relevant data, optimizing, returning info for row_index, row in table1.iterrows(): t2info = table2[table2.letter == row['letter']].reset_index() table3.ix[row_index,] = optimize(t2info,row['number1']) #%% Define optimization def optimize(t2info, t1info): calculation = [] for index, r in t2info.iterrows(): calculation.append(r['number2']*t1info) maxrow = calculation.index(max(calculation)) return t2info.ix[maxrow]", "output": "Generally, iterrows should only be used in very, very specific cases. This is the general order of precedence for performance of various operations: vectorization using a custom Cython routine apply reductions that can be performed in Cython iteration in Python space itertuples iterrows updating an empty frame (e.g., using loc one-row-at-a-time) Using a custom Cython routine is usually too complicated, so let's skip that for now. Vectorization is always, always the first and best choice. However, there is a small set of cases (usually involving a recurrence) which cannot be vectorized in obvious ways. Furthermore, on a smallish DataFrame, it may be faster to use other methods. apply usually can be handled by an iterator in Cython space. This is handled internally by pandas, though it depends on what is going on inside the apply expression. For example, df.apply(lambda x: np.sum(x)) will be executed pretty swiftly, though of course, df.sum(1) is even better. However something like df.apply(lambda x: x['b'] + 1) will be executed in Python space, and consequently is much slower. itertuples does not box the data into a Series. It just returns the data in the form of tuples. iterrows does box the data into a Series. Unless you really need this, use another method. Updating an empty frame a-single-row-at-a-time. I have seen this method used WAY too much. It is by far the slowest. It is probably common place (and reasonably fast for some Python structures), but a DataFrame does a fair number of checks on indexing, so this will always be very slow to update a row at a time. Much better to create new structures and concat."} +{"question_id": 11555468, "score": 162, "creation_date": 1342681105, "tags": ["python"], "instruction": "How should I read a file line-by-line in Python?\n\nIn pre-historic times (Python 1.4) we did: fp = open('filename.txt') while 1: line = fp.readline() if not line: break print(line) after Python 2.1, we did: for line in open('filename.txt').xreadlines(): print(line) before we got the convenient iterator protocol in Python 2.3, and could do: for line in open('filename.txt'): print(line) I've seen some examples using the more verbose: with open('filename.txt') as fp: for line in fp: print(line) Is this the preferred method going forward? [edit] I get that the with statement ensures closing of the file. 
Why isn't that included in the iterator protocol for file objects?", "output": "There is exactly one reason why the following is preferred: with open('filename.txt') as fp: for line in fp: print(line) We are all spoiled by CPython's relatively deterministic reference-counting scheme for garbage collection. Other, hypothetical implementations of Python will not necessarily close the file \"quickly enough\" without the with block if they use some other scheme to reclaim memory. In such an implementation, you might get a \"too many files open\" error from the OS if your code opens files faster than the garbage collector calls finalizers on orphaned file handles. The usual workaround is to trigger the GC immediately, but this is a nasty hack and it has to be done by every function that could encounter the error, including those in libraries. What a nightmare. Or you could just use the with block. Bonus Question (Stop reading now if are only interested in the objective aspects of the question.) Why isn't that included in the iterator protocol for file objects? This is a subjective question about API design, so I have a subjective answer in two parts. On a gut level, this feels wrong, because it makes iterator protocol do two separate things\u2014iterate over lines and close the file handle\u2014and it's often a bad idea to make a simple-looking function do two actions. In this case, it feels especially bad because iterators relate in a quasi-functional, value-based way to the contents of a file, but managing file handles is a completely separate task. Squashing both, invisibly, into one action, is surprising to humans who read the code and makes it more difficult to reason about program behavior. Other languages have essentially come to the same conclusion. Haskell briefly flirted with so-called \"lazy IO\" which allows you to iterate over a file and have it automatically closed when you get to the end of the stream, but it's almost universally discouraged to use lazy IO in Haskell these days, and Haskell users have mostly moved to more explicit resource management like Conduit which behaves more like the with block in Python. On a technical level, there are some things you may want to do with a file handle in Python which would not work as well if iteration closed the file handle. For example, suppose I need to iterate over the file twice: with open('filename.txt') as fp: for line in fp: ... fp.seek(0) for line in fp: ... While this is a less common use case, consider the fact that I might have just added the three lines of code at the bottom to an existing code base which originally had the top three lines. If iteration closed the file, I wouldn't be able to do that. So keeping iteration and resource management separate makes it easier to compose chunks of code into a larger, working Python program. Composability is one of the most important usability features of a language or API."} +{"question_id": 8895208, "score": 162, "creation_date": 1326806088, "tags": ["python", "sql", "database", "orm", "sqlalchemy"], "instruction": "SQLAlchemy: how to filter date field?\n\nHere is model: class User(Base): ... birthday = Column(Date, index=True) #in database it's like '1987-01-17' ... I want to filter between two dates, for example to choose all users in interval 18-30 years. How to implement it with SQLAlchemy? 
I think of: query = DBSession.query(User).filter( and_(User.birthday >= '1988-01-17', User.birthday <= '1985-01-17') ) # means age >= 24 and age <= 27 I know this is not correct, but how to do correct?", "output": "In fact, your query is right except for the typo: your filter is excluding all records: you should change the <= for >= and vice versa: qry = DBSession.query(User).filter( and_(User.birthday <= '1988-01-17', User.birthday >= '1985-01-17')) # or same: qry = DBSession.query(User).filter(User.birthday <= '1988-01-17').\\ filter(User.birthday >= '1985-01-17') Also you can use between: qry = DBSession.query(User).filter(User.birthday.between('1985-01-17', '1988-01-17'))"} +{"question_id": 8290397, "score": 161, "creation_date": 1322441566, "tags": ["python", "algorithm", "generator", "chunking"], "instruction": "how to split an iterable in constant-size chunks\n\nI am surprised I could not find a \"batch\" function that would take as input an iterable and return an iterable of iterables. For example: for i in batch(range(0,10), 1): print i [0] [1] ... [9] or: for i in batch(range(0,10), 3): print i [0,1,2] [3,4,5] [6,7,8] [9] Now, I wrote what I thought was a pretty simple generator: def batch(iterable, n = 1): current_batch = [] for item in iterable: current_batch.append(item) if len(current_batch) == n: yield current_batch current_batch = [] if current_batch: yield current_batch But the above does not give me what I would have expected: for x in batch(range(0,10),3): print x [0] [0, 1] [0, 1, 2] [3] [3, 4] [3, 4, 5] [6] [6, 7] [6, 7, 8] [9] So, I have missed something and this probably shows my complete lack of understanding of python generators. Anyone would care to point me in the right direction ? [Edit: I eventually realized that the above behavior happens only when I run this within ipython rather than python itself]", "output": "This is probably more efficient (faster) def batch(iterable, n=1): l = len(iterable) for ndx in range(0, l, n): yield iterable[ndx:min(ndx + n, l)] for x in batch(range(0, 10), 3): print x Example using list data = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10] # list of data for x in batch(data, 3): print(x) # Output [0, 1, 2] [3, 4, 5] [6, 7, 8] [9, 10] It avoids building new lists."} +{"question_id": 22187279, "score": 161, "creation_date": 1393986365, "tags": ["python", "import", "circular-dependency"], "instruction": "Why do circular imports seemingly work further up in the call stack but then raise an ImportError further down?\n\nI'm getting this error Traceback (most recent call last): File \"/Users/alex/dev/runswift/utils/sim2014/simulator.py\", line 3, in <module> from world import World File \"/Users/alex/dev/runswift/utils/sim2014/world.py\", line 2, in <module> from entities.field import Field File \"/Users/alex/dev/runswift/utils/sim2014/entities/field.py\", line 2, in <module> from entities.goal import Goal File \"/Users/alex/dev/runswift/utils/sim2014/entities/goal.py\", line 2, in <module> from entities.post import Post File \"/Users/alex/dev/runswift/utils/sim2014/entities/post.py\", line 4, in <module> from physics import PostBody File \"/Users/alex/dev/runswift/utils/sim2014/physics.py\", line 21, in <module> from entities.post import Post ImportError: cannot import name Post and you can see that I use the same import statement further up and it works. Is there some unwritten rule about circular importing? How do I use the same class further down the call stack? See also What happens when using mutual or circular (cyclic) imports in Python? 
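The slicing-based batch answer above assumes a sequence that supports len() and indexing; as a hedged alternative for arbitrary iterables (generators included), a common pattern uses itertools.islice. This sketch is not part of the original answer:

    from itertools import islice

    def batch(iterable, n=1):
        # Works for any iterable, including generators, by pulling n items at a time
        it = iter(iterable)
        while True:
            chunk = list(islice(it, n))
            if not chunk:
                return
            yield chunk

    for chunk in batch(range(10), 3):
        print(chunk)
    # [0, 1, 2]  [3, 4, 5]  [6, 7, 8]  [9]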
for a general overview of what is allowed and what causes a problem WRT circular imports. See What can I do about \"ImportError: Cannot import name X\" or \"AttributeError: ... (most likely due to a circular import)\"? for techniques for resolving and avoiding circular dependencies.", "output": "I think the answer by jpmc26, while by no means wrong, comes down too heavily on circular imports. They can work just fine, if you set them up correctly. The easiest way to do so is to use import my_module syntax, rather than from my_module import some_object. The former will almost always work, even if my_module included imports us back. The latter only works if my_object is already defined in my_module, which in a circular import may not be the case. To be specific to your case: Try changing entities/post.py to do import physics and then refer to physics.PostBody rather than just PostBody directly. Similarly, change physics.py to do import entities.post and then use entities.post.Post rather than just Post."} +{"question_id": 27327513, "score": 161, "creation_date": 1417831463, "tags": ["python", "pdf"], "instruction": "Create PDF from a list of images\n\nIs there any practical way to create a PDF from a list of images files, using Python? In Perl I know that module. With it I can create a PDF in just 3 lines: use PDF::FromImage; ... my $pdf = PDF::FromImage->new; $pdf->load_images(@allPagesDir); $pdf->write_file($bookName . '.pdf'); I need to do something very similar to this, but in Python. I know the pyPdf module, but I would like something simple.", "output": "Install fpdf2 for Python: pip install fpdf2 Now you can use the same logic: from fpdf import FPDF pdf = FPDF() # imagelist is the list with all image filenames for image in imagelist: pdf.add_page() pdf.image(image,x,y,w,h) pdf.output(\"yourfile.pdf\", \"F\") You can find more info at the tutorial page or the official documentation."} +{"question_id": 233673, "score": 161, "creation_date": 1224857305, "tags": ["python", "closures", "lazy-evaluation", "late-binding", "python-closures"], "instruction": "How do lexical closures work?\n\nWhile I was investigating a problem I had with lexical closures in Javascript code, I came along this problem in Python: flist = [] for i in xrange(3): def func(x): return x * i flist.append(func) for f in flist: print f(2) Note that this example mindfully avoids lambda. It prints \"4 4 4\", which is surprising. I'd expect \"0 2 4\". This equivalent Perl code does it right: my @flist = (); foreach my $i (0 .. 2) { push(@flist, sub {$i * $_[0]}); } foreach my $f (@flist) { print $f->(2), \"\\n\"; } \"0 2 4\" is printed. Can you please explain the difference ? Update: The problem is not with i being global. This displays the same behavior: flist = [] def outer(): for i in xrange(3): def inner(x): return x * i flist.append(inner) outer() #~ print i # commented because it causes an error for f in flist: print f(2) As the commented line shows, i is unknown at that point. Still, it prints \"4 4 4\".", "output": "Python is actually behaving as defined. Three separate functions are created, but they each have the closure of the environment they're defined in - in this case, the global environment (or the outer function's environment if the loop is placed inside another function). This is exactly the problem, though - in this environment, i is modified, and the closures all refer to the same i. Here is the best solution I can come up with - create a function creater and invoke that instead. 
This will force different environments for each of the functions created, with a different i in each one. flist = [] for i in xrange(3): def funcC(j): def func(x): return x * j return func flist.append(funcC(i)) for f in flist: print f(2) This is what happens when you mix side effects and functional programming."} +{"question_id": 36542169, "score": 161, "creation_date": 1460358987, "tags": ["python", "pandas"], "instruction": "Extract first and last row of a dataframe in pandas\n\nHow can I extract the first and last rows of a given dataframe as a new dataframe in pandas? I've tried to use iloc to select the desired rows and then concat as in: df=pd.DataFrame({'a':range(1,5), 'b':['a','b','c','d']}) pd.concat([df.iloc[0,:], df.iloc[-1,:]]) but this does not produce a pandas dataframe: a 1 b a a 4 b d dtype: object", "output": "Some of the other answers duplicate the first row if the frame only contains a single row. If that's a concern df[0::len(df)-1 if len(df) > 1 else 1] works even for single row-dataframes. Example: For the following dataframe this will not create a duplicate: df = pd.DataFrame({'a': [1], 'b':['a']}) df2 = df[0::len(df)-1 if len(df) > 1 else 1] print df2 a b 0 1 a whereas this does: df3 = df.iloc[[0, -1]] print df3 a b 0 1 a 0 1 a because the single row is the first AND last row at the same time."} +{"question_id": 17649875, "score": 161, "creation_date": 1373877221, "tags": ["python", "list", "random", "shuffle"], "instruction": "Why does random.shuffle return None?\n\nWhy is random.shuffle returning None in Python? >>> x = ['foo','bar','black','sheep'] >>> from random import shuffle >>> print shuffle(x) None How do I get the shuffled value instead of None?", "output": "random.shuffle() changes the x list in place. Python API methods that alter a structure in-place generally return None, not the modified data structure. >>> x = ['foo', 'bar', 'black', 'sheep'] >>> random.shuffle(x) >>> x ['black', 'bar', 'sheep', 'foo'] If you wanted to create a new randomly-shuffled list based on an existing one, where the existing list is kept in order, you could use random.sample() with the full length of the input: random.sample(x, len(x)) You could also use sorted() with random.random() for a sorting key: shuffled = sorted(x, key=lambda k: random.random()) but this invokes sorting (an O(N log N) operation), while sampling to the input length only takes O(N) operations (the same process as random.shuffle() is used, swapping out random values from a shrinking pool). Demo: >>> import random >>> x = ['foo', 'bar', 'black', 'sheep'] >>> random.sample(x, len(x)) ['bar', 'sheep', 'black', 'foo'] >>> sorted(x, key=lambda k: random.random()) ['sheep', 'foo', 'black', 'bar'] >>> x ['foo', 'bar', 'black', 'sheep']"} +{"question_id": 65298241, "score": 161, "creation_date": 1607990716, "tags": ["python", "tensorflow", "anaconda"], "instruction": "What does this tensorflow message mean? Any side effect? Was the installation successful?\n\nI just installed tensorflow v2.3 on anaconda python. 
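Alongside the factory-function fix in the closures answer above, another widely used idiom (not shown in the original answer) binds the loop variable as a default argument, which is evaluated at definition time. A minimal sketch on Python 3 (range instead of xrange):

    flist = []
    for i in range(3):
        def func(x, i=i):  # the default value captures the current i
            return x * i
        flist.append(func)

    for f in flist:
        print(f(2))
    # 0 2 4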
I tried to test out the installation using the python command below; $ python -c \"import tensorflow as tf; x = [[2.]]; print('tensorflow version', tf.__version__); print('hello, {}'.format(tf.matmul(x, x)))\" I got the following message; 2020-12-15 07:59:12.411952: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations: AVX AVX2 To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. hello, [[4.]] From the message, it seems that the installation was installed successfully. But what does This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations: AVX AVX2 mean exactly? Am I using a tensorflow version with some limited features? Any side effects? I am using Windows 10.", "output": "An important part of Tensorflow is that it is supposed to be fast. With a suitable installation, it works with CPUs, GPUs, or TPUs. Part of going fast means that it uses different code depending on your hardware. Some CPUs support operations that other CPUs do not, such as vectorized addition (adding multiple variables at once). Tensorflow is simply telling you that the version you have installed can use the AVX and AVX2 operations and is set to do so by default in certain situations (say inside a forward or back-prop matrix multiply), which can speed things up. This is not an error, it is just telling you that it can and will take advantage of your CPU to get that extra speed out. Note: AVX stands for Advanced Vector Extensions."} +{"question_id": 8081545, "score": 161, "creation_date": 1320937345, "tags": ["python", "list", "tuples"], "instruction": "How to convert list of tuples to multiple lists?\n\nSuppose I have a list of tuples and I want to convert to multiple lists. For example, the list of tuples is [(1,2),(3,4),(5,6),] Is there any built-in function in Python that convert it to: [1,3,5],[2,4,6] This can be a simple program. But I am just curious about the existence of such built-in function in Python.", "output": "The built-in function zip() will almost do what you want: >>> list(zip(*[(1, 2), (3, 4), (5, 6)])) [(1, 3, 5), (2, 4, 6)] This will give you a list of tuples. If you want to go further and arrive at a list of lists: >>> list(map(list, zip(*[(1, 2), (3, 4), (5, 6)]))) [[1, 3, 5], [2, 4, 6]]"} +{"question_id": 10948920, "score": 161, "creation_date": 1339158816, "tags": ["python", "sorting"], "instruction": "What algorithm does python's sorted() use?\n\nIn Python 2.7, how does Python's built-in sorted function work - what algorithm does it use?", "output": "Older versions of Python (2.3 - 3.10) used an algorithm called Timsort: Timsort is a hybrid sorting algorithm, derived from merge sort and insertion sort, designed to perform well on many kinds of real-world data. It was invented by Tim Peters in 2002 for use in the Python programming language. The algorithm finds subsets of the data that are already ordered, and uses the subsets to sort the data more efficiently. This is done by merging an identified subset, called a run, with existing runs until certain criteria are fulfilled. Timsort was Python's standard sorting algorithm from version 2.3 to version 3.10. It is now also used to sort arrays in Java SE 7, and on the Android platform. 
Since 3.11, Python uses Powersort, which was designed by Ian Munro and Sebastian Wild. It is an improved nearly-optimal mergesort that adapts to existing runs of sorted data."} +{"question_id": 2547554, "score": 161, "creation_date": 1269973175, "tags": ["python"], "instruction": "Multiple Python versions on the same machine?\n\nIs there official documentation on the Python website somewhere, on how to install and run multiple versions of Python on the same machine on Linux? I can find gazillions of blog posts and answers, but I want to know if there is a \"standard\" official way of doing this? Or is this all dependent on OS?", "output": "I think it is totally independent. Just install them, then you have the commands e.g. /usr/bin/python2.5 and /usr/bin/python2.6. Link /usr/bin/python to the one you want to use as default. All the libraries are in separate folders (named after the version) anyway. If you want to compile the versions manually, this is from the readme file of the Python source code: Installing multiple versions On Unix and Mac systems if you intend to install multiple versions of Python using the same installation prefix (--prefix argument to the configure script) you must take care that your primary python executable is not overwritten by the installation of a different version. All files and directories installed using make altinstall contain the major and minor version and can thus live side-by-side. make install also creates ${prefix}/bin/python3 which refers to ${prefix}/bin/pythonX.Y. If you intend to install multiple versions using the same prefix you must decide which version (if any) is your \"primary\" version. Install that version using make install. Install all other versions using make altinstall. For example, if you want to install Python 2.5, 2.6 and 3.0 with 2.6 being the primary version, you would execute make install in your 2.6 build directory and make altinstall in the others."} +{"question_id": 3595363, "score": 160, "creation_date": 1283096189, "tags": ["python", "properties", "configparser"], "instruction": "Properties file in python (similar to Java Properties)\n\nGiven the following format (.properties or .ini): propertyName1=propertyValue1 propertyName2=propertyValue2 ... propertyNameN=propertyValueN For Java there is the Properties class that offers functionality to parse / interact with the above format. Is there something similar in python's standard library (2.x) ? If not, what other alternatives do I have ?", "output": "For .ini files there is the configparser module that provides a format compatible with .ini files. Anyway there's nothing available for parsing complete .properties files, when I have to do that I simply use jython (I'm talking about scripting)."} +{"question_id": 11696472, "score": 160, "creation_date": 1343428127, "tags": ["python", "file-io", "seek"], "instruction": "seek() function?\n\nPlease excuse my confusion here but I have read the documentation regarding the seek() function in Python (after having to use it). Although it helped me I am still a bit confused on the actual meaning of what it does, any explanations are much appreciated. Thank you.", "output": "Regarding seek() there's not too much to worry about. First of all, it is useful when operating over an open file. 
It's important to note that its syntax is as follows: fp.seek(offset, from_what) where fp is the file pointer you're working with; offset means how many positions you will move; from_what defines your point of reference: 0: means your reference point is the beginning of the file 1: means your reference point is the current file position 2: means your reference point is the end of the file if omitted, from_what defaults to 0. Never forget that when managing files, there'll always be a position inside that file where you are currently working on. When just open, that position is the beginning of the file, but as you work with it, you may advance. seek will be useful to you when you need to walk along that open file, just as a path you are traveling into."} +{"question_id": 3091316, "score": 160, "creation_date": 1277194051, "tags": ["python", "conditional-operator", "language-features"], "instruction": "Conditional/ternary operator for expressions in Python\n\nMany languages have a conditional (AKA ternary) operator. This allows you to make terse choices between two values based on a condition, which makes expressions, including assignments, concise. My code has conditional assignments: if condition: var = something else: var = something_else In C it'd be: var = condition ? something : something_else; In Python, is there a trick you can use to get the assignment onto a single line to approximate the advantages of the conditional operator?", "output": "Python has such an operator: variable = something if condition else something_else Alternatively, although not recommended (see karadoc's comment): variable = (condition and something) or something_else"} +{"question_id": 40050397, "score": 160, "creation_date": 1476472038, "tags": ["python", "tensorflow", "machine-learning", "keras", "theano"], "instruction": "Deep-Learning Nan loss reasons\n\nWhat would cause a Convolutional Neural Network to diverge? Specifics: I am using Tensorflow's iris_training model with some of my own data and keep getting ERROR:tensorflow:Model diverged with loss = NaN. Traceback... tensorflow.contrib.learn.python.learn.monitors.NanLossDuringTrainingError: NaN loss during training. Traceback originated with line: tf.contrib.learn.DNNClassifier(feature_columns=feature_columns, hidden_units=[300, 300, 300], #optimizer=tf.train.ProximalAdagradOptimizer(learning_rate=0.001, l1_regularization_strength=0.00001), n_classes=11, model_dir=\"/tmp/iris_model\") I've tried adjusting the optimizer, using a zero for learning rate, and using no optimizer.", "output": "There are lots of things I have seen make a model diverge. Too high of a learning rate. You can often tell if this is the case if the loss begins to increase and then diverges to infinity. I am not to familiar with the DNNClassifier but I am guessing it uses the categorical cross entropy cost function. This involves taking the log of the prediction which diverges as the prediction approaches zero. That is why people usually add a small epsilon value to the prediction to prevent this divergence. I am guessing the DNNClassifier probably does this or uses the tensorflow opp for it. Probably not the issue. Other numerical stability issues can exist such as division by zero where adding the epsilon can help. Another less obvious one if the square root whose derivative can diverge if not properly simplified when dealing with finite precision numbers. Yet again I doubt this is the issue in the case of the DNNClassifier. You may have an issue with the input data. 
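As a small illustration of the seek() answer above; the file name and contents are made up for the example:

    # Write a throwaway file so the example is self-contained
    with open('example.txt', 'w') as fp:
        fp.write('0123456789')

    with open('example.txt', 'rb') as fp:   # binary mode so seeking from the current position or the end is allowed
        fp.seek(3)          # from_what=0 (default): 3 bytes from the beginning
        print(fp.read(2))   # b'34'
        fp.seek(2, 1)       # 2 bytes forward from the current position
        print(fp.read(1))   # b'7'
        fp.seek(-2, 2)      # 2 bytes back from the end
        print(fp.read())    # b'89'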
Try calling assert not np.any(np.isnan(x)) on the input data to make sure you are not introducing the nan. Also make sure all of the target values are valid. Finally, make sure the data is properly normalized. You probably want to have the pixels in the range [-1, 1] and not [0, 255]. The labels must be in the domain of the loss function, so if using a logarithmic-based loss function all labels must be non-negative (as noted by evan pu and the comments below)."} +{"question_id": 7514964, "score": 160, "creation_date": 1316695764, "tags": ["python", "django", "django-models"], "instruction": "Django - how to create a file and save it to a model's FileField?\n\nHere's my model. What I want to do is generate a new file and overwrite the existing one whenever a model instance is saved: class Kitten(models.Model): claw_size = ... license_file = models.FileField(blank=True, upload_to='license') def save(self, *args, **kwargs): #Generate a new license file overwriting any previous version #and update file path self.license_file = ??? super(Request,self).save(*args, **kwargs) I see lots of documentation about how to upload a file. But how do I generate a file, assign it to a model field and have Django store it in the right place?", "output": "You want to have a look at FileField and FieldFile in the Django docs, and especially FieldFile.save(). Basically, a field declared as a FileField, when accessed, gives you an instance of class FieldFile, which gives you several methods to interact with the underlying file. So, what you need to do is: self.license_file.save(new_name, new_contents) where new_name is the filename you wish assigned and new_contents is the content of the file. Note that new_contents must be an instance of either django.core.files.File or django.core.files.base.ContentFile (see given links to manual for the details). The two choices boil down to: from django.core.files.base import ContentFile, File # Using File with open('/path/to/file') as f: self.license_file.save(new_name, File(f)) # Using ContentFile self.license_file.save(new_name, ContentFile('A string with the file content'))"} +{"question_id": 17534106, "score": 160, "creation_date": 1373310377, "tags": ["python", "numpy", "pandas", "nan"], "instruction": "What is the difference between NaN and None?\n\nI am reading two columns of a csv file using pandas readcsv() and then assigning the values to a dictionary. The columns contain strings of numbers and letters. Occasionally there are cases where a cell is empty. In my opinion, the value read to that dictionary entry should be None but instead nan is assigned. Surely None is more descriptive of an empty cell as it has a null value, whereas nan just says that the value read is not a number. Is my understanding correct, what IS the difference between None and nan? Why is nan assigned instead of None? Also, my dictionary check for any empty cells has been using numpy.isnan(): for k, v in my_dict.iteritems(): if np.isnan(v): But this gives me an error saying that I cannot use this check for v. I guess it is because an integer or float variable, not a string is meant to be used. If this is true, how can I check v for an \"empty cell\"/nan case?", "output": "NaN is used as a placeholder for missing data consistently in pandas, consistency is good. I usually read/translate NaN as \"missing\". Also see the 'working with missing data' section in the docs. 
Wes writes in the docs 'choice of NA-representation': After years of production use [NaN] has proven, at least in my opinion, to be the best decision given the state of affairs in NumPy and Python in general. The special value NaN (Not-A-Number) is used everywhere as the NA value, and there are API functions isna and notna which can be used across the dtypes to detect NA values. ... Thus, I have chosen the Pythonic \u201cpracticality beats purity\u201d approach and traded integer NA capability for a much simpler approach of using a special value in float and object arrays to denote NA, and promoting integer arrays to floating when NAs must be introduced. Note: the \"gotcha\" that integer Series containing missing data are upcast to floats. In my opinion the main reason to use NaN (over None) is that it can be stored with numpy's float64 dtype, rather than the less efficient object dtype, see NA type promotions. # without forcing dtype it changes None to NaN! s_bad = pd.Series([1, None], dtype=object) s_good = pd.Series([1, np.nan]) In [13]: s_bad.dtype Out[13]: dtype('O') In [14]: s_good.dtype Out[14]: dtype('float64') Jeff comments (below) on this: np.nan allows for vectorized operations; its a float value, while None, by definition, forces object type, which basically disables all efficiency in numpy. So repeat 3 times fast: object==bad, float==good Saying that, many operations may still work just as well with None vs NaN (but perhaps are not supported i.e. they may sometimes give surprising results): In [15]: s_bad.sum() Out[15]: 1 In [16]: s_good.sum() Out[16]: 1.0 To answer the second question: You should be using isna and notna to test for missing data (NaN)."} +{"question_id": 3424899, "score": 159, "creation_date": 1281106129, "tags": ["python", "date"], "instruction": "Return datetime object of previous month\n\nIf only timedelta had a month argument in it's constructor. So what's the simplest way to do this? EDIT: I wasn't thinking too hard about this as was pointed out below. Really what I wanted was any day in the last month because eventually I'm going to grab the year and month only. So given a datetime object, what's the simplest way to return any datetime object that falls in the previous month?", "output": "Try this: def monthdelta(date, delta): m, y = (date.month+delta) % 12, date.year + ((date.month)+delta-1) // 12 if not m: m = 12 d = min(date.day, [31, 29 if y%4==0 and (not y%100==0 or y%400 == 0) else 28, 31,30,31,30,31,31,30,31,30,31][m-1]) return date.replace(day=d,month=m, year=y) >>> for m in range(-12, 12): print(monthdelta(datetime.now(), m)) 2009-08-06 16:12:27.823000 2009-09-06 16:12:27.855000 2009-10-06 16:12:27.870000 2009-11-06 16:12:27.870000 2009-12-06 16:12:27.870000 2010-01-06 16:12:27.870000 2010-02-06 16:12:27.870000 2010-03-06 16:12:27.886000 2010-04-06 16:12:27.886000 2010-05-06 16:12:27.886000 2010-06-06 16:12:27.886000 2010-07-06 16:12:27.886000 2010-08-06 16:12:27.901000 2010-09-06 16:12:27.901000 2010-10-06 16:12:27.901000 2010-11-06 16:12:27.901000 2010-12-06 16:12:27.901000 2011-01-06 16:12:27.917000 2011-02-06 16:12:27.917000 2011-03-06 16:12:27.917000 2011-04-06 16:12:27.917000 2011-05-06 16:12:27.917000 2011-06-06 16:12:27.933000 2011-07-06 16:12:27.933000 >>> monthdelta(datetime(2010,3,30), -1) datetime.datetime(2010, 2, 28, 0, 0) >>> monthdelta(datetime(2008,3,30), -1) datetime.datetime(2008, 2, 29, 0, 0) Edit Corrected to handle the day as well. 
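To make the closing isna/notna recommendation in the NaN-vs-None answer above concrete, here is a short sketch; the sample data is an assumption:

    import numpy as np
    import pandas as pd

    s = pd.Series([1.0, np.nan, 3.0])
    d = {'a': 1.0, 'b': float('nan'), 'c': None}

    print(pd.isna(s))          # element-wise: False, True, False
    print(s.notna().sum())     # 2 non-missing values

    # pd.isna also works on scalars, unlike np.isnan which fails on None and strings
    missing = {k: v for k, v in d.items() if pd.isna(v)}
    print(missing)             # {'b': nan, 'c': None}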
Edit See also the answer from puzzlement which points out a simpler calculation for d: d = min(date.day, calendar.monthrange(y, m)[1])"} +{"question_id": 1364173, "score": 159, "creation_date": 1251832633, "tags": ["python"], "instruction": "Stopping Python using Ctrl + C\n\nI have a Python script that uses threads and makes lots of HTTP requests. I think, while a HTTP request (using urllib2) is reading, it's blocking and not responding to Ctrl + C to stop the program. Is there a way around this?", "output": "On Windows, the only sure way is to use Ctrl + Break. It stops every Python script instantly! (Note that on some keyboards, \"Break\" is labeled as \"Pause\".)"} +{"question_id": 25470844, "score": 159, "creation_date": 1408876889, "tags": ["python", "argparse"], "instruction": "Specify date format for Python argparse input arguments\n\nI have a Python script that requires some command line inputs and I am using argparse for parsing them. I found the documentation a bit confusing and couldn't find a way to check for a format in the input parameters. What I mean by checking format is explained with this example script: parser.add_argument('-s', \"--startdate\", help=\"The Start Date - format YYYY-MM-DD \", required=True) parser.add_argument('-e', \"--enddate\", help=\"The End Date format YYYY-MM-DD (Inclusive)\", required=True) parser.add_argument('-a', \"--accountid\", type=int, help='Account ID for the account for which data is required (Default: 570)') parser.add_argument('-o', \"--outputpath\", help='Directory where output needs to be stored (Default: ' + os.path.dirname(os.path.abspath(__file__))) I need to check for option -s and -e that the input by the user is in the format YYYY-MM-DD. Is there an option in argparse that I do not know of which accomplishes this?", "output": "Per the documentation: The type keyword argument of add_argument() allows any necessary type-checking and type conversions to be performed ... The argument to type can be any callable that accepts a single string. You could do something like: import argparse import datetime def valid_date(s: str) -> datetime.datetime: try: return datetime.datetime.strptime(s, \"%Y-%m-%d\") except ValueError: raise argparse.ArgumentTypeError(f\"not a valid date: {s!r}\") Then use that as type: parser.add_argument( \"-s\", \"--startdate\", help=\"The Start Date - format YYYY-MM-DD\", required=True, type=valid_date ) If the user supplies an invalid value, the feedback will look like: error: argument -s/--startdate: not a valid date: 'foo'"} +{"question_id": 1546367, "score": 159, "creation_date": 1255127386, "tags": ["python", "email", "testing"], "instruction": "How to send mail with To, CC and BCC?\n\nI need to populate a few hundred email boxes with various messages for testing purposes and I was going to use smtplib for that. I need to send messages not only To specific mailboxes, but CC and BCC them as well. How do I do that?", "output": "Email headers don't matter to the SMTP server. Just add CC and BCC recipients to to_addrs when sending emails. For CC, add them to the CC header. 
toaddr = 'buffy@sunnydale.k12.ca.us' cc = ['alexander@sunydale.k12.ca.us','willow@sunnydale.k12.ca.us'] bcc = ['chairman@slayerscouncil.uk'] fromaddr = 'giles@sunnydale.k12.ca.us' message_subject = \"disturbance in sector 7\" message_text = \"Three are dead in an attack in the sewers below sector 7.\" message = \"From: %s\\r\\n\" % fromaddr + \"To: %s\\r\\n\" % toaddr + \"CC: %s\\r\\n\" % \",\".join(cc) + \"Subject: %s\\r\\n\" % message_subject + \"\\r\\n\" + message_text toaddrs = [toaddr] + cc + bcc server = smtplib.SMTP('smtp.sunnydale.k12.ca.us') server.set_debuglevel(1) server.sendmail(fromaddr, toaddrs, message) server.quit()"} +{"question_id": 3357369, "score": 159, "creation_date": 1280350149, "tags": ["python", "max"], "instruction": "How do I find the maximum (larger, greater) of 2 numbers?\n\nI have two variables value and run: value = -9999 run = problem.getscore() How can I find out which one is greater, and get the greater value? See also Find the greatest (largest, maximum) number in a list of numbers - those approaches work (and are shown here), but two numbers can also be compared directly.", "output": "Use the builtin function max. Example: max(2, 4) returns 4. Just for giggles, there's a min as well...should you need it. :P"} +{"question_id": 16414559, "score": 159, "creation_date": 1367915317, "tags": ["python"], "instruction": "How to use hex() without 0x in Python?\n\nThe hex() function in python, puts the leading characters 0x in front of the number. Is there anyway to tell it NOT to put them? So 0xfa230 will be fa230. The code is import fileinput f = open('hexa', 'w') for line in fileinput.input(['pattern0.txt']): f.write(hex(int(line))) f.write('\\n')", "output": "(Recommended) Python 3 f-strings: Answered by @GringoSuave >>> i = 3735928559 >>> f'{i:x}' 'deadbeef' Alternatives: format builtin function (good for single values only) >>> format(3735928559, 'x') 'deadbeef' And sometimes we still may need to use str.format formatting in certain situations @Eumiro (Though I would still recommend f-strings in most situations) >>> '{:x}'.format(3735928559) 'deadbeef' (Legacy) f-strings should solve all of your needs, but printf-style formatting is what we used to do @msvalkon >>> '%x' % 3735928559 'deadbeef' Without string formatting @jsbueno >>> i = 3735928559 >>> i.to_bytes(4, \"big\").hex() 'deadbeef' Hacky Answers (avoid) hex(i)[2:] @GuillaumeLema\u00eetre >>> i = 3735928559 >>> hex(i)[2:] 'deadbeef' This relies on string slicing instead of using a function / method made specifically for formatting as hex. This is why it may give unexpected output for negative numbers: >>> i = -3735928559 >>> hex(i)[2:] 'xdeadbeef' >>> f'{i:x}' '-deadbeef'"} +{"question_id": 14693701, "score": 159, "creation_date": 1360004824, "tags": ["python", "string", "escaping", "ansi-escape"], "instruction": "How can I remove the ANSI escape sequences from a string in python\n\nHere is a snippet that includes my string. 'ls\\r\\n\\x1b[00m\\x1b[01;31mexamplefile.zip\\x1b[00m\\r\\n\\x1b[01;31m' The string was returned from an SSH command that I executed. I can't use the string in its current state because it contains ANSI standardized escape sequences. 
How can I programmatically remove the escape sequences so that the only part of the string remaining is 'examplefile.zip'.", "output": "Delete them with a regular expression: import re # 7-bit C1 ANSI sequences ansi_escape = re.compile(r''' \\x1B # ESC (?: # 7-bit C1 Fe (except CSI) [@-Z\\\\-_] | # or [ for CSI, followed by a control sequence \\[ [0-?]* # Parameter bytes [ -/]* # Intermediate bytes [@-~] # Final byte ) ''', re.VERBOSE) result = ansi_escape.sub('', sometext) or, without the VERBOSE flag, in condensed form: ansi_escape = re.compile(r'\\x1B(?:[@-Z\\\\-_]|\\[[0-?]*[ -/]*[@-~])') result = ansi_escape.sub('', sometext) Demo: >>> import re >>> ansi_escape = re.compile(r'\\x1B(?:[@-Z\\\\-_]|\\[[0-?]*[ -/]*[@-~])') >>> sometext = 'ls\\r\\n\\x1b[00m\\x1b[01;31mexamplefile.zip\\x1b[00m\\r\\n\\x1b[01;31m' >>> ansi_escape.sub('', sometext) 'ls\\r\\nexamplefile.zip\\r\\n' The above regular expression covers all 7-bit ANSI C1 escape sequences, but not the 8-bit C1 escape sequence openers. The latter are never used in today's UTF-8 world where the same range of bytes have a different meaning. If you do need to cover the 8-bit codes too (and are then, presumably, working with bytes values) then the regular expression becomes a bytes pattern like this: # 7-bit and 8-bit C1 ANSI sequences ansi_escape_8bit = re.compile(br''' (?: # either 7-bit C1, two bytes, ESC Fe (omitting CSI) \\x1B [@-Z\\\\-_] | # or a single 8-bit byte Fe (omitting CSI) [\\x80-\\x9A\\x9C-\\x9F] | # or CSI + control codes (?: # 7-bit CSI, ESC [ \\x1B\\[ | # 8-bit CSI, 9B \\x9B ) [0-?]* # Parameter bytes [ -/]* # Intermediate bytes [@-~] # Final byte ) ''', re.VERBOSE) result = ansi_escape_8bit.sub(b'', somebytesvalue) which can be condensed down to # 7-bit and 8-bit C1 ANSI sequences ansi_escape_8bit = re.compile( br'(?:\\x1B[@-Z\\\\-_]|[\\x80-\\x9A\\x9C-\\x9F]|(?:\\x1B\\[|\\x9B)[0-?]*[ -/]*[@-~])' ) result = ansi_escape_8bit.sub(b'', somebytesvalue) For more information, see: the ANSI escape codes overview on Wikipedia ECMA-48 standard, 5th edition (especially sections 5.3 and 5.4) The example you gave contains 4 CSI (Control Sequence Introducer) codes, as marked by the \\x1B[ or ESC [ opening bytes, and each contains a SGR (Select Graphic Rendition) code, because they each end in m. The parameters (separated by ; semicolons) in between those tell your terminal what graphic rendition attributes to use. So for each \\x1B[....m sequence, the 3 codes that are used are: 0 (or 00 in this example): reset, disable all attributes 1 (or 01 in the example): bold 31: red (foreground) However, there is more to ANSI than just CSI SGR codes. With CSI alone you can also control the cursor, clear lines or the whole display, or scroll (provided the terminal supports this of course). And beyond CSI, there are codes to select alternative fonts (SS2 and SS3), to send 'private messages' (think passwords), to communicate with the terminal (DCS), the OS (OSC), or the application itself (APC, a way for applications to piggy-back custom control codes on to the communication stream), and further codes to help define strings (SOS, Start of String, ST String Terminator) or to reset everything back to a base state (RIS). The above regexes cover all of these. Note that the above regex only removes the ANSI C1 codes, however, and not any additional data that those codes may be marking up (such as the strings sent between an OSC opener and the terminating ST code). 
Removing those would require additional work outside the scope of this answer."} +{"question_id": 33778664, "score": 159, "creation_date": 1447845411, "tags": ["python", "apache-spark", "dataframe", "pyspark", "apache-spark-sql"], "instruction": "Spark Dataframe distinguish columns with duplicated name\n\nSo as I know in Spark Dataframe, that for multiple columns can have the same name as shown in below dataframe snapshot: [ Row(a=107831, f=SparseVector(5, {0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0, 4: 0.0}), a=107831, f=SparseVector(5, {0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0, 4: 0.0})), Row(a=107831, f=SparseVector(5, {0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0, 4: 0.0}), a=125231, f=SparseVector(5, {0: 0.0, 1: 0.0, 2: 0.0047, 3: 0.0, 4: 0.0043})), Row(a=107831, f=SparseVector(5, {0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0, 4: 0.0}), a=145831, f=SparseVector(5, {0: 0.0, 1: 0.2356, 2: 0.0036, 3: 0.0, 4: 0.4132})), Row(a=107831, f=SparseVector(5, {0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0, 4: 0.0}), a=147031, f=SparseVector(5, {0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0, 4: 0.0})), Row(a=107831, f=SparseVector(5, {0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0, 4: 0.0}), a=149231, f=SparseVector(5, {0: 0.0, 1: 0.0032, 2: 0.2451, 3: 0.0, 4: 0.0042})) ] Above result is created by join with a dataframe to itself, you can see there are 4 columns with both two a and f. The problem is is there when I try to do more calculation with the a column, I cant find a way to select the a, I have try df[0] and df.select('a'), both returned me below error mesaage: AnalysisException: Reference 'a' is ambiguous, could be: a#1333L, a#1335L. Is there anyway in Spark API that I can distinguish the columns from the duplicated names again? or maybe some way to let me change the column names?", "output": "I would recommend that you change the column names for your join. df1.select(col(\"a\") as \"df1_a\", col(\"f\") as \"df1_f\") .join(df2.select(col(\"a\") as \"df2_a\", col(\"f\") as \"df2_f\"), col(\"df1_a\" === col(\"df2_a\")) The resulting DataFrame will have schema (df1_a, df1_f, df2_a, df2_f)"} +{"question_id": 14524322, "score": 159, "creation_date": 1359125741, "tags": ["python", "date", "datetime", "strptime", "strftime"], "instruction": "How to convert a date string to different format\n\nI need to convert date string \"2013-1-25\" to string \"1/25/13\" in python. I looked at the datetime.strptime but still can't find a way for this.", "output": "I assume I have import datetime before running each of the lines of code below datetime.datetime.strptime(\"2013-1-25\", '%Y-%m-%d').strftime('%m/%d/%y') prints \"01/25/13\". If you can't live with the leading zero, try this: dt = datetime.datetime.strptime(\"2013-1-25\", '%Y-%m-%d') print '{0}/{1}/{2:02}'.format(dt.month, dt.day, dt.year % 100) This prints \"1/25/13\". EDIT: This may not work on every platform: datetime.datetime.strptime(\"2013-1-25\", '%Y-%m-%d').strftime('%m/%d/%y')"} +{"question_id": 19632075, "score": 159, "creation_date": 1382955285, "tags": ["python", "pandas", "csv"], "instruction": "How to read file with space separated values in pandas\n\nI try to read the file into pandas. The file has values separated by space, but with different number of spaces I tried: pd.read_csv('file.csv', delimiter=' ') but it doesn't work", "output": "Add delim_whitespace=True argument, it's faster than regex. 
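The join snippet in the Spark answer above is written in Scala; since the question is tagged pyspark, here is a hedged PySpark sketch of the same rename-before-join idea. The df1/df2 and column names follow the question; the sample rows are illustrative:

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col

    spark = SparkSession.builder.getOrCreate()
    df1 = spark.createDataFrame([(107831, 0.1), (125231, 0.2)], ['a', 'f'])
    df2 = spark.createDataFrame([(107831, 0.3), (145831, 0.4)], ['a', 'f'])

    joined = (
        df1.select(col('a').alias('df1_a'), col('f').alias('df1_f'))
           .join(df2.select(col('a').alias('df2_a'), col('f').alias('df2_f')),
                 col('df1_a') == col('df2_a'))
    )
    joined.select('df1_a', 'df1_f', 'df2_f').show()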
Note however that after pandas 2.2.0 this is deprecated and it's better to use sep=\"\\s+\""} +{"question_id": 33679930, "score": 159, "creation_date": 1447357327, "tags": ["python", "python-3.x", "enums"], "instruction": "How to extend Python Enum?\n\nIs it possible to extend classes created using the new Enum functionality in Python 3.4? How? Simple subclassing doesn't appear to work. An example like from enum import Enum class EventStatus(Enum): success = 0 failure = 1 class BookingStatus(EventStatus): duplicate = 2 unknown = 3 will give an exception like TypeError: Cannot extend enumerations or (in more recent versions) TypeError: BookingStatus: cannot extend enumeration 'EventStatus'. How can I make it so that BookingStatus reuses the enumeration values from EventStatus and adds more?", "output": "Subclassing an enumeration is allowed only if the enumeration does not define any members. Allowing subclassing of enums that define members would lead to a violation of some important invariants of types and instances. https://docs.python.org/3/howto/enum.html#restricted-enum-subclassing So no, it's not directly possible."} +{"question_id": 39869793, "score": 159, "creation_date": 1475658537, "tags": ["python", "sqlalchemy"], "instruction": "When do I need to use sqlalchemy back_populates?\n\nWhen I try SQLAlchemy Relation Example following this guide: Basic Relationship Patterns I have this code #!/usr/bin/env python # encoding: utf-8 from sqlalchemy import create_engine from sqlalchemy import Table, Column, Integer, ForeignKey from sqlalchemy.orm import relationship, sessionmaker from sqlalchemy.ext.declarative import declarative_base engine = create_engine('sqlite:///:memory:', echo=True) Session = sessionmaker(bind=engine) session = Session() Base = declarative_base(bind=engine) class Parent(Base): __tablename__ = 'parent' id = Column(Integer, primary_key=True) children = relationship(\"Child\") class Child(Base): __tablename__ = 'child' id = Column(Integer, primary_key=True) parent_id = Column(Integer, ForeignKey('parent.id')) parent = relationship(\"Parent\") Base.metadata.create_all() p = Parent() session.add(p) session.commit() c = Child(parent_id=p.id) session.add(c) session.commit() print \"children: {}\".format(p.children[0].id) print \"parent: {}\".format(c.parent.id) It works well, but in the guide, it says the model should be: class Parent(Base): __tablename__ = 'parent' id = Column(Integer, primary_key=True) children = relationship(\"Child\", back_populates=\"parent\") class Child(Base): __tablename__ = 'child' id = Column(Integer, primary_key=True) parent_id = Column(Integer, ForeignKey('parent.id')) parent = relationship(\"Parent\", back_populates=\"children\") Why don't I need back_populates or backref in my example? When should I use one or the other?", "output": "If you use backref you don't need to declare the relationship on the second table. class Parent(Base): __tablename__ = 'parent' id = Column(Integer, primary_key=True) children = relationship(\"Child\", backref=\"parent\") class Child(Base): __tablename__ = 'child' id = Column(Integer, primary_key=True) parent_id = Column(Integer, ForeignKey('parent.id')) If you're not using backref, and defining the relationship's separately, then if you don't use back_populates, sqlalchemy won't know to connect the relationships, so that modifying one also modifies the other. 
So, in your example, where you've defined the relationship's separately, but didn't provide a back_populates argument, modifying one field wouldn't automatically update the other in your transaction. >>> parent = Parent() >>> child = Child() >>> child.parent = parent >>> print(parent.children) [] See how it didn't automatically fill out the children field? Now, if you supply a back_populates argument, sqlalchemy will connect the fields. class Parent(Base): __tablename__ = 'parent' id = Column(Integer, primary_key=True) children = relationship(\"Child\", back_populates=\"parent\") class Child(Base): __tablename__ = 'child' id = Column(Integer, primary_key=True) parent_id = Column(Integer, ForeignKey('parent.id')) parent = relationship(\"Parent\", back_populates=\"children\") So now we get >>> parent = Parent() >>> child = Child() >>> child.parent = parent >>> print(parent.children) [Child(...)] Sqlalchemy knows these two fields are related now, and will update each as the other is updated. It's worth noting that using backref will do this, too. Using back_populates is nice if you want to define the relationships on every class, so it's easy to see all the fields just be glancing at the model class, instead of having to look at other classes that define fields via backref."} +{"question_id": 9189172, "score": 159, "creation_date": 1328685255, "tags": ["python", "string"], "instruction": "Why doesn't calling a string method (such as .replace or .strip) modify (mutate) the string?\n\nI wrote this Python program to do a simple string replacement: X = \"hello world\" X.replace(\"hello\", \"goodbye\") After this, the value of X was \"hello world\". Why didn't it changed to \"goodbye world\" instead?", "output": "This is because strings are immutable in Python. Which means that X.replace(\"hello\",\"goodbye\") returns a copy of X with replacements made. Because of that you need to replace this line: X.replace(\"hello\", \"goodbye\") with this line: X = X.replace(\"hello\", \"goodbye\") More broadly, this is true for all Python string methods that change a string's content, e.g. replace,strip,translate,lower/upper,join,... You must assign their output to something if you want to use it and not throw it away, e.g. X = X.strip(' \\t') X2 = X.translate(...) Y = X.lower() Z = X.upper() A = X.join(':') B = X.capitalize() C = X.casefold() and so on."} +{"question_id": 9127434, "score": 159, "creation_date": 1328266290, "tags": ["python", "matplotlib", "gridlines"], "instruction": "How to create major and minor gridlines with different linestyles\n\nI am currently using matplotlib.pyplot to create graphs and would like to have the major gridlines solid and black and the minor ones either greyed or dashed. In the grid properties, which=both/major/mine, and then color and linestyle are defined simply by linestyle. Is there a way to specify minor linestyle only? The appropriate code I have so far is plt.plot(current, counts, 'rd', markersize=8) plt.yscale('log') plt.grid(b=True, which='both', color='0.65', linestyle='-')", "output": "Actually, it is as simple as setting major and minor separately: In [9]: plot([23, 456, 676, 89, 906, 34, 2345]) Out[9]: [<matplotlib.lines.Line2D at 0x6112f90>] In [10]: yscale('log') In [11]: grid(visible=True, which='major', color='b', linestyle='-') In [12]: grid(visible=True, which='minor', color='r', linestyle='--') The gotcha with minor grids is that you have to have minor tick marks turned on too. 
In the above code this is done by yscale('log'), but it can also be done with plt.minorticks_on(). Note: before matplotlib 3.5, visible parameter was named b"} +{"question_id": 8853063, "score": 159, "creation_date": 1326469054, "tags": ["python", "pep8"], "instruction": "PEP 8, why no spaces around '=' in keyword argument or a default parameter value?\n\nWhy does PEP 8 recommend not having spaces around = in a keyword argument or a default parameter value? Is this inconsistent with recommending spaces around every other occurrence of = in Python code? How is: func(1, 2, very_long_variable_name=another_very_long_variable_name) better than: func(1, 2, very_long_variable_name = another_very_long_variable_name) Any links to discussion/explanation by Python's BDFL will be appreciated. Mind, this question is more about kwargs than default values, i just used the phrasing from PEP 8. I'm not soliciting opinions. I'm asking for reasons behind this decision. It's more like asking why would I use { on the same line as if statement in a C program, not whether I should use it or not.", "output": "I guess that it is because a keyword argument is essentially different than a variable assignment. For example, there is plenty of code like this: kw1 = some_value kw2 = some_value kw3 = some_value some_func( 1, 2, kw1=kw1, kw2=kw2, kw3=kw3) As you see, it makes complete sense to assign a variable to a keyword argument named exactly the same, so it improves readability to see them without spaces. It is easier to recognize that we are using keyword arguments and not assigning a variable to itself. Also, parameters tend to go in the same line whereas assignments usually are each one in their own line, so saving space is likely to be an important matter there."} +{"question_id": 48483348, "score": 158, "creation_date": 1517116109, "tags": ["python", "python-3.x", "asynchronous", "concurrency", "python-asyncio"], "instruction": "How to limit concurrency with Python asyncio?\n\nLet's assume we have a bunch of links to download and each of the link may take a different amount of time to download. And I'm allowed to download using utmost 3 connections only. Now, I want to ensure that I do this efficiently using asyncio. Here's what I'm trying to achieve: At any point in time, try to ensure that I have atleast 3 downloads running. Connection 1: 1---------7---9--- Connection 2: 2---4----6----- Connection 3: 3-----5---8----- The numbers represent the download links, while hyphens represent Waiting for download. 
Here is the code that I'm using right now from random import randint import asyncio count = 0 async def download(code, permit_download, no_concurrent, downloading_event): global count downloading_event.set() wait_time = randint(1, 3) print('downloading {} will take {} second(s)'.format(code, wait_time)) await asyncio.sleep(wait_time) # I/O, context will switch to main function print('downloaded {}'.format(code)) count -= 1 if count < no_concurrent and not permit_download.is_set(): permit_download.set() async def main(loop): global count permit_download = asyncio.Event() permit_download.set() downloading_event = asyncio.Event() no_concurrent = 3 i = 0 while i < 9: if permit_download.is_set(): count += 1 if count >= no_concurrent: permit_download.clear() loop.create_task(download(i, permit_download, no_concurrent, downloading_event)) await downloading_event.wait() # To force context to switch to download function downloading_event.clear() i += 1 else: await permit_download.wait() await asyncio.sleep(9) if __name__ == '__main__': loop = asyncio.get_event_loop() try: loop.run_until_complete(main(loop)) finally: loop.close() And the output is as expected: downloading 0 will take 2 second(s) downloading 1 will take 3 second(s) downloading 2 will take 1 second(s) downloaded 2 downloading 3 will take 2 second(s) downloaded 0 downloading 4 will take 3 second(s) downloaded 1 downloaded 3 downloading 5 will take 2 second(s) downloading 6 will take 2 second(s) downloaded 5 downloaded 6 downloaded 4 downloading 7 will take 1 second(s) downloading 8 will take 1 second(s) downloaded 7 downloaded 8 But here are my questions: At the moment, I'm simply waiting for 9 seconds to keep the main function running till the downloads are complete. Is there an efficient way of waiting for the last download to complete before exiting the main function? (I know there's asyncio.wait, but I'll need to store all the task references for it to work) What's a good library that does this kind of task? I know javascript has a lot of async libraries, but what about Python? Edit: 2. What's a good library that takes care of common async patterns? (Something like async)", "output": "Before reading the rest of this answer, please note that the idiomatic way of limiting the number of parallel tasks this with asyncio is using asyncio.Semaphore, as shown in Mikhail's answer and elegantly encapsulated in Andrei's answer. This answer contains working, but a bit more complicated ways of achieving the same. I am leaving the answer because in some cases this approach can have advantages over a semaphore, specifically when the amount of items to process is very large or unbounded, and you cannot create all the coroutines in advance. In that case the second (queue-based) solution in this answer is what you want. But in most everyday situations, such as parallel download through aiohttp, one should use a semaphore instead. You basically need a fixed-size pool of download tasks. asyncio doesn't come with a pre-made task pool, but it is easy to create one: simply keep a set of tasks and don't allow it to grow past the limit. 
Although the question states your reluctance to go down that route, the code ends up much more elegant: import asyncio, random async def download(code): wait_time = random.randint(1, 3) print('downloading {} will take {} second(s)'.format(code, wait_time)) await asyncio.sleep(wait_time) # I/O, context will switch to main function print('downloaded {}'.format(code)) async def main(loop): no_concurrent = 3 dltasks = set() i = 0 while i < 9: if len(dltasks) >= no_concurrent: # Wait for some download to finish before adding a new one _done, dltasks = await asyncio.wait( dltasks, return_when=asyncio.FIRST_COMPLETED) dltasks.add(loop.create_task(download(i))) i += 1 # Wait for the remaining downloads to finish await asyncio.wait(dltasks) An alternative is to create a fixed number of coroutines doing the downloading, much like a fixed-size thread pool, and feed them work using an asyncio.Queue. This removes the need to manually limit the number of downloads, which will be automatically limited by the number of coroutines invoking download(): # download() defined as above async def download_worker(q): while True: code = await q.get() await download(code) q.task_done() async def main(loop): q = asyncio.Queue() workers = [loop.create_task(download_worker(q)) for _ in range(3)] i = 0 while i < 9: await q.put(i) i += 1 await q.join() # wait for all tasks to be processed for worker in workers: worker.cancel() await asyncio.gather(*workers, return_exceptions=True) As for your other question, the obvious choice would be aiohttp."} +{"question_id": 11892729, "score": 158, "creation_date": 1344550334, "tags": ["python", "python-requests", "pyquery"], "instruction": "How can I \"log in\" to a website using Python's Requests module?\n\nI am trying to post a request to log in to a website using the Requests module in Python, but it\u2019s not really working. I'm new to this...so I can't figure out if I should make my Username and Password cookies or some type of HTTP authorization thing I found (??). from pyquery import PyQuery import requests url = 'http://www.locationary.com/home/index2.jsp' So now, I think I'm supposed to use post() and cookies.... ck = {'inUserName': 'USERNAME/EMAIL', 'inUserPass': 'PASSWORD'} r = requests.post(url, cookies=ck) content = r.text q = PyQuery(content) title = q(\"title\").text() print title I have a feeling that I'm doing the cookies thing wrong...I don't know. If it doesn't log in correctly, the title of the home page should come out to \"Locationary.com\", and if it does, it should be \"Home Page.\" What is the explanation for a few things about Requests and cookies? ...It still didn't really work yet. Okay...so this is what the home page HTML says before you log in: </td><td><img src=\"http://www.locationary.com/img/LocationaryImgs/icons/txt_email.gif\"> </td> <td><input class=\"Data_Entry_Field_Login\" type=\"text\" name=\"inUserName\" id=\"inUserName\" size=\"25\"></td> <td><img src=\"http://www.locationary.com/img/LocationaryImgs/icons/txt_password.gif\"> </td> <td><input class=\"Data_Entry_Field_Login\" type=\"password\" name=\"inUserPass\" id=\"inUserPass\"></td> So I think I'm doing it right, but the output is still \"Locationary.com\" I want to be able to stay logged in for a long time and whenever I request a page under that domain, I want the content to show up as if I were logged in.", "output": "If the information you want is on the page you are directed to immediately after login... 
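For completeness, here is a minimal sketch of the asyncio.Semaphore approach that the answer above names as the idiomatic way to cap concurrency; the download coroutine mirrors the toy version in the question, and the wrapper name is an assumption:

    import asyncio
    import random

    async def download(code):
        wait_time = random.randint(1, 3)
        print(f'downloading {code} will take {wait_time} second(s)')
        await asyncio.sleep(wait_time)
        print(f'downloaded {code}')

    async def bounded_download(semaphore, code):
        async with semaphore:          # at most 3 downloads hold the semaphore at once
            await download(code)

    async def main():
        semaphore = asyncio.Semaphore(3)
        await asyncio.gather(*(bounded_download(semaphore, i) for i in range(9)))

    asyncio.run(main())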
Let\u2019s call your ck variable payload instead, like in the python-requests documentation: payload = {'inUserName': 'USERNAME/EMAIL', 'inUserPass': 'PASSWORD'} url = 'http://www.locationary.com/home/index2.jsp' requests.post(url, data=payload) Otherwise... See tigerFinch's answer."} +{"question_id": 26980713, "score": 158, "creation_date": 1416253706, "tags": ["javascript", "python", "ajax", "flask", "cors"], "instruction": "Solve Cross Origin Resource Sharing with Flask\n\nFor the following ajax post request for Flask (how can I use data posted from ajax in flask?): $.ajax({ url: \"http://127.0.0.1:5000/foo\", type: \"POST\", contentType: \"application/json\", data: JSON.stringify({'inputVar': 1}), success: function( data ) { alert( \"success\" + data ); } }); I get a Cross Origin Resource Sharing (CORS) error: No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'null' is therefore not allowed access. The response had HTTP status code 500. I tried solving it in the two following ways, but none seems to work. Using Flask-CORS This is a Flask extension for handling CORS that should make cross-origin AJAX possible. http://flask-cors.readthedocs.org/en/latest/ How to enable CORS in flask and heroku Flask-cors wrapper not working when jwt auth wrapper is applied. Javascript - No 'Access-Control-Allow-Origin' header is present on the requested resource My pythonServer.py using this solution: from flask import Flask from flask.ext.cors import CORS, cross_origin app = Flask(__name__) cors = CORS(app, resources={r\"/foo\": {\"origins\": \"*\"}}) app.config['CORS_HEADERS'] = 'Content-Type' @app.route('/foo', methods=['POST','OPTIONS']) @cross_origin(origin='*',headers=['Content-Type','Authorization']) def foo(): return request.json['inputVar'] if __name__ == '__main__': app.run() Using specific Flask Decorator This is an official Flask code snippet defining a decorator that should allow CORS on the functions it decorates. 
http://flask.pocoo.org/snippets/56/ Python Flask cross site HTTP POST - doesn't work for specific allowed origins http://chopapp.com/#351l7gc3 My pythonServer.py using this solution: from flask import Flask, make_response, request, current_app from datetime import timedelta from functools import update_wrapper app = Flask(__name__) def crossdomain(origin=None, methods=None, headers=None, max_age=21600, attach_to_all=True, automatic_options=True): if methods is not None: methods = ', '.join(sorted(x.upper() for x in methods)) if headers is not None and not isinstance(headers, basestring): headers = ', '.join(x.upper() for x in headers) if not isinstance(origin, basestring): origin = ', '.join(origin) if isinstance(max_age, timedelta): max_age = max_age.total_seconds() def get_methods(): if methods is not None: return methods options_resp = current_app.make_default_options_response() return options_resp.headers['allow'] def decorator(f): def wrapped_function(*args, **kwargs): if automatic_options and request.method == 'OPTIONS': resp = current_app.make_default_options_response() else: resp = make_response(f(*args, **kwargs)) if not attach_to_all and request.method != 'OPTIONS': return resp h = resp.headers h['Access-Control-Allow-Origin'] = origin h['Access-Control-Allow-Methods'] = get_methods() h['Access-Control-Max-Age'] = str(max_age) if headers is not None: h['Access-Control-Allow-Headers'] = headers return resp f.provide_automatic_options = False return update_wrapper(wrapped_function, f) return decorator @app.route('/foo', methods=['GET','POST','OPTIONS']) @crossdomain(origin=\"*\") def foo(): return request.json['inputVar'] if __name__ == '__main__': app.run() Can you please give some some indication of why that is?", "output": "It worked like a champ, after bit modification to your code # initialization app = Flask(__name__) app.config['SECRET_KEY'] = 'the quick brown fox jumps over the lazy dog' app.config['CORS_HEADERS'] = 'Content-Type' cors = CORS(app, resources={r\"/foo\": {\"origins\": \"http://localhost:port\"}}) @app.route('/foo', methods=['POST']) @cross_origin(origin='localhost',headers=['Content- Type','Authorization']) def foo(): return request.json['inputVar'] if __name__ == '__main__': app.run() I replaced * by localhost. Since as I read in many blogs and posts, you should allow access for specific domain"} +{"question_id": 36965507, "score": 158, "creation_date": 1462097964, "tags": ["python", "python-3.x", "dictionary", "file", "file-writing"], "instruction": "Writing a dictionary to a text file?\n\nI have a dictionary and am trying to write it to a file. exDict = {1:1, 2:2, 3:3} with open('file.txt', 'r') as file: file.write(exDict) I then have the error file.write(exDict) TypeError: must be str, not dict So I fixed that error but another error came exDict = {111:111, 222:222} with open('file.txt', 'r') as file: file.write(str(exDict)) The error: file.write(str(exDict)) io.UnsupportedOperation: not writable How do I resolve this issue?", "output": "First of all you are opening file in read mode and trying to write into it. Consult - IO modes python Secondly, you can only write a string or bytes to a file. If you want to write a dictionary object, you either need to convert it into string or serialize it. 
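As a minimal illustration of both points (fixing the file mode and converting the dict with str()), before the json and pickle versions shown next, which are usually the better choice:

exDict = {1: 1, 2: 2, 3: 3}

# open in write mode ('w'), not read mode ('r'), and turn the dict into a string
with open('file.txt', 'w') as f:
    f.write(str(exDict))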
import json # as requested in comment exDict = {'exDict': exDict} with open('file.txt', 'w') as file: file.write(json.dumps(exDict)) # use `json.loads` to do the reverse In case of serialization import cPickle as pickle with open('file.txt', 'w') as file: file.write(pickle.dumps(exDict)) # use `pickle.loads` to do the reverse For python 3.x pickle package import would be different import _pickle as pickle"} +{"question_id": 14270391, "score": 158, "creation_date": 1357868920, "tags": ["python", "pandas", "matplotlib", "seaborn", "grouped-bar-chart"], "instruction": "How to plot multiple bars grouped\n\nHow to plot multiple bars in matplotlib, when I tried to call the bar function multiple times, they overlap and as seen the below figure the highest value red can be seen only. How can I plot the multiple bars with dates on the x-axes? So far, I tried this: import matplotlib.pyplot as plt import datetime x = [ datetime.datetime(2011, 1, 4, 0, 0), datetime.datetime(2011, 1, 5, 0, 0), datetime.datetime(2011, 1, 6, 0, 0) ] y = [4, 9, 2] z = [1, 2, 3] k = [11, 12, 13] ax = plt.subplot(111) ax.bar(x, y, width=0.5, color='b', align='center') ax.bar(x, z, width=0.5, color='g', align='center') ax.bar(x, k, width=0.5, color='r', align='center') ax.xaxis_date() plt.show() I got this: The results should be something like, but with the dates are on the x-axes and bars are next to each other:", "output": "import matplotlib.pyplot as plt from matplotlib.dates import date2num import datetime x = [ datetime.datetime(2011, 1, 4, 0, 0), datetime.datetime(2011, 1, 5, 0, 0), datetime.datetime(2011, 1, 6, 0, 0) ] x = date2num(x) y = [4, 9, 2] z = [1, 2, 3] k = [11, 12, 13] ax = plt.subplot(111) ax.bar(x-0.2, y, width=0.2, color='b', align='center') ax.bar(x, z, width=0.2, color='g', align='center') ax.bar(x+0.2, k, width=0.2, color='r', align='center') ax.xaxis_date() plt.show() I don't know what's the \"y values are also overlapping\" means, does the following code solve your problem? ax = plt.subplot(111) w = 0.3 ax.bar(x-w, y, width=w, color='b', align='center') ax.bar(x, z, width=w, color='g', align='center') ax.bar(x+w, k, width=w, color='r', align='center') ax.xaxis_date() ax.autoscale(tight=True) plt.show()"} +{"question_id": 12078667, "score": 158, "creation_date": 1345657672, "tags": ["python", "unit-testing", "celery"], "instruction": "How do you unit test a Celery task?\n\nThe Celery documentation mentions testing Celery within Django but doesn't explain how to test a Celery task if you are not using Django. How do you do this?", "output": "It is possible to test tasks synchronously using any unittest lib out there. I normally do 2 different test sessions when working with celery tasks. The first one (as I'm suggesting below) is completely synchronous and should be the one that makes sure the algorithm does what it should do. The second session uses the whole system (including the broker) and makes sure I'm not having serialization issues or any other distribution, communication problem. So: from celery import Celery celery = Celery() @celery.task def add(x, y): return x + y And your test: from nose.tools import eq_ def test_add_task(): rst = add.apply(args=(4, 4)).get() eq_(rst, 8)"} +{"question_id": 4856583, "score": 157, "creation_date": 1296510865, "tags": ["python", "shell", "subprocess"], "instruction": "How do I pipe a subprocess call to a text file?\n\nsubprocess.call([\"/home/myuser/run.sh\", \"/tmp/ad_xml\", \"/tmp/video_xml\"]) RIght now I have a script that I run. 
When I run it and it hits this line, it starts printing stuff because run.sh has prints in it. How do I pipe this to a text file also? (And also print, if possible)", "output": "If you want to write the output to a file you can use the stdout-argument of subprocess.call. It takes either None (the default, stdout is inherited from the parent (your script)) subprocess.PIPE (allows you to pipe from one command/process to another) a file object or a file descriptor (what you want, to have the output written to a file) You need to open a file with something like open and pass the object or file descriptor integer to call: f = open(\"blah.txt\", \"w\") subprocess.call([\"/home/myuser/run.sh\", \"/tmp/ad_xml\", \"/tmp/video_xml\"], stdout=f) I'm guessing any valid file-like object would work, like a socket (gasp :)), but I've never tried. As marcog mentions in the comments you might want to redirect stderr as well, you can redirect this to the same location as stdout with stderr=subprocess.STDOUT. Any of the above mentioned values works as well, you can redirect to different places."} +{"question_id": 19371860, "score": 157, "creation_date": 1381800197, "tags": ["python", "zip"], "instruction": "Python: Open file in zip without temporarily extracting it\n\nHow can I open files in a zip archive without extracting them first? I'm using pygame. To save disk space, I have all the images zipped up. Is it possible to load a given image directly from the zip file? For example: pygame.image.load('zipFile/img_01')", "output": "Vincent Povirk's answer won't work completely; import zipfile archive = zipfile.ZipFile('images.zip', 'r') imgfile = archive.open('img_01.png') ... You have to change it in: import zipfile archive = zipfile.ZipFile('images.zip', 'r') imgdata = archive.read('img_01.png') ... For details read the ZipFile docs here."} +{"question_id": 33952142, "score": 157, "creation_date": 1448608804, "tags": ["python", "pandas"], "instruction": "Prevent pandas from interpreting 'NA' as NaN in a string\n\nThe pandas read_csv() method interprets 'NA' as nan (not a number) instead of a valid string. In the simple case below note that the output in row 1, column 2 (zero based count) is 'nan' instead of 'NA'. sample.tsv (tab delimited) PDB CHAIN SP_PRIMARY RES_BEG RES_END PDB_BEG PDB_END SP_BEG SP_END 5d8b N P60490 1 146 1 146 1 146 5d8b NA P80377 1 126 1 126 1 126 5d8b O P60491 1 118 1 118 1 118 read_sample.py import pandas as pd df = pd.read_csv( 'sample.tsv', sep='\\t', encoding='utf-8', ) for df_tuples in df.itertuples(index=True): print(df_tuples) output (0, u'5d8b', u'N', u'P60490', 1, 146, 1, 146, 1, 146) (1, u'5d8b', nan, u'P80377', 1, 126, 1, 126, 1, 126) (2, u'5d8b', u'O', u'P60491', 1, 118, 1, 118, 1, 118) Additional Information Re-writing the file with quotes for data in the 'CHAIN' column and then using the quotechar parameter quotechar='\\'' has the same result. And passing a dictionary of types via the dtype parameter dtype=dict(valid_cols) does not change the result. An old answer to Prevent pandas from automatically inferring type in read_csv suggests first using a numpy record array to parse the file, but given the ability to now specify column dtypes, this shouldn't be necessary. 
Note that itertuples() is used to preserve dtypes as described in the iterrows documentation: \"To preserve dtypes while iterating over the rows, it is better to use itertuples() which returns tuples of the values and which is generally faster as iterrows.\" Example was tested on Python 2 and 3 with pandas version 0.16.2, 0.17.0, and 0.17.1. Is there a way to capture a valid string 'NA' instead of it being converted to nan?", "output": "You could use parameters keep_default_na and na_values to set all NA values by hand docs: import pandas as pd from io import StringIO data = \"\"\" PDB CHAIN SP_PRIMARY RES_BEG RES_END PDB_BEG PDB_END SP_BEG SP_END 5d8b N P60490 1 146 1 146 1 146 5d8b NA P80377 _ 126 1 126 1 126 5d8b O P60491 1 118 1 118 1 118 \"\"\" df = pd.read_csv(StringIO(data), sep=' ', keep_default_na=False, na_values=['_']) In [130]: df Out[130]: PDB CHAIN SP_PRIMARY RES_BEG RES_END PDB_BEG PDB_END SP_BEG SP_END 0 5d8b N P60490 1 146 1 146 1 146 1 5d8b NA P80377 NaN 126 1 126 1 126 2 5d8b O P60491 1 118 1 118 1 118 In [144]: df.CHAIN.apply(type) Out[144]: 0 <class 'str'> 1 <class 'str'> 2 <class 'str'> Name: CHAIN, dtype: object EDIT All default NA values from na-values (as of pandas 1.0.0): The default NaN recognized values are ['-1.#IND', '1.#QNAN', '1.#IND', '-1.#QNAN', '#N/A N/A', '#N/A', 'N/A', 'n/a', 'NA', '', '#NA', 'NULL', 'null', 'NaN', '-NaN', 'nan', '-nan', '']."} +{"question_id": 9158294, "score": 157, "creation_date": 1328521800, "tags": ["python", "default-arguments"], "instruction": "Good uses for mutable function argument default values?\n\nIt is a common mistake in Python to set a mutable object as the default value of an argument in a function. Here's an example taken from this excellent write-up by David Goodger: >>> def bad_append(new_item, a_list=[]): a_list.append(new_item) return a_list >>> print bad_append('one') ['one'] >>> print bad_append('two') ['one', 'two'] The explanation why this happens is here. And now for my question: Is there a good use-case for this syntax? I mean, if everybody who encounters it makes the same mistake, debugs it, understands the issue and from thereon tries to avoid it, what use is there for such syntax?", "output": "Canonical answer is this page: http://effbot.org/zone/default-values.htm It also mentions 3 \"good\" use cases for mutable default argument: binding local variable to current value of outer variable in a callback cache/memoization local rebinding of global names (for highly optimized code)"} +{"question_id": 34342155, "score": 157, "creation_date": 1450378404, "tags": ["python", "ipython", "jupyter-notebook", "jupyter"], "instruction": "How to pickle or store Jupyter (IPython) notebook session for later\n\nLet's say I am doing a larger data analysis in Jupyter/Ipython notebook with lots of time consuming computations done. Then, for some reason, I have to shut down the jupyter local server I, but I would like to return to doing the analysis later, without having to go through all the time-consuming computations again. What I would like love to do is pickle or store the whole Jupyter session (all pandas dataframes, np.arrays, variables, ...) so I can safely shut down the server knowing I can return to my session in exactly the same state as before. Is it even technically possible? Is there a built-in functionality I overlooked? EDIT: based on this answer there is a %store magic which should be \"lightweight pickle\". 
However you have to store the variables manually like so: #inside a ipython/nb session foo = \"A dummy string\" %store foo closing seesion, restarting kernel %store -r foo # r for refresh print(foo) # \"A dummy string\" which is fairly close to what I would want, but having to do it manually and being unable to distinguish between different sessions makes it less useful.", "output": "I think Dill (pip install dill) answers your question well. Use dill.dump_session to save a Notebook session: import dill dill.dump_session('notebook_env.db') Use dill.load_session to restore a Notebook session: import dill dill.load_session('notebook_env.db') (source)"} +{"question_id": 8087184, "score": 156, "creation_date": 1320963704, "tags": ["python", "python-3.x", "rhel"], "instruction": "Installing Python 3 on RHEL\n\nI'm trying to install python3 on RHEL using the following steps: yum search python3 Which returned No matches found for: python3 Followed by: yum search python None of the search results contained python3. What should I try next?", "output": "It is easy to install python manually (i.e. build from source): Download (there may be newer releases on Python.org): $ wget https://www.python.org/ftp/python/3.4.3/Python-3.4.3.tar.xz Unzip $ tar xf Python-3.* $ cd Python-3.* Prepare compilation $ ./configure Build $ make Install $ make install OR if you don't want to overwrite the python executable (safer, at least on some distros yum needs python to be 2.x, such as for RHEL6) - you can install python3.* as a concurrent instance to the system default with an altinstall: $ make altinstall Now if you want an alternative installation directory, you can pass --prefix to the configurecommand. Example: for 'installing' Python in /opt/local, just add --prefix=/opt/local. After the make install step: In order to use your new Python installation, it could be, that you still have to add the [prefix]/bin to the $PATH and [prefix]/lib to the $LD_LIBRARY_PATH (depending of the --prefix you passed)"} +{"question_id": 109087, "score": 156, "creation_date": 1221939024, "tags": ["python", "methods", "instance-variables"], "instruction": "How to get instance variables in Python?\n\nIs there a built-in method in Python to get an array of all a class' instance variables? For example, if I have this code: class hi: def __init__(self): self.ii = \"foo\" self.kk = \"bar\" Is there a way for me to do this: >>> mystery_method(hi) [\"ii\", \"kk\"] Edit: I originally had asked for class variables erroneously.", "output": "Every object has a __dict__ variable containing all the variables and its values in it. 
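As a side note (not part of the original answer), the built-in vars() returns that same dictionary, so an equivalent sketch using the class from the question would be:

class hi:
    def __init__(self):
        self.ii = "foo"
        self.kk = "bar"

print(list(vars(hi()).keys()))  # ['ii', 'kk'] -- vars(obj) is just obj.__dict__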
Try this >>> hi_obj = hi() >>> hi_obj.__dict__.keys() Output dict_keys(['ii', 'kk'])"} +{"question_id": 5164642, "score": 156, "creation_date": 1299051257, "tags": ["python", "iterator", "generator"], "instruction": "How to print a generator expression?\n\nIn the Python shell, if I enter a list comprehension such as: >>> [x for x in string.letters if x in [y for y in \"BigMan on campus\"]] I get a nicely printed result: ['a', 'c', 'g', 'i', 'm', 'n', 'o', 'p', 's', 'u', 'B', 'M'] Same for a dictionary comprehension: >>> {x:x*2 for x in range(1,10)} {1: 2, 2: 4, 3: 6, 4: 8, 5: 10, 6: 12, 7: 14, 8: 16, 9: 18} If I enter a generator expression, I get not such a friendly response: >>> (x for x in string.letters if x in (y for y in \"BigMan on campus\")) <generator object <genexpr> at 0x1004a0be0> I know I can do this: >>> for i in _: print i, a c g i m n o p s u B M Other than that (or writing a helper function) can I easily evaluate and print that generator object in the interactive shell?", "output": "Quick answer: Doing list() around a generator expression is (almost) exactly equivalent to having [] brackets around it. So yeah, you can do >>> list((x for x in string.letters if x in (y for y in \"BigMan on campus\"))) But you can just as well do >>> [x for x in string.letters if x in (y for y in \"BigMan on campus\")] Yes, that will turn the generator expression into a list comprehension. It's the same thing and calling list() on it. So the way to make a generator expression into a list is to put brackets around it. In Python 3, you could unpack the generator expression into a print statement: >>> print(*(x for x in string.ascii_letters if x in (y for y in \"BigMan on campus\"))) a c g i m n o p s u B M Detailed explanation: A generator expression is a \"naked\" for expression. Like so: x*x for x in range(10) Now, you can't stick that on a line by itself, you'll get a syntax error. But you can put parenthesis around it. >>> (x*x for x in range(10)) <generator object <genexpr> at 0xb7485464> This is sometimes called a generator comprehension, although I think the official name still is generator expression, there isn't really any difference, the parenthesis are only there to make the syntax valid. You do not need them if you are passing it in as the only parameter to a function for example: >>> sorted(x*x for x in range(10)) [0, 1, 4, 9, 16, 25, 36, 49, 64, 81] Basically all the other comprehensions available in Python 3 and Python 2.7 is just syntactic sugar around a generator expression. Set comprehensions: >>> {x*x for x in range(10)} {0, 1, 4, 81, 64, 9, 16, 49, 25, 36} >>> set(x*x for x in range(10)) {0, 1, 4, 81, 64, 9, 16, 49, 25, 36} Dict comprehensions: >>> dict((x, x*x) for x in range(10)) {0: 0, 1: 1, 2: 4, 3: 9, 4: 16, 5: 25, 6: 36, 7: 49, 8: 64, 9: 81} >>> {x: x*x for x in range(10)} {0: 0, 1: 1, 2: 4, 3: 9, 4: 16, 5: 25, 6: 36, 7: 49, 8: 64, 9: 81} And list comprehensions under Python 3: >>> list(x*x for x in range(10)) [0, 1, 4, 9, 16, 25, 36, 49, 64, 81] >>> [x*x for x in range(10)] [0, 1, 4, 9, 16, 25, 36, 49, 64, 81] Under Python 2, list comprehensions is not just syntactic sugar. But the only difference is that x will under Python 2 leak into the namespace. >>> x 9 While under Python 3 you'll get >>> x Traceback (most recent call last): File \"<stdin>\", line 1, in <module> NameError: name 'x' is not defined This means that the best way to get a nice printout of the content of your generator expression in Python is to make a list comprehension out of it! 
However, this will obviously not work if you already have a generator object. Doing that will just make a list of one generator: >>> foo = (x*x for x in range(10)) >>> [foo] [<generator object <genexpr> at 0xb7559504>] In that case you will need to call list(): >>> list(foo) [0, 1, 4, 9, 16, 25, 36, 49, 64, 81] Although this works, but is kinda stupid: >>> [x for x in foo] [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]"} +{"question_id": 2424000, "score": 156, "creation_date": 1268301703, "tags": ["python", "file", "overwrite"], "instruction": "Read and overwrite a file in Python\n\nCurrently I'm using this: f = open(filename, 'r+') text = f.read() text = re.sub('foobar', 'bar', text) f.seek(0) f.write(text) f.close() But the problem is that the old file is larger than the new file. So I end up with a new file that has a part of the old file on the end of it.", "output": "If you don't want to close and reopen the file, to avoid race conditions, you could truncate it: f = open(filename, 'r+') text = f.read() text = re.sub('foobar', 'bar', text) f.seek(0) f.write(text) f.truncate() f.close() The functionality will likely also be cleaner and safer using open as a context manager, which will close the file handler, even if an error occurs! with open(filename, 'r+') as f: text = f.read() text = re.sub('foobar', 'bar', text) f.seek(0) f.write(text) f.truncate()"} +{"question_id": 16768302, "score": 156, "creation_date": 1369639659, "tags": ["python", "django-templates"], "instruction": "Does Python have a toString() equivalent, and can I convert a class to String?\n\nI'm writing a ToDo list app to help myself get started with Python. The app is running on GAE and I'm storing todo items in the Data Store. I want to display everyone's items to them, and them alone. The problem is that the app currently displays all items to all users, so I can see what you write, and you see what I write. I thought casting my todo.author object to a string and seeing if it matches the user's name would be a good start, but I can't figure out how to do that. This is what I have in my main.py ... user = users.get_current_user() if user: nickname = user.nickname() todos = Todo.all() template_values = {'nickname':nickname, 'todos':todos} ... def post(self): todo = Todo() todo.author = users.get_current_user() todo.item = self.request.get(\"item\") todo.completed = False todo.put() self.redirect('/') In my index.html I had this originally: <input type=\"text\" name=\"item\" class=\"form-prop\" placeholder=\"What needs to be done?\" required/> ... <ul> {% for todo in todos %} <input type=\"checkbox\"> {{todo.item}} <hr /> {% endfor %} </ul> but I'd like to display items only to the user who created them. I thought of trying {% for todo in todos %} {% ifequal todo.author nickname %} <input type=\"checkbox\"> {{todo.item}} <hr /> {% endifequal %} {% endfor %} to no avail. The list turns up blank. I assumed it is because todo.author is not a string. Can I read the value out as a string, or can I cast the object to String? Thanks! Edit: Here is my Todo class class Todo(db.Model): author = db.UserProperty() item = db.StringProperty() completed = db.BooleanProperty() date = db.DateTimeProperty(auto_now_add=True) Will changing my author to a StringProperty effect anything negatively? Maybe I can forgo casting altogether.", "output": "In python, the str() method is similar to the toString() method in other languages. It is called passing the object to convert to a string as a parameter. 
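For example, with a small hypothetical class (not the datastore model from the question):

class Author:
    def __init__(self, nickname):
        self.nickname = nickname

    def __str__(self):
        # str(author) returns whatever this method returns
        return self.nickname

print(str(Author("bob")))  # prints: bob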
Internally it calls the __str__() method of the parameter object to get its string representation. In this case, however, you are comparing a UserProperty author from the database, which is of type users.User with the nickname string. You will want to compare the nickname property of the author instead with todo.author.nickname in your template."} +{"question_id": 47007680, "score": 156, "creation_date": 1509333405, "tags": ["python", "operators", "logical-operators"], "instruction": "How do \"and\" and \"or\" act with non-boolean values?\n\nI'm trying to learn python and came across some code that is nice and short but doesn't totally make sense the context was: def fn(*args): return len(args) and max(args)-min(args) I get what it's doing, but why does python do this - ie return the value rather than True/False? 10 and 7-2 returns 5. Similarly, changing the and to or will result in a change in functionality. So 10 or 7 - 2 Would return 10. Is this legit/reliable style, or are there any gotchas on this?", "output": "TL;DR We start by summarising the two behaviour of the two logical operators and and or. These idioms will form the basis of our discussion below. and Return the first Falsy value if there are any, else return the last value in the expression. or Return the first Truthy value if there are any, else return the last value in the expression. The behaviour is also summarised in the docs, especially in this table: Operation Result x or y if x is false, then y, else x x and y if x is false, then x, else y not x if x is false, then True, else False The only operator returning a boolean value regardless of its operands is the not operator. \"Truthiness\", and \"Truthy\" Evaluations The statement len(args) and max(args) - min(args) Is a very pythonic concise (and arguably less readable) way of saying \"if args is not empty, return the result of max(args) - min(args)\", otherwise return 0. In general, it is a more concise representation of an if-else expression. For example, exp1 and exp2 Should (roughly) translate to: r1 = exp1 if r1: r1 = exp2 Or, equivalently, r1 = exp2 if exp1 else exp1 Similarly, exp1 or exp2 Should (roughly) translate to: r1 = exp1 if not r1: r1 = exp2 Or, equivalently, r1 = exp1 if exp1 else exp2 Where exp1 and exp2 are arbitrary python objects, or expressions that return some object. The key to understanding the uses of the logical and and or operators here is understanding that they are not restricted to operating on, or returning boolean values. Any object with a truthiness value can be tested here. This includes int, str, list, dict, tuple, set, NoneType, and user defined objects. Short circuiting rules still apply as well. But what is truthiness? It refers to how objects are evaluated when used in conditional expressions. @Patrick Haugh summarises truthiness nicely in this post. All values are considered \"truthy\" except for the following, which are \"falsy\": None False 0 0.0 0j Decimal(0) Fraction(0, 1) [] - an empty list {} - an empty dict () - an empty tuple '' - an empty str b'' - an empty bytes set() - an empty set an empty range, like range(0) objects for which obj.__bool__() returns False obj.__len__() returns 0 A \"truthy\" value will satisfy the check performed by if or while statements. We use \"truthy\" and \"falsy\" to differentiate from the bool values True and False. How and Works We build on OP's question as a segue into a discussion on how these operators in these instances. Given a function with the definition def foo(*args): ... 
How do I return the difference between the minimum and maximum value in a list of zero or more arguments? Finding the minimum and maximum is easy (use the inbuilt functions!). The only snag here is appropriately handling the corner case where the argument list could be empty (for example, calling foo()). We can do both in a single line thanks to the and operator: def foo(*args): return len(args) and max(args) - min(args) foo(1, 2, 3, 4, 5) # 4 foo() # 0 Since and is used, the second expression must also be evaluated if the first is True. Note that, if the first expression is evaluated to be truthy, the return value is always the result of the second expression. If the first expression is evaluated to be Falsy, then the result returned is the result of the first expression. In the function above, If foo receives one or more arguments, len(args) is greater than 0 (a positive number), so the result returned is max(args) - min(args). OTOH, if no arguments are passed, len(args) is 0 which is Falsy, and 0 is returned. Note that an alternative way to write this function would be: def foo(*args): if not len(args): return 0 return max(args) - min(args) Or, more concisely, def foo(*args): return 0 if not args else max(args) - min(args) If course, none of these functions perform any type checking, so unless you completely trust the input provided, do not rely on the simplicity of these constructs. How or Works I explain the working of or in a similar fashion with a contrived example. Given a function with the definition def foo(*args): ... How would you complete foo to return all numbers over 9000? We use or to handle the corner case here. We define foo as: def foo(*args): return [x for x in args if x > 9000] or 'No number over 9000!' foo(9004, 1, 2, 500) # [9004] foo(1, 2, 3, 4) # 'No number over 9000!' foo performs a filtration on the list to retain all numbers over 9000. If there exist any such numbers, the result of the list comprehension is a non-empty list which is Truthy, so it is returned (short circuiting in action here). If there exist no such numbers, then the result of the list comp is [] which is Falsy. So the second expression is now evaluated (a non-empty string) and is returned. Using conditionals, we could re-write this function as, def foo(*args): r = [x for x in args if x > 9000] if not r: return 'No number over 9000!' return r As before, this structure is more flexible in terms of error handling."} +{"question_id": 3441358, "score": 156, "creation_date": 1281366266, "tags": ["python", "types", "numbers"], "instruction": "What is the most pythonic way to check if an object is a number?\n\nGiven an arbitrary python object, what's the best way to determine whether it is a number? Here is is defined as acts like a number in certain circumstances. For example, say you are writing a vector class. If given another vector, you want to find the dot product. If given a scalar, you want to scale the whole vector. Checking if something is int, float, long, bool is annoying and doesn't cover user-defined objects that might act like numbers. But, checking for __mul__, for example, isn't good enough because the vector class I just described would define __mul__, but it wouldn't be the kind of number I want.", "output": "Use Number from the numbers module to test isinstance(n, Number) (available since 2.6). >>> from numbers import Number ... from decimal import Decimal ... from fractions import Fraction ... for n in [2, 2.0, Decimal('2.0'), complex(2, 0), Fraction(2, 1), '2']: ... 
print(f'{n!r:>14} {isinstance(n, Number)}') 2 True 2.0 True Decimal('2.0') True (2+0j) True Fraction(2, 1) True '2' False This is, of course, contrary to duck typing. If you are more concerned about how an object acts rather than what it is, perform your operations as if you have a number and use exceptions to tell you otherwise."} +{"question_id": 1024049, "score": 156, "creation_date": 1245595260, "tags": ["python", "conventions"], "instruction": "Is it pythonic to import inside functions?\n\nPEP 8 says: Imports are always put at the top of the file, just after any module comments and docstrings, and before module globals and constants. On occation, I violate PEP 8. Some times I import stuff inside functions. As a general rule, I do this if there is an import that is only used within a single function. Any opinions? EDIT (the reason I feel importing in functions can be a good idea): Main reason: It can make the code clearer. When looking at the code of a function I might ask myself: \"What is function/class xxx?\" (xxx being used inside the function). If I have all my imports at the top of the module, I have to go look there to determine what xxx is. This is more of an issue when using from m import xxx. Seeing m.xxx in the function probably tells me more. Depending on what m is: Is it a well-known top-level module/package (import m)? Or is it a sub-module/package (from a.b.c import m)? In some cases having that extra information (\"What is xxx?\") close to where xxx is used can make the function easier to understand.", "output": "In the long run I think you'll appreciate having most of your imports at the top of the file, that way you can tell at a glance how complicated your module is by what it needs to import. If I'm adding new code to an existing file I'll usually do the import where it's needed and then if the code stays I'll make things more permanent by moving the import line to the top of the file. One other point, I prefer to get an ImportError exception before any code is run \u2014 as a sanity check, so that's another reason to import at the top. You can use a linter to check for unused modules."} +{"question_id": 11426560, "score": 155, "creation_date": 1341987053, "tags": ["python", "amazon-web-services", "amazon-s3", "boto"], "instruction": "Amazon S3 boto - how to delete folder?\n\nI created a folder in s3 named \"test\" and I pushed \"test_1.jpg\", \"test_2.jpg\" into \"test\". How can I use boto to delete folder \"test\"?", "output": "There are no folders in S3. Instead, the keys form a flat namespace. However a key with slashes in its name shows specially in some programs, including the AWS console (see for example Amazon S3 boto - how to create a folder?). Instead of deleting \"a directory\", you can (and have to) list files by prefix and delete. In essence: for key in bucket.list(prefix='your/directory/'): key.delete() However the other accomplished answers on this page feature more efficient approaches. Notice that the prefix is just searched using dummy string search. If the prefix were your/directory, that is, without the trailing slash appended, the program would also happily delete your/directory-that-you-wanted-to-remove-is-definitely-not-t\u200c\u200bhis-one. For more information, see S3 boto list keys sometimes returns directory key."} +{"question_id": 3719631, "score": 155, "creation_date": 1284567331, "tags": ["python", "logarithm"], "instruction": "Log to the base 2 in python\n\nHow should I compute log to the base two in python. Eg. 
I have this equation where I am using log base 2 import math e = -(t/T)* math.log((t/T)[, 2])", "output": "It's good to know that but also know that math.log takes an optional second argument which allows you to specify the base: In [22]: import math In [23]: math.log? Type: builtin_function_or_method Base Class: <type 'builtin_function_or_method'> String Form: <built-in function log> Namespace: Interactive Docstring: log(x[, base]) -> the logarithm of x to the given base. If the base not specified, returns the natural logarithm (base e) of x. In [25]: math.log(8,2) Out[25]: 3.0"} +{"question_id": 43069780, "score": 155, "creation_date": 1490704619, "tags": ["python", "python-3.x", "virtualenv"], "instruction": "How can I create a virtual environment with Python 3?\n\nI am using Python 2.7 + virtualenv version 1.10.1 for running myproject project. Due to some other projects requirement, I have to work with another version of Python (3.5). For this, I have installed Python in my user directory. Also, I have downloaded and installed virtualenv (version 15.1.0) into my user directory. But whenever I am trying to create a virtual env, I am getting the below error python virtualenv/virtualenv.py myproject Output: Using base prefix '/home/myuser/python3' New python executable in /home/mount/myuser/project_python3/myproject/bin/python ERROR: The executable /home/mount/myuser/project_python3/myproject/bin/python is not functioning ERROR: It thinks sys.prefix is '/home/myuser/python3' (should be '/home/mount/myuser/project_python3/myproject') ERROR: virtualenv is not compatible with this system or executable What am I doing wrong here, and how do I solve it?", "output": "Use: python3 -m venv ./path-to-new-venv This is the recommended way to create virtual environments. Historically, a wrapper command pyvenv was provided for this. However, the wrapper was deprecated in Python 3.6, and removed in 3.8."} +{"question_id": 6930144, "score": 155, "creation_date": 1312390334, "tags": ["python"], "instruction": "Underscore vs Double underscore with variables and methods\n\nSomebody was nice enough to explain to me that __method() mangles but instead of bothering him further since there are a lot of other people who need help I was wondering if somebody could elaborate the differences further. For example I don't need mangling but does _ stay private so somebody couldn't do instance._method()? Or does it just keep it from overwriting another variable by making it unique? I don't need my internal methods \"hidden\" but since they are specific to use I don't want them being used outside of the class.", "output": "From PEP 8: _single_leading_underscore: weak \"internal use\" indicator. E.g. from M import * does not import objects whose name starts with an underscore. single_trailing_underscore_: used by convention to avoid conflicts with Python keyword, e.g. Tkinter.Toplevel(master, class_='ClassName') __double_leading_underscore: when naming a class attribute, invokes name mangling (inside class FooBar, __boo becomes _FooBar__boo; see below). __double_leading_and_trailing_underscore__: \"magic\" objects or attributes that live in user-controlled namespaces. E.g. __init__, __import__ or __file__. Never invent such names; only use them as documented. Also, from David Goodger's Code Like a Pythonista: Attributes: interface, _internal, __private But try to avoid the __private form. I never use it. Trust me. If you use it, you WILL regret it later. 
Explanation: People coming from a C++/Java background are especially prone to overusing/misusing this \"feature\". But __private names don't work the same way as in Java or C++. They just trigger a name mangling whose purpose is to prevent accidental namespace collisions in subclasses: MyClass.__private just becomes MyClass._MyClass__private. (Note that even this breaks down for subclasses with the same name as the superclass, e.g. subclasses in different modules.) It is possible to access __private names from outside their class, just inconvenient and fragile (it adds a dependency on the exact name of the superclass). The problem is that the author of a class may legitimately think \"this attribute/method name should be private, only accessible from within this class definition\" and use the __private convention. But later on, a user of that class may make a subclass that legitimately needs access to that name. So either the superclass has to be modified (which may be difficult or impossible), or the subclass code has to use manually mangled names (which is ugly and fragile at best). There's a concept in Python: \"we're all consenting adults here\". If you use the __private form, who are you protecting the attribute from? It's the responsibility of subclasses to use attributes from superclasses properly, and it's the responsibility of superclasses to document their attributes properly. It's better to use the single-leading-underscore convention, _internal. \"This isn't name mangled at all; it just indicates to others to \"be careful with this, it's an internal implementation detail; don't touch it if you don't fully understand it\". It's only a convention though."} +{"question_id": 3418050, "score": 155, "creation_date": 1281033890, "tags": ["python", "datetime", "calendar"], "instruction": "How to map month name to month number and vice versa?\n\nI am trying to create a function that can convert a month number to an abbreviated month name or an abbreviated month name to a month number. I thought this might be a common question but I could not find it online. I was thinking about the calendar module. I see that to convert from month number to abbreviated month name you can just do calendar.month_abbr[num]. I do not see a way to go in the other direction though. Would creating a dictionary for converting the other direction be the best way to handle this? Or is there a better way to go from month name to month number and vice versa?", "output": "Create a reverse dictionary using the calendar module (which, like any module, you will need to import): {month: index for index, month in enumerate(calendar.month_abbr) if month} In Python versions before 2.7, due to dict comprehension syntax not being supported in the language, you would have to do dict((month, index) for index, month in enumerate(calendar.month_abbr) if month)"} +{"question_id": 23416381, "score": 155, "creation_date": 1398977462, "tags": ["python", "list", "iterator"], "instruction": "Circular list iterator in Python\n\nI need to iterate over a circular list, possibly many times, each time starting with the last visited item. The use case is a connection pool. A client asks for connection, an iterator checks if pointed-to connection is available and returns it, otherwise loops until it finds one that is available. How can I do this neatly in Python? 
If you instead need an immediately created list of the results up to a certain length, rather than iterating on demand: see Repeat list to max number of elements for general techniques, and How to replicate array to specific length array for Numpy-specific techniques.", "output": "Use itertools.cycle, that's its exact purpose: from itertools import cycle lst = ['a', 'b', 'c'] pool = cycle(lst) for item in pool: print(item) Output: a b c a b c ... (Loops forever, obviously) In order to manually advance the iterator and pull values from it one by one, simply call next(pool): >>> next(pool) 'a' >>> next(pool) 'b'"} +{"question_id": 18967441, "score": 155, "creation_date": 1379965167, "tags": ["python", "routes", "flask"], "instruction": "Add a prefix to all Flask routes\n\nI have a prefix that I want to add to every route. Right now I add a constant to the route at every definition. Is there a way to do this automatically? PREFIX = \"/abc/123\" @app.route(PREFIX + \"/\") def index_page(): return \"This is a website about burritos\" @app.route(PREFIX + \"/about\") def about_page(): return \"This is a website about burritos\"", "output": "The answer depends on how you are serving this application. Sub-mounted inside of another WSGI container Assuming that you are going to run this application inside of a WSGI container (mod_wsgi, uwsgi, gunicorn, etc); you need to actually mount, at that prefix the application as a sub-part of that WSGI container (anything that speaks WSGI will do) and to set your APPLICATION_ROOT config value to your prefix: app.config[\"APPLICATION_ROOT\"] = \"/abc/123\" @app.route(\"/\") def index(): return \"The URL for this page is {}\".format(url_for(\"index\")) # Will return \"The URL for this page is /abc/123/\" Setting the APPLICATION_ROOT config value simply limit Flask's session cookie to that URL prefix. Everything else will be automatically handled for you by Flask and Werkzeug's excellent WSGI handling capabilities. An example of properly sub-mounting your app If you are not sure what the first paragraph means, take a look at this example application with Flask mounted inside of it: from flask import Flask, url_for from werkzeug.serving import run_simple from werkzeug.middleware.dispatcher import DispatcherMiddleware app = Flask(__name__) app.config['APPLICATION_ROOT'] = '/abc/123' @app.route('/') def index(): return 'The URL for this page is {}'.format(url_for('index')) def simple(env, resp): resp(b'200 OK', [(b'Content-Type', b'text/plain')]) return [b'Hello WSGI World'] app.wsgi_app = DispatcherMiddleware(simple, {'/abc/123': app.wsgi_app}) if __name__ == '__main__': app.run('localhost', 5000) Proxying requests to the app If, on the other hand, you will be running your Flask application at the root of its WSGI container and proxying requests to it (for example, if it's being FastCGI'd to, or if nginx is proxy_pass-ing requests for a sub-endpoint to your stand-alone uwsgi / gevent server then you can either: Use a Blueprint, as Miguel points out in his answer. or use the DispatcherMiddleware from werkzeug (or the PrefixMiddleware from su27's answer) to sub-mount your application in the stand-alone WSGI server you're using. (See An example of properly sub-mounting your app above for the code to use)."} +{"question_id": 2603956, "score": 155, "creation_date": 1270766749, "tags": ["python"], "instruction": "Can we have assignment in a condition?\n\nIs it possible to have assignment in a condition? For ex. if (a=some_func()): # Use a", "output": "Why not try it out? 
>>> def some_func(): ... return 2 ... >>> if (a = some_func()): File \"<stdin>\", line 1 if (a = some_func()): ^ SyntaxError: invalid syntax So, no. Update: This is possible (with different syntax) in Python 3.8 if a := some_func():"} +{"question_id": 28170623, "score": 155, "creation_date": 1422361977, "tags": ["python", "hdf5"], "instruction": "How to read HDF5 files in Python\n\nI am trying to read data from hdf5 file in Python. I can read the hdf5 file using h5py, but I cannot figure out how to access data within the file. My code import h5py import numpy as np f1 = h5py.File(file_name,'r+') This works and the file is read. But how can I access data inside the file object f1?", "output": "Read HDF5 import h5py filename = \"file.hdf5\" with h5py.File(filename, \"r\") as f: # Print all root level object names (aka keys) # these can be group or dataset names print(\"Keys: %s\" % f.keys()) # get first object name/key; may or may NOT be a group a_group_key = list(f.keys())[0] # get the object type for a_group_key: usually group or dataset print(type(f[a_group_key])) # If a_group_key is a group name, # this gets the object names in the group and returns as a list data = list(f[a_group_key]) # If a_group_key is a dataset name, # this gets the dataset values and returns as a list data = list(f[a_group_key]) # preferred methods to get dataset values: ds_obj = f[a_group_key] # returns as a h5py dataset object ds_arr = f[a_group_key][()] # returns as a numpy array Write HDF5 import h5py # Create random data import numpy as np data_matrix = np.random.uniform(-1, 1, size=(10, 3)) # Write data to HDF5 with h5py.File(\"file.hdf5\", \"w\") as data_file: data_file.create_dataset(\"dataset_name\", data=data_matrix) See h5py docs for more information. Alternatives JSON: Nice for writing human-readable data; VERY commonly used (read & write) CSV: Super simple format (read & write) pickle: A Python serialization format (read & write) MessagePack (Python package): More compact representation (read & write) HDF5 (Python package): Nice for matrices (read & write) XML: exists too *sigh* (read & write) For your application, the following might be important: Support by other programming languages Reading / writing performance Compactness (file size) See also: Comparison of data serialization formats In case you are rather looking for a way to make configuration files, you might want to read my short article Configuration files in Python"} +{"question_id": 2522005, "score": 155, "creation_date": 1269593540, "tags": ["python", "performance", "exception", "micro-optimization"], "instruction": "Cost of exception handlers in Python\n\nIn another question, the accepted answer suggested replacing a (very cheap) if statement in Python code with a try/except block to improve performance. Coding style issues aside, and assuming that the exception is never triggered, how much difference does it make (performance-wise) to have an exception handler, versus not having one, versus having a compare-to-zero if-statement?", "output": "Why don't you measure it using the timeit module? That way you can see whether it's relevant to your application. 
OK, so I've just tried the following (using Python 3.11.1 on Windows 11): import timeit statements=[\"\"\"\\ try: b = 10/a except ZeroDivisionError: pass\"\"\", \"\"\"\\ if a: b = 10/a\"\"\", \"b = 10/a\"] for a in (1,0): for s in statements: t = timeit.Timer(stmt=s, setup='a={}'.format(a)) print(\"a = {}\\n{}\".format(a,s)) print(\"%.2f usec/pass\\n\" % (1000000 * t.timeit(number=100000)/100000)) Result: a = 1 try: b = 10/a except ZeroDivisionError: pass 0.06 usec/pass a = 1 if a: b = 10/a 0.05 usec/pass a = 1 b = 10/a 0.03 usec/pass a = 0 try: b = 10/a except ZeroDivisionError: pass 0.27 usec/pass a = 0 if a: b = 10/a 0.02 usec/pass a = 0 b = 10/a Traceback (most recent call last): File \"<stdin>\", line 5, in <module> File \"C:\\Python311\\Lib\\timeit.py\", line 178, in timeit timing = self.inner(it, self.timer) ^^^^^^^^^^^^^^^^^^^^^^^^^^ File \"<timeit-src>\", line 6, in inner ZeroDivisionError: division by zero As you can see, there is not much of a difference between using a try/except clause vs. an explicit if statement, unless the exception gets triggered. (And of course, not having any control structure is fastest, though not by much, and it will crash the program if anything goes wrong). Compare this to the results obtained in 2010: a = 1 try: b = 10/a except ZeroDivisionError: pass 0.25 usec/pass a = 1 if a: b = 10/a 0.29 usec/pass a = 1 b = 10/a 0.22 usec/pass a = 0 try: b = 10/a except ZeroDivisionError: pass 0.57 usec/pass a = 0 if a: b = 10/a 0.04 usec/pass a = 0 b = 10/a ZeroDivisionError: int division or modulo by zero I appears that the PC I'm using now is about twice as fast as the one I had back then. The cost of handling an Exception appears identical, and the \"normal\" operations (arithmetic) have been improved even more than the handling of control structures, but the point from all those years ago still stands: It's all within the same order of magnitude and unlikely to matter either way. Only if the condition is actually met (often), then the if version is significantly faster."} +{"question_id": 2472552, "score": 154, "creation_date": 1268938514, "tags": ["python", "git", "module"], "instruction": "Python way to clone a git repository\n\nIs there a Python way without using a subprocess to clone a git repository? I'm up for using any sort of modules you recommend.", "output": "There is GitPython. Haven\u2019t heard of it before and internally, it relies on having the git executables somewhere; additionally, they might have plenty of bugs. But it could be worth a try. How to clone: import git # pip install gitpython git.Git(\"/your/directory/to/clone\").clone(\"git://gitorious.org/git-python/mainline.git\") (It\u2019s not nice and I don\u2019t know if it is the supported way to do it, but it worked.)"} +{"question_id": 8609153, "score": 154, "creation_date": 1324584399, "tags": ["python", "class"], "instruction": "Why do we use __init__ in Python classes?\n\nI am having trouble understanding the Initialization of classes. What's the point of them and how do we know what to include in them? Does writing in classes require a different type of thinking versus creating functions (I figured I could just create functions and then just wrap them in a class so I can re-use them. Will that work?) 
Here's an example: class crawler: # Initialize the crawler with the name of database def __init__(self,dbname): self.con=sqlite.connect(dbname) def __del__(self): self.con.close() def dbcommit(self): self.con.commit() Or another code sample: class bicluster: def __init__(self,vec,left=None,right=None,distance=0.0,id=None): self.left=left self.right=right self.vec=vec self.id=id self.distance=distance There are so many classes with __init__ I come across when trying to read other people's code, but I don't understand the logic in creating them.", "output": "By what you wrote, you are missing a critical piece of understanding: the difference between a class and an object. __init__ doesn't initialize a class, it initializes an instance of a class or an object. Each dog has colour, but dogs as a class don't. Each dog has four or fewer feet, but the class of dogs doesn't. The class is a concept of an object. When you see Fido and Spot, you recognise their similarity, their doghood. That's the class. When you say class Dog: def __init__(self, legs, colour): self.legs = legs self.colour = colour fido = Dog(4, \"brown\") spot = Dog(3, \"mostly yellow\") You're saying, Fido is a brown dog with 4 legs while Spot is a bit of a cripple and is mostly yellow. The __init__ function is called a constructor, or initializer, and is automatically called when you create a new instance of a class. Within that function, the newly created object is assigned to the parameter self. The notation self.legs is an attribute called legs of the object in the variable self. Attributes are kind of like variables, but they describe the state of an object, or particular actions (functions) available to the object. However, notice that you don't set colour for the doghood itself - it's an abstract concept. There are attributes that make sense on classes. For instance, population_size is one such - it doesn't make sense to count the Fido because Fido is always one. It does make sense to count dogs. Let us say there're 200 million dogs in the world. It's the property of the Dog class. Fido has nothing to do with the number 200 million, nor does Spot. It's called a \"class attribute\", as opposed to \"instance attributes\" that are colour or legs above. Now, to something less canine and more programming-related. As I write below, class to add things is not sensible - what is it a class of? Classes in Python make up of collections of different data, that behave similarly. Class of dogs consists of Fido and Spot and 199999999998 other animals similar to them, all of them peeing on lampposts. What does the class for adding things consist of? By what data inherent to them do they differ? And what actions do they share? However, numbers... those are more interesting subjects. Say, Integers. There's a lot of them, a lot more than dogs. I know that Python already has integers, but let's play dumb and \"implement\" them again (by cheating and using Python's integers). So, Integers are a class. They have some data (value), and some behaviours (\"add me to this other number\"). Let's show this: class MyInteger: def __init__(self, newvalue): # imagine self as an index card. # under the heading of \"value\", we will write # the contents of the variable newvalue. self.value = newvalue def add(self, other): # when an integer wants to add itself to another integer, # we'll take their values and add them together, # then make a new integer with the result value. 
return MyInteger(self.value + other.value) three = MyInteger(3) # three now contains an object of class MyInteger # three.value is now 3 five = MyInteger(5) # five now contains an object of class MyInteger # five.value is now 5 eight = three.add(five) # here, we invoked the three's behaviour of adding another integer # now, eight.value is three.value + five.value = 3 + 5 = 8 print eight.value # ==> 8 This is a bit fragile (we're assuming other will be a MyInteger), but we'll ignore now. In real code, we wouldn't; we'd test it to make sure, and maybe even coerce it (\"you're not an integer? by golly, you have 10 nanoseconds to become one! 9... 8....\") We could even define fractions. Fractions also know how to add themselves. class MyFraction: def __init__(self, newnumerator, newdenominator): self.numerator = newnumerator self.denominator = newdenominator # because every fraction is described by these two things def add(self, other): newdenominator = self.denominator * other.denominator newnumerator = self.numerator * other.denominator + self.denominator * other.numerator return MyFraction(newnumerator, newdenominator) There's even more fractions than integers (not really, but computers don't know that). Let's make two: half = MyFraction(1, 2) third = MyFraction(1, 3) five_sixths = half.add(third) print five_sixths.numerator # ==> 5 print five_sixths.denominator # ==> 6 You're not actually declaring anything here. Attributes are like a new kind of variable. Normal variables only have one value. Let us say you write colour = \"grey\". You can't have another variable named colour that is \"fuchsia\" - not in the same place in the code. Arrays solve that to a degree. If you say colour = [\"grey\", \"fuchsia\"], you have stacked two colours into the variable, but you distinguish them by their position (0, or 1, in this case). Attributes are variables that are bound to an object. Like with arrays, we can have plenty colour variables, on different dogs. So, fido.colour is one variable, but spot.colour is another. The first one is bound to the object within the variable fido; the second, spot. Now, when you call Dog(4, \"brown\"), or three.add(five), there will always be an invisible parameter, which will be assigned to the dangling extra one at the front of the parameter list. It is conventionally called self, and will get the value of the object in front of the dot. Thus, within the Dog's __init__ (constructor), self will be whatever the new Dog will turn out to be; within MyInteger's add, self will be bound to the object in the variable three. Thus, three.value will be the same variable outside the add, as self.value within the add. If I say the_mangy_one = fido, I will start referring to the object known as fido with yet another name. From now on, fido.colour is exactly the same variable as the_mangy_one.colour. So, the things inside the __init__. You can think of them as noting things into the Dog's birth certificate. colour by itself is a random variable, could contain anything. fido.colour or self.colour is like a form field on the Dog's identity sheet; and __init__ is the clerk filling it out for the first time. Any clearer? EDIT: Expanding on the comment below: You mean a list of objects, don't you? First of all, fido is actually not an object. It is a variable, which is currently containing an object, just like when you say x = 5, x is a variable currently containing the number five. 
If you later change your mind, you can do fido = Cat(4, \"pleasing\") (as long as you've created a class Cat), and fido would from then on \"contain\" a cat object. If you do fido = x, it will then contain the number five, and not an animal object at all. A class by itself doesn't know its instances unless you specifically write code to keep track of them. For instance: class Cat: census = [] #define census array def __init__(self, legs, colour): self.colour = colour self.legs = legs Cat.census.append(self) Here, census is a class-level attribute of Cat class. fluffy = Cat(4, \"white\") spark = Cat(4, \"fiery\") Cat.census # ==> [<__main__.Cat instance at 0x108982cb0>, <__main__.Cat instance at 0x108982e18>] # or something like that Note that you won't get [fluffy, sparky]. Those are just variable names. If you want cats themselves to have names, you have to make a separate attribute for the name, and then override the __str__ method to return this name. This method's (i.e. class-bound function, just like add or __init__) purpose is to describe how to convert the object to a string, like when you print it out."} +{"question_id": 12433695, "score": 154, "creation_date": 1347671125, "tags": ["python", "list", "slice"], "instruction": "Extract elements of list at odd positions\n\nSo I want to create a list which is a sublist of some existing list. For example, L = [1, 2, 3, 4, 5, 6, 7], I want to create a sublist li such that li contains all the elements in L at odd positions. While I can do it by L = [1, 2, 3, 4, 5, 6, 7] li = [] count = 0 for i in L: if count % 2 == 1: li.append(i) count += 1 But I want to know if there is another way to do the same efficiently and in fewer number of steps.", "output": "Solution Yes, you can: l = L[1::2] And this is all. The result will contain the elements placed on the following positions (0-based, so first element is at position 0, second at 1 etc.): 1, 3, 5 so the result (actual numbers) will be: 2, 4, 6 Explanation The [1::2] at the end is just a notation for list slicing. Usually it is in the following form: some_list[start:stop:step] If we omitted start, the default (0) would be used. So the first element (at position 0, because the indexes are 0-based) would be selected. In this case the second element will be selected. Because the second argument (stop) is omitted, the default is being used (the end of the list). So the list is being iterated from the second element to the end. We also provided third argument (step) which is 2. Which means that one element will be selected, the next will be skipped, and so on... So, to sum up, in this case [1::2] means: take the second element (which, by the way, is an odd element, if you judge from the index), skip one element (because we have step=2, so we are skipping one, as a contrary to step=1 which is default), take the next element, Repeat steps 2.-3. until the end of the list is reached, EDIT: @PreetKukreti gave a link for another explanation on Python's list slicing notation. See here: How slicing in Python works Extras - replacing counter with enumerate() In your code, you explicitly create and increase the counter. 
In Python this is not necessary, as you can enumerate through some iterable using enumerate(): for count, i in enumerate(L): if count % 2 == 1: l.append(i) The above serves exactly the same purpose as the code you were using: count = 0 for i in L: if count % 2 == 1: l.append(i) count += 1 More on emulating for loops with counter in Python: How to access the index value in a 'for' loop?"} +{"question_id": 64505, "score": 154, "creation_date": 1221496595, "tags": ["python", "smtp"], "instruction": "Sending mail from Python using SMTP\n\nI'm using the following method to send mail from Python using SMTP. Is it the right method to use or are there gotchas I'm missing ? from smtplib import SMTP import datetime debuglevel = 0 smtp = SMTP() smtp.set_debuglevel(debuglevel) smtp.connect('YOUR.MAIL.SERVER', 26) smtp.login('USERNAME@DOMAIN', 'PASSWORD') from_addr = \"John Doe <john@doe.net>\" to_addr = \"foo@bar.com\" subj = \"hello\" date = datetime.datetime.now().strftime( \"%d/%m/%Y %H:%M\" ) message_text = \"Hello\\nThis is a mail from your server\\n\\nBye\\n\" msg = \"From: %s\\nTo: %s\\nSubject: %s\\nDate: %s\\n\\n%s\" % ( from_addr, to_addr, subj, date, message_text ) smtp.sendmail(from_addr, to_addr, msg) smtp.quit()", "output": "The script I use is quite similar; I post it here as an example of how to use the email.* modules to generate MIME messages; so this script can be easily modified to attach pictures, etc. I rely on my ISP to add the date time header. My ISP requires me to use a secure smtp connection to send mail, I rely on the smtplib module (downloadable at http://www1.cs.columbia.edu/~db2501/ssmtplib.py) As in your script, the username and password, (given dummy values below), used to authenticate on the SMTP server, are in plain text in the source. This is a security weakness; but the best alternative depends on how careful you need (want?) to be about protecting these. ======================================= #! /usr/local/bin/python SMTPserver = 'smtp.att.yahoo.com' sender = 'me@my_email_domain.net' destination = ['recipient@her_email_domain.com'] USERNAME = \"USER_NAME_FOR_INTERNET_SERVICE_PROVIDER\" PASSWORD = \"PASSWORD_INTERNET_SERVICE_PROVIDER\" # typical values for text_subtype are plain, html, xml text_subtype = 'plain' content=\"\"\"\\ Test message \"\"\" subject=\"Sent from Python\" import sys from smtplib import SMTP_SSL as SMTP # this invokes the secure SMTP protocol (port 465, uses SSL) # from smtplib import SMTP # use this for standard SMTP protocol (port 25, no encryption) # old version # from email.MIMEText import MIMEText from email.mime.text import MIMEText try: msg = MIMEText(content, text_subtype) msg['Subject']= subject msg['From'] = sender # some SMTP servers will do this automatically, not all conn = SMTP(SMTPserver) conn.set_debuglevel(False) conn.login(USERNAME, PASSWORD) try: conn.sendmail(sender, destination, msg.as_string()) finally: conn.quit() except: sys.exit( \"mail failed; %s\" % \"CUSTOM_ERROR\" ) # give an error message"} +{"question_id": 3114252, "score": 154, "creation_date": 1277417193, "tags": ["python", "iterator"], "instruction": "One-liner to check whether an iterator yields at least one element?\n\nCurrently I'm doing this: try: something = next(iterator) # ... except StopIteration: # ... But I would like an expression that I can place inside a simple if statement. Is there anything built-in which would make this code look less clumsy? 
I only need to check for the first item.", "output": "if any(True for _ in iterator): print('iterator had at least one element') if all(False for _ in iterator): print('iterator was empty') Note that this will consume the first element of the iterable if it has at least one element."} +{"question_id": 33271098, "score": 154, "creation_date": 1445470883, "tags": ["python", "pandas", "group-by", "dataframe"], "instruction": "Get a frequency count based on multiple dataframe columns\n\nI have the following dataframe. Group Size Short Small Short Small Moderate Medium Moderate Small Tall Large I want to count the frequency of how many times the same row appears in the dataframe. Group Size Time Short Small 2 Moderate Medium 1 Moderate Small 1 Tall Large 1", "output": "You can use groupby's size import pandas as pd # load the sample data data = {'Group': ['Short', 'Short', 'Moderate', 'Moderate', 'Tall'], 'Size': ['Small', 'Small', 'Medium', 'Small', 'Large']} df = pd.DataFrame(data) Option 1: dfg = df.groupby(by=[\"Group\", \"Size\"]).size() # which results in a pandas.core.series.Series Group Size Moderate Medium 1 Small 1 Short Small 2 Tall Large 1 dtype: int64 Option 2: dfg = df.groupby(by=[\"Group\", \"Size\"]).size().reset_index(name=\"Time\") # which results in a pandas.core.frame.DataFrame Group Size Time 0 Moderate Medium 1 1 Moderate Small 1 2 Short Small 2 3 Tall Large 1 Option 3: dfg = df.groupby(by=[\"Group\", \"Size\"], as_index=False).size() # which results in a pandas.core.frame.DataFrame Group Size Time 0 Moderate Medium 1 1 Moderate Small 1 2 Short Small 2 3 Tall Large 1"} +{"question_id": 19735250, "score": 154, "creation_date": 1383339261, "tags": ["php", "python", "linux", "exec", "archlinux"], "instruction": "Running a Python script from PHP\n\nI'm trying to run a Python script from PHP using the following command: exec('/usr/bin/python2.7 /srv/http/assets/py/switch.py arg1 arg2'); However, PHP simply doesn't produce any output. Error reporting is set to E_ALL and display_errors is on. Here's what I've tried: I used python2, /usr/bin/python2 and python2.7 instead of /usr/bin/python2.7 I also used a relative path instead of an absolute path which didn't change anything either. I tried using the commands exec, shell_exec, system. However, if I run if (exec('echo TEST') == 'TEST') { echo 'exec works!'; } it works perfectly fine while shutdown now doesn't do anything. PHP has the permissions to access and execute the file. EDIT: Thanks to Alejandro, I was able to fix the problem. If you have the same problem, don't forget that your webserver probably/hopefully doesn't run as root. Try logging in as your webserver's user or a user with similar permissions and try to run the commands yourself.", "output": "Tested on Ubuntu Server 10.04. I hope it helps you also on Arch Linux. In PHP use shell_exec function: Execute command via shell and return the complete output as a string. It returns the output from the executed command or NULL if an error occurred or the command produces no output. <?php $command = escapeshellcmd('/usr/custom/test.py'); $output = shell_exec($command); echo $output; ?> Into Python file test.py, verify this text in first line: (see shebang explain): #!/usr/bin/env python If you have several versions of Python installed, /usr/bin/env will ensure the interpreter used is the first one on your environment's $PATH. The alternative would be to hardcode something like #!/usr/bin/python; that's ok, but less flexible. 
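As a side illustration (not part of the original answer, and the argument handling is my own assumption, since the question only shows the PHP side passing arg1 and arg2), a minimal receiving script using the env-style shebang could look like this:

#!/usr/bin/env python
import sys

# arguments appended after the script path in exec()/shell_exec()
# arrive in sys.argv; sys.argv[0] is the script path itself
print("received:", sys.argv[1:])

Whatever the script writes to stdout is what shell_exec() hands back to PHP as a string.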
In Unix, an executable file that's meant to be interpreted can indicate what interpreter to use by having a #! at the start of the first line, followed by the interpreter (and any flags it may need). If you're talking about other platforms, of course, this rule does not apply (but that \"shebang line\" does no harm, and will help if you ever copy that script to a platform with a Unix base, such as Linux, Mac, etc). This applies when you run it in Unix by making it executable (chmod +x myscript.py) and then running it directly: ./myscript.py, rather than just python myscript.py To make executable a file on unix-type platforms: chmod +x myscript.py Also Python file must have correct privileges (execution for user www-data / apache if PHP script runs in browser or curl) and/or must be \"executable\". Also all commands into .py file must have correct privileges. Taken from php manual: Just a quick reminder for those trying to use shell_exec on a unix-type platform and can't seem to get it to work. PHP executes as the web user on the system (generally www for Apache), so you need to make sure that the web user has rights to whatever files or directories that you are trying to use in the shell_exec command. Other wise, it won't appear to be doing anything."} +{"question_id": 61321503, "score": 154, "creation_date": 1587382490, "tags": ["python", "python-3.x", "pathlib"], "instruction": "Is there a Pathlib alternate for os.path.join?\n\nI am currently accessing the parent directory of my file using Pathlib as follows: Path(__file__).parent When I print it, and this gives me the following output: print('Parent: ', Path(__file__).parent) #output /home/user/EC/main-folder The main-folder has a .env file which I want to access and for that I want to join the parent path with the .env. Right now, I did: dotenv_path = os.path.join(Path(__file__).parent, \".env\") which works. But I would like to know, if there is a Pathlib alternate to os.path.join()? Something like: dotenv_path = pathlib_alternate_for_join(Path(__file__).parent, \".env\")", "output": "Use pathlib.Path.joinpath: (Path(__file__).parent).joinpath('.env')"} +{"question_id": 5243596, "score": 154, "creation_date": 1299662408, "tags": ["python", "sql", "string-formatting"], "instruction": "Python SQL query string formatting\n\nI'm trying to find the best way to format an sql query string. When I'm debugging my application I'd like to log to file all the sql query strings, and it is important that the string is properly formated. Option 1 def myquery(): sql = \"select field1, field2, field3, field4 from table where condition1=1 and condition2=2\" con = mymodule.get_connection() ... This is good for printing the sql string. It is not a good solution if the string is long and not fits the standard width of 80 characters. Option 2 def query(): sql = \"\"\" select field1, field2, field3, field4 from table where condition1=1 and condition2=2\"\"\" con = mymodule.get_connection() ... Here the code is clear but when you print the sql query string you get all these annoying white spaces. u'\\nselect field1, field2, field3, field4\\n_____from table\\n____where condition1=1 \\n_____and condition2=2' Note: I have replaced white spaces with underscore _, because they are trimmed by the editor Option 3 def query(): sql = \"\"\"select field1, field2, field3, field4 from table where condition1=1 and condition2=2\"\"\" con = mymodule.get_connection() ... I don't like this option because it breaks the clearness of the well tabulated code. 
Option 4 def query(): sql = \"select field1, field2, field3, field4 \" \\ \"from table \" \\ \"where condition1=1 \" \\ \"and condition2=2 \" con = mymodule.get_connection() ... I don't like this option because all the extra typing in each line and is difficult to edit the query also. For me the best solution would be Option 2 but I don't like the extra whitespaces when I print the sql string. Do you know of any other options?", "output": "Security disclaimer: DO NOT use any kind of string interpolation in SQL queries if the interpolated values can be manipulated by any attacker. See e.g. this question for an example of why this is dangerous, leading directly to SQL injection vulnerabilities. Use parametrized queries instead, which all database engines support. The only reason why the below answer is safe is that the interpolated strings are literals defined inside the same function (they are not related to any kind of user input). Sorry for posting to such an old thread -- but as someone who also shares a passion for pythonic 'best', I thought I'd share our solution. The solution is to build SQL statements using python's String Literal Concatenation (http://docs.python.org/), which could be qualified a somewhere between Option 2 and Option 4 Code Sample: sql = (\"SELECT field1, field2, field3, field4 \" \"FROM table \" \"WHERE condition1=1 \" \"AND condition2=2;\") Works as well with f-strings: fields = \"field1, field2, field3, field4\" table = \"table\" conditions = \"condition1=1 AND condition2=2\" sql = (f\"SELECT {fields} \" f\"FROM {table} \" f\"WHERE {conditions};\") Pros: It retains the pythonic 'well tabulated' format, but does not add extraneous space characters (which pollutes logging). It avoids the backslash continuation ugliness of Option 4, which makes it difficult to add statements (not to mention white-space blindness). And further, it's really simple to expand the statement in VIM (just position the cursor to the insert point, and press SHIFT-O to open a new line)."} +{"question_id": 7604636, "score": 154, "creation_date": 1317340529, "tags": ["python", "exception", "if-statement", "try-catch", "pep"], "instruction": "Better to 'try' something and catch the exception or test if it's possible first to avoid an exception?\n\nShould I test if something is valid or just try to do it and catch the exception? Is there any solid documentation saying that one way is preferred? Is one way more pythonic? For example, should I: if len(my_list) >= 4: x = my_list[3] else: x = 'NO_ABC' Or: try: x = my_list[3] except IndexError: x = 'NO_ABC' Some thoughts... PEP 20 says: Errors should never pass silently. Unless explicitly silenced. Should using a try instead of an if be interpreted as an error passing silently? And if so, are you explicitly silencing it by using it in this way, therefore making it OK? I'm not referring to situations where you can only do things 1 way; for example: try: import foo except ImportError: import baz", "output": "You should prefer try/except over if/else if that results in speed-ups (for example by preventing extra lookups) cleaner code (fewer lines/easier to read) Often, these go hand-in-hand. speed-ups In the case of trying to find an element in a long list by: try: x = my_list[index] except IndexError: x = 'NO_ABC' the try, except is the best option when the index is probably in the list and the IndexError is usually not raised. This way you avoid the need for an extra lookup by if index < len(my_list). 
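To make the speed point above concrete, here is a small benchmark sketch using the standard library's timeit module; the list size and index are arbitrary choices for illustration, and the exact numbers will vary with your interpreter and data:

import timeit

setup = "my_list = list(range(100)); index = 50"

# EAFP: attempt the lookup, handle the (rare) failure
eafp = '''
try:
    x = my_list[index]
except IndexError:
    x = 'NO_ABC'
'''

# LBYL: pay for a length check on every access
lbyl = '''
if index < len(my_list):
    x = my_list[index]
else:
    x = 'NO_ABC'
'''

print("EAFP:", timeit.timeit(eafp, setup=setup))
print("LBYL:", timeit.timeit(lbyl, setup=setup))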
Python encourages the use of exceptions, which you handle is a phrase from Dive Into Python. Your example not only handles the exception (gracefully), rather than letting it silently pass, also the exception occurs only in the exceptional case of index not being found (hence the word exception!). cleaner code The official Python Documentation mentions EAFP: Easier to ask for forgiveness than permission and Rob Knight notes that catching errors rather than avoiding them, can result in cleaner, easier to read code. His example says it like this: Worse (LBYL 'look before you leap'): #check whether int conversion will raise an error if not isinstance(s, str) or not s.isdigit(): return None elif len(s) > 10: #too many digits for int conversion return None else: return int(s) Better (EAFP: Easier to ask for forgiveness than permission): try: return int(s) except (TypeError, ValueError, OverflowError): #int conversion failed return None"} +{"question_id": 6031584, "score": 154, "creation_date": 1305639271, "tags": ["python", "import"], "instruction": "How can I import from the standard library, when my project has a module with the same name? (How can I control where Python looks for modules?)\n\nThere is a module in my project folder called calendar. Elsewhere in the code, I would like to use the standard library Calendar class. But when I try to import this class, using from calendar import Calendar, this imports from my own module instead, causing errors later. How can I avoid this? Do I have to rename the module?", "output": "In Python 3.5 and up, use the standard library importlib module to import directly from a specified path, bypassing import's lookup mechanism: import importlib.util import sys # For illustrative purposes. import tokenize file_path = tokenize.__file__ # returns \"/path/to/tokenize.py\" module_name = tokenize.__name__ # returns \"tokenize\" spec = importlib.util.spec_from_file_location(module_name, file_path) module = importlib.util.module_from_spec(spec) sys.modules[module_name] = module spec.loader.exec_module(module) In actual code, file_path can be set to any path to a .py file to import; module_name should be the name of the module that will be imported (the name that the import system uses to look up the module when further import statements are attempted). Subsequent code will use module as the name of the module; change the variable name module to use a different name. To load a package instead of a single file, file_path should be the path to the package's root __init__.py."} +{"question_id": 3862010, "score": 154, "creation_date": 1286267460, "tags": ["python", "string", "generator"], "instruction": "Is there a generator version of `string.split()` in Python?\n\nstring.split() returns a list instance. Is there a version that returns a generator instead? Are there any reasons against having a generator version?", "output": "It is highly probable that re.finditer uses fairly minimal memory overhead. def split_iter(string): return (x.group(0) for x in re.finditer(r\"[A-Za-z']+\", string)) Demo: >>> list( split_iter(\"A programmer's RegEx test.\") ) ['A', \"programmer's\", 'RegEx', 'test'] I have confirmed that this takes constant memory in python 3.2.1, assuming my testing methodology was correct. I created a string of very large size (1GB or so), then iterated through the iterable with a for loop (NOT a list comprehension, which would have generated extra memory). 
This did not result in a noticeable growth of memory (that is, if there was a growth in memory, it was far far less than the 1GB string). More general version: In reply to a comment \"I fail to see the connection with str.split\", here is a more general version: def splitStr(string, sep=\"\\s+\"): # warning: does not yet work if sep is a lookahead like `(?=b)` if sep=='': return (c for c in string) else: return (_.group(1) for _ in re.finditer(f'(?:^|{sep})((?:(?!{sep}).)*)', string)) # alternatively, more verbosely: regex = f'(?:^|{sep})((?:(?!{sep}).)*)' for match in re.finditer(regex, string): fragment = match.group(1) yield fragment The idea is that ((?!pat).)* 'negates' a group by ensuring it greedily matches until the pattern would start to match (lookaheads do not consume the string in the regex finite-state-machine). In pseudocode: repeatedly consume (begin-of-string xor {sep}) + as much as possible until we would be able to begin again (or hit end of string) Demo: >>> splitStr('.......A...b...c....', sep='...') <generator object splitStr.<locals>.<genexpr> at 0x7fe8530fb5e8> >>> list(splitStr('A,b,c.', sep=',')) ['A', 'b', 'c.'] >>> list(splitStr(',,A,b,c.,', sep=',')) ['', '', 'A', 'b', 'c.', ''] >>> list(splitStr('.......A...b...c....', '\\.\\.\\.')) ['', '', '.A', 'b', 'c', '.'] >>> list(splitStr(' A b c. ')) ['', 'A', 'b', 'c.', ''] (One should note that str.split has an ugly behavior: it special-cases having sep=None as first doing str.strip to remove leading and trailing whitespace. The above purposefully does not do that; see the last example where sep=\"\\s+\".) (I ran into various bugs (including an internal re.error) when trying to implement this... Negative lookbehind will restrict you to fixed-length delimiters so we don't use that. Almost anything besides the above regex seemed to result in errors with the beginning-of-string and end-of-string edge-cases (e.g. r'(.*?)($|,)' on ',,,a,,b,c' returns ['', '', '', 'a', '', 'b', 'c', ''] with an extraneous empty string at the end; one can look at the edit history for another seemingly-correct regex that actually has subtle bugs.) (If you want to implement this yourself for higher performance (although they are heavweight, regexes most importantly run in C), you'd write some code (with ctypes? not sure how to get generators working with it?), with the following pseudocode for fixed-length delimiters: Hash your delimiter of length L. Keep a running hash of length L as you scan the string using a running hash algorithm, O(1) update time. Whenever the hash might equal your delimiter, manually check if the past few characters were the delimiter; if so, then yield substring since last yield. Special case for beginning and end of string. This would be a generator version of the textbook algorithm to do O(N) text search. Multiprocessing versions are also possible. They might seem overkill, but the question implies that one is working with really huge strings... At that point you might consider crazy things like caching byte offsets if few of them, or working from disk with some disk-backed bytestring view object, buying more RAM, etc. etc.)"} +{"question_id": 18676156, "score": 153, "creation_date": 1378576040, "tags": ["python", "python-3.x", "django", "django-models", "enums"], "instruction": "How to properly use the \"choices\" field option in Django\n\nI'm reading the tutorial here: https://docs.djangoproject.com/en/1.5/ref/models/fields/#choices and i'm trying to create a box where the user can select the month he was born in. 
What I tried was MONTH_CHOICES = ( (JANUARY, \"January\"), (FEBRUARY, \"February\"), (MARCH, \"March\"), .... (DECEMBER, \"December\"), ) month = CharField(max_length=9, choices=MONTHS_CHOICES, default=JANUARY) Is this correct? I see that in the tutorial I was reading, they for some reason created variables first, like so FRESHMAN = 'FR' SOPHOMORE = 'SO' JUNIOR = 'JR' SENIOR = 'SR' Why did they create those variables? Also, the MONTHS_CHOICES is in a model called People, so would the code I provided create a \"Months Choices) column in the database called called \"People\" and would it say what month the user was born in after he clicks on of the months and submits the form?", "output": "According to the documentation: Field.choices An iterable (e.g., a list or tuple) consisting itself of iterables of exactly two items (e.g. [(A, B), (A, B) ...]) to use as choices for this field. If this is given, the default form widget will be a select box with these choices instead of the standard text field. The first element in each tuple is the actual value to be stored, and the second element is the human-readable name. So, your code is correct, except that you should either define variables JANUARY, FEBRUARY etc. or use calendar module to define MONTH_CHOICES: import calendar ... class MyModel(models.Model): ... MONTH_CHOICES = [(str(i), calendar.month_name[i]) for i in range(1,13)] month = models.CharField(max_length=9, choices=MONTH_CHOICES, default='1')"} +{"question_id": 16573802, "score": 153, "creation_date": 1368647491, "tags": ["python", "sqlalchemy", "flask-sqlalchemy"], "instruction": "Flask-SQLAlchemy how to delete all rows in a single table\n\nHow do I delete all rows in a single table using Flask-SQLAlchemy? Looking for something like this: >>> users = models.User.query.all() >>> models.db.session.delete(users) # but it errs out: UnmappedInstanceError: Class '__builtin__.list' is not mapped", "output": "Try delete: models.User.query.delete() From the docs: Returns the number of rows deleted, excluding any cascades."} +{"question_id": 702834, "score": 153, "creation_date": 1238530457, "tags": ["python", "enums"], "instruction": "What's the common practice for enums in Python?\n\nHow can I implement an enumeration type (spelled enum in some languages) in Python? What is the common practice to get this functionality?", "output": "class Materials: Shaded, Shiny, Transparent, Matte = range(4) >>> print Materials.Matte 3 Update: For Python 3.4+: As of Python 3.4+, you can now use Enum (or IntEnum for enums with int values) from the enum module. Use enum.auto to increment the values up automatically: import enum class Materials(enum.IntEnum): Shaded = 1 Shiny = enum.auto() Transparent = 3 Matte = enum.auto() print(Materials.Shiny == 2) # True print(Materials.Matte == 4) # True"} +{"question_id": 47113029, "score": 153, "creation_date": 1509813684, "tags": ["python", "opencv", "python-import", "importerror", "python-module"], "instruction": "ImportError: libSM.so.6: cannot open shared object file: No such file or directory\n\nWhen trying to import OpenCV, using import cv2 I get the following error: /usr/local/lib/python2.7/dist-packages/cv2/__init__.py in <module>() 7 8 # make IDE's (PyCharm) autocompletion happy ----> 9 from .cv2 import * 10 11 # wildcard import above does not import \"private\" variables like __version__ ImportError: libSM.so.6: cannot open shared object file: No such file or directory Not sure how to fix this - trying to play around with Google's new Colaboratory tool. 
Notebook is here: https://drive.google.com/file/d/0B7-sJqBiyjCcRmFkMzl6cy1iN0k/view?usp=sharing", "output": "This fixed the problem by having it as the first two lines of the script: !pip install opencv-python !apt update && apt install -y libsm6 libxext6 !apt-get install -y libxrender-dev"} +{"question_id": 15607903, "score": 153, "creation_date": 1364187273, "tags": ["python", "file", "permissions", "chmod"], "instruction": "Python module os.chmod(file, 664) does not change the permission to rw-rw-r-- but -w--wx----\n\nRecently I am using Python module os, when I tried to change the permission of a file, I did not get the expected result. For example, I intended to change the permission to rw-rw-r--, os.chmod(\"/tmp/test_file\", 664) The ownership permission is actually -w--wx--- (230) --w--wx--- 1 ag ag 0 Mar 25 05:45 test_file However, if I change 664 to 0664 in the code, the result is just what I need, e.g. os.chmod(\"/tmp/test_file\", 0664) The result is: -rw-rw-r-- 1 ag ag 0 Mar 25 05:55 test_file Could anybody help explaining why does that leading 0 is so important to get the correct result?", "output": "Found this on a forum If you're wondering why that leading zero is important, it's because permissions are set as an octal integer, and Python automagically treats any integer with a leading zero as octal. So os.chmod(\"file\", 484) (in decimal) would give the same result. What you are doing is passing 664 which in octal is 1230 In your case you would need os.chmod(\"/tmp/test_file\", 0o666) using the prefix with 0o (zero oh)."} +{"question_id": 29813694, "score": 153, "creation_date": 1429763319, "tags": ["python", "seaborn", "visualization", "facet-grid"], "instruction": "How to add a title to Seaborn Facet Plot\n\nHow do I add a title to this Seaborne plot? Let's give it a title 'I AM A TITLE'. tips = sns.load_dataset(\"tips\") g = sns.FacetGrid(tips, col=\"sex\", row=\"smoker\", margin_titles=True) g.map(sns.plt.scatter, \"total_bill\", \"tip\")", "output": "Updating slightly, with seaborn 0.11.1: Seaborn's relplot function creates a FacetGrid and gives each subplot its own explanatory title. You can add a title over the whole thing: import seaborn as sns tips = sns.load_dataset('tips') rp = sns.relplot(data=tips, x='total_bill', y='tip', col='sex', row='smoker', kind='scatter') # rp is a FacetGrid; # relplot is a nice organized way to use it rp.fig.subplots_adjust(top=0.9) # adjust the Figure in rp rp.fig.suptitle('ONE TITLE FOR ALL') If you create the FacetGrid directly, as in the original example, it automatically adds column and row labels instead of individual subplot titles. We can still add a title to the whole thing: from matplotlib.pyplot import scatter as plt_scatter g = sns.FacetGrid(tips, col='sex', row='smoker', margin_titles=True) g.map(plt_scatter, 'total_bill', 'tip') g.fig.subplots_adjust(top=0.9) g.fig.suptitle('TITLE!') The FacetGrid objects are built with matplotlib Figure objects, so we can use subplots_adjust, suptitle that may be familiar from matplotlib in general."} +{"question_id": 4757178, "score": 153, "creation_date": 1295601863, "tags": ["python", "linux", "unix", "virtualenv"], "instruction": "How do you set your pythonpath in an already-created virtualenv?\n\nWhat file do I edit, and how? I created a virtual environment.", "output": "The most elegant solution to this problem is here. 
Original answer remains, but this is a messy solution: If you want to change the PYTHONPATH used in a virtualenv, you can add the following line to your virtualenv's bin/activate file: export PYTHONPATH=\"/the/path/you/want\" This way, the new PYTHONPATH will be set each time you use this virtualenv. EDIT: (to answer @RamRachum's comment) To have it restored to its original value on deactivate, you could add export OLD_PYTHONPATH=\"$PYTHONPATH\" before the previously mentioned line, and add the following line to your bin/postdeactivate script. export PYTHONPATH=\"$OLD_PYTHONPATH\""} +{"question_id": 3144089, "score": 153, "creation_date": 1277840263, "tags": ["python", "search", "import", "path"], "instruction": "Expand Python Search Path to Other Source\n\nI have just joined a project with a rather large existing code base. We develop in linux and do not use and IDE. We run through the command line. I'm trying to figure out how to get python to search for the right path when I run project modules. For instance, when I run something like: python someprojectfile.py I get ImportError: no module named core.'somemodule' I get this for all of my imports to I assume it's an issue with the path. How do I get Python to search ~/codez/project/ and all the files and folders for *.py files during import statements?", "output": "There are a few possible ways to do this: Set the environment variable PYTHONPATH to a colon-separated list of directories to search for imported modules. In your program, use sys.path.append('/path/to/search') to add the names of directories you want Python to search for imported modules. sys.path is just the list of directories Python searches every time it gets asked to import a module, and you can alter it as needed (although I wouldn't recommend removing any of the standard directories!). Any directories you put in the environment variable PYTHONPATH will be inserted into sys.path when Python starts up. Use site.addsitedir to add a directory to sys.path. The difference between this and just plain appending is that when you use addsitedir, it also looks for .pth files within that directory and uses them to possibly add additional directories to sys.path based on the contents of the files. See the documentation for more detail. Which one of these you want to use depends on your situation. Remember that when you distribute your project to other users, they typically install it in such a manner that the Python code files will be automatically detected by Python's importer (i.e. packages are usually installed in the site-packages directory), so if you mess with sys.path in your code, that may be unnecessary and might even have adverse effects when that code runs on another computer. For development, I would venture a guess that setting PYTHONPATH is usually the best way to go. However, when you're using something that just runs on your own computer (or when you have nonstandard setups, e.g. sometimes in web app frameworks), it's not entirely uncommon to do something like import sys from os.path import dirname sys.path.append(dirname(__file__))"} +{"question_id": 3829742, "score": 153, "creation_date": 1285842775, "tags": ["python", "unit-testing"], "instruction": "Assert that a method was called in a Python unit test\n\nSuppose I have the following code in a Python unit test: aw = aps.Request(\"nv1\") aw2 = aps.Request(\"nv2\", aw) Is there an easy way to assert that a particular method (in my case aw.Clear()) was called during the second line of the test? e.g. 
is there something like this: #pseudocode: assertMethodIsCalled(aw.Clear, lambda: aps.Request(\"nv2\", aw))", "output": "I use Mock (which is now unittest.mock on py3.3+) for this: from mock import patch from PyQt4 import Qt @patch.object(Qt.QMessageBox, 'aboutQt') def testShowAboutQt(self, mock): self.win.actionAboutQt.trigger() self.assertTrue(mock.called) For your case, it could look like this: import mock from mock import patch def testClearWasCalled(self): aw = aps.Request(\"nv1\") with patch.object(aw, 'Clear') as mock: aw2 = aps.Request(\"nv2\", aw) mock.assert_called_with(42) # or mock.assert_called_once_with(42) Mock supports quite a few useful features, including ways to patch an object or module, as well as checking that the right thing was called, etc etc. Caveat emptor! (Buyer beware!) If you mistype assert_called_with (to assert_called_once or just swap two letters assert_called_wiht) your test may still run, as Mock will think this is a mocked function and happily go along, unless you use autospec=true. For more info read assert_called_once: Threat or Menace."} +{"question_id": 17373161, "score": 153, "creation_date": 1372451484, "tags": ["python", "python-2.7", "set"], "instruction": "Use curly braces to initialize a Set in Python\n\nI'm learning python, and I have a novice question about initializing sets. Through testing, I've discovered that a set can be initialized like so: my_set = {'foo', 'bar', 'baz'} Are there any disadvantages of doing it this way, as opposed to the standard way of: my_set = set(['foo', 'bar', 'baz']) or is it just a question of style?", "output": "There are two issues with the set literal syntax: my_set = {'foo', 'bar', 'baz'} It's not available before Python 2.7 There's no way to express an empty set using that syntax (using {} creates an empty dict) The section of the docs outlining this syntax is here."} +{"question_id": 58755948, "score": 153, "creation_date": 1573156012, "tags": ["python", "python-typing", "mypy"], "instruction": "What is the difference between TypeVar and NewType?\n\nTypeVar and NewType seem related but I'm not sure when I'm supposed to use each or what the difference is at runtime and statically.", "output": "The two concepts aren't related any more than any other type-related concepts. In short, a TypeVar is a variable you can use in type signatures so you can refer to the same unspecified type more than once, while a NewType is used to tell the type checker that some values should be treated as their own type. Type Variables To simplify, type variables let you refer to the same type more than once without specifying exactly which type it is. In a definition, a single type variable always takes the same value. # (This code will type check, but it won't run.) from typing import TypeVar, Generic # Two type variables, named T and R T = TypeVar('T') R = TypeVar('R') # Put in a list of Ts and get out one T def get_one(x: list[T]) -> T: ... # Put in a T and an R, get back an R and a T def swap(x: T, y: R) -> tuple[R, T]: return y, x # A simple generic class that holds a value of type T class ValueHolder(Generic[T]): def __init__(self, value: T): self.value = value def get(self) -> T: return self.value x: ValueHolder[int] = ValueHolder(123) y: ValueHolder[str] = ValueHolder('abc') Without type variables, there wouldn't be a good way to declare the type of get_one or ValueHolder.get. There are a few other options on TypeVar. You can restrict the possible values by passing in more types (e.g. 
TypeVar(name, int, str)), or you can give an upper bound so every value of the type variable must be a subtype of that type (e.g. TypeVar(name, bound=int)). Additionally, you can decide whether a type variable is covariant, contravariant, or neither when you declare it. This essentially decides when subclasses or superclasses can be used in place of a generic type. PEP 484 describes these concepts in more detail, and refers to additional resources. Addendum: Python 3.12 generic parameter lists Starting in Python 3.12, the following syntax has been available to declare type variables. def get_one[T](x: list[T]) -> T: ... def swap[T, R](x: T, y: R) -> tuple[R, T]: ... class ValueHolder[T]: def __init__(self, value: T): ... def get(self) -> T: ... These declarations are equivalent to those above, but now the type variables are only defined in type signatures within their functions/classes, rather than being stored in regular Python variables. The Python 3.12 release notes contain a summary, as well as links to more-detailed documentation. NewType A NewType is for when you want to declare a distinct type without actually doing the work of creating a new type or worry about the overhead of creating new class instances. In the type checker, NewType('Name', int) creates a subclass of int named \"Name.\" At runtime, NewType('Name', int) is not a class at all; it is actually the identity function, so x is NewType('Name', int)(x) is always true. from typing import NewType UserId = NewType('UserId', int) def get_user(x: UserId): ... get_user(UserId(123456)) # this is fine get_user(123456) # that's an int, not a UserId UserId(123456) + 123456 # fine, because UserId is a subclass of int To the type checker, UserId looks something like this: class UserId(int): pass But at runtime, UserId is basically just this: def UserId(x): return x There's almost nothing more than that to a NewType at runtime. In Python 3.8.1, its implementation was almost exactly as follows: def NewType(name, type_): def identity(x): return x identity.__name__ = name return identity"} +{"question_id": 34586114, "score": 153, "creation_date": 1451890677, "tags": ["python", "django", "django-staticfiles", "static-files", "collectstatic"], "instruction": "What's the point of Django's collectstatic?\n\nIn Django, the convention is to put all of your static files (i.e css, js) specific to your app into a folder called static. So the structure would look like this: mysite/ manage.py mysite/ --> (settings.py, etc) myapp/ --> (models.py, views.py, etc) static/ In mysite/settings.py I have: STATIC_ROOT = 'staticfiles' So when I run the command: python manage.py collectstatic It creates a folder called staticfiles at the root level (so same directory as myapp/) What's the point of this? Isn't it just creating a copy of all my static files?", "output": "Collect static files from multiple apps into a single path Well, a single Django project may use several apps, so while there you only have one myapp, it may actually be myapp1, myapp2, etc By copying them from inside the individual apps into a single folder, you can point your frontend web server (e.g. nginx) to that single folder STATIC_ROOT and serve static files from a single location, rather than configure your web server to serve static files from multiple paths. 
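To make that concrete, here is a minimal sketch of the relevant settings (the filesystem path below is a placeholder, not something taken from the question):

# settings.py
STATIC_URL = '/static/'
STATIC_ROOT = '/var/www/mysite/staticfiles'   # collectstatic copies everything here

# after `python manage.py collectstatic`, the frontend server (nginx, Apache, ...)
# is pointed at STATIC_ROOT and serves its contents under /static/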
Persistent URLs with ManifestStaticFilesStorage A note about the MD5 hash being appended to the filename for versioning: It's not part of the default behavior of collectstatic, as settings.STATICFILES_STORAGE defaults to StaticFilesStorage (which doesn't do that) The MD5 hash will kick in e.g. if you set it to use ManifestStaticFilesStorage, which adds that behavior. The purpose of this storage is to keep serving the old files in case some pages still refer to those files, e.g. because they are cached by you or a 3rd party proxy server. Additionally, it\u2019s very helpful if you want to apply far future Expires headers to the deployed files to speed up the load time for subsequent page visits."} +{"question_id": 6953351, "score": 153, "creation_date": 1312532197, "tags": ["python", "multithreading", "dictionary", "thread-safety"], "instruction": "Thread Safety in Python's dictionary\n\nI have a class which holds a dictionary class OrderBook: orders = {'Restaurant1': None, 'Restaurant2': None, 'Restaurant3': None, 'Restaurant4': None} @staticmethod def addOrder(restaurant_name, orders): OrderBook.orders[restaurant_name] = orders And I am running 4 threads (one for each restaurant) that call the method OrderBook.addOrder. Here is the function ran by each thread: def addOrders(restaurant_name): #creates orders ... OrderBook.addOrder(restaurant_name, orders) Is this safe, or do I have to use a lock before calling addOrder?", "output": "Python's built-in structures are thread-safe for single operations, but it can sometimes be hard to see where a statement really becomes multiple operations. Your code should be safe. Keep in mind: a lock here will add almost no overhead, and will give you peace of mind. https://web.archive.org/web/20201108091210/http://effbot.org/pyfaq/what-kinds-of-global-value-mutation-are-thread-safe.htm has more details."} +{"question_id": 52487663, "score": 153, "creation_date": 1537824755, "tags": ["python", "python-typing"], "instruction": "Python type hints: typing.Mapping vs. typing.Dict\n\nI'm working on a python3 project where we use the typing module type hints throughout. It seems that we use typing.Dict and typing.Mapping pretty much interchangeably. Is there a reason to prefer one over the other?", "output": "typing.Dict should be used to indicate a literal dict type with support for element type hinting i.e. Dict[bytes, str]. typing.Mapping is an object which defines the __getitem__, __len__, __iter__ magic methods. typing.MutableMapping is an object which defines same as Mapping but with __setitem__, __delitem__ magic methods as well. typing.Mapping et al. are based on the abc types in this table."} +{"question_id": 6570635, "score": 153, "creation_date": 1309777642, "tags": ["python", "pip", "dependency-management"], "instruction": "Installing multiple versions of a package with pip\n\nIn my application I would like to use: packageA, which requires packageX==1.3 packageB, which requires packageX==1.4 packageX==1.5 How can I install multiple versions of packageX with pip to handle this situation?", "output": "pip won't help you with this. You can tell it to install a specific version, but it will override the other one. On the other hand, using two virtualenvs will let you install both versions on the same machine, but not use them at the same time. Your best bet is to install both versions manually by putting them in your Python path with a different name. 
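One hedged sketch of what "a different name" can look like in practice (the vendor/ directories and the packageX names below are invented for illustration) is to install each version into its own folder, for example with pip install packageX==1.3 --target vendor/px13, and then load each copy explicitly with importlib under a distinct module name:

import importlib.util
import sys

def load_as(name, path):
    # load whatever lives at `path` under the module name `name`
    spec = importlib.util.spec_from_file_location(name, path)
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module
    spec.loader.exec_module(module)
    return module

packagex_13 = load_as("packageX_1_3", "vendor/px13/packageX/__init__.py")
packagex_14 = load_as("packageX_1_4", "vendor/px14/packageX/__init__.py")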
But if your two libs expect them to have the same name (and they should), you will have to modify them so they pick up the version they need with some import alias such as: import dependencyname_version as dependencyname There is currently no clean way to do this. The best you can do is hope for this hack to work. I'd rather ditch one of the two libs and replace it with an equivalent, or patch it to accept the new version of the dependency and give the patch back to the community."} +{"question_id": 2887878, "score": 152, "creation_date": 1274527558, "tags": ["python", "database", "csv", "sqlite"], "instruction": "Importing a CSV file into a sqlite3 database table using Python\n\nI have a CSV file and I want to bulk-import this file into my sqlite3 database using Python. the command is \".import .....\". but it seems that it cannot work like this. Can anyone give me an example of how to do it in sqlite3? I am using windows just in case. Thanks", "output": "import csv, sqlite3 con = sqlite3.connect(\":memory:\") # change to 'sqlite:///your_filename.db' cur = con.cursor() cur.execute(\"CREATE TABLE t (col1, col2);\") # use your column names here with open('data.csv','r') as fin: # `with` statement available in 2.5+ # csv.DictReader uses first line in file for column headings by default dr = csv.DictReader(fin) # comma is default delimiter to_db = [(i['col1'], i['col2']) for i in dr] cur.executemany(\"INSERT INTO t (col1, col2) VALUES (?, ?);\", to_db) con.commit() con.close()"} +{"question_id": 10238473, "score": 152, "creation_date": 1334879621, "tags": ["python", "pylint"], "instruction": "Disable all Pylint warnings for a file\n\nWe are using Pylint within our build system. We have a Python package within our code base that has throwaway code, and I'd like to disable all warnings for a module temporarily so I can stop bugging the other devs with these superfluous messages. Is there an easy way to pylint: disable all warnings for a module?", "output": "From the Pylint FAQ: With Pylint < 0.25, add # pylint: disable-all at the beginning of the module. Pylint 0.26.1 and up have renamed that directive to # pylint: skip-file (but the first version will be kept for backward compatibility). In order to ease finding which modules are ignored a information-level message I0013 is emitted. With recent versions of Pylint, if you use the old syntax, an additional I0014 message is emitted."} +{"question_id": 48248405, "score": 152, "creation_date": 1515923577, "tags": ["python", "python-imaging-library", "python-3.6"], "instruction": "cannot write mode RGBA as JPEG\n\nI am learning to use 'pillow 5.0' following book 'Automate the boring stuff with python' The info about the image object In [79]: audacious = auda In [80]: print(audacious.format, audacious.size, audacious.mode) PNG (1094, 960) RGBA When I tried to convert filetype, it report error. In [83]: audacious.save('audacious.jpg') OSError: cannot write mode RGBA as JPEG There's no such a n error in book.", "output": "JPG does not support transparency - RGBA means Red, Green, Blue, Alpha - Alpha is transparency. You need to discard the Alpha Channel or save as something that supports transparency - like PNG. The Image class has a method convert which can be used to convert RGBA to RGB - after that you will be able to save as JPG. 
Have a look here: the image class doku im = Image.open(\"audacious.png\") rgb_im = im.convert('RGB') rgb_im.save('audacious.jpg') Adapted from dm2013's answer to Convert png to jpeg using Pillow"} +{"question_id": 2022031, "score": 152, "creation_date": 1262883418, "tags": ["python", "list", "append", "nested-lists"], "instruction": "Python append() vs. += operator on lists, why do these give different results?\n\nWhy do these two operations give different results? >>> c = [1, 2, 3] >>> c [1, 2, 3] >>> c += c >>> c [1, 2, 3, 1, 2, 3] >>> c = [1, 2, 3] >>> c.append(c) >>> c [1, 2, 3, [...]] >>> In the last case there's actually infinite recursion. c[-1] and c are the same. Why is it different with the + operation?", "output": "Editor's note: This answer confuses lists with arrays as well as += with +. Despite that, the concepts are similar enough to still be useful. To explain \"why\": The + operation adds the array elements to the original array. The array.append operation inserts the array (or any object) into the end of the original array, which results in a reference to self in that spot (hence the infinite recursion in your case with lists, though with arrays, you'd receive a type error). The difference here is that the + operation acts specific when you add an array (it's overloaded like others, see this chapter on sequences) by concatenating the element. The append-method however does literally what you ask: append the object on the right-hand side that you give it (the array or any other object), instead of taking its elements. An alternative Use extend() if you want to use a function that acts similar to the + operator (as others have shown here as well). It's not wise to do the opposite: to try to mimic append with the + operator for lists (see my earlier link on why). More on lists below: Lists [edit] Several commenters have suggested that the question is about lists and not about arrays. The question has changed, though I should've included this earlier. Most of the above about arrays also applies to lists: The + operator concatenates two lists together. The operator will return a new list object. List.append does not append one list with another, but appends a single object (which here is a list) at the end of your current list. Adding c to itself, therefore, leads to infinite recursion. As with arrays, you can use List.extend to add extend a list with another list (or iterable). This will change your current list in situ, as opposed to +, which returns a new list."} +{"question_id": 5764782, "score": 151, "creation_date": 1303569309, "tags": ["python", "list", "loops"], "instruction": "Iterate through adjacent pairs of items in a Python list\n\nIs it possible to iterate a list in the following way in Python (treat this code as pseudocode)? 
a = [5, 7, 11, 4, 5] for v, w in a: print [v, w] And it should produce [5, 7] [7, 11] [11, 4] [4, 5]", "output": "From the itertools recipes: from itertools import tee def pairwise(iterable): \"s -> (s0,s1), (s1,s2), (s2, s3), ...\" a, b = tee(iterable) next(b, None) return zip(a, b) for v, w in pairwise(a): ..."} +{"question_id": 2676133, "score": 151, "creation_date": 1271775282, "tags": ["python", "sqlalchemy"], "instruction": "Best way to do enum in Sqlalchemy?\n\nI'm reading about sqlalchemy and I saw following code: employees_table = Table('employees', metadata, Column('employee_id', Integer, primary_key=True), Column('name', String(50)), Column('manager_data', String(50)), Column('engineer_info', String(50)), Column('type', String(20), nullable=False) ) employee_mapper = mapper(Employee, employees_table, \\ polymorphic_on=employees_table.c.type, polymorphic_identity='employee') manager_mapper = mapper(Manager, inherits=employee_mapper, polymorphic_identity='manager') engineer_mapper = mapper(Engineer, inherits=employee_mapper, polymorphic_identity='engineer') Should I make 'type' an int, with constants in a library? Or should I make just make type an enum?", "output": "SQLAlchemy has an Enum type since 0.6: http://docs.sqlalchemy.org/en/latest/core/type_basics.html?highlight=enum#sqlalchemy.types.Enum Although I would only recommend its usage if your database has a native enum type. Otherwise I would personally just use an int."} +{"question_id": 23862406, "score": 151, "creation_date": 1401075965, "tags": ["python", "python-2.7", "dictionary", "filtering"], "instruction": "filter items in a python dictionary where keys contain a specific string\n\nI'm a C coder developing something in python. I know how to do the following in C (and hence in C-like logic applied to python), but I'm wondering what the 'Python' way of doing it is. I have a dictionary d, and I'd like to operate on a subset of the items, only those whose key (string) contains a specific substring. i.e. the C logic would be: for key in d: if filter_string in key: # do something else # do nothing, continue I'm imagining the python version would be something like filtered_dict = crazy_python_syntax(d, substring) for key,value in filtered_dict.iteritems(): # do something I've found a lot of posts on here regarding filtering dictionaries, but couldn't find one which involved exactly this. My dictionary is not nested and i'm using python 2.7", "output": "How about a dict comprehension: filtered_dict = {k: v for k, v in d.iteritems() if filter_string in k} One you see it, it should be self-explanatory, as it reads like English pretty well. This syntax requires Python 2.7 or greater. In Python 3, there is only dict.items(), not iteritems() so you would use: filtered_dict = {k: v for k, v in d.items() if filter_string in k}"} +{"question_id": 19790790, "score": 151, "creation_date": 1383660073, "tags": ["python", "split", "pandas", "dataframe"], "instruction": "Splitting dataframe into multiple dataframes\n\nI have a very large dataframe (around 1 million rows) with data from an experiment (60 respondents). I would like to split the dataframe into 60 dataframes (a dataframe for each participant). In the dataframe, data, there is a variable called 'name', which is the unique code for each participant. I have tried the following, but nothing happens (or execution does not stop within an hour). 
What I intend to do is to split the data into smaller dataframes, and append these to a list (datalist): import pandas as pd def splitframe(data, name='name'): n = data[name][0] df = pd.DataFrame(columns=data.columns) datalist = [] for i in range(len(data)): if data[name][i] == n: df = df.append(data.iloc[i]) else: datalist.append(df) df = pd.DataFrame(columns=data.columns) n = data[name][i] df = df.append(data.iloc[i]) return datalist I do not get an error message, the script just seems to run forever! Is there a smart way to do it?", "output": "Firstly your approach is inefficient because the appending to the list on a row by basis will be slow as it has to periodically grow the list when there is insufficient space for the new entry, list comprehensions are better in this respect as the size is determined up front and allocated once. However, I think fundamentally your approach is a little wasteful as you have a dataframe already so why create a new one for each of these users? I would sort the dataframe by column 'name', set the index to be this and if required not drop the column. Then generate a list of all the unique entries and then you can perform a lookup using these entries and crucially if you only querying the data, use the selection criteria to return a view on the dataframe without incurring a costly data copy. Use pandas.DataFrame.sort_values and pandas.DataFrame.set_index: # sort the dataframe df.sort_values(by='name', axis=1, inplace=True) # set the index to be this and don't drop df.set_index(keys=['name'], drop=False,inplace=True) # get a list of names names=df['name'].unique().tolist() # now we can perform a lookup on a 'view' of the dataframe joe = df.loc[df.name=='joe'] # now you can query all 'joes'"} +{"question_id": 26745462, "score": 151, "creation_date": 1415136876, "tags": ["python", "python-requests", "appdynamics"], "instruction": "How do I use basic HTTP authentication with the Python Requests library?\n\nI'm trying to use basic HTTP authentication in Python. I am using the Requests library: auth = requests.post('http://' + hostname, auth=HTTPBasicAuth(user, password)) request = requests.get('http://' + hostname + '/rest/applications') Response form auth variable: <<class 'requests.cookies.RequestsCookieJar'>[<Cookie JSESSIONID=cb10906c6219c07f887dff5312fb for appdynamics/controller>]> 200 CaseInsensitiveDict({'content-encoding': 'gzip', 'x-powered-by': 'JSP/2.2', 'transfer-encoding': 'chunked', 'set-cookie': 'JSESSIONID=cb10906c6219c07f887dff5312fb; Path=/controller; HttpOnly', 'expires': 'Wed, 05 Nov 2014 19:03:37 GMT', 'server': 'nginx/1.1.19', 'connection': 'keep-alive', 'pragma': 'no-cache', 'cache-control': 'max-age=78000', 'date': 'Tue, 04 Nov 2014 21:23:37 GMT', 'content-type': 'text/html;charset=ISO-8859-1'}) But when I try to get data from different location, I'm getting HTTP Status 401 error: <<class 'requests.cookies.RequestsCookieJar'>[]> 401 CaseInsensitiveDict({'content-length': '1073', 'x-powered-by': 'Servlet/3.0 JSP/2.2 (GlassFish Server Open Source Edition 3.1.2.2 Java/Oracle Corporation/1.7)', 'expires': 'Thu, 01 Jan 1970 00:00:00 UTC', 'server': 'nginx/1.1.19', 'connection': 'keep-alive', 'pragma': 'No-cache', 'cache-control': 'no-cache', 'date': 'Tue, 04 Nov 2014 21:23:37 GMT', 'content-type': 'text/html', 'www-authenticate': 'Basic realm=\"controller_realm\"'}) As far as I understand, in the second request session parameters are not substituted.", "output": "You need to use a session object and send the authentication each request. 
The session will also track cookies for you: session = requests.Session() session.auth = (user, password) auth = session.post('http://' + hostname) response = session.get('http://' + hostname + '/rest/applications')"} +{"question_id": 15769246, "score": 151, "creation_date": 1364919855, "tags": ["python", "list", "printing", "list-comprehension"], "instruction": "Pythonic way to print list items\n\nI would like to know if there is a better way to print all objects in a Python list than this : myList = [Person(\"Foo\"), Person(\"Bar\")] print(\"\\n\".join(map(str, myList))) Foo Bar I read this way is not really good : myList = [Person(\"Foo\"), Person(\"Bar\")] for p in myList: print(p) Isn't there something like : print(p) for p in myList If not, my question is... why ? If we can do this kind of stuff with comprehensive lists, why not as a simple statement outside a list ?", "output": "Assuming you are using Python 3: print(*myList, sep='\\n') This is a kind of unpacking. Details in the Python tutorial: Unpacking Argument Lists You can get the same behavior on Python 2 using from __future__ import print_function. With the print statement on Python 2 you will need iteration of some kind. Regarding your question about print(p) for p in myList not working, you can just use the following which does the same thing and is still simple: for p in myList: print p For a solution that uses '\\n'.join(), I prefer list comprehensions and generators over map() so I would probably use the following: print '\\n'.join(str(p) for p in myList)"} +{"question_id": 21833872, "score": 151, "creation_date": 1392653282, "tags": ["python", "pylint", "naming-conventions"], "instruction": "Why does Pylint object to single-character variable names?\n\nI'm still getting used to Python conventions and using Pylint to make my code more Pythonic, but I'm puzzled by the fact that Pylint doesn't like single character variable names. I have a few loops like this: for x in x_values: my_list.append(x) and when I run pylint, I'm getting Invalid name \"x\" for type variable (should match [a-z_][a-z0-9_]{2,30} -- that suggests that a valid variable name must be between 3 and 31 characters long, but I've looked through the PEP8 naming conventions and I don't see anything explicit regarding single lower case letters, and I do see a lot of examples that use them. Is there something I'm missing in PEP8 or is this a standard that is unique to Pylint?", "output": "Pylint checks not only PEP8 recommendations. It has also its own recommendations, one of which is that a variable name should be descriptive and not too short. You can use this to avoid such short names: my_list.extend(x_values) Or tweak Pylint's configuration to tell Pylint what variable name are good."} +{"question_id": 3685265, "score": 151, "creation_date": 1284128001, "tags": ["python", "file-io", "numpy"], "instruction": "How to write a multidimensional array to a text file?\n\nIn another question, other users offered some help if I could supply the array I was having trouble with. However, I even fail at a basic I/O task, such as writing an array to a file. Can anyone explain what kind of loop I would need to write a 4x11x14 numpy array to file? This array consist of four 11 x 14 arrays, so I should format it with a nice newline, to make the reading of the file easier on others. Edit: So I've tried the numpy.savetxt function. 
Strangely, it gives the following error: TypeError: float argument required, not numpy.ndarray I assume that this is because the function doesn't work with multidimensional arrays? Any solutions as I would like them within one file?", "output": "If you want to write it to disk so that it will be easy to read back in as a numpy array, look into numpy.save. Pickling it will work fine, as well, but it's less efficient for large arrays (which yours isn't, so either is perfectly fine). If you want it to be human readable, look into numpy.savetxt. Edit: So, it seems like savetxt isn't quite as great an option for arrays with >2 dimensions... But just to draw everything out to it's full conclusion: I just realized that numpy.savetxt chokes on ndarrays with more than 2 dimensions... This is probably by design, as there's no inherently defined way to indicate additional dimensions in a text file. E.g. This (a 2D array) works fine import numpy as np x = np.arange(20).reshape((4,5)) np.savetxt('test.txt', x) While the same thing would fail (with a rather uninformative error: TypeError: float argument required, not numpy.ndarray) for a 3D array: import numpy as np x = np.arange(200).reshape((4,5,10)) np.savetxt('test.txt', x) One workaround is just to break the 3D (or greater) array into 2D slices. E.g. x = np.arange(200).reshape((4,5,10)) with open('test.txt', 'w') as outfile: for slice_2d in x: np.savetxt(outfile, slice_2d) However, our goal is to be clearly human readable, while still being easily read back in with numpy.loadtxt. Therefore, we can be a bit more verbose, and differentiate the slices using commented out lines. By default, numpy.loadtxt will ignore any lines that start with # (or whichever character is specified by the comments kwarg). (This looks more verbose than it actually is...) import numpy as np # Generate some test data data = np.arange(200).reshape((4,5,10)) # Write the array to disk with open('test.txt', 'w') as outfile: # I'm writing a header here just for the sake of readability # Any line starting with \"#\" will be ignored by numpy.loadtxt outfile.write('# Array shape: {0}\\n'.format(data.shape)) # Iterating through a ndimensional array produces slices along # the last axis. This is equivalent to data[i,:,:] in this case for data_slice in data: # The formatting string indicates that I'm writing out # the values in left-justified columns 7 characters in width # with 2 decimal places. np.savetxt(outfile, data_slice, fmt='%-7.2f') # Writing out a break to indicate different slices... 
outfile.write('# New slice\\n') This yields: # Array shape: (4, 5, 10) 0.00 1.00 2.00 3.00 4.00 5.00 6.00 7.00 8.00 9.00 10.00 11.00 12.00 13.00 14.00 15.00 16.00 17.00 18.00 19.00 20.00 21.00 22.00 23.00 24.00 25.00 26.00 27.00 28.00 29.00 30.00 31.00 32.00 33.00 34.00 35.00 36.00 37.00 38.00 39.00 40.00 41.00 42.00 43.00 44.00 45.00 46.00 47.00 48.00 49.00 # New slice 50.00 51.00 52.00 53.00 54.00 55.00 56.00 57.00 58.00 59.00 60.00 61.00 62.00 63.00 64.00 65.00 66.00 67.00 68.00 69.00 70.00 71.00 72.00 73.00 74.00 75.00 76.00 77.00 78.00 79.00 80.00 81.00 82.00 83.00 84.00 85.00 86.00 87.00 88.00 89.00 90.00 91.00 92.00 93.00 94.00 95.00 96.00 97.00 98.00 99.00 # New slice 100.00 101.00 102.00 103.00 104.00 105.00 106.00 107.00 108.00 109.00 110.00 111.00 112.00 113.00 114.00 115.00 116.00 117.00 118.00 119.00 120.00 121.00 122.00 123.00 124.00 125.00 126.00 127.00 128.00 129.00 130.00 131.00 132.00 133.00 134.00 135.00 136.00 137.00 138.00 139.00 140.00 141.00 142.00 143.00 144.00 145.00 146.00 147.00 148.00 149.00 # New slice 150.00 151.00 152.00 153.00 154.00 155.00 156.00 157.00 158.00 159.00 160.00 161.00 162.00 163.00 164.00 165.00 166.00 167.00 168.00 169.00 170.00 171.00 172.00 173.00 174.00 175.00 176.00 177.00 178.00 179.00 180.00 181.00 182.00 183.00 184.00 185.00 186.00 187.00 188.00 189.00 190.00 191.00 192.00 193.00 194.00 195.00 196.00 197.00 198.00 199.00 # New slice Reading it back in is very easy, as long as we know the shape of the original array. We can just do numpy.loadtxt('test.txt').reshape((4,5,10)). As an example (You can do this in one line, I'm just being verbose to clarify things): # Read the array from disk new_data = np.loadtxt('test.txt') # Note that this returned a 2D array! print new_data.shape # However, going back to 3D is easy if we know the # original shape of the array new_data = new_data.reshape((4,5,10)) # Just to check that they're the same... assert np.all(new_data == data)"} +{"question_id": 13293810, "score": 151, "creation_date": 1352393699, "tags": ["python", "pandas", "casting", "type-conversion", "dtype"], "instruction": "Import pandas dataframe column as string not int\n\nI would like to import the following csv as strings not as int64. Pandas read_csv automatically converts it to int64, but I need this column as string. ID 00013007854817840016671868 00013007854817840016749251 00013007854817840016754630 00013007854817840016781876 00013007854817840017028824 00013007854817840017963235 00013007854817840018860166 df = read_csv('sample.csv') df.ID >> 0 -9223372036854775808 1 -9223372036854775808 2 -9223372036854775808 3 -9223372036854775808 4 -9223372036854775808 5 -9223372036854775808 6 -9223372036854775808 Name: ID Unfortunately using converters gives the same result. df = read_csv('sample.csv', converters={'ID': str}) df.ID >> 0 -9223372036854775808 1 -9223372036854775808 2 -9223372036854775808 3 -9223372036854775808 4 -9223372036854775808 5 -9223372036854775808 6 -9223372036854775808 Name: ID", "output": "Just want to reiterate this will work in pandas >= 0.9.1: In [2]: read_csv('sample.csv', dtype={'ID': object}) Out[2]: ID 0 00013007854817840016671868 1 00013007854817840016749251 2 00013007854817840016754630 3 00013007854817840016781876 4 00013007854817840017028824 5 00013007854817840017963235 6 00013007854817840018860166 I'm creating an issue about detecting integer overflows also. 
EDIT: See resolution here: https://github.com/pydata/pandas/issues/2247 Update as it helps others: To have all columns as str, one can do this (from the comment): pd.read_csv('sample.csv', dtype = str) To have most or selective columns as str, one can do this: # lst of column names which needs to be string lst_str_cols = ['prefix', 'serial'] # use dictionary comprehension to make dict of dtypes dict_dtypes = {x : 'str' for x in lst_str_cols} # use dict on dtypes pd.read_csv('sample.csv', dtype=dict_dtypes)"} +{"question_id": 1253528, "score": 151, "creation_date": 1249889147, "tags": ["python", "function", "pickle"], "instruction": "Is there an easy way to pickle a python function (or otherwise serialize its code)?\n\nI'm trying to transfer a function across a network connection (using asyncore). Is there an easy way to serialize a python function (one that, in this case at least, will have no side effects) for transfer like this? I would ideally like to have a pair of functions similar to these: def transmit(func): obj = pickle.dumps(func) [send obj across the network] def receive(): [receive obj from the network] func = pickle.loads(s) func()", "output": "You could serialise the function bytecode and then reconstruct it on the caller. The marshal module can be used to serialise code objects, which can then be reassembled into a function. ie: import marshal def foo(x): return x*x code_string = marshal.dumps(foo.__code__) Then in the remote process (after transferring code_string): import marshal, types code = marshal.loads(code_string) func = types.FunctionType(code, globals(), \"some_func_name\") func(10) # gives 100 A few caveats: marshal's format (any python bytecode for that matter) may not be compatable between major python versions. Will only work for cpython implementation. If the function references globals (including imported modules, other functions etc) that you need to pick up, you'll need to serialise these too, or recreate them on the remote side. My example just gives it the remote process's global namespace. You'll probably need to do a bit more to support more complex cases, like closures or generator functions."} +{"question_id": 10944621, "score": 151, "creation_date": 1339140127, "tags": ["python", "matplotlib", "tkinter"], "instruction": "Dynamically updating plot in matplotlib\n\nI am making an application in Python which collects data from a serial port and plots a graph of the collected data against arrival time. The time of arrival for the data is uncertain. I want the plot to be updated when data is received. I searched on how to do this and found two methods: Clear the plot and re-draw the plot with all the points again. Animate the plot by changing it after a particular interval. I do not prefer the first one as the program runs and collects data for a long time (a day for example), and redrawing the plot will be pretty slow. The second one is also not preferable as time of arrival of data is uncertain and I want the plot to update only when the data is received. Is there a way in which I can update the plot just by adding more points to it only when the data is received?", "output": "Is there a way in which I can update the plot just by adding more point[s] to it... There are a number of ways of animating data in matplotlib, depending on the version you have. Have you seen the animation examples in the matplotlib documentation? The animation API defines a function FuncAnimation which animates a function in time. 
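As a rough sketch of how FuncAnimation is usually wired up (this example is illustrative and not taken from the original answer; the data source is made up):

import matplotlib.pyplot as plt
import matplotlib.animation as animation

fig, ax = plt.subplots()
line, = ax.plot([], [])
xs, ys = [], []

def update(frame):
    # In a real application, this is where you would read a new sample from the serial port
    xs.append(frame)
    ys.append(frame % 5)
    line.set_data(xs, ys)
    ax.relim()             # recompute data limits for the new point
    ax.autoscale_view()    # rescale the axes accordingly
    return line,

# Keep a reference to the animation object so it is not garbage collected
ani = animation.FuncAnimation(fig, update, interval=200)
plt.show()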
This function could just be the function you use to acquire your data. Each method basically sets the data property of the object being drawn, so doesn't require clearing the screen or figure. The data property can simply be extended, so you can keep the previous points and just keep adding to your line (or image or whatever you are drawing). Given that you say that your data arrival time is uncertain your best bet is probably just to do something like: import matplotlib.pyplot as plt import numpy hl, = plt.plot([], []) def update_line(hl, new_data): hl.set_xdata(numpy.append(hl.get_xdata(), new_data)) hl.set_ydata(numpy.append(hl.get_ydata(), new_data)) plt.draw() Then when you receive data from the serial port just call update_line."} +{"question_id": 16782323, "score": 151, "creation_date": 1369702132, "tags": ["python", "pandas"], "instruction": "Keep selected column as DataFrame instead of Series\n\nWhen selecting a single column from a pandas DataFrame(say df.iloc[:, 0], df['A'], or df.A, etc), the resulting vector is automatically converted to a Series instead of a single-column DataFrame. However, I am writing some functions that takes a DataFrame as an input argument. Therefore, I prefer to deal with single-column DataFrame instead of Series so that the function can assume say df.columns is accessible. Right now I have to explicitly convert the Series into a DataFrame by using something like pd.DataFrame(df.iloc[:, 0]). This doesn't seem like the most clean method. Is there a more elegant way to index from a DataFrame directly so that the result is a single-column DataFrame instead of Series?", "output": "As @Jeff mentions there are a few ways to do this, but I recommend using loc/iloc to be more explicit (and raise errors early if you're trying something ambiguous): In [10]: df = pd.DataFrame([[1, 2], [3, 4]], columns=['A', 'B']) In [11]: df Out[11]: A B 0 1 2 1 3 4 In [12]: df[['A']] In [13]: df[[0]] In [14]: df.loc[:, ['A']] In [15]: df.iloc[:, [0]] Out[12-15]: # they all return the same thing: A 0 1 1 3 The latter two choices remove ambiguity in the case of integer column names (precisely why loc/iloc were created). For example: In [16]: df = pd.DataFrame([[1, 2], [3, 4]], columns=['A', 0]) In [17]: df Out[17]: A 0 0 1 2 1 3 4 In [18]: df[[0]] # ambiguous Out[18]: A 0 1 1 3"} +{"question_id": 20082935, "score": 150, "creation_date": 1384898244, "tags": ["python", "macos", "python-3.x", "pip", "python-3.3"], "instruction": "How to install pip for Python 3 on Mac OS X?\n\nOS X (Mavericks) has Python 2.7 stock installed. But I do all my own personal Python stuff with 3.3. I just flushed my 3.3.2 install and installed the new 3.3.3. So I need to install pyserial again. I can do it the way I've done it before, which is: Download pyserial from pypi untar pyserial.tgz cd pyserial python3 setup.py install But I'd like to do like the cool kids do, and just do something like pip3 install pyserial. But it's not clear how I get to that point. And just that point. Not interested (unless I have to be) in virtualenv yet.", "output": "UPDATE: This is no longer necessary as of Python3.4. pip3 is installed as part of the general Python3 installation. I ended up posting this same question on the python mailing list, and got the following answer: # download and install setuptools curl -O https://bootstrap.pypa.io/ez_setup.py python3 ez_setup.py # download and install pip curl -O https://bootstrap.pypa.io/get-pip.py python3 get-pip.py Which solved my question perfectly. 
After adding the following for my own: cd /usr/local/bin ln -s ../../../Library/Frameworks/Python.framework/Versions/3.3/bin/pip pip So that I could run pip directly, I was able to: # use pip to install pip install pyserial or: # Don't want it? pip uninstall pyserial"} +{"question_id": 11945321, "score": 150, "creation_date": 1344914249, "tags": ["python", "django", "django-forms", "django-templates", "django-views"], "instruction": "What is the equivalent of \"none\" in django templates?\n\nI want to see if a field/variable is none within a Django template. What is the correct syntax for that? This is what I currently have: {% if profile.user.first_name is null %} <p> -- </p> {% elif %} {{ profile.user.first_name }} {{ profile.user.last_name }} {% endif%} In the example above, what would I use to replace \"null\"?", "output": "None, False and True all are available within template tags and filters. None, False, the empty string ('', \"\", \"\"\"\"\"\") and empty lists/tuples all evaluate to False when evaluated by if, so you can easily do {% if profile.user.first_name == None %} {% if not profile.user.first_name %} A hint: @fabiocerqueira is right, leave logic to models, limit templates to be the only presentation layer and calculate stuff like that in you model. An example: # someapp/models.py class UserProfile(models.Model): user = models.OneToOneField('auth.User') # other fields def get_full_name(self): if not self.user.first_name: return return ' '.join([self.user.first_name, self.user.last_name]) # template {{ user.get_profile.get_full_name }}"} +{"question_id": 26589990, "score": 150, "creation_date": 1414420658, "tags": ["python", "pytest"], "instruction": "py.test: error: unrecognized arguments: --cov=ner_brands --cov-report=term-missing --cov-config\n\nwhen I am trying to run my test through command line py.test file_name.py I got this error: py.test: error: unrecognized arguments: --cov=ner_brands --cov-report=term-missing --cov-config How can I fix this?", "output": "pytest-cov package is required if you want to pass --cov arguments to pytest. By default it is not passed though. Are you using a modified version of py.test? pip install pytest-cov would fix your issue."} +{"question_id": 40416072, "score": 150, "creation_date": 1478238488, "tags": ["python", "python-3.x", "io", "relative-path", "python-import"], "instruction": "Reading a file using a relative path in a Python project\n\nSay I have a Python project that is structured as follows: project /data test.csv /package __init__.py module.py main.py __init__.py: from .module import test module.py: import csv with open(\"..data/test.csv\") as f: test = [line for line in csv.reader(f)] main.py: import package print(package.test) When I run main.py I get the following error: C:\\Users\\Patrick\\Desktop\\project>python main.py Traceback (most recent call last): File \"main.py\", line 1, in <module> import package File \"C:\\Users\\Patrick\\Desktop\\project\\package\\__init__.py\", line 1, in <module> from .module import test File \"C:\\Users\\Patrick\\Desktop\\project\\package\\module.py\", line 3, in <module> with open(\"../data/test.csv\") as f: FileNotFoundError: [Errno 2] No such file or directory: '../data/test.csv' However, if I run module.py from the package directory, I don\u2019t get any errors. So it seems that the relative path used in open(...) is only relative to where the originating file is being run from (i.e __name__ == \"__main__\")? 
How can deal with this, using relative paths only?", "output": "Relative paths are relative to current working directory. If you do not want your path to be relative, it must be absolute. But there is an often used trick to build an absolute path from current script: use its __file__ special attribute: from pathlib import Path path = Path(__file__).parent / \"../data/test.csv\" with path.open() as f: test = list(csv.reader(f)) This requires python 3.4+ (for the pathlib module). If you still need to support older versions, you can get the same result with: import csv import os.path my_path = os.path.abspath(os.path.dirname(__file__)) path = os.path.join(my_path, \"../data/test.csv\") with open(path) as f: test = list(csv.reader(f)) [2020 edit: python3.4+ should now be the norm, so I moved the pathlib version inspired by jpyams' comment first]"} +{"question_id": 3774328, "score": 150, "creation_date": 1285197514, "tags": ["python", "file-io", "with-statement"], "instruction": "Implementing use of 'with object() as f' in custom class in python\n\nI have to open a file-like object in python (it's a serial connection through /dev/) and then close it. This is done several times in several methods of my class. How I WAS doing it was opening the file in the constructor, and then closing it in the destructor. I'm getting weird errors though and I think it has to do with the garbage collector and such, I'm still not used to not knowing exactly when my objects are being deleted =\\ The reason I was doing this is because I have to use tcsetattr with a bunch of parameters each time I open it and it gets annoying doing all that all over the place. So I want to implement an inner class to handle all that so I can use it doing with Meter('/dev/ttyS2') as m: I was looking online and I couldn't find a really good answer on how the with syntax is implemented. I saw that it uses the __enter__(self) and __exit(self)__ methods. But is all I have to do implement those methods and I can use the with syntax? Or is there more to it? Is there either an example on how to do this or some documentation on how it's implemented on file objects already that I can look at?", "output": "Those methods are pretty much all you need for making the object work with with statement. In __enter__ you have to return the file object after opening it and setting it up. In __exit__ you have to close the file object. The code for writing to it will be in the with statement body. MODE = 'rb' class Meter(): def __init__(self, dev): self.dev = dev def __enter__(self): #ttysetattr etc goes here before opening and returning the file object self.fd = open(self.dev, MODE) return self def __exit__(self, exception_type, exception_value, exception_traceback): #Exception handling here self.fd.close() meter = Meter('/dev/tty0') with meter as m: #here you work with the file object. m.fd.read()"} +{"question_id": 7257588, "score": 150, "creation_date": 1314797283, "tags": ["python", "list", "dictionary", "tuples", "hashable"], "instruction": "Why can't I use a list as a dict key in python? 
Exactly what can and cannot be used, and why?\n\nI found that the following are all valid: >>> d = {} >>> d[None] = 'foo' >>> d[(1, 3)] = 'baz' Even a module can be used as a dict key: >>> import sys >>> d[sys] = 'bar' However, a list cannot, and neither can a tuple that contains a list: >>> d[[2]] = 'spam' Traceback (most recent call last): File \"<stdin>\", line 1, in <module> TypeError: unhashable type: 'list' >>> d[(1, [3])] = 'qux' Traceback (most recent call last): File \"<stdin>\", line 1, in <module> TypeError: unhashable type: 'list' Why does storing a list inside the tuple mean it can't be a dict key any more? After all, I could just as easily \"hide\" a list inside a module (and indeed, e.g. sys.path is a list already). I had some vague idea that that the key has to be \"hashable\" but I don't have a detailed understanding of what this means, or why there is such a limitation. What would go wrong if Python allowed using lists as keys, say, using their memory location as the hash?", "output": "There's a good article on the topic in the Python wiki: Why Lists Can't Be Dictionary Keys. As explained there: What would go wrong if Python allowed using lists as keys, say, using their memory location as the hash? It would cause some unexpected behavior. Lists are generally treated as if their value was derived from their content's values, for instance when checking (in-)equality. Many would - understandably - expect that you can use any list [1, 2] to get the same key, where you'd have to keep around exactly the same list object. But lookup by value breaks as soon as a list used as a key is modified, and lookup by identity requires keeping track of that exact list object - which isn't an ordinary requirement for working with lists. Other objects, such as modules and object, make a much bigger deal out of their object identity anyway (when was the last time you had two distinct module objects called sys?), and are compared by that anyway. Therefore, it's less surprising - or even expected - that they, when used as dict keys, compare by identity in that case as well."} +{"question_id": 41094013, "score": 150, "creation_date": 1481515174, "tags": ["python", "django", "serialization", "django-rest-framework"], "instruction": "When to use Serializer's create() and ModelViewset's perform_create()\n\nI want to clarify the given documentation of Django-rest-framework regarding the creation of a model object. So far I have found that there are 3 approaches on how to handle such events. The Serializer's create() method. Here is the documentation class CommentSerializer(serializers.Serializer): def create(self, validated_data): return Comment.objects.create(**validated_data) The ModelViewset create() method. Documentation class AccountViewSet(viewsets.ModelViewSet): queryset = Account.objects.all() serializer_class = AccountSerializer permission_classes = [IsAccountAdminOrReadOnly] The ModelViewset perform_create() method. Documentation class SnippetViewSet(viewsets.ModelViewSet): def perform_create(self, serializer): serializer.save(owner=self.request.user) These three approaches are important depending on your application environment. But when do we need to use each create() / perform_create() function? 
On the other hand, I found some accounts that two create methods were called for a single POST request: the ModelViewSet's create() and the serializer's create().", "output": "You would use create(self, validated_data) to add any extra details into the object before saving AND \"prod\" values into each model field just like **validated_data does. Ideally speaking, you want to do this form of \"prodding\" only in ONE location, so the create method in your CommentSerializer is the best place. On top of this, you might want to also call external APIs to create user accounts on their side just before saving your accounts into your own database. You should use this create function in conjunction with ModelViewSet. Always think - \"Thin views, Thick serializers\". Example: def create(self, validated_data): email = validated_data.get(\"email\", None) validated_data.pop(\"email\") # Now you have a clean valid email string # You might want to call an external API or modify another table # (eg. keep track of number of accounts registered.) or even # make changes to the email format. # Once you are done, create the instance with the validated data return models.YourModel.objects.create(email=email, **validated_data) The create(self, request, *args, **kwargs) function in the ModelViewSet is defined in the CreateModelMixin class which is the parent of ModelViewSet. CreateModelMixin's main functions are these: from rest_framework import status from rest_framework.response import Response def create(self, request, *args, **kwargs): serializer = self.get_serializer(data=request.data) serializer.is_valid(raise_exception=True) self.perform_create(serializer) headers = self.get_success_headers(serializer.data) return Response(serializer.data, status=status.HTTP_201_CREATED, headers=headers) def perform_create(self, serializer): serializer.save() As you can see, the above create function takes care of calling validation on your serializer and producing the correct response. The beauty behind this is that you can now isolate your application logic and NOT concern yourself with the mundane and repetitive validation calls and handling of response output :). This works quite well in conjunction with the create(self, validated_data) found in the serializer (where your specific application logic might reside). Now you might ask, why do we have a separate perform_create(self, serializer) function with just one line of code!?!? Well, the main reason behind this is to allow customizability when calling the save function. You might want to supply extra data before calling save (like serializer.save(owner=self.request.user)), and if we didn't have perform_create(self, serializer), you would have to override the create(self, request, *args, **kwargs), and that just defeats the purpose of having mixins doing the heavy and boring work."} +{"question_id": 39817081, "score": 150, "creation_date": 1475411914, "tags": ["python", "python-typing"], "instruction": "typing.Any vs object?\n\nIs there any difference between using typing.Any as opposed to object in typing? For example: def get_item(L: list, i: int) -> typing.Any: return L[i] Compared to: def get_item(L: list, i: int) -> object: return L[i]", "output": "Yes, there is a difference. Although in Python 3, all objects are instances of object, including object itself, only Any documents that the return value should be disregarded by the typechecker. 
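To make the difference concrete, here is a small illustration of my own (not from the original answer) of how a static type checker such as mypy treats the two annotations:

from typing import Any

def use_object(x: object) -> None:
    x.upper()   # rejected by the type checker: "object" has no attribute "upper"

def use_any(x: Any) -> None:
    x.upper()   # accepted: Any switches off checking for this value

At runtime both functions behave identically; the difference exists only for the type checker. 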
The Any type docstring states that object is a subclass of Any and vice-versa: >>> import typing >>> print(typing.Any.__doc__) Special type indicating an unconstrained type. - Any object is an instance of Any. - Any class is a subclass of Any. - As a special case, Any and object are subclasses of each other. However, a proper typechecker (one that goes beyond isinstance() checks, and which inspects how the object is actually used in the function) can readily object to object where Any is always accepted. From the Any type documentation: Notice that no typechecking is performed when assigning a value of type Any to a more precise type. and Contrast the behavior of Any with the behavior of object. Similar to Any, every type is a subtype of object. However, unlike Any, the reverse is not true: object is not a subtype of every other type. That means when the type of a value is object, a type checker will reject almost all operations on it, and assigning it to a variable (or using it as a return value) of a more specialized type is a type error. and from the mypy documentation section Any vs. object: The type object is another type that can have an instance of arbitrary type as a value. Unlike Any, object is an ordinary static type (it is similar to Object in Java), and only operations valid for all types are accepted for object values. object can be cast to a more specific type, while Any really means anything goes and a type checker disengages from any use of the object (even if you later assign such an object to a name that is typechecked). You already painted your function into a an un-typed corner by accepting list, which comes down to being the same thing as List[Any]. The typechecker disengaged there and the return value no longer matters, but since your function accepts a list containing Any objects, the proper return value would be Any here. To properly participate in type-checked code, you need to mark your input as List[T] (a genericly typed container) for a typechecker to then be able to care about the return value. Which in your case would be T since you are retrieving a value from the list. Create T from a TypeVar: from typing import TypeVar, List T = TypeVar('T') def get_item(L: List[T], i: int) -> T: return L[i] or, using Python 3.12 or newer: def get_item[T](L: list[T], i: int) -> T: return L[i]"} +{"question_id": 16706956, "score": 150, "creation_date": 1369290372, "tags": ["python", "exception"], "instruction": "Is there a difference between \"raise exception()\" and \"raise exception\" without parenthesis?\n\nDefining a parameterless exception: class MyException(Exception): pass When raised, is there any difference between: raise MyException and raise MyException() I couldn't find any; is it simply an overloaded syntax?", "output": "The short answer is that both raise MyException and raise MyException() do the same thing. This first form auto instantiates your exception. The relevant section from the docs says: raise evaluates the first expression as the exception object. It must be either a subclass or an instance of BaseException. If it is a class, the exception instance will be obtained when needed by instantiating the class with no arguments. That said, even though the semantics are the same, the first form is microscopically faster, and the second form is more flexible (because you can pass it arguments if needed). The usual style that most people use in Python (i.e. 
in the standard library, in popular applications, and in many books) is to use raise MyException when there are no arguments. People only instantiate the exception directly when there some arguments need to be passed. For example: raise KeyError(badkey)."} +{"question_id": 976577, "score": 149, "creation_date": 1244649902, "tags": ["python", "hash", "md5"], "instruction": "Random hash in Python\n\nWhat is the easiest way to generate a random hash (MD5) in Python?", "output": "A md5-hash is just a 128-bit value, so if you want a random one: import random hash = random.getrandbits(128) print(\"hash value: %032x\" % hash) I don't really see the point, though. Maybe you should elaborate why you need this..."} +{"question_id": 33725862, "score": 149, "creation_date": 1447626470, "tags": ["python", "sql", "windows"], "instruction": "Connecting to Microsoft SQL server using Python\n\nI am trying to connect to SQL through python to run some queries on some SQL databases on Microsoft SQL server. From my research online and on this forum the most promising library seems to be pyodbc. So I have made the following code import pyodbc conn = pyodbc.connect(init_string=\"driver={SQLOLEDB}; server=+ServerName+; database=+MSQLDatabase+; trusted_connection=true\") cursor = conn.cursor() and get the following error Traceback (most recent call last): File \"C:\\Users...\\scrap.py\", line 3, in <module> conn = pyodbc.connect(init_string=\"driver={SQLOLEDB}; server=+ServerName+; database=+MSQLDatabase+; trusted_connection=true\") pyodbc.Error: ('IM002', '[IM002] [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified (0) (SQLDriverConnect)') I have looked at the folowing posts and tried changing my driver to {sql server} and have connected using ODBC links before in SAS, which is partially what my above code is based on, so don't think I need to install anything else. pyodbc.Error: ('IM002', '[IM002] [unixODBC][Driver Manager]Data source name not found, and no default driver specified (0) (SQLDriverConnect)') Pyodbc - \"Data source name not found, and no default driver specified\" Thanks", "output": "This is how I do it... import pyodbc cnxn = pyodbc.connect(\"Driver={SQL Server Native Client 11.0};\" \"Server=server_name;\" \"Database=db_name;\" \"Trusted_Connection=yes;\") cursor = cnxn.cursor() cursor.execute('SELECT * FROM Table') for row in cursor: print('row = %r' % (row,)) Relevant resources: Connecting to SQL Server from Windows | pyodbc GitHub Python and Data : SQL Server as a data source for Python applications"} +{"question_id": 4037481, "score": 149, "creation_date": 1288211769, "tags": ["python", "memoization"], "instruction": "Caching class attributes in Python\n\nI'm writing a class in python and I have an attribute that will take a relatively long time to compute, so I only want to do it once. Also, it will not be needed by every instance of the class, so I don't want to do it by default in __init__. I'm new to Python, but not to programming. I can come up with a way to do this pretty easily, but I've found over and over again that the 'Pythonic' way of doing something is often much simpler than what I come up with using my experience in other languages. Is there a 'right' way to do this in Python?", "output": "3.8 \u2264 Python @property and @functools.lru_cache have been combined into @cached_property. 
import functools class MyClass: @functools.cached_property def foo(self): print(\"long calculation here\") return 21 * 2 3.2 \u2264 Python < 3.8 You should use both @property and @functools.lru_cache decorators: import functools class MyClass: @property @functools.lru_cache() def foo(self): print(\"long calculation here\") return 21 * 2 This answer has more detailed examples and also mentions a backport for previous Python versions. Python < 3.2 The Python wiki has a cached property decorator (MIT licensed) that can be used like this: import random # the class containing the property must be a new-style class class MyClass(object): # create property whose value is cached for ten minutes @cached_property(ttl=600) def randint(self): # will only be evaluated every 10 min. at maximum. return random.randint(0, 100) Or any implementation mentioned in the others answers that fits your needs. Or the above mentioned backport."} +{"question_id": 31421413, "score": 149, "creation_date": 1436933856, "tags": ["python", "machine-learning", "scikit-learn", "nlp"], "instruction": "How to compute precision, recall, accuracy and f1-score for the multiclass case with scikit learn?\n\nI'm working in a sentiment analysis problem the data looks like this: label instances 5 1190 4 838 3 239 1 204 2 127 So my data is unbalanced since 1190 instances are labeled with 5. For the classification Im using scikit's SVC. The problem is I do not know how to balance my data in the right way in order to compute accurately the precision, recall, accuracy and f1-score for the multiclass case. So I tried the following approaches: First: wclf = SVC(kernel='linear', C= 1, class_weight={1: 10}) wclf.fit(X, y) weighted_prediction = wclf.predict(X_test) print 'Accuracy:', accuracy_score(y_test, weighted_prediction) print 'F1 score:', f1_score(y_test, weighted_prediction,average='weighted') print 'Recall:', recall_score(y_test, weighted_prediction, average='weighted') print 'Precision:', precision_score(y_test, weighted_prediction, average='weighted') print '\\n clasification report:\\n', classification_report(y_test, weighted_prediction) print '\\n confussion matrix:\\n',confusion_matrix(y_test, weighted_prediction) Second: auto_wclf = SVC(kernel='linear', C= 1, class_weight='auto') auto_wclf.fit(X, y) auto_weighted_prediction = auto_wclf.predict(X_test) print 'Accuracy:', accuracy_score(y_test, auto_weighted_prediction) print 'F1 score:', f1_score(y_test, auto_weighted_prediction, average='weighted') print 'Recall:', recall_score(y_test, auto_weighted_prediction, average='weighted') print 'Precision:', precision_score(y_test, auto_weighted_prediction, average='weighted') print '\\n clasification report:\\n', classification_report(y_test,auto_weighted_prediction) print '\\n confussion matrix:\\n',confusion_matrix(y_test, auto_weighted_prediction) Third: clf = SVC(kernel='linear', C= 1) clf.fit(X, y) prediction = clf.predict(X_test) from sklearn.metrics import precision_score, \\ recall_score, confusion_matrix, classification_report, \\ accuracy_score, f1_score print 'Accuracy:', accuracy_score(y_test, prediction) print 'F1 score:', f1_score(y_test, prediction) print 'Recall:', recall_score(y_test, prediction) print 'Precision:', precision_score(y_test, prediction) print '\\n clasification report:\\n', classification_report(y_test,prediction) print '\\n confussion matrix:\\n',confusion_matrix(y_test, prediction) F1 score:/usr/local/lib/python2.7/site-packages/sklearn/metrics/classification.py:676: DeprecationWarning: The default 
`weighted` averaging is deprecated, and from version 0.18, use of precision, recall or F-score with multiclass or multilabel data or pos_label=None will result in an exception. Please set an explicit value for `average`, one of (None, 'micro', 'macro', 'weighted', 'samples'). In cross validation use, for instance, scoring=\"f1_weighted\" instead of scoring=\"f1\". sample_weight=sample_weight) /usr/local/lib/python2.7/site-packages/sklearn/metrics/classification.py:1172: DeprecationWarning: The default `weighted` averaging is deprecated, and from version 0.18, use of precision, recall or F-score with multiclass or multilabel data or pos_label=None will result in an exception. Please set an explicit value for `average`, one of (None, 'micro', 'macro', 'weighted', 'samples'). In cross validation use, for instance, scoring=\"f1_weighted\" instead of scoring=\"f1\". sample_weight=sample_weight) /usr/local/lib/python2.7/site-packages/sklearn/metrics/classification.py:1082: DeprecationWarning: The default `weighted` averaging is deprecated, and from version 0.18, use of precision, recall or F-score with multiclass or multilabel data or pos_label=None will result in an exception. Please set an explicit value for `average`, one of (None, 'micro', 'macro', 'weighted', 'samples'). In cross validation use, for instance, scoring=\"f1_weighted\" instead of scoring=\"f1\". sample_weight=sample_weight) 0.930416613529 However, Im getting warnings like this: /usr/local/lib/python2.7/site-packages/sklearn/metrics/classification.py:1172: DeprecationWarning: The default `weighted` averaging is deprecated, and from version 0.18, use of precision, recall or F-score with multiclass or multilabel data or pos_label=None will result in an exception. Please set an explicit value for `average`, one of (None, 'micro', 'macro', 'weighted', 'samples'). In cross validation use, for instance, scoring=\"f1_weighted\" instead of scoring=\"f1\" How can I deal correctly with my unbalanced data in order to compute in the right way classifier's metrics?", "output": "I think there is a lot of confusion about which weights are used for what. I am not sure I know precisely what bothers you so I am going to cover different topics, bear with me ;). Class weights The weights from the class_weight parameter are used to train the classifier. They are not used in the calculation of any of the metrics you are using: with different class weights, the numbers will be different simply because the classifier is different. Basically in every scikit-learn classifier, the class weights are used to tell your model how important a class is. That means that during the training, the classifier will make extra efforts to classify properly the classes with high weights. How they do that is algorithm-specific. If you want details about how it works for SVC and the doc does not make sense to you, feel free to mention it. The metrics Once you have a classifier, you want to know how well it is performing. Here you can use the metrics you mentioned: accuracy, recall_score, f1_score... Usually when the class distribution is unbalanced, accuracy is considered a poor choice as it gives high scores to models which just predict the most frequent class. I will not detail all these metrics but note that, with the exception of accuracy, they are naturally applied at the class level: as you can see in this print of a classification report they are defined for each class. 
They rely on concepts such as true positives or false negative that require defining which class is the positive one. precision recall f1-score support 0 0.65 1.00 0.79 17 1 0.57 0.75 0.65 16 2 0.33 0.06 0.10 17 avg / total 0.52 0.60 0.51 50 The warning F1 score:/usr/local/lib/python2.7/site-packages/sklearn/metrics/classification.py:676: DeprecationWarning: The default `weighted` averaging is deprecated, and from version 0.18, use of precision, recall or F-score with multiclass or multilabel data or pos_label=None will result in an exception. Please set an explicit value for `average`, one of (None, 'micro', 'macro', 'weighted', 'samples'). In cross validation use, for instance, scoring=\"f1_weighted\" instead of scoring=\"f1\". You get this warning because you are using the f1-score, recall and precision without defining how they should be computed! The question could be rephrased: from the above classification report, how do you output one global number for the f1-score? You could: Take the average of the f1-score for each class: that's the avg / total result above. It's also called macro averaging. Compute the f1-score using the global count of true positives / false negatives, etc. (you sum the number of true positives / false negatives for each class). Aka micro averaging. Compute a weighted average of the f1-score. Using 'weighted' in scikit-learn will weigh the f1-score by the support of the class: the more elements a class has, the more important the f1-score for this class in the computation. These are 3 of the options in scikit-learn, the warning is there to say you have to pick one. So you have to specify an average argument for the score method. Which one you choose is up to how you want to measure the performance of the classifier: for instance macro-averaging does not take class imbalance into account and the f1-score of class 1 will be just as important as the f1-score of class 5. If you use weighted averaging however you'll get more importance for the class 5. The whole argument specification in these metrics is not super-clear in scikit-learn right now, it will get better in version 0.18 according to the docs. They are removing some non-obvious standard behavior and they are issuing warnings so that developers notice it. Computing scores Last thing I want to mention (feel free to skip it if you're aware of it) is that scores are only meaningful if they are computed on data that the classifier has never seen. This is extremely important as any score you get on data that was used in fitting the classifier is completely irrelevant. Here's a way to do it using StratifiedShuffleSplit, which gives you a random splits of your data (after shuffling) that preserve the label distribution. from sklearn.datasets import make_classification from sklearn.cross_validation import StratifiedShuffleSplit from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score, classification_report, confusion_matrix # We use a utility to generate artificial classification data. 
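# Note (not part of the original answer): the snippet below uses an `svc` estimator
# without constructing it first; a reasonable assumption, matching the question, is a linear SVC.
from sklearn.svm import SVC
svc = SVC(kernel='linear', C=1)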
X, y = make_classification(n_samples=100, n_informative=10, n_classes=3) sss = StratifiedShuffleSplit(y, n_iter=1, test_size=0.5, random_state=0) for train_idx, test_idx in sss: X_train, X_test, y_train, y_test = X[train_idx], X[test_idx], y[train_idx], y[test_idx] svc.fit(X_train, y_train) y_pred = svc.predict(X_test) print(f1_score(y_test, y_pred, average=\"macro\")) print(precision_score(y_test, y_pred, average=\"macro\")) print(recall_score(y_test, y_pred, average=\"macro\"))"} +{"question_id": 46615554, "score": 149, "creation_date": 1507336283, "tags": ["python", "matplotlib", "imshow"], "instruction": "How to display multiple images in one figure\n\nI am trying to display 20 random images on a single Figure. The images are indeed displayed, but they are overlaid. I am using: import numpy as np import matplotlib.pyplot as plt w=10 h=10 fig=plt.figure() for i in range(1,20): img = np.random.randint(10, size=(h,w)) fig.add_subplot(i,2,1) plt.imshow(img) plt.show() I would like them to appear naturally in a grid layout (say 4x5), each with the same size. Part of the problem is that I do not know what the arguments to add_subplot mean. The documentation states that the arguments are the number of rows, number of columns, and plot number. There is no positioning argument. Additionally, the plot number can only be 1 or 2. How can I achieve this?", "output": "Here is my approach that you may try: import numpy as np import matplotlib.pyplot as plt w = 10 h = 10 fig = plt.figure(figsize=(8, 8)) columns = 4 rows = 5 for i in range(1, columns*rows +1): img = np.random.randint(10, size=(h,w)) fig.add_subplot(rows, columns, i) plt.imshow(img) plt.show() The resulting image: (Original answer date: Oct 7 '17 at 4:20) Edit 1 Since this answer is popular beyond my expectation. And I see that a small change is needed to enable flexibility for the manipulation of the individual plots. So that I offer this new version to the original code. In essence, it provides:- access to individual axes of subplots possibility to plot more features on selected axes/subplot New code: import numpy as np import matplotlib.pyplot as plt w = 10 h = 10 fig = plt.figure(figsize=(9, 13)) columns = 4 rows = 5 # prep (x,y) for extra plotting xs = np.linspace(0, 2*np.pi, 60) # from 0 to 2pi ys = np.abs(np.sin(xs)) # absolute of sine # ax enables access to manipulate each of subplots ax = [] for i in range(columns*rows): img = np.random.randint(10, size=(h,w)) # create subplot and append to ax ax.append( fig.add_subplot(rows, columns, i+1) ) ax[-1].set_title(\"ax:\"+str(i)) # set title plt.imshow(img, alpha=0.25) # do extra plots on selected axes/subplots # note: index starts with 0 ax[2].plot(xs, 3*ys) ax[19].plot(ys**2, xs) plt.show() # finally, render the plot The resulting plot: Edit 2 In the previous example, the code provides access to the sub-plots with single index, which is inconvenient when the figure has many rows/columns of sub-plots. Here is an alternative of it. The code below provides access to the sub-plots with [row_index][column_index], which is more suitable for manipulation of array of many sub-plots. 
import matplotlib.pyplot as plt import numpy as np # settings h, w = 10, 10 # for raster image nrows, ncols = 5, 4 # array of sub-plots figsize = [6, 8] # figure size, inches # prep (x,y) for extra plotting on selected sub-plots xs = np.linspace(0, 2*np.pi, 60) # from 0 to 2pi ys = np.abs(np.sin(xs)) # absolute of sine # create figure (fig), and array of axes (ax) fig, ax = plt.subplots(nrows=nrows, ncols=ncols, figsize=figsize) # plot simple raster image on each sub-plot for i, axi in enumerate(ax.flat): # i runs from 0 to (nrows*ncols-1) # axi is equivalent with ax[rowid][colid] img = np.random.randint(10, size=(h,w)) axi.imshow(img, alpha=0.25) # get indices of row/column rowid = i // ncols colid = i % ncols # write row/col indices as axes' title for identification axi.set_title(\"Row:\"+str(rowid)+\", Col:\"+str(colid)) # one can access the axes by ax[row_id][col_id] # do additional plotting on ax[row_id][col_id] of your choice ax[0][2].plot(xs, 3*ys, color='red', linewidth=3) ax[4][3].plot(ys**2, xs, color='green', linewidth=3) plt.tight_layout(True) plt.show() The resulting plot: Ticks and Tick-labels for Array of Subplots Some of the ticks and tick-labels accompanying the subplots can be hidden to get cleaner plot if all of the subplots share the same value ranges. All of the ticks and tick-labels can be hidden except for the outside edges on the left and bottom like this plot. To achieve the plot with only shared tick-labels on the left and bottom edges, you can do the following:- Add options sharex=True, sharey=True in fig, ax = plt.subplots() That line of code will become: fig,ax=plt.subplots(nrows=nrows,ncols=ncols,figsize=figsize,sharex=True,sharey=True) To specify required number of ticks, and labels to plot, inside the body of for i, axi in enumerate(ax.flat):, add these code axi.xaxis.set_major_locator(plt.MaxNLocator(5)) axi.yaxis.set_major_locator(plt.MaxNLocator(4)) the number 5, and 4 are the number of ticks/tick_labels to plot. You may need other values that suit your plots."} +{"question_id": 14826888, "score": 149, "creation_date": 1360651215, "tags": ["python", "python-2.7", "os.path"], "instruction": "Python os.path.join() on a list\n\nI can do >>> os.path.join(\"c:/\",\"home\",\"foo\",\"bar\",\"some.txt\") 'c:/home\\\\foo\\\\bar\\\\some.txt' But, when I do >>> s = \"c:/,home,foo,bar,some.txt\".split(\",\") >>> os.path.join(s) ['c:/', 'home', 'foo', 'bar', 'some.txt'] What am I missing here?", "output": "The problem is, os.path.join doesn't take a list as argument, it has to be separate arguments. To unpack the list into separate arguments required by join (and for the record: list was obtained from a string using split), use * - or the 'splat' operator, thus: >>> s = \"c:/,home,foo,bar,some.txt\".split(\",\") >>> os.path.join(*s) 'c:/home\\\\foo\\\\bar\\\\some.txt'"} +{"question_id": 6579496, "score": 149, "creation_date": 1309852545, "tags": ["python"], "instruction": "Using print statements only to debug\n\nI have been coding a lot in Python of late. And I have been working with data that I haven't worked with before, using formulae never seen before and dealing with huge files. All this made me write a lot of print statements to verify if it's all going right and identify the points of failure. But, generally, outputting so much information is not a good practice. 
How do I use the print statements only when I want to debug and let them be skipped when I don't want them to be printed?", "output": "The logging module in the standard library has everything you could want. It may seem excessive at first, but only use the parts you need. I'd recommend using logging.basicConfig to set the logging level then using the simple log methods: debug, info, warning, error and critical. import logging logging.basicConfig(level=logging.DEBUG) logging.debug('A debug message!') logging.info('We processed %d records', len(processed_records))"} +{"question_id": 58302531, "score": 149, "creation_date": 1570618366, "tags": ["python", "python-3.x", "f-string"], "instruction": "Combine f-string and raw string literal\n\nI'm wondering how to use an f-string whilst using r to get a raw string literal. I currently have it as below but would like the option of allowing any name to replace Alex I was thinking adding an f-string and then replacing Alex with curly braces and putting username inside but this doesn't work with the r. username = input('Enter name') download_folder = r'C:\\Users\\Alex\\Downloads'", "output": "You can combine the f for an f-string with the r for a raw string: user = 'Alex' dirToSee = fr'C:\\Users\\{user}\\Downloads' print (dirToSee) # prints C:\\Users\\Alex\\Downloads The r only disables backslash escape sequence processing, not f-string processing. Quoting the docs: The 'f' may be combined with 'r', but not with 'b' or 'u', therefore raw formatted strings are possible, but formatted bytes literals are not. ... Unless an 'r' or 'R' prefix is present, escape sequences in string and bytes literals are interpreted..."} +{"question_id": 35282222, "score": 149, "creation_date": 1454979740, "tags": ["python"], "instruction": "In python, how do I cast a class object to a dict\n\nLet's say I've got a simple class in python class Wharrgarbl(object): def __init__(self, a, b, c, sum, version='old'): self.a = a self.b = b self.c = c self.sum = 6 self.version = version def __int__(self): return self.sum + 9000 def __what_goes_here__(self): return {'a': self.a, 'b': self.b, 'c': self.c} I can cast it to an integer very easily >>> w = Wharrgarbl('one', 'two', 'three', 6) >>> int(w) 9006 Which is great! But, now I want to cast it to a dict in a similar fashion >>> w = Wharrgarbl('one', 'two', 'three', 6) >>> dict(w) {'a': 'one', 'c': 'three', 'b': 'two'} What do I need to define for this to work? I tried substituting both __dict__ and dict for __what_goes_here__, but dict(w) resulted in a TypeError: Wharrgarbl object is not iterable in both cases. I don't think simply making the class iterable will solve the problem. I also attempted many googles with as many different wordings of \"python cast object to dict\" as I could think of but couldn't find anything relevant :{ Also! Notice how calling w.__dict__ won't do what I want because it's going to contain w.version and w.sum. I want to customize the cast to dict in the same way that I can customize the cast to int by using def int(self). I know that I could just do something like this >>> w.__what_goes_here__() {'a': 'one', 'c': 'three', 'b': 'two'} But I am assuming there is a pythonic way to make dict(w) work since it is the same type of thing as int(w) or str(w). If there isn't a more pythonic way, that's fine too, just figured I'd ask. Oh! I guess since it matters, this is for python 2.7, but super bonus points for a 2.4 old and busted solution as well. 
There is another question Overloading __dict__() on python class that is similar to this one but may be different enough to warrant this not being a duplicate. I believe that OP is asking how to cast all the data in his class objects as dictionaries. I'm looking for a more customized approach in that I don't want everything in __dict__ included in the dictionary returned by dict(). Something like public vs private variables may suffice to explain what I'm looking for. The objects will be storing some values used in calculations and such that I don't need/want to show up in the resulting dictionaries. UPDATE: I've chosen to go with the asdict route suggested but it was a tough choice selecting what I wanted to be the answer to the question. Both @RickTeachey and @jpmc26 provided the answer I'm going to roll with but the former had more info and options and landed on the same result as well and was upvoted more so I went with it. Upvotes all around though and thanks for the help. I've lurked long and hard on stackoverflow and I'm trying to get my toes in the water more.", "output": "There are at least five six ways. The preferred way depends on what your use case is. Option 1: Simply add an asdict() method. Based on the problem description I would very much consider the asdict way of doing things suggested by other answers. This is because it does not appear that your object is really much of a collection: class Wharrgarbl(object): ... def asdict(self): return {'a': self.a, 'b': self.b, 'c': self.c} Using the other options below could be confusing for others unless it is very obvious exactly which object members would and would not be iterated or specified as key-value pairs. Option 1a: Inherit your class from 'typing.NamedTuple' (or the mostly equivalent 'collections.namedtuple'), and use the _asdict method provided for you. from typing import NamedTuple class Wharrgarbl(NamedTuple): a: str b: str c: str sum: int = 6 version: str = 'old' Using a named tuple is a very convenient way to add lots of functionality to your class with a minimum of effort, including an _asdict method. However, a limitation is that, as shown above, the NT will include all the members in its _asdict. If there are members you don't want to include in your dictionary, you'll need to specify which members you want the named tuple _asdict result to include. To do this, you could either inherit from a base namedtuple class using the older collections.namedtuple API: from collections import namedtuple as nt class Wharrgarbl(nt(\"Basegarble\", \"a b c\")): # note that the typing info below isn't needed for the old API a: str b: str c: str sum: int = 6 version: str = 'old' ...or you could create a base class using the newer API, and inherit from that, using only the dictionary members in the base class: from typing import NamedTuple class Basegarbl(NamedTuple): a: str b: str c: str class Wharrgarbl(Basegarbl): sum: int = 6 version: str = 'old' Another limitation is that NT is read-only. This may or may not be desirable. Option 2: Implement __iter__. Like this, for example: def __iter__(self): yield 'a', self.a yield 'b', self.b yield 'c', self.c Now you can just do: dict(my_object) This works because the dict() constructor accepts an iterable of (key, value) pairs to construct a dictionary. Before doing this, ask yourself the question whether iterating the object as a series of key,value pairs in this manner- while convenient for creating a dict- might actually be surprising behavior in other contexts. 
E.g., ask yourself the question \"what should the behavior of list(my_object) be...?\" Additionally, note that accessing values directly using the get item obj[\"a\"] syntax will not work, and keyword argument unpacking won't work. For those, you'd need to implement the mapping protocol. Option 3: Implement the mapping protocol. This allows access-by-key behavior, casting to a dict without using __iter__, and also provides two types of unpacking behavior: mapping unpacking behavior: {**my_obj} keyword unpacking behavior, but only if all the keys are strings: dict(**my_obj) The mapping protocol requires that you provide (at minimum) two methods together: keys() and __getitem__. class MyKwargUnpackable: def keys(self): return list(\"abc\") def __getitem__(self, key): return dict(zip(\"abc\", \"one two three\".split()))[key] Now you can do things like: >>> m=MyKwargUnpackable() >>> m[\"a\"] 'one' >>> dict(m) # cast to dict directly {'a': 'one', 'b': 'two', 'c': 'three'} >>> dict(**m) # unpack as kwargs {'a': 'one', 'b': 'two', 'c': 'three'} As mentioned above, if you are using a new enough version of python you can also unpack your mapping-protocol object into a dictionary comprehension like so (and in this case it is not required that your keys be strings): >>> {**m} {'a': 'one', 'b': 'two', 'c': 'three'} Note that the mapping protocol takes precedence over the __iter__ method when casting an object to a dict directly (without using kwarg unpacking, i.e. dict(m)). So it is possible- and might be sometimes convenient- to cause the object to have different behavior when used as an iterable (e.g., list(m)) vs. when cast to a dict (dict(m)). But note also that with regular dictionaries, if you cast to a list, it will give the KEYS back, and not the VALUES as you require. If you implement another nonstandard behavior for __iter__ (returning values instead of keys), it could be surprising for other people using your code unless it is very obvious why this would happen. EMPHASIZED: Just because you CAN use the mapping protocol, does NOT mean that you SHOULD do so. Does it actually make sense for your object to be passed around as a set of key-value pairs, or as keyword arguments and values? Does accessing it by key- just like a dictionary- really make sense? Would you also expect your object to have other standard mapping methods such as items, values, get? Do you want to support the in keyword and equality checks (==)? If the answer to these questions is yes, it's probably a good idea to not stop here, and consider the next option instead. Option 4: Look into using the 'collections.abc' module. Inheriting your class from 'collections.abc.Mapping or 'collections.abc.MutableMapping signals to other users that, for all intents and purposes, your class is a mapping * and can be expected to behave that way. It also provides the methods items, values, get and supports the in keyword and equality checks (==) \"for free\". You can still cast your object to a dict just as you require, but there would probably be little reason to do so. Because of duck typing, bothering to cast your mapping object to a dict would just be an additional unnecessary step the majority of the time. This answer from me about how to use ABCs might also be helpful. As noted in the comments below: it's worth mentioning that doing this the abc way essentially turns your object class into a dict-like class (assuming you use MutableMapping and not the read-only Mapping base class). 
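To make Option 4 concrete, here is a minimal sketch I added (not from the original answer), again assuming only a, b and c should be exposed:

```python
from collections.abc import Mapping

class Wharrgarbl(Mapping):
    _fields = ('a', 'b', 'c')

    def __init__(self, a, b, c, sum=6, version='old'):
        self.a, self.b, self.c = a, b, c
        self.sum = sum          # hidden from the mapping view
        self.version = version  # hidden from the mapping view

    def __getitem__(self, key):
        if key not in self._fields:
            raise KeyError(key)
        return getattr(self, key)

    def __iter__(self):
        return iter(self._fields)

    def __len__(self):
        return len(self._fields)

w = Wharrgarbl('one', 'two', 'three')
print(dict(w))                 # {'a': 'one', 'b': 'two', 'c': 'three'}
print(w['a'], len(w))          # one 3
print('b' in w, w == dict(w))  # True True  (items, values, get also work "for free")
```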
Everything you would be able to do with dict, you could do with your own class object. This may be, or may not be, desirable. Also consider looking at the numerical abcs in the numbers module: https://docs.python.org/3/library/numbers.html Since you're also casting your object to an int, it might make more sense to essentially turn your class into a full fledged int so that casting isn't necessary. Option 5: Look into using the dataclasses module (Python 3.7+ only), which includes a convenient asdict() utility method. from dataclasses import dataclass, asdict, field, InitVar @dataclass class Wharrgarbl(object): a: int b: int c: int sum: InitVar[int] # note: InitVar will exclude this from the dict version: InitVar[str] = \"old\" def __post_init__(self, sum, version): self.sum = 6 # this looks like an OP mistake? self.version = str(version) Now you can do this: >>> asdict(Wharrgarbl(1,2,3,4,\"X\")) {'a': 1, 'b': 2, 'c': 3} Option 6: Use typing.TypedDict, which has been added in python 3.8. NOTE: option 6 is likely NOT what the OP, or other readers based on the title of this question, are looking for. See additional comments below. class Wharrgarbl(TypedDict): a: str b: str c: str Using this option, the resulting object is a dict (emphasis: the object class will be dict, not Wharrgarbl). There is no reason at all to \"cast\" it to a dict (unless you are making a copy) because it already is. And since the object is a dict, the initialization signature is identical to that of dict and as such it only accepts keyword arguments or another dictionary. >>> w = Wharrgarbl(a=1,b=2,b=3) >>> w {'a': 1, 'b': 2, 'c': 3} >>> type(w) <class 'dict'> Emphasized: the above \"class\" Wharrgarbl isn't actually a new class at all. It is simply syntactic sugar for creating typed dict objects with specific keys ONLY and value fields of different types for the type checker. At run time, it is still nothing more than a dict. This option can be pretty convenient for signaling to readers of your code (and also to a type checker such as mypy) that these particular dict objects are expected to have specific keys with specific value types. And it also will allow code completion tools, such as Pycharm or the VSCode python extension, to do their magic. But this means you cannot, for example, add other methods, although you can try: class MyDict(TypedDict): def my_fancy_method(self): return \"world changing result\" ...but it won't work: >>> MyDict().my_fancy_method() Traceback (most recent call last): File \"<stdin>\", line 1, in <module> AttributeError: 'dict' object has no attribute 'my_fancy_method' * \"Mapping\" has become the standard \"name\" of the dict-like duck type"} +{"question_id": 22633544, "score": 149, "creation_date": 1395747783, "tags": ["python", "error-handling"], "instruction": "How to throw error and exit with a custom message in python\n\nI've seen people suggesting sys.exit() in Python. My question is that, is there any other way to exit the execution of current script, I mean termination, with an error. Something like this: sys.exit(\"You can not have three process at the same time.\") Currently my solution would be: print(\"You can not have three process at the same time.\") sys.exit()", "output": "Calling sys.exit with a string will work. The docs mention this use explicitly: In particular, sys.exit(\"some error message\") is a quick way to exit a program when an error occurs. 
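As a tiny runnable illustration (mine, not from the original answer; the file name quit_demo.py is made up):

```python
# quit_demo.py
import sys

running_processes = 3  # stand-in value for the example

if running_processes >= 3:
    sys.exit("You can not have three processes at the same time.")
```

Running `python quit_demo.py` prints the message to stderr, and the interpreter exits with status 1, so `echo $?` in a POSIX shell shows 1.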
To be more specific this will also result in an exit code of 1: any other object is printed to stderr and results in an exit code of 1"} +{"question_id": 14155669, "score": 149, "creation_date": 1357296508, "tags": ["python", "linux", "bash", "shell", "debian"], "instruction": "Call Python script from bash with argument\n\nI know that I can run a python script from my bash script using the following: python python_script.py But what about if I wanted to pass a variable / argument to my python script from my bash script. How can I do that? Basically bash will work out a filename and then python will upload it, but I need to send the filename from bash to python when I call it.", "output": "To execute a python script in a bash script you need to call the same command that you would within a terminal. For instance > python python_script.py var1 var2 To access these variables within python you will need import sys print(sys.argv[0]) # prints python_script.py print(sys.argv[1]) # prints var1 print(sys.argv[2]) # prints var2"} +{"question_id": 20637439, "score": 149, "creation_date": 1387292374, "tags": ["python", "pandas", "csv", "readfile"], "instruction": "Skip rows during csv import pandas\n\nI'm trying to import a .csv file using pandas.read_csv(), however, I don't want to import the 2nd row of the data file (the row with index = 1 for 0-indexing). I can't see how not to import it because the arguments used with the command seem ambiguous: From the pandas website: skiprows : list-like or integer Row numbers to skip (0-indexed) or number of rows to skip (int) at the start of the file.\" If I put skiprows=1 in the arguments, how does it know whether to skip the first row or skip the row with index 1?", "output": "You can try yourself: >>> import pandas as pd >>> from io import StringIO >>> s = \"\"\"1, 2 ... 3, 4 ... 5, 6\"\"\" >>> pd.read_csv(StringIO(s), skiprows=[1], header=None) 0 1 0 1 2 1 5 6 >>> pd.read_csv(StringIO(s), skiprows=1, header=None) 0 1 0 3 4 1 5 6"} +{"question_id": 13730107, "score": 149, "creation_date": 1354732689, "tags": ["python"], "instruction": "Writelines writes lines without newline, Just fills the file\n\nI have a program that writes a list to a file. The list is a list of pipe delimited lines and the lines should be written to the file like this: 123|GSV|Weather_Mean|hello|joe|43.45 122|GEV|temp_Mean|hello|joe|23.45 124|GSI|Weather_Mean|hello|Mike|47.45 BUT it wrote them line this ahhhh: 123|GSV|Weather_Mean|hello|joe|43.45122|GEV|temp_Mean|hello|joe|23.45124|GSI|Weather_Mean|hello|Mike|47.45 This program wrote all the lines into like one line without any line breaks.. This hurts me a lot and I gotta figure-out how to reverse this but anyway, where is my program wrong here? I thought write lines should write lines down the file rather than just write everything to one line.. 
fr = open(sys.argv[1], 'r') # source file fw = open(sys.argv[2]+\"/masked_\"+sys.argv[1], 'w') # Target Directory Location for line in fr: line = line.strip() if line == \"\": continue columns = line.strip().split('|') if columns[0].find(\"@\") > 1: looking_for = columns[0] # this is what we need to search else: looking_for = \"Dummy@dummy.com\" if looking_for in d: # by default, iterating over a dictionary will return keys new_line = d[looking_for]+'|'+'|'.join(columns[1:]) line_list.append(new_line) else: new_idx = str(len(d)+1) d[looking_for] = new_idx kv = open(sys.argv[3], 'a') kv.write(looking_for+\" \"+new_idx+'\\n') kv.close() new_line = d[looking_for]+'|'+'|'.join(columns[1:]) line_list.append(new_line) fw.writelines(line_list)", "output": "This is actually a pretty common problem for newcomers to Python\u2014especially since, across the standard library and popular third-party libraries, some reading functions strip out newlines, but almost no writing functions (except the log-related stuff) add them. So, there's a lot of Python code out there that does things like: fw.write('\\n'.join(line_list) + '\\n') (writing a single string) or fw.writelines(line + '\\n' for line in line_list) Either one is correct, and of course you could even write your own writelinesWithNewlines function that wraps it up\u2026 But you should only do this if you can't avoid it. It's better if you can create/keep the newlines in the first place\u2014as in Greg Hewgill's suggestions: line_list.append(new_line + \"\\n\") And it's even better if you can work at a higher level than raw lines of text, e.g., by using the csv module in the standard library, as esuaro suggests. For example, right after defining fw, you might do this: cw = csv.writer(fw, delimiter='|') Then, instead of this: new_line = d[looking_for]+'|'+'|'.join(columns[1:]) line_list.append(new_line) You do this: row_list.append(d[looking_for] + columns[1:]) And at the end, instead of this: fw.writelines(line_list) You do this: cw.writerows(row_list) Finally, your design is \"open a file, then build up a list of lines to add to the file, then write them all at once\". If you're going to open the file up top, why not just write the lines one by one? Whether you're using simple writes or a csv.writer, it'll make your life simpler, and your code easier to read. (Sometimes there can be simplicity, efficiency, or correctness reasons to write a file all at once\u2014but once you've moved the open all the way to the opposite end of the program from the write, you've pretty much lost any benefits of all-at-once.)"} +{"question_id": 8898765, "score": 149, "creation_date": 1326820228, "tags": ["java", "python", "jython"], "instruction": "Calling Python in Java?\n\nI am wondering if it is possible to call Python functions from Java code using Jython, or is it only for calling Java code from Python?", "output": "Jython: Python for the Java Platform - http://www.jython.org/index.html You can easily call python functions from Java code with Jython. That is as long as your python code itself runs under jython, i.e. doesn't use some c-extensions that aren't supported. If that works for you, it's certainly the simplest solution you can get. Otherwise you can use org.python.util.PythonInterpreter from the new Java6 interpreter support. 
A simple example from the top of my head - but should work I hope: (no error checking done for brevity) PythonInterpreter interpreter = new PythonInterpreter(); interpreter.exec(\"import sys\\nsys.path.append('pathToModules if they are not there by default')\\nimport yourModule\"); // execute a function that takes a string and returns a string PyObject someFunc = interpreter.get(\"funcName\"); PyObject result = someFunc.__call__(new PyString(\"Test!\")); String realResult = (String) result.__tojava__(String.class); As of 2021, Jython does not support Python 3.x"} +{"question_id": 44076804, "score": 149, "creation_date": 1495218760, "tags": ["python", "python-2.7", "pyenv"], "instruction": "What shebang to use for Python scripts run under a pyenv virtualenv\n\nWhen a Python script is supposed to be run from a pyenv virtualenv, what is the correct shebang for the file? As an example test case, the default Python on my system (OS X) does not have pandas installed. The pyenv virtualenv venv_name does. I tried getting the path of the Python executable from the virtualenv. pyenv activate venv_name which python Output: /Users/username/.pyenv/shims/python So I made my example script.py: #!/Users/username/.pyenv/shims/python import pandas as pd print 'success' But when I tried running the script (from within 'venv_name'), I got an error: ./script.py Output: ./script.py: line 2: import: command not found ./script.py: line 3: print: command not found Although running that path directly on the command line (from within 'venv_name') works fine: /Users/username/.pyenv/shims/python script.py Output: success And: python script.py # Also works Output: success What is the proper shebang for this? Ideally, I want something generic so that it will point at the Python of whatever my current venv is.", "output": "I don't really know why calling the interpreter with the full path wouldn't work for you. I use it all the time. But if you want to use the Python interpreter that is in your environment, you should do: #!/usr/bin/env python That way you search your environment for the Python interpreter to use."} +{"question_id": 57150426, "score": 149, "creation_date": 1563814011, "tags": ["python", "printing", "f-string"], "instruction": "What is print(f\"...\")\n\nI am reading through a python script that takes an input of XML files and outputs an XML file. However, I do not understand the printing syntax. Can someone please explain what f in print(f\"...\") does? args = parser.parser_args() print(f\"Input directory: {args.input_directory}\") print(f\"Output directory: {args.output_directory}\")", "output": "The f means Formatted string literals and it's new in Python 3.6. A formatted string literal or f-string is a string literal that is prefixed with f or F. These strings may contain replacement fields, which are expressions delimited by curly braces {}. While other string literals always have a constant value, formatted strings are really expressions evaluated at run time. 
Some examples of formatted string literals: >>> name = \"Fred\" >>> f\"He said his name is {name}.\" \"He said his name is Fred.\" >>> name = \"Fred\" >>> f\"He said his name is {name!r}.\" \"He said his name is 'Fred'.\" >>> f\"He said his name is {repr(name)}.\" # repr() is equivalent to !r \"He said his name is 'Fred'.\" >>> width = 10 >>> precision = 4 >>> value = decimal.Decimal(\"12.34567\") >>> f\"result: {value:{width}.{precision}}\" # nested fields result: 12.35 >>> today = datetime(year=2023, month=1, day=27) >>> f\"{today:%B %d, %Y}\" # using date format specifier January 27, 2023 >>> number = 1024 >>> f\"{number:#0x}\" # using integer format specifier 0x400"} +{"question_id": 59762996, "score": 148, "creation_date": 1579148521, "tags": ["python", "python-3.x"], "instruction": "How to fix AttributeError: partially initialized module?\n\nI am trying to run my script but keep getting this error: File \".\\checkmypass.py\", line 1, in <module> import requests line 3, in <module> response = requests.get(url) AttributeError: partially initialized module 'requests' has no attribute 'get' (most likely due to a circular import) How can I fix it?", "output": "This can happen when there's a local file with the same name as an imported module \u2013 Python sees the local file and thinks it's the module. In my case, I had a file I created in the same folder called requests.py. So my code was actually importing that file and not the actual requests module you install with pip. Then I had another issue with a file I created called logging.py. I renamed both files and the issue was resolved."} +{"question_id": 33101935, "score": 148, "creation_date": 1444736943, "tags": ["python", "image", "arrays", "python-imaging-library"], "instruction": "Convert PIL Image to byte array?\n\nI have an image in PIL Image format. I need to convert it to byte array. img = Image.open(fh, mode='r') roiImg = img.crop(box) Now I need the roiImg as a byte array.", "output": "import io from PIL import Image img = Image.open(fh, mode='r') roi_img = img.crop(box) img_byte_arr = io.BytesIO() roi_img.save(img_byte_arr, format='PNG') img_byte_arr = img_byte_arr.getvalue() With this, I don't have to save the cropped image on my disk and I am able to retrieve the byte array from a PIL cropped image."} +{"question_id": 2177590, "score": 148, "creation_date": 1265036913, "tags": ["python", "arrays", "sorting", "reorderlist"], "instruction": "How can I reorder a list?\n\nGiven an arbitrary array of size n, I'd like to reorganize the elements of the array based on the array's discrete indices. Python example: # Unique array of size n [ \"a\", \"b\", \"c\", \"d\", \"e\", ... <n> ] # Indices of array [ 0, 1, 2, 3, 4, ... <index_of_n> ] # Desired re-organization function 'indexMove' indexMove( [ \"a\", \"b\", \"c\", \"d\", \"e\", ... <n> ], [ <index_of_n>, 4, 0, 2, 3, ... 1 ] ) # Desired output from indexMove operation [ <n>, \"e\", \"a\", \"c\", \"d\", ...
\"b\" ] What is the fastest way to perform this operation (achieving the smallest time complexity)?", "output": "You can do it like this mylist = ['a', 'b', 'c', 'd', 'e'] myorder = [3, 2, 0, 1, 4] mylist = [mylist[i] for i in myorder] print(mylist) # prints: ['d', 'c', 'a', 'b', 'e']"} +{"question_id": 41415017, "score": 148, "creation_date": 1483269098, "tags": ["python", "pandas", "group-by"], "instruction": "Count unique values using pandas groupby\n\nI have data of the following form: df = pd.DataFrame({ 'group': [1, 1, 2, 3, 3, 3, 4], 'param': ['a', 'a', 'b', np.nan, 'a', 'a', np.nan] }) print(df) # group param # 0 1 a # 1 1 a # 2 2 b # 3 3 NaN # 4 3 a # 5 3 a # 6 4 NaN Non-null values within groups are always the same. I want to count the non-null value for each group (where it exists) once, and then find the total counts for each value. I'm currently doing this in the following (clunky and inefficient) way: param = [] for _, group in df[df.param.notnull()].groupby('group'): param.append(group.param.unique()[0]) print(pd.DataFrame({'param': param}).param.value_counts()) # a 2 # b 1 I'm sure there's a way to do this more cleanly and without using a loop, but I just can't seem to work it out. Any help would be much appreciated.", "output": "I think you can use SeriesGroupBy.nunique: print (df.groupby('param')['group'].nunique()) param a 2 b 1 Name: group, dtype: int64 Another solution with unique, then create new df by DataFrame.from_records, reshape to Series by stack and last value_counts: a = df[df.param.notnull()].groupby('group')['param'].unique() print (pd.DataFrame.from_records(a.values.tolist()).stack().value_counts()) a 2 b 1 dtype: int64"} +{"question_id": 17782142, "score": 148, "creation_date": 1374478317, "tags": ["python", "get", "python-requests"], "instruction": "Why doesn't requests.get() return? What is the default timeout that requests.get() uses?\n\nIn my script, requests.get never returns: import requests print (\"requesting..\") # This call never returns! r = requests.get( \"http://www.some-site.example\", proxies = {'http': '222.255.169.74:8080'}, ) print(r.ok) What could be the possible reason(s)? Any remedy? What is the default timeout that get uses?", "output": "What is the default timeout that get uses? The default timeout is None, which means it'll wait (hang) until the connection is closed. Just specify a timeout value, like this: r = requests.get( 'http://www.example.com', proxies={'http': '222.255.169.74:8080'}, timeout=5 )"} +{"question_id": 364802, "score": 148, "creation_date": 1229140544, "tags": ["python", "generator"], "instruction": "How does a generator comprehension works?\n\nWhat does generator comprehension do? How does it work? I couldn't find a tutorial about it.", "output": "Do you understand list comprehensions? If so, a generator expression is like a list comprehension, but instead of finding all the items you're interested and packing them into list, it waits, and yields each item out of the expression, one by one. >>> my_list = [1, 3, 5, 9, 2, 6] >>> filtered_list = [item for item in my_list if item > 3] >>> print(filtered_list) [5, 9, 6] >>> len(filtered_list) 3 >>> # compare to generator expression ... 
>>> filtered_gen = (item for item in my_list if item > 3) >>> print(filtered_gen) # notice it's a generator object <generator object <genexpr> at 0x7f2ad75f89e0> >>> len(filtered_gen) # So technically, it has no length Traceback (most recent call last): File \"<stdin>\", line 1, in <module> TypeError: object of type 'generator' has no len() >>> # We extract each item out individually. We'll do it manually first. ... >>> next(filtered_gen) 5 >>> next(filtered_gen) 9 >>> next(filtered_gen) 6 >>> next(filtered_gen) # Should be all out of items and give an error Traceback (most recent call last): File \"<stdin>\", line 1, in <module> StopIteration >>> # Yup, the generator is spent. No values for you! ... >>> # Let's prove it gives the same results as our list comprehension ... >>> filtered_gen = (item for item in my_list if item > 3) >>> gen_to_list = list(filtered_gen) >>> print(gen_to_list) [5, 9, 6] >>> filtered_list == gen_to_list True >>> Because a generator expression only has to yield one item at a time, it can lead to big savings in memory usage. Generator expressions make the most sense in scenarios where you need to take one item at a time, do a lot of calculations based on that item, and then move on to the next item. If you need more than one value, you can also use a generator expression and grab a few at a time. If you need all the values before your program proceeds, use a list comprehension instead."} +{"question_id": 29461185, "score": 148, "creation_date": 1428261237, "tags": ["python", "pandas", "filter", "dataframe"], "instruction": "Filtering Pandas Dataframe using OR statement\n\nI have a pandas dataframe and I want to filter the whole df based on the value of two columns in the data frame. I want to get back all rows and columns where IBRD or IMF != 0. alldata_balance = alldata[(alldata[IBRD] !=0) or (alldata[IMF] !=0)] but this gives me a ValueError ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all(). So I know I am not using the or statement correctly, is there a way to do this?", "output": "From the docs: Another common operation is the use of boolean vectors to filter the data. The operators are: | for or, & for and, and ~ for not. These must be grouped by using parentheses. https://pandas.pydata.org/docs/user_guide/indexing.html#boolean-indexing Try: alldata_balance = alldata[(alldata[IBRD] !=0) | (alldata[IMF] !=0)]"} +{"question_id": 33356442, "score": 148, "creation_date": 1445896964, "tags": ["python", "numpy"], "instruction": "When should I use hstack/vstack vs append vs concatenate vs column_stack?\n\nSimple question: what is the advantage of each of these methods. It seems that given the right parameters (and ndarray shapes) they all work seemingly equivalently. Do some work in place? Have better performance? Which functions should I use when?", "output": "All the functions are written in Python except np.concatenate. With an IPython shell you just use ??. If not, here's a summary of their code: vstack concatenate([atleast_2d(_m) for _m in tup], 0) i.e. turn all inputs in to 2d (or more) and concatenate on first hstack concatenate([atleast_1d(_m) for _m in tup], axis=<0 or 1>) colstack transform arrays with (if needed) array(arr, copy=False, subok=True, ndmin=2).T append concatenate((asarray(arr), values), axis=axis) In other words, they all work by tweaking the dimensions of the input arrays, and then concatenating on the right axis. They are just convenience functions. 
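A small demo I added to make the equivalences visible (not part of the original answer):

```python
import numpy as np

a = np.array([1, 2, 3])
b = np.array([4, 5, 6])

print(np.vstack((a, b)))
# [[1 2 3]
#  [4 5 6]]
print(np.concatenate((np.atleast_2d(a), np.atleast_2d(b)), axis=0))  # same as vstack

print(np.hstack((a, b)))         # [1 2 3 4 5 6]
print(np.concatenate((a, b)))    # same as hstack for 1-D inputs

print(np.column_stack((a, b)))
# [[1 4]
#  [2 5]
#  [3 6]]

print(np.append(a, b))           # [1 2 3 4 5 6]  -- flattens, then concatenates
```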
And newer np.stack: arrays = [asanyarray(arr) for arr in arrays] shapes = set(arr.shape for arr in arrays) result_ndim = arrays[0].ndim + 1 axis = normalize_axis_index(axis, result_ndim) sl = (slice(None),) * axis + (_nx.newaxis,) expanded_arrays = [arr[sl] for arr in arrays] concatenate(expanded_arrays, axis=axis, out=out) That is, it expands the dims of all inputs (a bit like np.expand_dims), and then concatenates. With axis=0, the effect is the same as np.array. hstack documentation now adds: The functions concatenate, stack and block provide more general stacking and concatenation operations. np.block is also new. It, in effect, recursively concatenates along the nested lists."} +{"question_id": 24257803, "score": 147, "creation_date": 1402989367, "tags": ["python", "google-app-engine", "installation", "pip", "distutils"], "instruction": "DistutilsOptionError: must supply either home or prefix/exec-prefix -- not both\n\nI've been usually installed python packages through pip. For Google App Engine, I need to install packages to another target directory. I've tried: pip install -I flask-restful --target ./lib but it fails with: must supply either home or prefix/exec-prefix -- not both How can I get this to work?", "output": "Are you using OS X and Homebrew? The Homebrew python page https://github.com/Homebrew/brew/blob/master/docs/Homebrew-and-Python.md calls out a known issue with pip and a work around. Worked for me. You can make this \"empty prefix\" the default by adding a ~/.pydistutils.cfg file with the following contents: [install] prefix= Edit: The Homebrew page was later changed to recommend passing --prefix on the command line, as discussed in the comments below. Here is the last version which contained that text. Unfortunately this only works for sdists, not wheels. The issue was reported to pip, which later fixed it for --user. That's probably why the section has now been removed from the Homebrew page. However, the problem still occurs when using --target as in the question above."} +{"question_id": 53751050, "score": 147, "creation_date": 1544647421, "tags": ["python", "python-3.x", "parallel-processing", "multiprocessing", "python-multiprocessing"], "instruction": "multiprocessing: Understanding logic behind `chunksize`\n\nWhat factors determine an optimal chunksize argument to methods like multiprocessing.Pool.map()? The .map() method seems to use an arbitrary heuristic for its default chunksize (explained below); what motivates that choice and is there a more thoughtful approach based on some particular situation/setup? Example - say that I am: Passing an iterable to .map() that has ~15 million elements; Working on a machine with 24 cores and using the default processes = os.cpu_count() within multiprocessing.Pool(). My naive thinking is to give each of 24 workers an equally-sized chunk, i.e. 15_000_000 / 24 or 625,000. Large chunks should reduce turnover/overhead while fully utilizing all workers. But it seems that this is missing some potential downsides of giving large batches to each worker. Is this an incomplete picture, and what am I missing? Part of my question stems from the default logic for if chunksize=None: both .map() and .starmap() call .map_async(), which looks like this: def _map_async(self, func, iterable, mapper, chunksize=None, callback=None, error_callback=None): # ... (materialize `iterable` to list if it's an iterator) if chunksize is None: chunksize, extra = divmod(len(iterable), len(self._pool) * 4) # ???? 
if extra: chunksize += 1 if len(iterable) == 0: chunksize = 0 What's the logic behind divmod(len(iterable), len(self._pool) * 4)? This implies that the chunksize will be closer to 15_000_000 / (24 * 4) == 156_250. What's the intention in multiplying len(self._pool) by 4? This makes the resulting chunksize a factor of 4 smaller than my \"naive logic\" from above, which consists of just dividing the length of the iterable by number of workers in pool._pool. Lastly, there is also this snippet from the Python docs on .imap() that further drives my curiosity: The chunksize argument is the same as the one used by the map() method. For very long iterables using a large value for chunksize can make the job complete much faster than using the default value of 1. Related answer that is helpful but a bit too high-level: Python multiprocessing: why are large chunksizes slower?.", "output": "Short Answer Pool's chunksize-algorithm is a heuristic. It provides a simple solution for all imaginable problem scenarios you are trying to stuff into Pool's methods. As a consequence, it cannot be optimized for any specific scenario. The algorithm arbitrarily divides the iterable in approximately four times more chunks than the naive approach. More chunks mean more overhead, but increased scheduling flexibility. How this answer will show, this leads to a higher worker-utilization on average, but without the guarantee of a shorter overall computation time for every case. \"That's nice to know\" you might think, \"but how does knowing this help me with my concrete multiprocessing problems?\" Well, it doesn't. The more honest short answer is, \"there is no short answer\", \"multiprocessing is complex\" and \"it depends\". An observed symptom can have different roots, even for similar scenarios. This answer tries to provide you with basic concepts helping you to get a clearer picture of Pool's scheduling black box. It also tries to give you some basic tools at hand for recognizing and avoiding potential cliffs as far they are related to chunksize. Table of Contents Part I Definitions Parallelization Goals Parallelization Scenarios Risks of Chunksize > 1 Pool's Chunksize-Algorithm Quantifying Algorithm Efficiency 6.1 Models 6.2 Parallel Schedule 6.3 Efficiencies 6.3.1 Absolute Distribution Efficiency (ADE) 6.3.2 Relative Distribution Efficiency (RDE) Part II Naive vs. Pool's Chunksize-Algorithm Reality Check Conclusion It is necessary to clarify some important terms first. 1. Definitions Chunk A chunk here is a share of the iterable-argument specified in a pool-method call. How the chunksize gets calculated and what effects this can have, is the topic of this answer. Task A task's physical representation in a worker-process in terms of data can be seen in the figure below. The figure shows an example call to pool.map(), displayed along a line of code, taken from the multiprocessing.pool.worker function, where a task read from the inqueue gets unpacked. worker is the underlying main-function in the MainThread of a pool-worker-process. The func-argument specified in the pool-method will only match the func-variable inside the worker-function for single-call methods like apply_async and for imap with chunksize=1. For the rest of the pool-methods with a chunksize-parameter the processing-function func will be a mapper-function (mapstar or starmapstar). This function maps the user-specified func-parameter on every element of the transmitted chunk of the iterable (--> \"map-tasks\"). 
The time this takes, defines a task also as a unit of work. Taskel While the usage of the word \"task\" for the whole processing of one chunk is matched by code within multiprocessing.pool, there is no indication how a single call to the user-specified func, with one element of the chunk as argument(s), should be referred to. To avoid confusion emerging from naming conflicts (think of maxtasksperchild-parameter for Pool's __init__-method), this answer will refer to the single units of work within a task as taskel. A taskel (from task + element) is the smallest unit of work within a task. It is the single execution of the function specified with the func-parameter of a Pool-method, called with arguments obtained from a single element of the transmitted chunk. A task consists of chunksize taskels. Parallelization Overhead (PO) PO consists of Python-internal overhead and overhead for inter-process communication (IPC). The per-task overhead within Python comes with the code needed for packaging and unpacking the tasks and its results. IPC-overhead comes with the necessary synchronization of threads and the copying of data between different address spaces (two copy steps needed: parent -> queue -> child). The amount of IPC-overhead is OS-, hardware- and data-size dependent, what makes generalizations about the impact difficult. 2. Parallelization Goals When using multiprocessing, our overall goal (obviously) is to minimize total processing time for all tasks. To reach this overall goal, our technical goal needs to be optimizing the utilization of hardware resources. Some important sub-goals for achieving the technical goal are: minimize parallelization overhead (most famously, but not alone: IPC) high utilization across all cpu-cores keeping memory usage limited to prevent the OS from excessive paging (trashing) At first, the tasks need to be computationally heavy (intensive) enough, to earn back the PO we have to pay for parallelization. The relevance of PO decreases with increasing absolute computation time per taskel. Or, to put it the other way around, the bigger the absolute computation time per taskel for your problem, the less relevant gets the need for reducing PO. If your computation will take hours per taskel, the IPC overhead will be negligible in comparison. The primary concern here is to prevent idling worker processes after all tasks have been distributed. Keeping all cores loaded means, we are parallelizing as much as possible. 3. Parallelization Scenarios What factors determine an optimal chunksize argument to methods like multiprocessing.Pool.map() The major factor in question is how much computation time may vary across our single taskels. To name it, the choice for an optimal chunksize is determined by the Coefficient of Variation (CV) for computation times per taskel. The two extreme scenarios on a scale, following from the extent of this variation are: All taskels need exactly the same computation time. A taskel could take seconds or days to finish. For better memorability, I will refer to these scenarios as: Dense Scenario Wide Scenario Dense Scenario In a Dense Scenario it would be desirable to distribute all taskels at once, to keep necessary IPC and context switching at a minimum. This means we want to create only as much chunks, as much worker processes there are. How already stated above, the weight of PO increases with shorter computation times per taskel. 
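To see this effect on your own machine, here is a rough benchmark sketch I added (timings are hardware-dependent and not from the original answer):

```python
import multiprocessing as mp
import time

def cheap_taskel(x):
    # deliberately tiny amount of work, so IPC overhead dominates
    return x * x

if __name__ == '__main__':
    data = range(200_000)
    with mp.Pool(4) as pool:
        for cs in (1, 10_000):
            start = time.perf_counter()
            pool.map(cheap_taskel, data, chunksize=cs)
            print(f'chunksize={cs}: {time.perf_counter() - start:.2f} s')
    # chunksize=1 is typically much slower here, because every single
    # taskel pays the parallelization overhead on its own.
```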
For maximal throughput, we also want all worker processes busy until all tasks are processed (no idling workers). For this goal, the distributed chunks should be of equal size or close to. Wide Scenario The prime example for a Wide Scenario would be an optimization problem, where results either converge quickly or computation can take hours, if not days. Usually it is not predictable what mixture of \"light taskels\" and \"heavy taskels\" a task will contain in such a case, hence it's not advisable to distribute too many taskels in a task-batch at once. Distributing less taskels at once than possible, means increasing scheduling flexibility. This is needed here to reach our sub-goal of high utilization of all cores. If Pool methods, by default, would be totally optimized for the Dense Scenario, they would increasingly create suboptimal timings for every problem located closer to the Wide Scenario. 4. Risks of Chunksize > 1 Consider this simplified pseudo-code example of a Wide Scenario-iterable, which we want to pass into a pool-method: good_luck_iterable = [60, 60, 86400, 60, 86400, 60, 60, 84600] Instead of the actual values, we pretend to see the needed computation time in seconds, for simplicity only 1 minute or 1 day. We assume the pool has four worker processes (on four cores) and chunksize is set to 2. Because the order will be kept, the chunks send to the workers will be these: [(60, 60), (86400, 60), (86400, 60), (60, 84600)] Since we have enough workers and the computation time is high enough, we can say, that every worker process will get a chunk to work on in the first place. (This does not have to be the case for fast completing tasks). Further we can say, the whole processing will take about 86400+60 seconds, because that's the highest total computation time for a chunk in this artificial scenario and we distribute chunks only once. Now consider this iterable, which has only one element switching its position compared to the previous iterable: bad_luck_iterable = [60, 60, 86400, 86400, 60, 60, 60, 84600] ...and the corresponding chunks: [(60, 60), (86400, 86400), (60, 60), (60, 84600)] Just bad luck with the sorting of our iterable nearly doubled (86400+86400) our total processing time! The worker getting the vicious (86400, 86400)-chunk is blocking the second heavy taskel in its task from getting distributed to one of the idling workers already finished with their (60, 60)-chunks. We obviously would not risk such an unpleasant outcome if we set chunksize=1. This is the risk of bigger chunksizes. With higher chunksizes we trade scheduling flexibility for less overhead and in cases like above, that's a bad deal. How we will see in chapter 6. Quantifying Algorithm Efficiency, bigger chunksizes can also lead to suboptimal results for Dense Scenarios. 5. Pool's Chunksize-Algorithm Below you will find a slightly modified version of the algorithm inside the source code. As you can see, I cut off the lower part and wrapped it into a function for calculating the chunksize argument externally. I also replaced 4 with a factor parameter and outsourced the len() calls. # mp_utils.py def calc_chunksize(n_workers, len_iterable, factor=4): \"\"\"Calculate chunksize argument for Pool-methods. Resembles source-code within `multiprocessing.pool.Pool._map_async`. \"\"\" chunksize, extra = divmod(len_iterable, n_workers * factor) if extra: chunksize += 1 return chunksize To ensure we are all on the same page, here's what divmod does: divmod(x, y) is a builtin function which returns (x//y, x%y). 
x // y is the floor division, returning the down rounded quotient from x / y, while x % y is the modulo operation returning the remainder from x / y. Hence e.g. divmod(10, 3) returns (3, 1). Now when you look at chunksize, extra = divmod(len_iterable, n_workers * 4), you will notice n_workers here is the divisor y in x / y and multiplication by 4, without further adjustment through if extra: chunksize +=1 later on, leads to an initial chunksize at least four times smaller (for len_iterable >= n_workers * 4) than it would be otherwise. For viewing the effect of multiplication by 4 on the intermediate chunksize result consider this function: def compare_chunksizes(len_iterable, n_workers=4): \"\"\"Calculate naive chunksize, Pool's stage-1 chunksize and the chunksize for Pool's complete algorithm. Return chunksizes and the real factors by which naive chunksizes are bigger. \"\"\" cs_naive = len_iterable // n_workers or 1 # naive approach cs_pool1 = len_iterable // (n_workers * 4) or 1 # incomplete pool algo. cs_pool2 = calc_chunksize(n_workers, len_iterable) real_factor_pool1 = cs_naive / cs_pool1 real_factor_pool2 = cs_naive / cs_pool2 return cs_naive, cs_pool1, cs_pool2, real_factor_pool1, real_factor_pool2 The function above calculates the naive chunksize (cs_naive) and the first-step chunksize of Pool's chunksize-algorithm (cs_pool1), as well as the chunksize for the complete Pool-algorithm (cs_pool2). Further it calculates the real factors rf_pool1 = cs_naive / cs_pool1 and rf_pool2 = cs_naive / cs_pool2, which tell us how many times the naively calculated chunksizes are bigger than Pool's internal version(s). Below you see two figures created with output from this function. The left figure just shows the chunksizes for n_workers=4 up until an iterable length of 500. The right figure shows the values for rf_pool1. For iterable length 16, the real factor becomes >=4(for len_iterable >= n_workers * 4) and it's maximum value is 7 for iterable lengths 28-31. That's a massive deviation from the original factor 4 the algorithm converges to for longer iterables. 'Longer' here is relative and depends on the number of specified workers. Remember chunksize cs_pool1 still lacks the extra-adjustment with the remainder from divmod contained in cs_pool2 from the complete algorithm. The algorithm goes on with: if extra: chunksize += 1 Now in cases were there is a remainder (an extra from the divmod-operation), increasing the chunksize by 1 obviously cannot work out for every task. After all, if it would, there would not be a remainder to begin with. How you can see in the figures below, the \"extra-treatment\" has the effect, that the real factor for rf_pool2 now converges towards 4 from below 4 and the deviation is somewhat smoother. Standard deviation for n_workers=4 and len_iterable=500 drops from 0.5233 for rf_pool1 to 0.4115 for rf_pool2. Eventually, increasing chunksize by 1 has the effect, that the last task transmitted only has a size of len_iterable % chunksize or chunksize. The more interesting and how we will see later, more consequential, effect of the extra-treatment however can be observed for the number of generated chunks (n_chunks). For long enough iterables, Pool's completed chunksize-algorithm (n_pool2 in the figure below) will stabilize the number of chunks at n_chunks == n_workers * 4. In contrast, the naive algorithm (after an initial burp) keeps alternating between n_chunks == n_workers and n_chunks == n_workers + 1 as the length of the iterable grows. 
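A quick sanity check of compare_chunksizes (my own usage sketch; it assumes the definitions above are in scope):

```python
for n in (28, 500, 15_000_000):
    cs_naive, cs_pool1, cs_pool2, rf1, rf2 = compare_chunksizes(n, n_workers=4)
    print(f'len={n}: naive={cs_naive}, pool={cs_pool2}, real_factor={rf2:.2f}')
# len=28: naive=7, pool=2, real_factor=3.50
# len=500: naive=125, pool=32, real_factor=3.91
# len=15000000: naive=3750000, pool=937500, real_factor=4.00
```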
Below you will find two enhanced info-functions for Pool's and the naive chunksize-algorithm. The output of these functions will be needed in the next chapter. # mp_utils.py from collections import namedtuple Chunkinfo = namedtuple( 'Chunkinfo', ['n_workers', 'len_iterable', 'n_chunks', 'chunksize', 'last_chunk'] ) def calc_chunksize_info(n_workers, len_iterable, factor=4): \"\"\"Calculate chunksize numbers.\"\"\" chunksize, extra = divmod(len_iterable, n_workers * factor) if extra: chunksize += 1 # `+ (len_iterable % chunksize > 0)` exploits that `True == 1` n_chunks = len_iterable // chunksize + (len_iterable % chunksize > 0) # exploit `0 == False` last_chunk = len_iterable % chunksize or chunksize return Chunkinfo( n_workers, len_iterable, n_chunks, chunksize, last_chunk ) Don't be confused by the probably unexpected look of calc_naive_chunksize_info. The extra from divmod is not used for calculating the chunksize. def calc_naive_chunksize_info(n_workers, len_iterable): \"\"\"Calculate naive chunksize numbers.\"\"\" chunksize, extra = divmod(len_iterable, n_workers) if chunksize == 0: chunksize = 1 n_chunks = extra last_chunk = chunksize else: n_chunks = len_iterable // chunksize + (len_iterable % chunksize > 0) last_chunk = len_iterable % chunksize or chunksize return Chunkinfo( n_workers, len_iterable, n_chunks, chunksize, last_chunk ) 6. Quantifying Algorithm Efficiency Now, after we have seen how the output of Pool's chunksize-algorithm looks different compared to output from the naive algorithm... How to tell if Pool's approach actually improves something? And what exactly could this something be? As shown in the previous chapter, for longer iterables (a bigger number of taskels), Pool's chunksize-algorithm approximately divides the iterable into four times more chunks than the naive method. Smaller chunks mean more tasks and more tasks mean more Parallelization Overhead (PO), a cost which must be weighed against the benefit of increased scheduling-flexibility (recall \"Risks of Chunksize>1\"). For rather obvious reasons, Pool's basic chunksize-algorithm cannot weigh scheduling-flexibility against PO for us. IPC-overhead is OS-, hardware- and data-size dependent. The algorithm cannot know on what hardware we run our code, nor does it have a clue how long a taskel will take to finish. It's a heuristic providing basic functionality for all possible scenarios. This means it cannot be optimized for any scenario in particular. As mentioned before, PO also becomes increasingly less of a concern with increasing computation times per taskel (negative correlation). When you recall the Parallelization Goals from chapter 2, one bullet-point was: high utilization across all cpu-cores The previously mentioned something, Pool's chunksize-algorithm can try to improve is the minimization of idling worker-processes, respectively the utilization of cpu-cores. A repeating question on SO regarding multiprocessing.Pool is asked by people wondering about unused cores / idling worker-processes in situations where you would expect all worker-processes busy. While this can have many reasons, idling worker-processes towards the end of a computation are an observation we can often make, even with Dense Scenarios (equal computation times per taskel) in cases where the number of workers is not a divisor of the number of chunks (n_chunks % n_workers > 0). 
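For example, for four workers and an iterable of length 500 the two algorithms give the following (my own usage sketch of the functions above):

```python
print(calc_chunksize_info(n_workers=4, len_iterable=500))
# Chunkinfo(n_workers=4, len_iterable=500, n_chunks=16, chunksize=32, last_chunk=20)

print(calc_naive_chunksize_info(4, 500))
# Chunkinfo(n_workers=4, len_iterable=500, n_chunks=4, chunksize=125, last_chunk=125)
```

Note that n_chunks == n_workers * 4 == 16 for Pool's algorithm, exactly as described above.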
The question now is: How can we practically translate our understanding of chunksizes into something which enables us to explain observed worker-utilization, or even compare the efficiency of different algorithms in that regard? 6.1 Models For gaining deeper insights here, we need a form of abstraction of parallel computations which simplifies the overly complex reality down to a manageable degree of complexity, while preserving significance within defined boundaries. Such an abstraction is called a model. An implementation of such a \"Parallelization Model\" (PM) generates worker-mapped meta-data (timestamps) as real computations would, if the data were to be collected. The model-generated meta-data allows predicting metrics of parallel computations under certain constraints. One of two sub-models within the here defined PM is the Distribution Model (DM). The DM explains how atomic units of work (taskels) are distributed over parallel workers and time, when no other factors than the respective chunksize-algorithm, the number of workers, the input-iterable (number of taskels) and their computation duration is considered. This means any form of overhead is not included. For obtaining a complete PM, the DM is extended with an Overhead Model (OM), representing various forms of Parallelization Overhead (PO). Such a model needs to be calibrated for each node individually (hardware-, OS-dependencies). How many forms of overhead are represented in a OM is left open and so multiple OMs with varying degrees of complexity can exist. Which level of accuracy the implemented OM needs is determined by the overall weight of PO for the specific computation. Shorter taskels lead to a higher weight of PO, which in turn requires a more precise OM if we were attempting to predict Parallelization Efficiencies (PE). 6.2 Parallel Schedule (PS) The Parallel Schedule is a two-dimensional representation of the parallel computation, where the x-axis represents time and the y-axis represents a pool of parallel workers. The number of workers and the total computation time mark the extend of a rectangle, in which smaller rectangles are drawn in. These smaller rectangles represent atomic units of work (taskels). Below you find the visualization of a PS drawn with data from the DM of Pool's chunksize-algorithm for the Dense Scenario. The x-axis is sectioned into equal units of time, where each unit stands for the computation time a taskel requires. The y-axis is divided into the number of worker-processes the pool uses. A taskel here is displayed as the smallest cyan-colored rectangle, put into a timeline (a schedule) of an anonymized worker-process. A task is one or multiple taskels in a worker-timeline continuously highlighted with the same hue. Idling time units are represented through red colored tiles. The Parallel Schedule is partitioned into sections. The last section is the tail-section. The names for the composed parts can be seen in the picture below. In a complete PM including an OM, the Idling Share is not limited to the tail, but also comprises space between tasks and even between taskels. 6.3 Efficiencies The Models introduced above allow quantifying the rate of worker-utilization. We can distinguish: Distribution Efficiency (DE) - calculated with help of a DM (or a simplified method for the Dense Scenario). Parallelization Efficiency (PE) - either calculated with help of a calibrated PM (prediction) or calculated from meta-data of real computations. 
It's important to note, that calculated efficiencies do not automatically correlate with faster overall computation for a given parallelization problem. Worker-utilization in this context only distinguishes between a worker having a started, yet unfinished taskel and a worker not having such an \"open\" taskel. That means, possible idling during the time span of a taskel is not registered. All above mentioned efficiencies are basically obtained by calculating the quotient of the division Busy Share / Parallel Schedule. The difference between DE and PE comes with the Busy Share occupying a smaller portion of the overall Parallel Schedule for the overhead-extended PM. This answer will further only discuss a simple method to calculate DE for the Dense Scenario. This is sufficiently adequate to compare different chunksize-algorithms, since... ... the DM is the part of the PM, which changes with different chunksize-algorithms employed. ... the Dense Scenario with equal computation durations per taskel depicts a \"stable state\", for which these time spans drop out of the equation. Any other scenario would just lead to random results since the ordering of taskels would matter. 6.3.1 Absolute Distribution Efficiency (ADE) This basic efficiency can be calculated in general by dividing the Busy Share through the whole potential of the Parallel Schedule: Absolute Distribution Efficiency (ADE) = Busy Share / Parallel Schedule For the Dense Scenario, the simplified calculation-code looks like this: # mp_utils.py def calc_ade(n_workers, len_iterable, n_chunks, chunksize, last_chunk): \"\"\"Calculate Absolute Distribution Efficiency (ADE). `len_iterable` is not used, but contained to keep a consistent signature with `calc_rde`. \"\"\" if n_workers == 1: return 1 potential = ( ((n_chunks // n_workers + (n_chunks % n_workers > 1)) * chunksize) + (n_chunks % n_workers == 1) * last_chunk ) * n_workers n_full_chunks = n_chunks - (chunksize > last_chunk) taskels_in_regular_chunks = n_full_chunks * chunksize real = taskels_in_regular_chunks + (chunksize > last_chunk) * last_chunk ade = real / potential return ade If there is no Idling Share, Busy Share will be equal to Parallel Schedule, hence we get an ADE of 100%. In our simplified model, this is a scenario where all available processes will be busy through the whole time needed for processing all tasks. In other words, the whole job gets effectively parallelized to 100 percent. But why do I keep referring to PE as absolute PE here? To comprehend that, we have to consider a possible case for the chunksize (cs) which ensures maximal scheduling flexibility (also, the number of Highlanders there can be. Coincidence?): __________________________________~ ONE ~__________________________________ If we, for example, have four worker-processes and 37 taskels, there will be idling workers even with chunksize=1, just because n_workers=4 is not a divisor of 37. The remainder of dividing 37 / 4 is 1. This single remaining taskel will have to be processed by a sole worker, while the remaining three are idling. Likewise, there will still be one idling worker with 39 taskels, how you can see pictured below. When you compare the upper Parallel Schedule for chunksize=1 with the below version for chunksize=3, you will notice that the upper Parallel Schedule is smaller, the timeline on the x-axis shorter. It should become obvious now, how bigger chunksizes unexpectedly also can lead to increased overall computation times, even for Dense Scenarios. 
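Plugging in the 39-taskel example from the figures (my own usage sketch; it assumes calc_ade and calc_chunksize_info from above are defined):

```python
ci = calc_chunksize_info(n_workers=4, len_iterable=39)
print(ci.chunksize)   # 3
print(calc_ade(*ci))  # 0.8125

# the same 39 taskels distributed with chunksize=1
print(calc_ade(4, 39, n_chunks=39, chunksize=1, last_chunk=1))  # 0.975
```

So the bigger chunksize leaves noticeably more of the Parallel Schedule idling, even though both runs contain the same total amount of work.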
But why not just use the length of the x-axis for efficiency calculations? Because the overhead is not contained in this model. It will be different for both chunksizes, hence the x-axis is not really directly comparable. The overhead can still lead to a longer total computation time like shown in case 2 from the figure below. 6.3.2 Relative Distribution Efficiency (RDE) The ADE value does not contain the information if a better distribution of taskels is possible with chunksize set to 1. Better here still means a smaller Idling Share. To get a DE value adjusted for the maximum possible DE, we have to divide the considered ADE through the ADE we get for chunksize=1. Relative Distribution Efficiency (RDE) = ADE_cs_x / ADE_cs_1 Here is how this looks in code: # mp_utils.py def calc_rde(n_workers, len_iterable, n_chunks, chunksize, last_chunk): \"\"\"Calculate Relative Distribution Efficiency (RDE).\"\"\" ade_cs1 = calc_ade( n_workers, len_iterable, n_chunks=len_iterable, chunksize=1, last_chunk=1 ) ade = calc_ade(n_workers, len_iterable, n_chunks, chunksize, last_chunk) rde = ade / ade_cs1 return rde RDE, how defined here, in essence is a tale about the tail of a Parallel Schedule. RDE is influenced by the maximum effective chunksize contained in the tail. (This tail can be of x-axis length chunksize or last_chunk.) This has the consequence, that RDE naturally converges to 100% (even) for all sorts of \"tail-looks\" like shown in the figure below. A low RDE ... is a strong hint for optimization potential. naturally gets less likely for longer iterables, because the relative tail-portion of the overall Parallel Schedule shrinks. Please find Part II of this answer here."} +{"question_id": 1455602, "score": 147, "creation_date": 1253552625, "tags": ["python"], "instruction": "Why do I get \"TypeError: not all arguments converted during string formatting\" trying to format a tuple?\n\nI want to use %-style string formatting to print a tuple: tup = (1,2,3) print(\"this is a tuple: %s.\" % (tup)) I expect it to print like This is a tuple: (1,2,3)., but instead I get an error that says TypeError: not all arguments converted during string formatting. What is wrong, and how do I fix it? In editing this question for clarity and modernization, I preserved one interesting aspect of the original example: the parentheses around tup. These are not necessary for the % syntax, and also do not create a tuple. It's possible that OP understood that the tuple wrapping (described in the answers here) was necessary, but simply got it wrong. For more information on that issue, see How to create a \"singleton\" tuple with only one element.", "output": ">>> thetuple = (1, 2, 3) >>> print(\"this is a tuple: %s\" % (thetuple,)) this is a tuple: (1, 2, 3) Making a singleton tuple with the tuple of interest as the only item, i.e. the (thetuple,) part, is the key bit here."} +{"question_id": 21986194, "score": 147, "creation_date": 1393240641, "tags": ["python", "function", "python-2.7", "dictionary"], "instruction": "How to pass dictionary items as function arguments in python?\n\nMy code 1st file: data = {'school':'DAV', 'standard': '7', 'name': 'abc', 'city': 'delhi'} my_function(*data) 2nd file: my_function(*data): schoolname = school cityname = city standard = standard studentname = name in the above code, only keys of \"data\" dictionary were get passed to my_function(), but i want key-value pairs to pass. How to correct this ? 
I want the my_function() to get modified like this my_function(school='DAV', standard='7', name='abc', city='delhi') and this is my requirement, give answers according to this EDIT: dictionary key class is changed to standard", "output": "If you want to use them like that, define the function with the variable names as normal: def my_function(school, standard, city, name): schoolName = school cityName = city standardName = standard studentName = name Now you can use ** when you call the function: data = {'school':'DAV', 'standard': '7', 'name': 'abc', 'city': 'delhi'} my_function(**data) and it will work as you want. P.S. Don't use reserved words such as class.(e.g., use klass instead)"} +{"question_id": 26139423, "score": 147, "creation_date": 1412159849, "tags": ["python", "pandas", "matplotlib", "seaborn", "scatter-plot"], "instruction": "plot different color for different categorical levels\n\nI have this data frame diamonds which is composed of variables like (carat, price, color), and I want to draw a scatter plot of price to carat for each color, which means different color has different color in the plot. This is easy in R with ggplot: ggplot(aes(x=carat, y=price, color=color), #by setting color=color, ggplot automatically draw in different colors data=diamonds) + geom_point(stat='summary', fun.y=median) I wonder how could this be done in Python using matplotlib ? PS: I know about auxiliary plotting packages, such as seaborn and ggplot for python, and I don't prefer them, just want to find out if it is possible to do the job using matplotlib alone, ;P", "output": "Imports and Sample DataFrame import matplotlib.pyplot as plt import pandas as pd import seaborn as sns # for sample data from matplotlib.lines import Line2D # for legend handle # DataFrame used for all options df = sns.load_dataset('diamonds') carat cut color clarity depth table price x y z 0 0.23 Ideal E SI2 61.5 55.0 326 3.95 3.98 2.43 1 0.21 Premium E SI1 59.8 61.0 326 3.89 3.84 2.31 2 0.23 Good E VS1 56.9 65.0 327 4.05 4.07 2.31 With matplotlib You can pass plt.scatter a c argument, which allows you to select the colors. The following code defines a colors dictionary to map the diamond colors to the plotting colors. fig, ax = plt.subplots(figsize=(6, 6)) colors = {'D':'tab:blue', 'E':'tab:orange', 'F':'tab:green', 'G':'tab:red', 'H':'tab:purple', 'I':'tab:brown', 'J':'tab:pink'} ax.scatter(df['carat'], df['price'], c=df['color'].map(colors)) # add a legend handles = [Line2D([0], [0], marker='o', color='w', markerfacecolor=v, label=k, markersize=8) for k, v in colors.items()] ax.legend(title='color', handles=handles, bbox_to_anchor=(1.05, 1), loc='upper left') plt.show() df['color'].map(colors) effectively maps the colors from \"diamond\" to \"plotting\". (Forgive me for not putting another example image up, I think 2 is enough :P) With seaborn You can use seaborn which is a wrapper around matplotlib that makes it look prettier by default (rather opinion-based, I know :P) but also adds some plotting functions. For this you could use seaborn.lmplot with fit_reg=False (which prevents it from automatically doing some regression). sns.scatterplot(x='carat', y='price', data=df, hue='color', ec=None) also does the same thing. Selecting hue='color' tells seaborn to split and plot the data based on the unique values in the 'color' column. 
sns.lmplot(x='carat', y='price', data=df, hue='color', fit_reg=False) With pandas.DataFrame.groupby & pandas.DataFrame.plot If you don't want to use seaborn, use pandas.groupby to get the colors alone, and then plot them using just matplotlib, but you'll have to manually assign colors as you go, I've added an example below: fig, ax = plt.subplots(figsize=(6, 6)) grouped = df.groupby('color') for key, group in grouped: group.plot(ax=ax, kind='scatter', x='carat', y='price', label=key, color=colors[key]) plt.show() This code assumes the same DataFrame as above, and then groups it based on color. It then iterates over these groups, plotting for each one. To select a color, I've created a colors dictionary, which can map the diamond color (for instance D) to a real color (for instance tab:blue)."} +{"question_id": 8494209, "score": 147, "creation_date": 1323800637, "tags": ["python", "django", "templates"], "instruction": "Modulus % in Django template\n\nI'm looking for a way to use something like the modulus operator in django. What I am trying to do is to add a classname to every fourth element in a loop. With modulus it would look like this: {% for p in posts %} <div class=\"post width1 height2 column {% if forloop.counter0 % 4 == 0 %}first{% endif %}}\"> <div class=\"preview\"> </div> <div class=\"overlay\"> </div> <h2>p.title</h2> </div> {% endfor %} Of course this doesn't work because % is a reserved character. Is there any other way to do this?", "output": "You need divisibleby, a built-in django filter. {% for p in posts %} <div class=\"post width1 height2 column {% if forloop.counter0|divisibleby:4 %}first{% endif %}\"> <div class=\"preview\"> </div> <div class=\"overlay\"> </div> <h2>p.title</h2> </div> {% endfor %}"} +{"question_id": 13369888, "score": 147, "creation_date": 1352845251, "tags": ["python", "matplotlib", "label"], "instruction": "matplotlib y-axis label on right side\n\nIs there a simple way to put the y-axis label on the right-hand side of the plot? I know that this can be done for the tick labels using ax.yaxis.tick_right(), but I would like to know if it can be done for the axis label as well. One idea which came to mind was to use ax.yaxis.tick_right() ax2 = ax.twinx() ax2.set_ylabel('foo') However, this doesn't have the desired effect of placing all labels (tick and axis labels) on the right-hand side, while preserving the extent of the y-axis. In short, I would like a way to move all the y-axis labels from the left to the right.", "output": "It looks like you can do it with: ax.yaxis.set_label_position(\"right\") ax.yaxis.tick_right() See here for an example. To do the same with the plt. interface, use the plt.gca() function to expose the current axes, as this function does not seem to be exposed through any plt function. plt.gca().yaxis.set_label_position(\"right\") plt.gca().yaxis.tick_right()"} +{"question_id": 28252585, "score": 147, "creation_date": 1422713726, "tags": ["python", "functional-programming", "pipeline"], "instruction": "Functional pipes in python like %>% from R's magrittr\n\nIn R (thanks to magrittr) you can now perform operations with a more functional piping syntax via %>%. This means that instead of coding this: > as.Date(\"2014-01-01\") > as.character((sqrt(12)^2) You could also do this: > \"2014-01-01\" %>% as.Date > 12 %>% sqrt %>% .^2 %>% as.character To me this is more readable and this extends to use cases beyond the dataframe. 
Does the python language have support for something similar?", "output": "Pipes are a new feature in Pandas 0.16.2. Example: import pandas as pd from sklearn.datasets import load_iris x = load_iris() x = pd.DataFrame(x.data, columns=x.feature_names) def remove_units(df): df.columns = pd.Index(map(lambda x: x.replace(\" (cm)\", \"\"), df.columns)) return df def length_times_width(df): df['sepal length*width'] = df['sepal length'] * df['sepal width'] df['petal length*width'] = df['petal length'] * df['petal width'] x.pipe(remove_units).pipe(length_times_width) x NB: The Pandas version retains Python's reference semantics. That's why length_times_width doesn't need a return value; it modifies x in place."} +{"question_id": 49198068, "score": 147, "creation_date": 1520612710, "tags": ["python", "pandas", "dataframe", "timezone", "timestamp-with-timezone"], "instruction": "How to remove timezone from a Timestamp column in pandas\n\nI read Pandas change timezone for forex DataFrame but I'd like to make the time column of my dataframe timezone naive for interoperability with an sqlite3 database. The data in my pandas dataframe is already converted to UTC data, but I do not want to have to maintain this UTC timezone information in the database. Given a sample of the data derived from other sources, it looks like this: print(type(testdata)) print(testdata) print(testdata.applymap(type)) gives: <class 'pandas.core.frame.DataFrame'> time navd88_ft station_id new 0 2018-03-07 01:31:02+00:00 -0.030332 13 5 1 2018-03-07 01:21:02+00:00 -0.121653 13 5 2 2018-03-07 01:26:02+00:00 -0.072945 13 5 3 2018-03-07 01:16:02+00:00 -0.139917 13 5 4 2018-03-07 01:11:02+00:00 -0.152085 13 5 time navd88_ft station_id \\ 0 <class 'pandas._libs.tslib.Timestamp'> <class 'float'> <class 'int'> 1 <class 'pandas._libs.tslib.Timestamp'> <class 'float'> <class 'int'> 2 <class 'pandas._libs.tslib.Timestamp'> <class 'float'> <class 'int'> 3 <class 'pandas._libs.tslib.Timestamp'> <class 'float'> <class 'int'> 4 <class 'pandas._libs.tslib.Timestamp'> <class 'float'> <class 'int'> new 0 <class 'int'> 1 <class 'int'> 2 <class 'int'> 3 <class 'int'> 4 <class 'int'> but newstamp = testdata['time'].tz_convert(None) gives an eventual error: TypeError: index is not a valid DatetimeIndex or PeriodIndex What do I do to replace the column with a timezone naive timestamp?", "output": "The column must be a datetime dtype, for example after using pd.to_datetime. 
Then, you can use tz_localize to change the time zone, a naive timestamp corresponds to time zone None: testdata['time'].dt.tz_localize(None) Unless the column is an index (DatetimeIndex), the .dt accessor must be used to access pandas datetime functions."} +{"question_id": 52063759, "score": 147, "creation_date": 1535478872, "tags": ["python", "python-3.x", "python-dataclasses"], "instruction": "Passing default list argument to dataclasses\n\nI would like to pass default argument in my class, but somehow I am having problem: from dataclasses import dataclass, field from typing import List @dataclass class Pizza(): ingredients: List = field(default_factory=['dow', 'tomatoes']) meat: str = field(default='chicken') def __repr__(self): return 'preparing_following_pizza {} {}'.format(self.ingredients, self.meat) If I now try to instantiate Pizza, I get the following error: >>> my_order = Pizza() Traceback (most recent call last): File \"pizza.py\", line 13, in <module> Pizza() File \"<string>\", line 2, in __init__ TypeError: 'list' object is not callable What am I doing wrong?", "output": "From the dataclasses.field docs: The parameters to field() are: default_factory: If provided, it must be a zero-argument callable that will be called when a default value is needed for this field. Among other purposes, this can be used to specify fields with mutable default values, as discussed below. It is an error to specify both default and default_factory. Your default_factory is not a 0-argument callable but a list, which is the reason for the error: from dataclasses import dataclass, field from typing import List @dataclass class Pizza(): ingredients: List = field(default_factory=['dow', 'tomatoes']) # <- wrong! Use a lambda function instead: @dataclass class Pizza(): ingredients: List = field(default_factory=lambda: ['dow', 'tomatoes'])"} +{"question_id": 15799696, "score": 147, "creation_date": 1365029771, "tags": ["python"], "instruction": "How to build URLs in Python with the standard library?\n\nMany languages have standard APIs for building and parsing URLS from - or into parts like scheme, host, port, query-param. Some allow me to start with a base URL and add paths. How do I do this in python using the standard library? For example, how would I build the URL: http://subdomain.domain.com?arg1=someargument&arg2=someotherargument given a base of \"http://subdomain.domain.com\" and some query params. Or given the scheme: \"http\", the host \"subdomain.domain.com\", etc?", "output": "urlparse in the python standard library is all about building valid urls. Check the documentation of urlparse Example: from collections import namedtuple from urllib.parse import urljoin, urlencode, urlparse, urlunparse # namedtuple to match the internal signature of urlunparse Components = namedtuple( typename='Components', field_names=['scheme', 'netloc', 'url', 'path', 'query', 'fragment'] ) query_params = { 'param1': 'some data', 'param2': 42 } url = urlunparse( Components( scheme='https', netloc='example.com', query=urlencode(query_params), path='', url='/', fragment='anchor' ) ) print(url) Output: https://example.com/?param1=some+data¶m2=42#anchor"} +{"question_id": 45400284, "score": 147, "creation_date": 1501421133, "tags": ["python", "class", "subclass", "python-3.6", "metaclass"], "instruction": "Understanding __init_subclass__\n\nI finally upgraded my python version and I was discovering the new features added. Among other things, I was scratching my head around the new __init_subclass__ method. 
From the docs: This method is called whenever the containing class is subclassed. cls is then the new subclass. If defined as a normal instance method, this method is implicitly converted to a class method. So I started to playing around with it a little bit, following the example in the docs: class Philosopher: def __init_subclass__(cls, default_name, **kwargs): super().__init_subclass__(**kwargs) print(f\"Called __init_subclass({cls}, {default_name})\") cls.default_name = default_name class AustralianPhilosopher(Philosopher, default_name=\"Bruce\"): pass class GermanPhilosopher(Philosopher, default_name=\"Nietzsche\"): default_name = \"Hegel\" print(\"Set name to Hegel\") Bruce = AustralianPhilosopher() Mistery = GermanPhilosopher() print(Bruce.default_name) print(Mistery.default_name) Produces this output: Called __init_subclass(<class '__main__.AustralianPhilosopher'>, 'Bruce') 'Set name to Hegel' Called __init_subclass(<class '__main__.GermanPhilosopher'>, 'Nietzsche') 'Bruce' 'Nietzsche' I understand that this method is called after the subclass definition, but my questions are particularly about the usage of this feature. I read the PEP 487 article as well, but didn't help me much. Where would this method be helpful? Is it for: the superclass to register the subclasses upon creation? forcing the subclass to set a field at definition time? Also, do I need to understand the __set_name__ to fully comprehend its usage?", "output": "__init_subclass__ and __set_name__ are orthogonal mechanisms - they're not tied to each other, just described in the same PEP. Both are features that needed a full-featured metaclass before. The PEP 487 addresses two of the most common uses of metaclasses: how to let the parent know when it is being subclassed (__init_subclass__) how to let a descriptor class know the name of the property it is used for (__set_name__) As PEP 487 says: While there are many possible ways to use a metaclass, the vast majority of use cases falls into just three categories: some initialization code running after class creation, the initialization of descriptors and keeping the order in which class attributes were defined. The first two categories can easily be achieved by having simple hooks into the class creation: An __init_subclass__ hook that initializes all subclasses of a given class. upon class creation, a __set_name__ hook is called on all the attribute (descriptors) defined in the class, and The third category is the topic of another PEP, PEP 520. Notice also, that while __init_subclass__ is a replacement for using a metaclass in this class's inheritance tree, __set_name__ in a descriptor class is a replacement for using a metaclass for the class that has an instance of the descriptor as an attribute."} +{"question_id": 21289806, "score": 147, "creation_date": 1390411758, "tags": ["python", "spyder", "python-sphinx", "cross-reference"], "instruction": "Link to class method in Python docstring\n\nI want to add a link to a method in my class from within the docstring of another method of the same class. I want the link to work in Sphinx and preferentially also in Spyder and other Python IDEs. I tried several options and found only one that works, but it's cumbersome. 
Suppose the following structure in mymodule.py def class MyClass(): def foo(self): print 'foo' def bar(self): \"\"\"This method does the same as <link to foo>\"\"\" print 'foo' I tried the following options for <link to foo>: :func:`foo` :func:`self.foo` :func:`MyClass.foo` :func:`mymodule.MyClass.foo` The only one that effectively produces a link is :func:`mymodule.MyClass.foo` , but the link is shown as mymodule.MyClass.foo() and I want a link that is shown as foo() or foo. None of the options above produces a link in Spyder.", "output": "The solution that works for Sphinx is to prefix the reference with ~. Per the Sphinx documentation on Cross-referencing Syntax, If you prefix the content with ~, the link text will only be the last component of the target. For example, :py:meth:`~Queue.Queue.get` will refer to Queue.Queue.get but only display get as the link text. So the answer is: class MyClass(): def foo(self): print 'foo' def bar(self): \"\"\"This method does the same as :func:`~mymodule.MyClass.foo`\"\"\" print 'foo' This results in an HTML looking like this : This method does the same as foo(), and foo() is a link. However, note that this may not display in Spyder as a link."} +{"question_id": 49083680, "score": 147, "creation_date": 1520076843, "tags": ["python", "tensorflow", "tensorboard"], "instruction": "How are the new tf.contrib.summary summaries in TensorFlow evaluated?\n\nI'm having a bit of trouble understanding the new tf.contrib.summary API. In the old one, it seemed that all one was supposed to do was to run tf.summary.merge_all() and run that as an op. But now we have things like tf.contrib.summary.record_summaries_every_n_global_steps, which can be used like this: import tensorflow.contrib.summary as tfsum summary_writer = tfsum.create_file_writer(logdir, flush_millis=3000) summaries = [] # First we create one summary which runs every n global steps with summary_writer.as_default(), tfsum.record_summaries_every_n_global_steps(30): summaries.append(tfsum.scalar(\"train/loss\", loss)) # And then one that runs every single time? with summary_writer.as_default(), tfsum.always_record_summaries(): summaries.append(tfsum.scalar(\"train/accuracy\", accuracy)) # Then create an optimizer which uses a global step step = tf.create_global_step() train = tf.train.AdamOptimizer().minimize(loss, global_step=step) And now come a few questions: If we just run session.run(summaries) in a loop, I assume that the accuracy summary would get written every single time, while the loss one wouldn't, because it only gets written if the global step is divisible by 30? Assuming the summaries automatically evaluate their dependencies, I never need to run session.run([accuracy, summaries]) but can just run, session.run(summaries) since they have a dependency in the graph, right? If 2) is true, can't I just add a control dependency to the training step so that the summaries are written on every train run? Or is this a bad practice? Is there any downside to using control dependencies in general for things that are going to be evaluated at the same time anyway? Why does tf.contrib.summary.scalar (and others) take in a step parameter? 
By adding a control dependency in 3) I mean doing this: tf.control_dependencies(summaries): train = tf.train.AdamOptimizer().minimize(loss, global_step=step)", "output": "answer moved from edit to self-answer as requested I just played around with this a little bit, and it seems that if one combines tf.control_dependencies with tf.record_summaries_every_n_global_steps it behaves as expected and the summary only gets recorded every nth step. But if they are run together within a session, such as session.run([train, summs]), the summaries are stored every once in a while, but not exactly every nth step. I tested this with n=2 and with the second approach the summary was often written at odd steps, while with the control dependency approach it was always on an even step."} +{"question_id": 3963201, "score": 146, "creation_date": 1287434490, "tags": ["python", "django"], "instruction": "Select between two dates with Django\n\nI am looking to make a query that selects between dates with Django. I know how to do this with raw SQL pretty easily, but how could this be achieved using the Django ORM? This is where I want to add the between dates of 30 days in my query: start_date = datetime.datetime.now() + datetime.timedelta(-30) context[self.varname] = self.model._default_manager.filter( current_issue__isnull=True ).live().order_by('-created_at')", "output": "Use the __range operator: ...filter(current_issue__isnull=True, created_at__range=(start_date, end_date))"} +{"question_id": 30686701, "score": 146, "creation_date": 1433618622, "tags": ["python"], "instruction": "Python: Get size of string in bytes\n\nI have a string that is to be sent over a network. I need to check the total bytes it is represented in. sys.getsizeof(string_name) returns extra bytes. For example, for sys.getsizeof(\"a\") returns 22, while one character is only represented in 1 byte in Python. Is there some other method to find this?", "output": "If you want the number of bytes in a string, this function should do it for you pretty solidly. def utf8len(s): return len(s.encode('utf-8')) The reason you got weird numbers is because encapsulated in a string is a bunch of other information due to the fact that strings are actual objects in Python. It\u2019s interesting because if you look at my solution to encode the string into 'utf-8', there's an 'encode' method on the 's' object (which is a string). Well, it needs to be stored somewhere right? Hence, the higher than normal byte count. Its including that method, along with a few others :)."} +{"question_id": 43162506, "score": 146, "creation_date": 1491084341, "tags": ["python", "scikit-learn"], "instruction": "UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 in labels with no predicted samples\n\nI'm getting this weird error: classification.py:1113: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 in labels with no predicted samples. 'precision', 'predicted', average, warn_for)` but then it also prints the f-score the first time I run: metrics.f1_score(y_test, y_pred, average='weighted') The second time I run, it provides the score without error. Why is that? 
>>> y_pred = test.predict(X_test) >>> y_test array([ 1, 10, 35, 9, 7, 29, 26, 3, 8, 23, 39, 11, 20, 2, 5, 23, 28, 30, 32, 18, 5, 34, 4, 25, 12, 24, 13, 21, 38, 19, 33, 33, 16, 20, 18, 27, 39, 20, 37, 17, 31, 29, 36, 7, 6, 24, 37, 22, 30, 0, 22, 11, 35, 30, 31, 14, 32, 21, 34, 38, 5, 11, 10, 6, 1, 14, 12, 36, 25, 8, 30, 3, 12, 7, 4, 10, 15, 12, 34, 25, 26, 29, 14, 37, 23, 12, 19, 19, 3, 2, 31, 30, 11, 2, 24, 19, 27, 22, 13, 6, 18, 20, 6, 34, 33, 2, 37, 17, 30, 24, 2, 36, 9, 36, 19, 33, 35, 0, 4, 1]) >>> y_pred array([ 1, 10, 35, 7, 7, 29, 26, 3, 8, 23, 39, 11, 20, 4, 5, 23, 28, 30, 32, 18, 5, 39, 4, 25, 0, 24, 13, 21, 38, 19, 33, 33, 16, 20, 18, 27, 39, 20, 37, 17, 31, 29, 36, 7, 6, 24, 37, 22, 30, 0, 22, 11, 35, 30, 31, 14, 32, 21, 34, 38, 5, 11, 10, 6, 1, 14, 30, 36, 25, 8, 30, 3, 12, 7, 4, 10, 15, 12, 4, 22, 26, 29, 14, 37, 23, 12, 19, 19, 3, 25, 31, 30, 11, 25, 24, 19, 27, 22, 13, 6, 18, 20, 6, 39, 33, 9, 37, 17, 30, 24, 9, 36, 39, 36, 19, 33, 35, 0, 4, 1]) >>> metrics.f1_score(y_test, y_pred, average='weighted') C:\\Users\\Michael\\Miniconda3\\envs\\snowflakes\\lib\\site-packages\\sklearn\\metrics\\classification.py:1113: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 in labels with no predicted samples. 'precision', 'predicted', average, warn_for) 0.87282051282051276 >>> metrics.f1_score(y_test, y_pred, average='weighted') 0.87282051282051276 >>> metrics.f1_score(y_test, y_pred, average='weighted') 0.87282051282051276 Also, why is there a trailing 'precision', 'predicted', average, warn_for) error message? There is no open parenthesis so why does it end with a closing parenthesis? I am running sklearn 0.18.1 using Python 3.6.0 in a conda environment on Windows 10. I also looked at here and I don't know if it's the same bug. This SO post doesn't have solution either.", "output": "As mentioned in the comments, some labels in y_test don't appear in y_pred. Specifically in this case, label '2' is never predicted: >>> set(y_test) - set(y_pred) {2} This means that there is no F-score to calculate for this label, and thus the F-score for this case is considered to be 0.0. Since you requested an average of the score, you must take into account that a score of 0 was included in the calculation, and this is why scikit-learn is showing you that warning. This brings me to you not seeing the error a second time. As I mentioned, this is a warning, which is treated differently from an error in python. The default behavior in most environments is to show a specific warning only once. This behavior can be changed: import warnings warnings.filterwarnings('always') # \"error\", \"ignore\", \"always\", \"default\", \"module\" or \"once\" If you set this before importing the other modules, you will see the warning every time you run the code. There is no way to avoid seeing this warning the first time, aside for setting warnings.filterwarnings('ignore'). What you can do, is decide that you are not interested in the scores of labels that were not predicted, and then explicitly specify the labels you are interested in (which are labels that were predicted at least once): >>> metrics.f1_score(y_test, y_pred, average='weighted', labels=np.unique(y_pred)) 0.91076923076923078 The warning will be gone."} +{"question_id": 6074018, "score": 146, "creation_date": 1305904748, "tags": ["python", "comparison-operators"], "instruction": "Why does the expression 0 < 0 == 0 return False in Python? 
(How are chained comparisons interpreted?)\n\nI found a strange construct in this code from the standard library Queue.py in Python 2.6: def full(self): \"\"\"Return True if the queue is full, False otherwise (not reliable!).\"\"\" self.mutex.acquire() n = 0 < self.maxsize == self._qsize() self.mutex.release() return n Apparently, if maxsize is 0 the queue is never full. But how does this work? Why does 0 < 0 == 0 evaluate as False? No matter which operation is applied first, it seems like the result should be True, and indeed I get that result when adding parentheses: >>> 0 < 0 == 0 False >>> (0 < 0) == 0 True >>> 0 < (0 == 0) True See also Is \"x < y < z\" faster than \"x < y and y < z\"? for details on how the feature is implemented.", "output": "Python has special case handling for sequences of relational operators to make range comparisons easy to express. It's much nicer to be able to say 0 < x <= 5 than to say (0 < x) and (x <= 5). These are called chained comparisons. With the other cases you talk about, the parentheses force one relational operator to be applied before the other, and so they are no longer chained comparisons. And since True and False have values as integers you get the answers you do out of the parenthesized versions."} +{"question_id": 72133316, "score": 145, "creation_date": 1651782719, "tags": ["python", "linux", "openssl", "python-poetry"], "instruction": "libssl.so.1.1: cannot open shared object file: No such file or directory\n\nI've just updated to Ubuntu 22.04 LTS and my libs using OpenSSL just stopped working. Looks like Ubuntu switched to the version 3.0 of OpenSSL. For example, poetry stopped working: Traceback (most recent call last): File \"/home/robz/.local/bin/poetry\", line 5, in <module> from poetry.console import main File \"/home/robz/.local/share/pypoetry/venv/lib/python3.9/site-packages/poetry/console/__init__.py\", line 1, in <module> from .application import Application File \"/home/robz/.local/share/pypoetry/venv/lib/python3.9/site-packages/poetry/console/application.py\", line 7, in <module> from .commands.about import AboutCommand File \"/home/robz/.local/share/pypoetry/venv/lib/python3.9/site-packages/poetry/console/commands/__init__.py\", line 4, in <module> from .check import CheckCommand File \"/home/robz/.local/share/pypoetry/venv/lib/python3.9/site-packages/poetry/console/commands/check.py\", line 2, in <module> from poetry.factory import Factory File \"/home/robz/.local/share/pypoetry/venv/lib/python3.9/site-packages/poetry/factory.py\", line 18, in <module> from .repositories.pypi_repository import PyPiRepository File \"/home/robz/.local/share/pypoetry/venv/lib/python3.9/site-packages/poetry/repositories/pypi_repository.py\", line 33, in <module> from ..inspection.info import PackageInfo File \"/home/robz/.local/share/pypoetry/venv/lib/python3.9/site-packages/poetry/inspection/info.py\", line 25, in <module> from poetry.utils.env import EnvCommandError File \"/home/robz/.local/share/pypoetry/venv/lib/python3.9/site-packages/poetry/utils/env.py\", line 23, in <module> import virtualenv File \"/home/robz/.local/share/pypoetry/venv/lib/python3.9/site-packages/virtualenv/__init__.py\", line 3, in <module> from .run import cli_run, session_via_cli File \"/home/robz/.local/share/pypoetry/venv/lib/python3.9/site-packages/virtualenv/run/__init__.py\", line 11, in <module> from ..seed.wheels.periodic_update import manual_upgrade File \"/home/robz/.local/share/pypoetry/venv/lib/python3.9/site-packages/virtualenv/seed/wheels/__init__.py\", 
line 3, in <module> from .acquire import get_wheel, pip_wheel_env_run File \"/home/robz/.local/share/pypoetry/venv/lib/python3.9/site-packages/virtualenv/seed/wheels/acquire.py\", line 12, in <module> from .bundle import from_bundle File \"/home/robz/.local/share/pypoetry/venv/lib/python3.9/site-packages/virtualenv/seed/wheels/bundle.py\", line 4, in <module> from .periodic_update import periodic_update File \"/home/robz/.local/share/pypoetry/venv/lib/python3.9/site-packages/virtualenv/seed/wheels/periodic_update.py\", line 10, in <module> import ssl File \"/home/robz/.pyenv/versions/3.9.10/lib/python3.9/ssl.py\", line 98, in <module> import _ssl # if we can't import it, let the error propagate ImportError: libssl.so.1.1: cannot open shared object file: No such file or directory Is there an easy fix ? For example, having libssl.so.1.1 available without having to uninstall OpenSSL 3 (I don't know if it's even possible).", "output": "This fixes it (a problem with packaging in 22.04): wget http://nz2.archive.ubuntu.com/ubuntu/pool/main/o/openssl/libssl1.1_1.1.1f-1ubuntu2_amd64.deb sudo dpkg -i libssl1.1_1.1.1f-1ubuntu2_amd64.deb PS: If the link is expired, check http://nz2.archive.ubuntu.com/ubuntu/pool/main/o/openssl/?C=M;O=D for a valid one. Current version (as of 2025-10-16) is: libssl1.1_1.1.1f-1ubuntu2_amd64.deb"} +{"question_id": 8122079, "score": 145, "creation_date": 1321276204, "tags": ["python", "string", "list", "substring"], "instruction": "Python: How to check a string for substrings from a list?\n\nHow can I check a string for substrings contained in a list, like in Check if a string contains an element from a list (of strings), but in Python?", "output": "Try this test: any(substring in string for substring in substring_list) It will return True if any of the substrings in substring_list is contained in string. Note that there is a Python analogue of Marc Gravell's answer in the linked question: from itertools import imap any(imap(string.__contains__, substring_list)) In Python 3, you can use map directly instead: any(map(string.__contains__, substring_list)) Probably the above version using a generator expression is more clear though."} +{"question_id": 110923, "score": 145, "creation_date": 1222000804, "tags": ["python", "tkinter"], "instruction": "How do I close a tkinter window?\n\nHow do I end a Tkinter program? Let's say I have this code: from Tkinter import * def quit(): # code to exit root = Tk() Button(root, text=\"Quit\", command=quit).pack() root.mainloop() How should I define the quit function to exit my application?", "output": "You should use destroy() to close a Tkinter window. from Tkinter import * #use tkinter instead of Tkinter (small, not capital T) if it doesn't work #as it was changed to tkinter in newer Python versions root = Tk() Button(root, text=\"Quit\", command=root.destroy).pack() #button to close the window root.mainloop() Explanation: root.quit() The above line just bypasses the root.mainloop(), i.e., root.mainloop() will still be running in the background if quit() command is executed. root.destroy() While destroy() command vanishes out root.mainloop(), i.e., root.mainloop() stops. <window>.destroy() completely destroys and closes the window. So, if you want to exit and close the program completely, you should use root.destroy(), as it stops the mainloop() and destroys the window and all its widgets. 
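The following minimal sketch (it needs a display to run) illustrates that difference: after quit() the mainloop() call returns but the window object is still alive, whereas destroy() removes it for good.

import tkinter as tk

root = tk.Tk()
tk.Button(root, text="Quit", command=root.quit).pack()
root.mainloop()               # returns as soon as quit() is triggered

print(root.winfo_exists())    # 1 -> the (now unresponsive) window and its widgets still exist
root.destroy()                # only now is the window actually torn down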
But if you want to run some infinite loop and don't want to destroy your Tkinter window and want to execute some code after the root.mainloop() line, you should use root.quit(). Example: from Tkinter import * def quit(): global root root.quit() root = Tk() while True: Button(root, text=\"Quit\", command=quit).pack() root.mainloop() #do something See What is the difference between root.destroy() and root.quit()?."} +{"question_id": 27365467, "score": 145, "creation_date": 1418067165, "tags": ["python", "pandas", "matplotlib", "time-series"], "instruction": "Can Pandas plot a histogram of dates?\n\nI've taken my Series and coerced it to a datetime column of dtype=datetime64[ns] (though only need day resolution...not sure how to change). import pandas as pd df = pd.read_csv('somefile.csv') column = df['date'] column = pd.to_datetime(column, coerce=True) but plotting doesn't work: ipdb> column.plot(kind='hist') *** TypeError: ufunc add cannot use operands with types dtype('<M8[ns]') and dtype('float64') I'd like to plot a histogram that just shows the count of dates by week, month, or year. Surely there is a way to do this in pandas?", "output": "Given this df: date 0 2001-08-10 1 2002-08-31 2 2003-08-29 3 2006-06-21 4 2002-03-27 5 2003-07-14 6 2004-06-15 7 2003-08-14 8 2003-07-29 and, if it's not already the case: df[\"date\"] = df[\"date\"].astype(\"datetime64\") To show the count of dates by month: df.groupby(df[\"date\"].dt.month).count().plot(kind=\"bar\") .dt allows you to access the datetime properties. Which will give you: You can replace month by year, day, etc.. If you want to distinguish year and month for instance, just do: df.groupby([df[\"date\"].dt.year, df[\"date\"].dt.month]).count().plot(kind=\"bar\") Which gives:"} +{"question_id": 13182075, "score": 145, "creation_date": 1351789644, "tags": ["python", "datetime", "timezone", "rfc3339"], "instruction": "How to convert a timezone aware string to datetime in Python without dateutil?\n\nI have to convert a timezone-aware string like \"2012-11-01T04:16:13-04:00\" to a Python datetime object. I saw the dateutil module which has a parse function, but I don't really want to use it as it adds a dependency. So how can I do it? I have tried something like the following, but with no luck. datetime.datetime.strptime(\"2012-11-01T04:16:13-04:00\", \"%Y-%m-%dT%H:%M:%S%Z\")", "output": "As of Python 3.7, datetime.datetime.fromisoformat() can handle your format: >>> import datetime >>> datetime.datetime.fromisoformat('2012-11-01T04:16:13-04:00') datetime.datetime(2012, 11, 1, 4, 16, 13, tzinfo=datetime.timezone(datetime.timedelta(days=-1, seconds=72000))) In older Python versions you can't, not without a whole lot of painstaking manual timezone defining. Python versions before version 3.9 do not include a timezone database, because it would be outdated too quickly. Instead, for those versions Python relied on external libraries, which can have a far faster release cycle, to provide properly configured timezones for you. As a side-effect, this means that timezone parsing also needs to be an external library. If dateutil is too heavy-weight for you, use iso8601 instead, it'll parse your specific format just fine: >>> import iso8601 >>> iso8601.parse_date('2012-11-01T04:16:13-04:00') datetime.datetime(2012, 11, 1, 4, 16, 13, tzinfo=<FixedOffset '-04:00'>) iso8601 is a whopping 4KB small. Compare that tot python-dateutil's 148KB. 
As of Python 3.2 Python can handle simple offset-based timezones, and %z will parse -hhmm and +hhmm timezone offsets in a timestamp. That means that for a ISO 8601 timestamp you'd have to remove the : in the timezone: >>> from datetime import datetime >>> iso_ts = '2012-11-01T04:16:13-04:00' >>> datetime.strptime(''.join(iso_ts.rsplit(':', 1)), '%Y-%m-%dT%H:%M:%S%z') datetime.datetime(2012, 11, 1, 4, 16, 13, tzinfo=datetime.timezone(datetime.timedelta(-1, 72000))) The lack of proper ISO 8601 parsing was being tracked in Python issue 15873 (since migrated to GitHub issue #60077)."} +{"question_id": 52468956, "score": 145, "creation_date": 1537726504, "tags": ["python", "pytorch"], "instruction": "How do I visualize a net in Pytorch?\n\nConsider: import torch import torch.nn as nn import torch.optim as optim import torch.utils.data as data import torchvision.models as models import torchvision.datasets as dset import torchvision.transforms as transforms from torch.autograd import Variable from torchvision.models.vgg import model_urls from torchviz import make_dot batch_size = 3 learning_rate =0.0002 epoch = 50 resnet = models.resnet50(pretrained=True) print resnet make_dot(resnet) I want to visualize resnet from the PyTorch models. How can I do it? I tried to use torchviz, but it gives an error: 'ResNet' object has no attribute 'grad_fn'", "output": "The make_dot expects a variable (i.e., tensor with grad_fn), not the model itself. try: x = torch.zeros(1, 3, 224, 224, dtype=torch.float, requires_grad=False) out = resnet(x) make_dot(out) # plot graph of variable, not of a nn.Module"} +{"question_id": 49482969, "score": 145, "creation_date": 1522030058, "tags": ["python", "asynchronous", "python-asyncio", "python-trio", "curio"], "instruction": "What is the core difference between asyncio and trio?\n\nToday, I found a library named trio which says itself is an asynchronous API for humans. These words are a little similar with requests'. As requests is really a good library, I am wondering what is the advantages of trio. There aren't many articles about it, I just find an article discussing curio and asyncio. To my surprise, trio says itself is even better than curio(next-generation curio). After reading half of the article, I cannot find the core difference between these two asynchronous framework. It just gives some examples that curio's implementation is more convenient than asyncio's. But the underlying structure is almost the same. So could someone give me a reason I have to accept that trio or curio is better than asyncio? Or explain more about why I should choose trio instead of built-in asyncio?", "output": "Where I'm coming from: I'm the primary author of trio. I'm also one of the top contributors to curio (and wrote the article about it that you link to), and a Python core dev who's been heavily involved in discussions about how to improve asyncio. In trio (and curio), one of the core design principles is that you never program with callbacks; it feels more like thread-based programming than callback-based programming. I guess if you open up the hood and look at how they're implemented internally, then there are places where they use callbacks, or things that are sorta equivalent to callbacks if you squint. But that's like saying that Python and C are equivalent because the Python interpreter is implemented in C. You never use callbacks. Anyway: Trio vs asyncio Asyncio is more mature The first big difference is ecosystem maturity. 
At the time I'm writing this in March 2018, there are many more libraries with asyncio support than trio support. For example, right now there aren't any real HTTP servers with trio support. The Framework :: AsyncIO classifier on PyPI currently has 122 libraries in it, while the Framework :: Trio classifier only has 8. I'm hoping that this part of the answer will become out of date quickly \u2013 for example, here's Kenneth Reitz experimenting with adding trio support in the next version of requests \u2013 but right now, you should expect that if you're trio for anything complicated, then you'll run into missing pieces that you need to fill in yourself instead of grabbing a library from pypi, or that you'll need to use the trio-asyncio package that lets you use asyncio libraries in trio programs. (The trio chat channel is useful for finding out about what's available, and what other people are working on.) Trio makes your code simpler In terms of the actual libraries, they're also very different. The main argument for trio is that it makes writing concurrent code much, much simpler than using asyncio. Of course, when was the last time you heard someone say that their library makes things harder to use... let me give a concrete example. In this talk (slides), I use the example of implementing RFC 8305 \"Happy eyeballs\", which is a simple concurrent algorithm used to efficiently establish a network connection. This is something that Glyph has been thinking about for years, and his latest version for Twisted is ~600 lines long. (Asyncio would be about the same; Twisted and asyncio are very similar architecturally.) In the talk, I teach you everything you need to know to implement it in <40 lines using trio (and we fix a bug in his version while we're at it). So in this example, using trio literally makes our code an order of magnitude simpler. You might also find these comments from users interesting: 1, 2, 3 There are many many differences in detail Why does this happen? That's a much longer answer :-). I'm gradually working on writing up the different pieces in blog posts and talks, and I'll try to remember to update this answer with links as they become available. Basically, it comes down to Trio having a small set of carefully designed primitives that have a few fundamental differences from any other library I know of (though of course build on ideas from lots of places). Here are some random notes to give you some idea: A very, very common problem in asyncio and related libraries is that you call some_function(), and it returns, so you think it's done \u2013 but actually it's still running in the background. This leads to all kinds of tricky bugs, because it makes it difficult to control the order in which things happen, or know when anything has actually finished, and it can directly hide problems because if a background task crashes with an unhandled exception, asyncio will generally just print something to the console and then keep going. In trio, the way we handle task spawning via \"nurseries\" means that none of these things happen: when a function returns then you know it's done, and Trio's currently the only concurrency library for Python where exceptions always propagate until you catch them. Trio's way of managing timeouts and cancellations is novel, and I think better than previous state-of-the-art systems like C# and Golang. I actually did write a whole essay on this, so I won't go into all the details here. 
But asyncio's cancellation system \u2013 or really, systems, it has two of them with slightly different semantics \u2013 are based on an older set of ideas than even C# and Golang, and are difficult to use correctly. (For example, it's easy for code to accidentally \"escape\" a cancellation by spawning a background task; see previous paragraph.) There's a ton of redundant stuff in asyncio, which can make it hard to tell which thing to use when. You have futures, tasks, and coroutines, which are all basically used for the same purpose but you need to know the differences between them. If you want to implement a network protocol, you have to pick whether to use the protocols/transports layer or the streams layer, and they both have tricky pitfalls (this is what the first part of the essay you linked is about). Trio's currently the only concurrency library for Python where control-C just works the way you expect (i.e., it raises KeyboardInterrupt where-ever your code is). It's a small thing, but it makes a big difference :-). For various reasons, I don't think this is fixable in asyncio. Summing up If you need to ship something to production next week, then you should use asyncio (or Twisted or Tornado or gevent, which are even more mature). They have large ecosystems, other people have used them in production before you, and they're not going anywhere. If trying to use those frameworks leaves you frustrated and confused, or if want to experiment with a different way of doing things, then definitely check out trio \u2013 we're friendly :-). If you want to ship something to production a year from now... then I'm not sure what to tell you. Python concurrency is in flux. Trio has many advantages at the design level, but is that enough to overcome asyncio's head start? Will asyncio being in the standard library be an advantage, or a disadvantage? (Notice how these days everyone uses requests, even though the standard library has urllib.) How many of the new ideas in trio can be added to asyncio? No-one knows. I expect that there will be a lot of interesting discussions about this at PyCon this year :-)."} +{"question_id": 25812255, "score": 145, "creation_date": 1410537378, "tags": ["python", "matplotlib", "subplot"], "instruction": "Row and column headers in matplotlib's subplots\n\nWhat's the best practise to add a row and a column header to a grid of subplots generated in a loop in matplotlib? I can think of a couple, but not particularly neat: For columns, with a counter to your loop you can use set_title() for the first row only. For rows this doesn't work. You would have to draw text outside of the plots. You add an extra row of subplots on top and an extra column of subplots on the left, and draw text in the middle of that subplot. Can you suggest a better alternative?", "output": "There are several ways to do this. The easy way is to exploit the y-labels and titles of the plot and then use fig.tight_layout() to make room for the labels. Alternatively, you can place additional text in the right location with annotate and then make room for it semi-manually. If you don't have y-labels on your axes, it's easy to exploit the title and y-label of the first row and column of axes. 
import matplotlib.pyplot as plt cols = ['Column {}'.format(col) for col in range(1, 4)] rows = ['Row {}'.format(row) for row in ['A', 'B', 'C', 'D']] fig, axes = plt.subplots(nrows=4, ncols=3, figsize=(12, 8)) for ax, col in zip(axes[0], cols): ax.set_title(col) for ax, row in zip(axes[:,0], rows): ax.set_ylabel(row, rotation=0, size='large') fig.tight_layout() plt.show() If you do have y-labels, or if you prefer a bit more flexibility, you can use annotate to place the labels. This is more complicated, but allows you to have individual plot titles, ylabels, etc in addition to the row and column labels. import matplotlib.pyplot as plt from matplotlib.transforms import offset_copy cols = ['Column {}'.format(col) for col in range(1, 4)] rows = ['Row {}'.format(row) for row in ['A', 'B', 'C', 'D']] fig, axes = plt.subplots(nrows=4, ncols=3, figsize=(12, 8)) plt.setp(axes.flat, xlabel='X-label', ylabel='Y-label') pad = 5 # in points for ax, col in zip(axes[0], cols): ax.annotate(col, xy=(0.5, 1), xytext=(0, pad), xycoords='axes fraction', textcoords='offset points', size='large', ha='center', va='baseline') for ax, row in zip(axes[:,0], rows): ax.annotate(row, xy=(0, 0.5), xytext=(-ax.yaxis.labelpad - pad, 0), xycoords=ax.yaxis.label, textcoords='offset points', size='large', ha='right', va='center') fig.tight_layout() # tight_layout doesn't take these labels into account. We'll need # to make some room. These numbers are are manually tweaked. # You could automatically calculate them, but it's a pain. fig.subplots_adjust(left=0.15, top=0.95) plt.show()"} +{"question_id": 4005318, "score": 145, "creation_date": 1287856464, "tags": ["python", "hash"], "instruction": "How to implement a good __hash__ function in python\n\nWhen implementing a class with multiple properties (like in the toy example below), what is the best way to handle hashing? I guess that the __eq__ and __hash__ should be consistent, but how to implement a proper hash function that is capable of handling all the properties? class AClass: def __init__(self): self.a = None self.b = None def __eq__(self, other): return other and self.a == other.a and self.b == other.b def __ne__(self, other): return not self.__eq__(other) def __hash__(self): return hash((self.a, self.b)) I read on this question that tuples are hashable, so I was wondering if something like the example above was sensible. Is it?", "output": "__hash__ should return the same value for objects that are equal. It also shouldn't change over the lifetime of the object; generally you only implement it for immutable objects. A trivial implementation would be to just return 0. This is always correct, but performs badly. Your solution, returning the hash of a tuple of properties, is good. But note that you don't need to list all properties that you compare in __eq__ in the tuple. If some property usually has the same value for inequal objects, just leave it out. Don't make the hash computation any more expensive than it needs to be. Edit: I would recommend against using xor to mix hashes in general. When two different properties have the same value, they will have the same hash, and with xor these will cancel eachother out. 
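A quick sketch of that cancellation effect (the exact tuple hash values vary between runs because string hashing is salted, but the XOR results do not):

# Any two equal values cancel to 0 under xor, no matter what they are:
print(hash('spam') ^ hash('spam'))   # 0
print(hash(42) ^ hash(42))           # 0 -> every object whose two fields are equal would collide

# Hashing the tuple instead also mixes in each item's position, so these (almost surely) differ:
print(hash(('spam', 'spam')))
print(hash((42, 42)))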
Tuples use a more complex calculation to mix hashes, see tuplehash in tupleobject.c."} +{"question_id": 68335, "score": 144, "creation_date": 1221526210, "tags": ["python", "ssh", "automation", "scp"], "instruction": "How to copy a file to a remote server in Python using SCP or SSH?\n\nI have a text file on my local machine that is generated by a daily Python script run in cron. I would like to add a bit of code to have that file sent securely to my server over SSH.", "output": "You can call the scp bash command (it copies files over SSH) with subprocess.run: import subprocess subprocess.run([\"scp\", FILE, \"USER@SERVER:PATH\"]) #e.g. subprocess.run([\"scp\", \"foo.bar\", \"joe@srvr.net:/path/to/foo.bar\"]) If you're creating the file that you want to send in the same Python program, you'll want to call subprocess.run command outside the with block you're using to open the file (or call .close() on the file first if you're not using a with block), so you know it's flushed to disk from Python. You need to generate (on the source machine) and install (on the destination machine) an ssh key beforehand so that the scp automatically gets authenticated with your public ssh key (in other words, so your script doesn't ask for a password)."} +{"question_id": 6586310, "score": 144, "creation_date": 1309885761, "tags": ["python", "list", "dictionary"], "instruction": "How to convert list of key-value tuples into dictionary?\n\nI have a list that looks like: [('A', 1), ('B', 2), ('C', 3)] I want to turn it into a dictionary that looks like: {'A': 1, 'B': 2, 'C': 3} What's the best way to go about this? EDIT: My list of tuples is actually more like: [(A, 12937012397), (BERA, 2034927830), (CE, 2349057340)] I am getting the error ValueError: dictionary update sequence element #0 has length 1916; 2 is required", "output": "Your error: Why you are getting the ValueError: dictionary update sequence element #0 has length 1916; 2 is required error: The answer is that the elements of your list are not what you think they are. If you type myList[0] you will find that the first element of your list is not a two-tuple, e.g. ('A', 1), but rather a 1916-length iterable. Once you actually have a list in the form you stated in your original question (myList = [('A',1),('B',2),...]), all you need to do is dict(myList). [2021 edit: now also answers the actual question asked, not the intended question about the specific error:] In general: Either use the usual dict(iterableOrMapping) constructor, or use the dict comprehension {someExpr(k,v) for k:v in iterable} syntax: >>> example1 = [('A',1), ('B',2), ('C',3)] >>> dict(example1) {'A': 1, 'B': 2, 'C': 3} >>> {x:x**2 for x in range(3)} {0: 0, 1: 1, 2:4} # inline; same as example 1 effectively. may be an iterable, such as # a sequence, evaluated generator, generator expression >>> dict( zip(range(2),range(2)) ) {0: 0, 1: 1, 2:2} A Python dictionary is an O(1)-searchable unordered collection of pairs {(key\u2192value), ...} where keys are any immutable objects and values are any object. Keys MUST implement the .__eq__() and .__hash__() methods to be usable in the dictionary. If you are thinking of implementing this, you are likely doing something wrong and should maybe consider a different mapping data structure! (Though sometimes you can get away with wrapping the keys in a different wrapper structure and using a regular dict, this may not be ideal.) 
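As a minimal sketch of such a key type (the GridPos class is made up for illustration), a frozen dataclass gives you consistent __eq__ and __hash__ without writing them by hand:

from dataclasses import dataclass

@dataclass(frozen=True)        # frozen=True plus the generated __eq__ makes instances hashable
class GridPos:
    row: int
    col: int

scores = {GridPos(0, 0): 1.5, GridPos(2, 3): 0.25}
print(scores[GridPos(2, 3)])   # 0.25 -- equal keys produce equal hashes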
Intermediate or advanced programmers who wish to implement a 'frozen' or 'immutable' type, or one which masquerades as one, must be very careful of implications or else your program will be wrong with extremely subtle and near-impossible-to-find bugs: You can't use a dict if you allow yourself to mutate the object later such that its notion of equality might change. Objects considered equal must always have __eq__ return True and have __hash__ return identical values. The methods must exactly obey the spec. This means that: For novices: Hash functions(wikip.) let you get a false-positive or true-positive result; hash(x)==hash(y) means x MIGHT equal y and the internal python code must then check x==y (.__eq__) to confirm it's a true-positive and not a false-positive. This allows O(1) lookup. For novices: It is critically important that the __hash__ value not change for any reason once the object is in its final state. If you cannot guarantee both this and hash(x)!=hash(y) implies x!=y, you should not be using a dict. One might consider a different type of mapping rather than modifying the data itself. This can be equivalent to writing a wrapper object, at the cost of using a library. This is usually not necessary. For experts: It should also be noted that the hashes of some default objects are salted and may change between python invocations and versions (this may be a gotcha if you store or network-communicate data in any way that contains python hashes; they are an internal detail that should be regenerated on each process startup). Python has a bunch of built-in frozen datastructures such as namedtuple, frozenset, etc., but they are sometimes harder to work with. tuple is the basic frozen variant of the basic list structure (which would let you store a {(1, 2): 3, (4, 5): 6}). It also has some variants of the dict structure. If you want to get a map from \"frozen dicts\" to values, frozendict doesn't exist except as a third-party library, but you can extract the dict's .items() as a an unordered frozenset of tuples."} +{"question_id": 5891453, "score": 144, "creation_date": 1304556165, "tags": ["python", "ascii"], "instruction": "How do I get a list of all the ASCII characters using Python?\n\nI'm looking for something like the following: import ascii print(ascii.charlist()) Which would return something like [\"A\", \"B\", \"C\", \"D\" ... ].", "output": "The constants in the string module may be what you want. All ASCII capital letters: >>> import string >>> string.ascii_uppercase 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' All printable ASCII characters: >>> string.printable '0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!\"#$%&\\'()*+,-./:;<=>?@[\\\\]^_`{|}~ \\t\\n\\r\\x0b\\x0c' For every single character defined in the ASCII standard, use chr: >>> ''.join(chr(i) for i in range(128)) '\\x00\\x01\\x02\\x03\\x04\\x05\\x06\\x07\\x08\\t\\n\\x0b\\x0c\\r\\x0e\\x0f\\x10\\x11\\x12\\x13\\x14\\x15\\x16\\x17\\x18\\x19\\x1a\\x1b\\x1c\\x1d\\x1e\\x1f !\"#$%&\\'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\\\]^_`abcdefghijklmnopqrstuvwxyz{|}~\\x7f'"} +{"question_id": 1447287, "score": 144, "creation_date": 1253318917, "tags": ["python", "json", "formatting", "floating-point"], "instruction": "Format floats with standard json module\n\nI am using the standard json module in python 2.6 to serialize a list of floats. 
However, I'm getting results like this: >>> import json >>> json.dumps([23.67, 23.97, 23.87]) '[23.670000000000002, 23.969999999999999, 23.870000000000001]' I want the floats to be formated with only two decimal digits. The output should look like this: >>> json.dumps([23.67, 23.97, 23.87]) '[23.67, 23.97, 23.87]' I have tried defining my own JSON Encoder class: class MyEncoder(json.JSONEncoder): def encode(self, obj): if isinstance(obj, float): return format(obj, '.2f') return json.JSONEncoder.encode(self, obj) This works for a sole float object: >>> json.dumps(23.67, cls=MyEncoder) '23.67' But fails for nested objects: >>> json.dumps([23.67, 23.97, 23.87]) '[23.670000000000002, 23.969999999999999, 23.870000000000001]' I don't want to have external dependencies, so I prefer to stick with the standard json module. How can I achieve this?", "output": "NOTE: This does not work in any recent version of Python. Unfortunately, I believe you have to do this by monkey-patching (which, to my opinion, indicates a design defect in the standard library json package). E.g., this code: import json from json import encoder encoder.FLOAT_REPR = lambda o: format(o, '.2f') print(json.dumps(23.67)) print(json.dumps([23.67, 23.97, 23.87])) emits: 23.67 [23.67, 23.97, 23.87] as you desire. Obviously, there should be an architected way to override FLOAT_REPR so that EVERY representation of a float is under your control if you wish it to be; but unfortunately that's not how the json package was designed."} +{"question_id": 15451958, "score": 144, "creation_date": 1363452758, "tags": ["python", "random", "coding-style"], "instruction": "Simple way to create matrix of random numbers\n\nI am trying to create a matrix of random numbers, but my solution is too long and looks ugly random_matrix = [[random.random() for e in range(2)] for e in range(3)] this looks ok, but in my implementation it is weights_h = [[random.random() for e in range(len(inputs[0]))] for e in range(hiden_neurons)] which is extremely unreadable and does not fit on one line.", "output": "Take a look at numpy.random.rand: Docstring: rand(d0, d1, ..., dn) Random values in a given shape. Create an array of the given shape and propagate it with random samples from a uniform distribution over [0, 1). >>> import numpy as np >>> np.random.rand(2,3) array([[ 0.22568268, 0.0053246 , 0.41282024], [ 0.68824936, 0.68086462, 0.6854153 ]])"} +{"question_id": 41918836, "score": 144, "creation_date": 1485677036, "tags": ["python", "string"], "instruction": "How do I get rid of the b-prefix in a string in python?\n\nI have a string with a b-prefix: b'I posted a new photo to Facebook' I gather the b indicates it is a byte string. How do I remove this b prefix? I tried: b'I posted a new photo to Facebook'.encode(\"utf-8\").decode(\"utf-8\") But this gives an error: UnicodeEncodeError: 'charmap' codec can't encode characters in position 64-65: character maps to <undefined>", "output": "decode the bytes to produce a str: b = b'1234' print(b.decode('utf-8')) # '1234'"} +{"question_id": 14365542, "score": 144, "creation_date": 1358362215, "tags": ["python", "pandas", "csv", "dataframe"], "instruction": "Import CSV file as a Pandas DataFrame\n\nHow do I read the following CSV file into a Pandas DataFrame? 
Date,\"price\",\"factor_1\",\"factor_2\" 2012-06-11,1600.20,1.255,1.548 2012-06-12,1610.02,1.258,1.554 2012-06-13,1618.07,1.249,1.552 2012-06-14,1624.40,1.253,1.556 2012-06-15,1626.15,1.258,1.552 2012-06-16,1626.15,1.263,1.558 2012-06-17,1626.15,1.264,1.572", "output": "pandas.read_csv to the rescue: import pandas as pd df = pd.read_csv(\"data.csv\") print(df) This outputs a pandas DataFrame: Date price factor_1 factor_2 0 2012-06-11 1600.20 1.255 1.548 1 2012-06-12 1610.02 1.258 1.554 2 2012-06-13 1618.07 1.249 1.552 3 2012-06-14 1624.40 1.253 1.556 4 2012-06-15 1626.15 1.258 1.552 5 2012-06-16 1626.15 1.263 1.558 6 2012-06-17 1626.15 1.264 1.572"} +{"question_id": 21350605, "score": 144, "creation_date": 1390653372, "tags": ["python", "selenium-webdriver", "onclick", "click"], "instruction": "python selenium click on button\n\nI am quite new to python selenium and I am trying to click on a button which has the following html structure: <div class=\"b_div\"> <div class=\"button c_button s_button\" onclick=\"submitForm('mTF')\"> <input class=\"very_small\" type=\"button\"></input> <div class=\"s_image\"></div> <span> Search </span> </div> <div class=\"button c_button s_button\" onclick=\"submitForm('rMTF')\" style=\"margin-bottom: 30px;\"> <input class=\"v_small\" type=\"button\"></input> <span> Reset </span> </div> </div> I would like to be able to click both the Search and Reset buttons above (obviously individually). I have tried a couple of things, for example: driver.find_element_by_css_selector('.button .c_button .s_button').click() or, driver.find_element_by_name('s_image').click() or, driver.find_element_by_class_name('s_image').click() but, I seem to always end up with NoSuchElementException, for example: selenium.common.exceptions.NoSuchElementException: Message: u'Unable to locate element: {\"method\":\"name\",\"selector\":\"s_image\"}' ; I am wondering if I can somehow use the onclick attributes of the HTML to make selenium click? Any thoughts which can point me in the right direction would be great. Thanks.", "output": "For python, use the from selenium.webdriver import ActionChains and ActionChains(browser).click(element).perform()"} +{"question_id": 1985856, "score": 144, "creation_date": 1262273826, "tags": ["python", "matplotlib", "plot", "scatter-plot", "matplotlib-3d"], "instruction": "How to make a 3D scatter plot\n\nI am currently have a nx3 matrix array. I want plot the three columns as three axis's. How can I do that? I have googled and people suggested using Matlab, but I am really having a hard time with understanding it. I also need it be a scatter plot.", "output": "You can use matplotlib for this. matplotlib has a mplot3d module that will do exactly what you want. 
import matplotlib.pyplot as plt import random fig = plt.figure(figsize=(12, 12)) ax = fig.add_subplot(projection='3d') sequence_containing_x_vals = list(range(0, 100)) sequence_containing_y_vals = list(range(0, 100)) sequence_containing_z_vals = list(range(0, 100)) random.shuffle(sequence_containing_x_vals) random.shuffle(sequence_containing_y_vals) random.shuffle(sequence_containing_z_vals) ax.scatter(sequence_containing_x_vals, sequence_containing_y_vals, sequence_containing_z_vals) plt.show() The code above generates a figure like:"} +{"question_id": 171765, "score": 144, "creation_date": 1223200138, "tags": ["python", "algorithm", "math"], "instruction": "What is the best way to get all the divisors of a number?\n\nHere's the very dumb way: def divisorGenerator(n): for i in xrange(1,n/2+1): if n%i == 0: yield i yield n The result I'd like to get is similar to this one, but I'd like a smarter algorithm (this one it's too much slow and dumb :-) I can find prime factors and their multiplicity fast enough. I've an generator that generates factor in this way: (factor1, multiplicity1) (factor2, multiplicity2) (factor3, multiplicity3) and so on... i.e. the output of for i in factorGenerator(100): print(i) is: (2, 2) (5, 2) I don't know how much is this useful for what I want to do (I coded it for other problems), anyway I'd like a smarter way to make for i in divisorGen(100): print(i) output this: 1 2 4 5 10 20 25 50 100 UPDATE: Many thanks to Greg Hewgill and his \"smart way\" :) Calculating all divisors of 100000000 took 0.01s with his way against the 39s that the dumb way took on my machine, very cool :D UPDATE 2: Stop saying this is a duplicate of this post. Calculating the number of divisor of a given number doesn't need to calculate all the divisors. It's a different problem, if you think it's not then look for \"Divisor function\" on wikipedia. Read the questions and the answer before posting, if you do not understand what is the topic just don't add not useful and already given answers.", "output": "Given your factorGenerator function, here is a divisorGen that should work: def divisorGen(n): factors = list(factorGenerator(n)) nfactors = len(factors) f = [0] * nfactors while True: yield reduce(lambda x, y: x*y, [factors[x][0]**f[x] for x in range(nfactors)], 1) i = 0 while True: f[i] += 1 if f[i] <= factors[i][1]: break f[i] = 0 i += 1 if i >= nfactors: return The overall efficiency of this algorithm will depend entirely on the efficiency of the factorGenerator."} +{"question_id": 2596714, "score": 144, "creation_date": 1270684988, "tags": ["python", "unicode", "encoding", "ascii", "python-2.x"], "instruction": "Why does Python print unicode characters when the default encoding is ASCII?\n\nFrom the Python 2.6 shell: >>> import sys >>> print sys.getdefaultencoding() ascii >>> print u'\\xe9' \u00e9 >>> I expected to have either some gibberish or an Error after the print statement, since the \"\u00e9\" character isn't part of ASCII and I haven't specified an encoding. I guess I don't understand what ASCII being the default encoding means. EDIT I moved the edit to the Answers section and accepted it as suggested.", "output": "Thanks to bits and pieces from various replies, I think we can stitch up an explanation. When trying to print a Unicode string, u'\\xe9', Python implicitly attempts to encode that string using the scheme currently stored in sys.stdout.encoding. Python actually picks up this setting from the environment it's been initiated from. 
If it can't find a proper encoding from the environment, only then does it revert to its default, ASCII. For example, I use a bash shell whose encoding defaults to UTF-8. If I start Python from it, it picks up and uses that setting: $ python >>> import sys >>> print sys.stdout.encoding UTF-8 Let's for a moment exit the Python shell and set bash's environment with some bogus encoding: $ export LC_CTYPE=klingon # we should get some error message here, just ignore it. Then start the python shell again and verify that it does indeed revert to its default ASCII encoding. $ python >>> import sys >>> print sys.stdout.encoding ANSI_X3.4-1968 Bingo! If you now try to output some unicode character outside of ASCII you should get a nice error message >>> print u'\\xe9' UnicodeEncodeError: 'ascii' codec can't encode character u'\\xe9' in position 0: ordinal not in range(128) Lets exit Python and discard the bash shell. We'll now observe what happens after Python outputs strings. For this we'll first start a bash shell within a graphic terminal (I'll use Gnome Terminal). We'll set the terminal to decode output with ISO-8859-1 aka Latin-1 (graphic terminals usually have an option to Set Character Encoding in one of their dropdown menus). Note that this doesn't change the actual shell environment's encoding, it only changes the way the terminal itself will decode output it's given, a bit like a web browser does. You can therefore change the terminal's encoding, independently from the shell's environment. Let's then start Python from the shell and verify that sys.stdout.encoding is set to the shell environment's encoding (UTF-8 for me): $ python >>> import sys >>> print sys.stdout.encoding UTF-8 >>> print '\\xe9' # (1) \u00e9 >>> print u'\\xe9' # (2) \u00c3\u00a9 >>> print u'\\xe9'.encode('latin-1') # (3) \u00e9 >>> (1) python outputs binary string as is, terminal receives it and tries to match its value with Latin-1 character map. In Latin-1, 0xe9 or 233 yields the character \"\u00e9\" and so that's what the terminal displays. (2) python attempts to implicitly encode the Unicode string with whatever scheme is currently set in sys.stdout.encoding, in this instance it's UTF-8. After UTF-8 encoding, the resulting binary string is '\\xc3\\xa9' (see later explanation). Terminal receives the stream as such and tries to decode 0xc3a9 using Latin-1, but Latin-1 goes from 0 to 255 and so, only decodes streams 1 byte at a time. 0xc3a9 is 2 bytes long, the Latin-1 decoder therefore interprets it as two distinct bytes, 0xc3 (195) and 0xa9 (169), which yield the respective characters '\u00c3' and '\u00a9'. (3) python encodes Unicode code point u'\\xe9' (233) with the Latin-1 scheme. It turns out the Latin-1 code point range is 0-255 and it points to the exact same characters as Unicode does within that range. Therefore, Unicode code points between 0-255 will yield the same value when encoded in Latin-1. So u'\\xe9' (233) encoded in Latin-1 will also yields the binary string '\\xe9'. Terminal receives that value and tries to match it to its Latin-1 character map. Just like case (1), that yields \"\u00e9\" and that's what's displayed. Let's now change the terminal's encoding settings to UTF-8 from the dropdown menu (like you would change your web browser's encoding settings). No need to stop Python or restart the shell. The terminal's encoding now matches Python's. 
Let's try printing again: >>> print '\\xe9' # (4) >>> print u'\\xe9' # (5) \u00e9 >>> print u'\\xe9'.encode('latin-1') # (6) >>> (4) python outputs a binary string as is. Terminal attempts to decode that stream with UTF-8. But UTF-8 doesn't understand the value 0xe9 (see later explanation) and is therefore unable to convert it to a Unicode code point. No code point found, no character printed. (5) python attempts to implicitly encode the Unicode string with whatever sys.stdout.encoding is currently set (still UTF-8). The resulting binary string is '\\xc3\\xa9'. Terminal receives the stream and attempts to decode 0xc3a9 also using UTF-8. It yields back code value 0xe9 (233), which on the Unicode character map points to the symbol \"\u00e9\". Terminal displays \"\u00e9\". (6) python encodes Unicode string with Latin-1, it yields a binary string with the same value '\\xe9'. Again, for the terminal this is pretty much the same as case (4). Conclusions: Python outputs non-Unicode strings as raw data, without considering its default encoding. The terminal just happens to display them if its current encoding matches the data. Python outputs Unicode strings after encoding them using the scheme specified in sys.stdout.encoding. Python gets that setting from the shell's environment. the terminal displays output according to its own encoding settings. the terminal's encoding is independant from the shell's. More details on specifically Unicode, UTF-8 and Latin-1 Unicode is fundamentally a character table, where some keys (code points) have been conventionally assigned to specific symbols. For example, by convention it's been decided that hexadecimal key 0xe9 (decimal 233) points to the symbol '\u00e9'. ASCII and Unicode use the same code points from 0 to 127, as do Latin-1 and Unicode from 0 to 255. That is, 0x41 (dec 65) points to 'A' in ASCII, Latin-1 and Unicode, 0xc8 points to '\u00dc' in Latin-1 and Unicode, 0xe9 points to '\u00e9' in Latin-1 and Unicode. When dealing with electronics, Unicode code points require an efficient representation scheme. That's what encodings are about. Various Unicode encoding schemes exist (UTF-7, UTF-8, UTF-16, UTF-32). The most intuitive and straight forward encoding approach would be to simply use a code point's value in the Unicode map as its value for its electronic form, but Unicode currently has over a million code points, which means that some of them require 3 bytes to be expressed. To work efficiently with text, a 1 to 1 mapping would be rather impractical, since it would require that all code points be stored in exactly the same amount of space, with a minimum of 3 bytes per character, regardless of their actual need. Most encoding schemes have shortcomings regarding space requirement, the most economic ones leave out many Unicode code points. ASCII for example, only covers the first 128 Unicode code points and Latin-1, only the first 256. Other encodings that try to be more comprehensive end up also being wasteful, since they require more byte space than necessary, for even \"cheap\" code points. UTF-16 for instance, uses a minimum of 2 bytes per code point, including those in the ASCII range that normally only require one byte (e.g. 'B' which is 66, still requires 2 bytes of storage in UTF-16). UTF-32 is even more wasteful as it stores all code points in 4 bytes. The UTF-8 scheme (surprisingly more recent than UTF-16 and UTF-32) happens to have cleverly mitigated the dilemma. It's able to store code points with a variable amount of byte spaces. 
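You can watch this variable width directly from the interpreter; a minimal sketch, written in Python 3 syntax:

>>> len('B'.encode('utf-8'))           # ASCII-range code point: 1 byte
1
>>> len(chr(0xe9).encode('utf-8'))     # the character at code point 0xe9: 2 bytes
2
>>> chr(0xe9).encode('utf-8').hex()    # the same 0xc3a9 byte pair the terminal received above
'c3a9'
>>> len(chr(0xe9).encode('latin-1'))   # Latin-1 stays at 1 byte, but only covers 256 code points
1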
As part of its encoding strategy, UTF-8 laces code points with flag bits that indicate (presumably to decoders) their space requirements and their boundaries. UTF-8 encoding of Unicode code points in the ASCII range (0-127) 0xxx xxxx (in binary) The x's show the actual space reserved to \"store\" the code point during encoding. The leading 0 is a flag that indicates to the UTF-8 decoder that this code point will only require 1 byte. Upon encoding, UTF-8 doesn't change the value of Unicode code points in that specific range (i.e. Unicode 65 encoded in UTF-8 is also 65). Considering that ASCII is also compatible with Unicode in that range, it incidentally makes ASCII compatible with UTF-8 (for that range). E.g. The Unicode code point for 'B' is '0x42' (66 in decimal), or 0100 0010 in binary. As said previously it's the same in ASCII. Here's a description of its UTF-8 encoding: 0xxx xxxx <-- UTF-8 wrapper for Unicode code points in the range 0 - 127 *100 0010 <-- Unicode code point 0x42 0100 0010 <-- UTF-8 encoded (exactly the same) UTF-8 wrappers for Unicode code points above 127 (beyond-ASCII) 110x xxxx 10xx xxxx <-- (from 128 to 2047) 1110 xxxx 10xx xxxx 10xx xxxx <-- (from 2048 to 65535) A leading 110 flag bits indicate to the UTF-8 decoder the start of a code point encoded in 2 bytes, whereas a leading 1110 indicates 3 bytes, 11110 would indicate 4 bytes and so forth. A leading 10 flag bits is used to signal the start of an inner byte. As seen previously, the x's mark the space where the Unicode code point value is stored during encoding. E.g. '\u00e9' Unicode code point is 0xe9 (233). 1110 1001 <-- 0xe9 To encode this code point in UTF-8, it's determined that since its value is larger than 127 and less than 2048, it should be encoded with a 2-byte UTF-8 wrapper: 110x xxxx 10xx xxxx <-- 2-byte UTF-8 wrapper for Unicode 128-2047 ***0 0011 **10 1001 <-- 0xe9 1100 0011 1010 1001 <-- '\u00e9' after UTF-8 encoding C 3 A 9 The 0xe9 Unicode code points after UTF-8 encoding becomes 0xc3a9. Which is exactly how the terminal receives it. If your terminal is set to decode strings using Latin-1, you'll see '\u00c3\u00a9', because it just so happens that 0xc3 in Latin-1 points to \u00c3 while 0xa9 points to \u00a9."} +{"question_id": 12601316, "score": 143, "creation_date": 1348660946, "tags": ["python", "proxy", "socks", "python-requests"], "instruction": "How to make python Requests work via SOCKS proxy\n\nI'm using the great Requests library in my Python script: import requests r = requests.get(\"http://example.com\") print(r.text) I would like to use a SOCKS proxy, how can I do that? Requests seems to only support HTTP proxies.", "output": "The modern way: pip install -U 'requests[socks]' then import requests resp = requests.get('http://go.to', proxies=dict(http='socks5://user:pass@host:port', https='socks5://user:pass@host:port'))"} +{"question_id": 51774826, "score": 143, "creation_date": 1533844329, "tags": ["python", "python-3.x", "pandas", "dataframe"], "instruction": "append dictionary to data frame\n\nI have a function, which returns a dictionary like this: {'truth': 185.179993, 'day1': 197.22307753038834, 'day2': 197.26118010160317, 'day3': 197.19846975345905, 'day4': 197.1490578795196, 'day5': 197.37179265011116} I am trying to append this dictionary to a dataframe like so: output = pd.DataFrame() output.append(dictionary, ignore_index=True) print(output.head()) Unfortunately, the printing of the dataframe results in an empty dataframe. 
Any ideas?", "output": "The previous answer (user alex, answered Aug 9 2018 at 20:09) now triggers a warning saying that appending to a dataframe will be deprecated in a future version. A way to do it is to transform the dictionary to a dataframe and the concatenate the dataframes: output = pd.DataFrame() df_dictionary = pd.DataFrame([dictionary]) output = pd.concat([output, df_dictionary], ignore_index=True) print(output.head())"} +{"question_id": 9000164, "score": 143, "creation_date": 1327482934, "tags": ["python", "numpy", "scipy", "lapack", "blas"], "instruction": "How to check BLAS/LAPACK linkage in NumPy and SciPy?\n\nI am builing my numpy/scipy environment based on blas and lapack more or less based on this walk through. When I am done, how can I check, that my numpy/scipy functions really do use the previously built blas/lapack functionalities?", "output": "The method numpy.show_config() (or numpy.__config__.show()) outputs information about linkage gathered at build time. My output looks like this. I think it means I am using the BLAS/LAPACK that ships with Mac OS. >>> import numpy as np >>> np.show_config() lapack_opt_info: extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] extra_compile_args = ['-msse3'] define_macros = [('NO_ATLAS_INFO', 3)] blas_opt_info: extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] extra_compile_args = ['-msse3', '-I/System/Library/Frameworks/vecLib.framework/Headers'] define_macros = [('NO_ATLAS_INFO', 3)]"} +{"question_id": 6920858, "score": 143, "creation_date": 1312336671, "tags": ["python", "sockets", "pipe", "ipc"], "instruction": "Interprocess communication in Python\n\nWhat is a good way to communicate between two separate Python runtimes? Things tried: reading/writing on named pipes e.g. os.mkfifo (feels hacky) dbus services (worked on desktop, but too heavyweight for headless) sockets (seems too low-level; surely there's a higher level module to use?) My basic requirement is to be able to run python listen.py like a daemon, able to receive messages from python client.py. The client should just send a message to the existing process and terminate, with return code 0 for success and nonzero for failure (i.e. a two-way communication will be required.)", "output": "The multiprocessing library provides listeners and clients that wrap sockets and allow you to pass arbitrary python objects. Your server could listen to receive python objects: from multiprocessing.connection import Listener address = ('localhost', 6000) # family is deduced to be 'AF_INET' listener = Listener(address, authkey=b'secret password') conn = listener.accept() print 'connection accepted from', listener.last_accepted while True: msg = conn.recv() # do something with msg if msg == 'close': conn.close() break listener.close() Your client could send commands as objects: from multiprocessing.connection import Client address = ('localhost', 6000) conn = Client(address, authkey=b'secret password') conn.send('close') # can also send arbitrary objects: # conn.send(['a', 2.5, None, int, sum]) conn.close()"} +{"question_id": 1143671, "score": 143, "creation_date": 1247841408, "tags": ["python", "sorting", "dictionary"], "instruction": "How to sort objects by multiple keys?\n\nOr, practically, how can I sort a list of dictionaries by multiple keys? 
I have a list of dicts: b = [{u'TOT_PTS_Misc': u'Utley, Alex', u'Total_Points': 96.0}, {u'TOT_PTS_Misc': u'Russo, Brandon', u'Total_Points': 96.0}, {u'TOT_PTS_Misc': u'Chappell, Justin', u'Total_Points': 96.0}, {u'TOT_PTS_Misc': u'Foster, Toney', u'Total_Points': 80.0}, {u'TOT_PTS_Misc': u'Lawson, Roman', u'Total_Points': 80.0}, {u'TOT_PTS_Misc': u'Lempke, Sam', u'Total_Points': 80.0}, {u'TOT_PTS_Misc': u'Gnezda, Alex', u'Total_Points': 78.0}, {u'TOT_PTS_Misc': u'Kirks, Damien', u'Total_Points': 78.0}, {u'TOT_PTS_Misc': u'Worden, Tom', u'Total_Points': 78.0}, {u'TOT_PTS_Misc': u'Korecz, Mike', u'Total_Points': 78.0}, {u'TOT_PTS_Misc': u'Swartz, Brian', u'Total_Points': 66.0}, {u'TOT_PTS_Misc': u'Burgess, Randy', u'Total_Points': 66.0}, {u'TOT_PTS_Misc': u'Smugala, Ryan', u'Total_Points': 66.0}, {u'TOT_PTS_Misc': u'Harmon, Gary', u'Total_Points': 66.0}, {u'TOT_PTS_Misc': u'Blasinsky, Scott', u'Total_Points': 60.0}, {u'TOT_PTS_Misc': u'Carter III, Laymon', u'Total_Points': 60.0}, {u'TOT_PTS_Misc': u'Coleman, Johnathan', u'Total_Points': 60.0}, {u'TOT_PTS_Misc': u'Venditti, Nick', u'Total_Points': 60.0}, {u'TOT_PTS_Misc': u'Blackwell, Devon', u'Total_Points': 60.0}, {u'TOT_PTS_Misc': u'Kovach, Alex', u'Total_Points': 60.0}, {u'TOT_PTS_Misc': u'Bolden, Antonio', u'Total_Points': 60.0}, {u'TOT_PTS_Misc': u'Smith, Ryan', u'Total_Points': 60.0}] and I need to use a multi key sort reversed by Total_Points, then not reversed by TOT_PTS_Misc. This can be done at the command prompt like so: a = sorted(b, key=lambda d: (-d['Total_Points'], d['TOT_PTS_Misc'])) But I have to run this through a function, where I pass in the list and the sort keys. For example, def multikeysort(dict_list, sortkeys):. How can the lambda line be used which will sort the list, for an arbitrary number of keys that are passed in to the multikeysort function, and take into consideration that the sortkeys may have any number of keys and those that need reversed sorts will be identified with a '-' before it?", "output": "This answer works for any kind of column in the dictionary -- the negated column need not be a number. 
def multikeysort(items, columns): from operator import itemgetter comparers = [((itemgetter(col[1:].strip()), -1) if col.startswith('-') else (itemgetter(col.strip()), 1)) for col in columns] def comparer(left, right): for fn, mult in comparers: result = cmp(fn(left), fn(right)) if result: return mult * result else: return 0 return sorted(items, cmp=comparer) You can call it like this: b = [{u'TOT_PTS_Misc': u'Utley, Alex', u'Total_Points': 96.0}, {u'TOT_PTS_Misc': u'Russo, Brandon', u'Total_Points': 96.0}, {u'TOT_PTS_Misc': u'Chappell, Justin', u'Total_Points': 96.0}, {u'TOT_PTS_Misc': u'Foster, Toney', u'Total_Points': 80.0}, {u'TOT_PTS_Misc': u'Lawson, Roman', u'Total_Points': 80.0}, {u'TOT_PTS_Misc': u'Lempke, Sam', u'Total_Points': 80.0}, {u'TOT_PTS_Misc': u'Gnezda, Alex', u'Total_Points': 78.0}, {u'TOT_PTS_Misc': u'Kirks, Damien', u'Total_Points': 78.0}, {u'TOT_PTS_Misc': u'Worden, Tom', u'Total_Points': 78.0}, {u'TOT_PTS_Misc': u'Korecz, Mike', u'Total_Points': 78.0}, {u'TOT_PTS_Misc': u'Swartz, Brian', u'Total_Points': 66.0}, {u'TOT_PTS_Misc': u'Burgess, Randy', u'Total_Points': 66.0}, {u'TOT_PTS_Misc': u'Smugala, Ryan', u'Total_Points': 66.0}, {u'TOT_PTS_Misc': u'Harmon, Gary', u'Total_Points': 66.0}, {u'TOT_PTS_Misc': u'Blasinsky, Scott', u'Total_Points': 60.0}, {u'TOT_PTS_Misc': u'Carter III, Laymon', u'Total_Points': 60.0}, {u'TOT_PTS_Misc': u'Coleman, Johnathan', u'Total_Points': 60.0}, {u'TOT_PTS_Misc': u'Venditti, Nick', u'Total_Points': 60.0}, {u'TOT_PTS_Misc': u'Blackwell, Devon', u'Total_Points': 60.0}, {u'TOT_PTS_Misc': u'Kovach, Alex', u'Total_Points': 60.0}, {u'TOT_PTS_Misc': u'Bolden, Antonio', u'Total_Points': 60.0}, {u'TOT_PTS_Misc': u'Smith, Ryan', u'Total_Points': 60.0}] a = multikeysort(b, ['-Total_Points', 'TOT_PTS_Misc']) for item in a: print item Try it with either column negated. You will see the sort order reverse. Next: change it so it does not use extra class.... 2016-01-17 Taking my inspiration from this answer What is the best way to get the first item from an iterable matching a condition?, I shortened the code: from operator import itemgetter as i def multikeysort(items, columns): comparers = [ ((i(col[1:].strip()), -1) if col.startswith('-') else (i(col.strip()), 1)) for col in columns ] def comparer(left, right): comparer_iter = ( cmp(fn(left), fn(right)) * mult for fn, mult in comparers ) return next((result for result in comparer_iter if result), 0) return sorted(items, cmp=comparer) In case you like your code terse. Later 2016-01-17 This works with python3 (which eliminated the cmp argument to sort): from operator import itemgetter as i from functools import cmp_to_key def cmp(x, y): \"\"\" Replacement for built-in function cmp that was removed in Python 3 Compare the two objects x and y and return an integer according to the outcome. The return value is negative if x < y, zero if x == y and strictly positive if x > y. 
https://portingguide.readthedocs.io/en/latest/comparisons.html#the-cmp-function \"\"\" return (x > y) - (x < y) def multikeysort(items, columns): comparers = [ ((i(col[1:].strip()), -1) if col.startswith('-') else (i(col.strip()), 1)) for col in columns ] def comparer(left, right): comparer_iter = ( cmp(fn(left), fn(right)) * mult for fn, mult in comparers ) return next((result for result in comparer_iter if result), 0) return sorted(items, key=cmp_to_key(comparer)) Inspired by this answer How should I do custom sort in Python 3?"} +{"question_id": 55078484, "score": 143, "creation_date": 1552142610, "tags": ["python", "jupyter-notebook"], "instruction": "Open Jupyter Notebook from a Drive Other than C Drive\n\nI have a machine learning project in drive D. It contains python notebook files. But when I launch jupyter notebook through command prompt, it only shows directories inside drive C. How can I navigate to a drive other than C drive without changing jupyter notebook startup path?. I'm using windows 10 operating system. All the solutions I found, trying to change notebook startup path.", "output": "You can launch it from any drive/location you want with the --notebook-dir option. For example: jupyter notebook --notebook-dir=D:/"} +{"question_id": 47406741, "score": 143, "creation_date": 1511246414, "tags": ["python", "visual-studio-code"], "instruction": "Disable auto wrap long line in Python code in VS Code\n\nI use Visual Studio Code to write Python code with Pylint. When I press Ctrl + S (save), the editor wraps a long line into multiple short lines. How do I disable the action or configure wrap column count to 120 (default is 80)? I have tried \"python.linting.pylintArgs\": [\"--max-line-length=120\"] and \"editor.wordWrapColumn\": 120, but it didn't work.", "output": "Check your Python formatting provider. \"python.formatting.provider\": \"autopep8\" I guess in your case it is not Pylint which keeps wrapping the long lines, but autopep8. Try setting --max-line-length for autopep8 instead. \"python.formatting.autopep8Args\": [ \"--max-line-length=200\" ]"} +{"question_id": 643699, "score": 143, "creation_date": 1236964073, "tags": ["python", "math", "numpy", "numerical-methods"], "instruction": "How can I use numpy.correlate to do autocorrelation?\n\nI need to do auto-correlation of a set of numbers, which as I understand it is just the correlation of the set with itself. I've tried it using numpy's correlate function, but I don't believe the result, as it almost always gives a vector where the first number is not the largest, as it ought to be. So, this question is really two questions: What exactly is numpy.correlate doing? How can I use it (or something else) to do auto-correlation?", "output": "To answer your first question, numpy.correlate(a, v, mode) is performing the convolution of a with the reverse of v and giving the results clipped by the specified mode. The definition of convolution, C(t)=\u2211 -\u221e < i < \u221e aivt+i where -\u221e < t < \u221e, allows for results from -\u221e to \u221e, but you obviously can't store an infinitely long array. So it has to be clipped, and that is where the mode comes in. There are 3 different modes: full, same, & valid: \"full\" mode returns results for every t where both a and v have some overlap. \"same\" mode returns a result with the same length as the shortest vector (a or v). \"valid\" mode returns results only when a and v completely overlap each other. The documentation for numpy.convolve gives more detail on the modes. 
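Before moving on to the second question, here is a minimal sketch showing the three modes side by side on a small autocorrelation (outputs shown for illustration):

>>> import numpy
>>> a = numpy.array([1, 2, 3, 4])
>>> numpy.correlate(a, a, mode='full')     # length 2*len(a) - 1; the peak sits in the middle, not at index 0
array([ 4, 11, 20, 30, 20, 11,  4])
>>> numpy.correlate(a, a, mode='same')     # clipped to len(a)
array([11, 20, 30, 20])
>>> numpy.correlate(a, a, mode='valid')    # only the position where a overlaps itself completely
array([30])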
For your second question, I think numpy.correlate is giving you the autocorrelation, it is just giving you a little more as well. The autocorrelation is used to find how similar a signal, or function, is to itself at a certain time difference. At a time difference of 0, the auto-correlation should be the highest because the signal is identical to itself, so you expected that the first element in the autocorrelation result array would be the greatest. However, the correlation is not starting at a time difference of 0. It starts at a negative time difference, closes to 0, and then goes positive. That is, you were expecting: autocorrelation(a) = \u2211 -\u221e < i < \u221e aivt+i where 0 <= t < \u221e But what you got was: autocorrelation(a) = \u2211 -\u221e < i < \u221e aivt+i where -\u221e < t < \u221e What you need to do is take the last half of your correlation result, and that should be the autocorrelation you are looking for. A simple python function to do that would be: def autocorr(x): result = numpy.correlate(x, x, mode='full') return result[result.size//2:] returning you only the second half of what numpy calculates. You will, of course, need error checking to make sure that x is actually a 1-d array. Also, this explanation probably isn't the most mathematically rigorous. I've been throwing around infinities because the definition of convolution uses them, but that doesn't necessarily apply for autocorrelation. So, the theoretical portion of this explanation may be slightly wonky, but hopefully the practical results are helpful. These pages on autocorrelation are pretty helpful, and can give you a much better theoretical background if you don't mind wading through the notation and heavy concepts."} +{"question_id": 35166633, "score": 143, "creation_date": 1454460843, "tags": ["python", "list"], "instruction": "How do I multiply each element in a list by a number?\n\nI have a list: my_list = [1, 2, 3, 4, 5] How can I multiply each element in my_list by 5? The output should be: [5, 10, 15, 20, 25]", "output": "You can just use a list comprehension: my_list = [1, 2, 3, 4, 5] my_new_list = [i * 5 for i in my_list] >>> print(my_new_list) [5, 10, 15, 20, 25] Note that a list comprehension is generally a more efficient way to do a for loop: my_new_list = [] for i in my_list: my_new_list.append(i * 5) >>> print(my_new_list) [5, 10, 15, 20, 25] As an alternative, here is a solution using the popular Pandas package: import pandas as pd s = pd.Series(my_list) >>> s * 5 0 5 1 10 2 15 3 20 4 25 dtype: int64 Or, if you just want the list: >>> (s * 5).tolist() [5, 10, 15, 20, 25] Finally, one could use map, although this is generally frowned upon. my_new_list = map(lambda x: x * 5, my_list) Using map, however, is generally less efficient. Per a comment from ShadowRanger on a deleted answer to this question: The reason \"no one\" uses it is that, in general, it's a performance pessimization. The only time it's worth considering map in CPython is if you're using a built-in function implemented in C as the mapping function; otherwise, map is going to run equal to or slower than the more Pythonic listcomp or genexpr (which are also more explicit about whether they're lazy generators or eager list creators; on Py3, your code wouldn't work without wrapping the map call in list). If you're using map with a lambda function, stop, you're doing it wrong. 
And another one of his comments posted to this reply: Please don't teach people to use map with lambda; the instant you need a lambda, you'd have been better off with a list comprehension or generator expression. If you're clever, you can make map work without lambdas a lot, e.g. in this case, map((5).__mul__, my_list), although in this particular case, thanks to some optimizations in the byte code interpreter for simple int math, [x * 5 for x in my_list] is faster, as well as being more Pythonic and simpler."} +{"question_id": 48242324, "score": 143, "creation_date": 1515863844, "tags": ["python", "sqlalchemy", "alembic"], "instruction": "Undo last Alembic migration\n\nI created a migration with alembic revision --autogenerate, applied it to my development database with alembic upgrade head, and then realised it wasn't quite what I wanted. How can I revert the migration so that I can tweak it and try again?", "output": "Assuming that you only want to go back one revision, use alembic downgrade with a relative migration identifier of -1: alembic downgrade -1 This will run the downgrade() method of your latest revision and update the alembic_version table to indicate the revision you're now at. If you need to go back multiple migrations, run alembic history to view a list of all the migrations in your project (from newest to oldest), then copy and paste the identifier of the migration you want to go back to: alembic downgrade 8ac14e223d1e There's currently no command to delete migrations from your versions directory, so if you want to completely wipe away all trace of your bad migration, you'll need to delete the version file (like 4c009570237e_add_widget_table.py) manually."} +{"question_id": 5445080, "score": 143, "creation_date": 1301171258, "tags": ["python"], "instruction": "Compare if two variables reference the same object in python\n\nHow to check whether two variables reference the same object? x = ['a', 'b', 'c'] y = x # x and y reference the same object z = ['a', 'b', 'c'] # x and z reference different objects", "output": "That\u2019s what is is for. In the example, x is y returns True because it is the same object while x is z returns False because it are different objects (which happen to hold identical data)."} +{"question_id": 13145368, "score": 143, "creation_date": 1351621442, "tags": ["python", "list", "tuples"], "instruction": "How to find the maximum value in a list of tuples?\n\nI have a list with ~10^6 tuples in it like this: [(101, 153), (255, 827), (361, 961), ...] ^ ^ X Y I want to find the maximum value of the Ys in this list, but also want to know the X that it is bound to. 
How do I do this?", "output": "operator.itemgetter(): In [53]: lis=[(101, 153), (255, 827), (361, 961)] In [81]: from operator import itemgetter In [82]: max(lis, key=itemgetter(1)) # Faster solution Out[82]: (361, 961) In [83]: max(lis, key=itemgetter(1))[0] # Faster solution Out[83]: 361 lambda: In [54]: max(lis, key=lambda item: item[1]) Out[54]: (361, 961) In [55]: max(lis, key=lambda item: item[1])[0] Out[55]: 361 timeit comparison of operator.itemgetter vs lambda: In [84]: %timeit max(lis, key=itemgetter(1)) 1000 loops, best of 3: 232 us per loop In [85]: %timeit max(lis, key=lambda item: item[1]) 1000 loops, best of 3: 556 us per loop"} +{"question_id": 56942670, "score": 143, "creation_date": 1562620725, "tags": ["python", "matplotlib", "pip", "seaborn", "heatmap"], "instruction": "First and last row cut in half of heatmap plot\n\nWhen plotting heatmaps with seaborn (and correlation matrices with matplotlib) the first and the last row is cut in halve. This happens also when I run this minimal code example which I found online. import pandas as pd import seaborn as sns import matplotlib.pyplot as plt data = pd.read_csv('https://raw.githubusercontent.com/resbaz/r-novice-gapminder-files/master/data/gapminder-FiveYearData.csv') plt.figure(figsize=(10,5)) sns.heatmap(data.corr()) plt.show() The labels at the y axis are on the correct spot, but the rows aren't completely there. A few days ago, it work as intended. Since then, I installed texlive-xetex so I removed it again but it didn't solve my problem. Any ideas what I could be missing?", "output": "Unfortunately matplotlib 3.1.1 broke seaborn heatmaps; and in general inverted axes with fixed ticks. This is fixed in the current development version; you may hence revert to matplotlib 3.1.0 use matplotlib 3.1.2 or higher set the heatmap limits manually (ax.set_ylim(bottom, top) # set the ylim to bottom, top)"} +{"question_id": 34347145, "score": 143, "creation_date": 1450401811, "tags": ["python", "pandas", "matplotlib"], "instruction": "Pandas plot doesn't show\n\nWhen using this in a script (not IPython), nothing happens, i.e. the plot window doesn't appear : import numpy as np import pandas as pd ts = pd.Series(np.random.randn(1000), index=pd.date_range('1/1/2000', periods=1000)) ts.plot() Even when adding time.sleep(5), there is still nothing. Why? Is there a way to do it, without having to manually call matplotlib ?", "output": "Once you have made your plot, you need to tell matplotlib to show it. The usual way to do things is to import matplotlib.pyplot and call show from there: import numpy as np import pandas as pd import matplotlib.pyplot as plt ts = pd.Series(np.random.randn(1000), index=pd.date_range('1/1/2000', periods=1000)) ts.plot() plt.show() In older versions of pandas, you were able to find a backdoor to matplotlib, as in the example below. NOTE: This no longer works in modern versions of pandas, and I still recommend importing matplotlib separately, as in the example above. import numpy as np import pandas as pd ts = pd.Series(np.random.randn(1000), index=pd.date_range('1/1/2000', periods=1000)) ts.plot() pd.tseries.plotting.pylab.show() But all you are doing there is finding somewhere that matplotlib has been imported in pandas, and calling the same show function from there. Are you trying to avoid calling matplotlib in an effort to speed things up? 
If so then you are really not speeding anything up, since pandas already imports pyplot: python -mtimeit -s 'import pandas as pd' 100000000 loops, best of 3: 0.0122 usec per loop python -mtimeit -s 'import pandas as pd; import matplotlib.pyplot as plt' 100000000 loops, best of 3: 0.0125 usec per loop Finally, the reason the example you linked in comments doesn't need the call to matplotlib is because it is being run interactively in an iPython notebook, not in a script."} +{"question_id": 1479776, "score": 143, "creation_date": 1253916267, "tags": ["python", "django"], "instruction": "\"Too many values to unpack\" Exception\n\nI'm working on a project in Django and I've just started trying to extend the User model in order to make user profiles. Unfortunately, I've run into a problem: Every time I try to get the user's profile inside of a template (user.get_template.lastIP, for example), I get the following error: Environment: Request Method: GET Request URL: http://localhost:8000/ Django Version: 1.1 Python Version: 2.6.1 Template error: In template /path/to/base.tpl, error at line 19 Caught an exception while rendering: too many values to unpack 19 : Hello, {{user.username}} ({{ user.get_profile.rep}}). How's it goin? Logout Exception Type: TemplateSyntaxError at / Exception Value: Caught an exception while rendering: too many values to unpack Any ideas as to what's going on or what I'm doing wrong?", "output": "That exception means that you are trying to unpack a tuple, but the tuple has too many values with respect to the number of target variables. For example: this works, and prints 1, then 2, then 3 def returnATupleWithThreeValues(): return (1,2,3) a,b,c = returnATupleWithThreeValues() print a print b print c But this raises your error def returnATupleWithThreeValues(): return (1,2,3) a,b = returnATupleWithThreeValues() print a print b raises Traceback (most recent call last): File \"c.py\", line 3, in ? a,b = returnATupleWithThreeValues() ValueError: too many values to unpack Now, the reason why this happens in your case, I don't know, but maybe this answer will point you in the right direction."} +{"question_id": 21095654, "score": 143, "creation_date": 1389628686, "tags": ["python", "null", "nonetype"], "instruction": "What is a 'NoneType' object?\n\nI'm getting this error when I run my python script: TypeError: cannot concatenate 'str' and 'NoneType' objects I'm pretty sure the 'str' means string, but I dont know what a 'NoneType' object is. My script craps out on the second line, I know the first one works because the commands from that line are in my asa as I would expect. At first I thought it may be because I'm using variables and user input inside send_command. Everything in 'CAPS' are variables, everything in 'lower case' is input from 'parser.add_option' options. I'm using pexpect, and optparse send_command(child, SNMPGROUPCMD + group + V3PRIVCMD) send_command(child, SNMPSRVUSRCMD + snmpuser + group + V3AUTHCMD + snmphmac + snmpauth + PRIVCMD + snmpencrypt + snmppriv)", "output": "NoneType is the type for the None object, which is an object that indicates no value. None is the return value of functions that \"don't return anything\". It is also a common default return value for functions that search for something and may or may not find it; for example, it's returned by re.search when the regex doesn't match, or dict.get when the key has no entry in the dict. You cannot add None to strings or other objects. One of your variables is None, not a string. 
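You can reproduce the failure in isolation with made-up names; a minimal sketch (Python 3 words the message slightly differently, but the cause is identical):

>>> group = None    # e.g. a function that never reached a return statement
>>> 'snmp-server group ' + group
Traceback (most recent call last):
  ...
TypeError: cannot concatenate 'str' and 'NoneType' objects
>>> 'snmp-server group ' + (group or '')    # one defensive fix: fall back to an empty string
'snmp-server group '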
Maybe you forgot to return in one of your functions, or maybe the user didn't provide a command-line option and optparse gave you None for that option's value. When you try to add None to a string, you get that exception: send_command(child, SNMPGROUPCMD + group + V3PRIVCMD) One of group or SNMPGROUPCMD or V3PRIVCMD has None as its value."} +{"question_id": 50872498, "score": 143, "creation_date": 1529054197, "tags": ["python", "python-3.x", "dictionary", "python-3.7", "ordereddict"], "instruction": "Will OrderedDict become redundant in Python 3.7?\n\nFrom the Python 3.7 changelog: the insertion-order preservation nature of dict objects has been declared to be an official part of the Python language spec. Would this mean that OrderedDict will become redundant? The only use I can think of it will be to maintain backwards compatibility with older versions of Python which don't preserve insertion-order for normal dictionaries.", "output": "No it won't become redundant in Python 3.7 because OrderedDict is not just a dict that retains insertion order, it also offers an order dependent method, OrderedDict.move_to_end(), and supports reversed() iteration*. Moreover, equality comparisons with OrderedDict are order sensitive and this is still not the case for dict in Python 3.7, for example: >>> OrderedDict([(1,1), (2,2)]) == OrderedDict([(2,2), (1,1)]) False >>> dict([(1,1), (2,2)]) == dict([(2,2), (1,1)]) True Two relevant questions here and here. * Support for reversed() iteration of regular Python dict is added for Python 3.8, see issue33462"} +{"question_id": 21892989, "score": 143, "creation_date": 1392845801, "tags": ["python", "python-3.x"], "instruction": "How can I simulate 2.x's tuple unpacking for lambda parameters, using 3.x?\n\nIn Python 2, I can write: In [5]: points = [ (1,2), (2,3)] In [6]: min(points, key=lambda (x, y): (x*x + y*y)) Out[6]: (1, 2) But that is not supported in 3.x: File \"<stdin>\", line 1 min(points, key=lambda (x, y): (x*x + y*y)) ^ SyntaxError: invalid syntax The straightforward workaround is to index explicitly into the tuple that was passed: >>> min(points, key=lambda p: p[0]*p[0] + p[1]*p[1]) (1, 2) This is very ugly. If the lambda were a function, I could do def some_name_to_think_of(p): x, y = p return x*x + y*y But because the lambda only supports a single expression, it's not possible to put the x, y = p part into it. How else can I work around this limitation?", "output": "No, there is no other way. You covered it all. The way to go would be to raise this issue on the Python ideas mailing list, but be prepared to argue a lot over there to gain some traction. Actually, just not to say \"there is no way out\", a third way could be to implement one more level of lambda calling just to unfold the parameters - but that would be at once more inefficient and harder to read than your two suggestions: min(points, key=lambda p: (lambda x,y: (x*x + y*y))(*p)) Python 3.8 update Since the release of Python 3.8, PEP 572 \u2014 assignment expressions \u2014 have been available as a tool. So, if one uses a trick to execute multiple expressions inside a lambda - I usually do that by creating a tuple and just returning the last component of it, it is possible to do the following: >>> a = lambda p:(x:=p[0], y:=p[1], x ** 2 + y ** 2)[-1] >>> a((3,4)) 25 One should keep in mind that this kind of code will seldom be more readable or practical than having a full function. 
Still, there are possible uses - if there are various one-liners that would operate on this point, it could be worth to have a namedtuple, and use the assignment expression to effectively \"cast\" the incoming sequence to the namedtuple: >>> from collections import namedtuple >>> point = namedtuple(\"point\", \"x y\") >>> b = lambda s: (p:=point(*s), p.x ** 2 + p.y ** 2)[-1]"} +{"question_id": 3025162, "score": 142, "creation_date": 1276279996, "tags": ["python", "statistics", "combinations"], "instruction": "Statistics: combinations in Python\n\nI need to compute combinatorials (nCr) in Python but cannot find the function to do that in math, numpy or stat libraries. Something like a function of the type: comb = calculate_combinations(n, r) I need the number of possible combinations, not the actual combinations, so itertools.combinations does not interest me. Finally, I want to avoid using factorials, as the numbers I'll be calculating the combinations for can get too big and the factorials are going to be monstrous. This seems like a REALLY easy to answer question, however I am being drowned in questions about generating all the actual combinations, which is not what I want.", "output": "Updated answer in 2023: Use the math.comb function, which exists since Python 3.8 and has gotten much faster in 3.11. Old answer: See scipy.special.comb (scipy.misc.comb in older versions of scipy). When exact is False, it uses the gammaln function to obtain good precision without taking much time. In the exact case it returns an arbitrary-precision integer, which might take a long time to compute."} +{"question_id": 6999565, "score": 142, "creation_date": 1312907482, "tags": ["python", "python-3.x"], "instruction": "Python, HTTPS GET with basic authentication\n\nIm trying to do a HTTPS GET with basic authentication using python. Im very new to python and the guides seem to use diffrent librarys to do things. (http.client, httplib and urllib). Can anyone show me how its done? How can you tell the standard library to use?", "output": "In Python 3 the following will work. I am using the lower level http.client from the standard library. Also check out section 2 of rfc2617 for details of basic authorization. This code won't check the certificate is valid, but will set up a https connection. See the http.client docs on how to do that. from http.client import HTTPSConnection from base64 import b64encode # Authorization token: we need to base 64 encode it # and then decode it to acsii as python 3 stores it as a byte string def basic_auth(username, password): token = b64encode(f\"{username}:{password}\".encode('utf-8')).decode(\"ascii\") return f'Basic {token}' username = \"user_name\" password = \"password\" #This sets up the https connection c = HTTPSConnection(\"www.google.com\") #then connect headers = { 'Authorization' : basic_auth(username, password) } c.request('GET', '/', headers=headers) #get the response back res = c.getresponse() # at this point you could check the status etc # this gets the page text data = res.read()"} +{"question_id": 902408, "score": 142, "creation_date": 1243110033, "tags": ["python", "sql"], "instruction": "How can I use variables in an SQL statement in Python?\n\nI have the following Python code: cursor.execute(\"INSERT INTO table VALUES var1, var2, var3,\") where var1 is an integer. var2 and var3 are strings. 
How can I write the variable names without Python including them as part of the query text?", "output": "cursor.execute(\"INSERT INTO table VALUES (%s, %s, %s)\", (var1, var2, var3)) Note that the parameters are passed as a tuple, (a, b, c). If you're passing a single parameter, the tuple needs to end with a comma, (a,). The database API does proper escaping and quoting of variables. Be careful not to use the string formatting operator (%), because It does not do any escaping or quoting. It is prone to uncontrolled string format attacks e.g. SQL injection."} +{"question_id": 487971, "score": 142, "creation_date": 1233155492, "tags": ["python", "module", "package"], "instruction": "Is there a standard way to list names of Python modules in a package?\n\nIs there a straightforward way to list the names of all modules in a package, without using __all__? For example, given this package: /testpkg /testpkg/__init__.py /testpkg/modulea.py /testpkg/moduleb.py I'm wondering if there is a standard or built-in way to do something like this: >>> package_contents(\"testpkg\") ['modulea', 'moduleb'] The manual approach would be to iterate through the module search paths in order to find the package's directory. One could then list all the files in that directory, filter out the uniquely-named py/pyc/pyo files, strip the extensions, and return that list. But this seems like a fair amount of work for something the module import mechanism is already doing internally. Is that functionality exposed anywhere?", "output": "Maybe this will do what you're looking for? import imp import os MODULE_EXTENSIONS = ('.py', '.pyc', '.pyo') def package_contents(package_name): file, pathname, description = imp.find_module(package_name) if file: raise ImportError('Not a package: %r', package_name) # Use a set because some may be both source and compiled. return set([os.path.splitext(module)[0] for module in os.listdir(pathname) if module.endswith(MODULE_EXTENSIONS)])"} +{"question_id": 25212986, "score": 142, "creation_date": 1407535920, "tags": ["python", "pandas", "seaborn", "facet-grid", "lmplot"], "instruction": "How to set the xlim and ylim of a FacetGrid\n\nI'm using sns.lmplot to plot a linear regression, dividing my dataset into two groups with a categorical variable. For both x and y, I'd like to manually set the lower bound on both plots, but leave the upper bound at the Seaborn default. Here's a simple example: import pandas as pd import seaborn as sns import numpy as np n = 200 np.random.seed(2014) base_x = np.random.rand(n) base_y = base_x * 2 errors = np.random.uniform(size=n) y = base_y + errors df = pd.DataFrame({'X': base_x, 'Y': y, 'Z': ['A','B']*(100)}) mask_for_b = df.Z == 'B' df.loc[mask_for_b,['X','Y']] = df.loc[mask_for_b,] *2 sns.lmplot('X','Y',df,col='Z',sharex=False,sharey=False) This outputs the following: But in this example, I'd like the xlim and the ylim to be (0,*) . I tried using sns.plt.ylim and sns.plt.xlim but those only affect the right-hand plot. Example: sns.plt.ylim(0,) sns.plt.xlim(0,) How can I access the xlim and ylim for each plot in the FacetGrid?", "output": "The lmplot function returns a FacetGrid instance. This object has a method called set, to which you can pass key=value pairs and they will be set on each Axes object in the grid. Secondly, you can set only one side of an Axes limit in matplotlib by passing None for the value you want to remain as the default. 
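The second point is plain matplotlib behaviour, so a minimal sketch without seaborn illustrates it on its own:

import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([1, 2, 3], [1, 4, 9])
ax.set_ylim(0, None)   # pin the bottom of the y axis at 0, keep the autoscaled top
plt.show()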
Putting these together, we have: g = sns.lmplot('X', 'Y', df, col='Z', sharex=False, sharey=False) g.set(ylim=(0, None)) Update Positional arguments, sharex and sharey are deprecate beginning in seaborn 0.11 g = sns.lmplot(x='X', y='Y', data=df, col='Z', facet_kws={'sharey': False, 'sharex': False}) g.set(ylim=(0, None))"} +{"question_id": 53657215, "score": 142, "creation_date": 1544119209, "tags": ["python", "google-chrome", "selenium-webdriver", "selenium-chromedriver", "google-chrome-headless"], "instruction": "How to run headless Chrome with Selenium in Python?\n\nI'm trying some stuff out with selenium, and I really want my script to run quickly. I thought that running my script with headless Chrome would make it faster. First, is that assumption correct, or does it not matter if I run my script with a headless driver? I want headless Chrome to work, but somehow it isn't working correctly. I tried different things, and most suggested that it would work as said here in the October update: How to configure ChromeDriver to initiate Chrome browser in Headless mode through Selenium? But when I tried that, I saw weird console output, and it still doesn't seem to work. Any tips appreciated.", "output": "To run chrome-headless just add --headless via chrome_options.add_argument, e.g.: from selenium import webdriver from selenium.webdriver.chrome.options import Options chrome_options = Options() # chrome_options.add_argument(\"--disable-extensions\") # chrome_options.add_argument(\"--disable-gpu\") # chrome_options.add_argument(\"--no-sandbox\") # linux only chrome_options.add_argument(\"--headless=new\") # for Chrome >= 109 # chrome_options.add_argument(\"--headless\") # chrome_options.headless = True # also works driver = webdriver.Chrome(options=chrome_options) start_url = \"https://duckgo.com\" driver.get(start_url) print(driver.page_source.encode(\"utf-8\")) # b'<!DOCTYPE html><html xmlns=\"http://www.... driver.quit() So my thought is that running it with headless chrome would make my script faster. Try using chrome options like --disable-extensions or --disable-gpu and benchmark it, but I wouldn't count with substantial improvement. References: headless-chrome"} +{"question_id": 3022013, "score": 142, "creation_date": 1276252059, "tags": ["python", "path", "python-3.x"], "instruction": "Windows can't find the file on subprocess.call()\n\nI am getting the following error: WindowsError: [Error 2] The system cannot find the file specified My code is: subprocess.call([\"<<executable file found in PATH>>\"]) Windows 7, 64 bit. Python 3.x latest, stable. Any ideas? Thanks,", "output": "When the command is a shell built-in, add a shell=True to the call. E.g. for dir you would type: import subprocess subprocess.call('dir', shell=True) To quote from the documentation: The only time you need to specify shell=True on Windows is when the command you wish to execute is built into the shell (e.g. dir or copy). You do not need shell=True to run a batch file or console-based executable."} +{"question_id": 4804005, "score": 142, "creation_date": 1296041111, "tags": ["python", "matplotlib"], "instruction": "Matplotlib figure facecolor (background color)\n\nCan someone please explain why the code below does not work when setting the facecolor of the figure? import matplotlib.pyplot as plt # create figure instance fig1 = plt.figure(1) fig1.set_figheight(11) fig1.set_figwidth(8.5) rect = fig1.patch rect.set_facecolor('red') # works with plt.show(). 
# Does not work with plt.savefig(\"trial_fig.png\") ax = fig1.add_subplot(1,1,1) x = 1, 2, 3 y = 1, 4, 9 ax.plot(x, y) # plt.show() # Will show red face color set above using rect.set_facecolor('red') plt.savefig(\"trial_fig.png\") # The saved trial_fig.png DOES NOT have the red facecolor. # plt.savefig(\"trial_fig.png\", facecolor='red') # Here the facecolor is red. When I specify the height and width of the figure using fig1.set_figheight(11) fig1.set_figwidth(8.5) these are picked up by the command plt.savefig(\"trial_fig.png\"). However, the facecolor setting is not picked up. Why? Thanks for your help.", "output": "It's because savefig overrides the facecolor for the background of the figure. (This is deliberate, actually... The assumption is that you'd probably want to control the background color of the saved figure with the facecolor kwarg to savefig. It's a confusing and inconsistent default, though!) The easiest workaround is just to do fig.savefig('whatever.png', facecolor=fig.get_facecolor(), edgecolor='none') (I'm specifying the edgecolor here because the default edgecolor for the actual figure is white, which will give you a white border around the saved figure)"} +{"question_id": 20044559, "score": 142, "creation_date": 1384767811, "tags": ["python", "python-2.7", "tkinter", "pip", "easy-install"], "instruction": "How to pip or easy_install tkinter on Windows\n\nIDLE is throwing errors that and says tkinter can't be imported. Is there a simple way to install tkinter via pip or easy_install? There seem to be a lot of package names flying around for this... This and other assorted variations with tkinter-pypy aren't working. This is what I run: pip install python-tk I'm on Windows with Python 2.7 and I don't have apt-get or other system package managers.", "output": "Well I can see two solutions here: 1) Follow the Docs-Tkinter install for Python (for Windows): Tkinter (and, since Python 3.1, ttk) are included with all standard Python distributions. It is important that you use a version of Python supporting Tk 8.5 or greater, and ttk. We recommend installing the \"ActivePython\" distribution from ActiveState, which includes everything you'll need. In your web browser, go to Activestate.com, and follow along the links to download the Community Edition of ActivePython for Windows. Make sure you're downloading a 3.1 or newer version, not a 2.x version. Run the installer, and follow along. You'll end up with a fresh install of ActivePython, located in, e.g. C:\\python32. From a Windows command prompt, or the Start Menu's \"Run...\" command, you should then be able to run a Python shell via: % C:\\python32\\python This should give you the Python command prompt. From the prompt, enter these two commands: >>> import tkinter >>> tkinter._test() This should pop up a small window; the first line at the top of the window should say \"This is Tcl/Tk version 8.5\"; make sure it is not 8.4! 2) Uninstall 64-bit Python and install 32 bit Python."} +{"question_id": 18739239, "score": 142, "creation_date": 1378896872, "tags": ["python", "python-2.7", "stdout", "stderr", "os.system"], "instruction": "Python: How to get stdout after running os.system?\n\nI want to get the stdout in a variable after running the os.system call. Lets take this line as an example: batcmd=\"dir\" result = os.system(batcmd) result will contain the error code (stderr 0 under Windows or 1 under some linux for the above example). 
How can I get the stdout for the above command without using redirection in the executed command?", "output": "If all you need is the stdout output, then take a look at subprocess.check_output(): import subprocess batcmd=\"dir\" result = subprocess.check_output(batcmd, shell=True) Because you were using os.system(), you'd have to set shell=True to get the same behaviour. You do want to heed the security concerns about passing untrusted arguments to your shell. If you need to capture stderr as well, simply add stderr=subprocess.STDOUT to the call: result = subprocess.check_output([batcmd], stderr=subprocess.STDOUT) to redirect the error output to the default output stream. If you know that the output is text, add text=True to decode the returned bytes value with the platform default encoding; use encoding=\"...\" instead if that codec is not correct for the data you receive."} +{"question_id": 6167923, "score": 142, "creation_date": 1306674867, "tags": ["python", "scope"], "instruction": "Block scope in Python\n\nWhen you code in other languages, you will sometimes create a block scope, like this: statement ... statement { statement ... statement } statement ... statement One purpose (of many) is to improve code readability: to show that certain statements form a logical unit or that certain local variables are used only in that block. Is there an idiomatic way of doing the same thing in Python?", "output": "No, there is no language support for creating block scope. The following constructs create scope: module class function (incl. lambda) generator expression comprehensions (dict, set, list (in Python 3))"} +{"question_id": 30986751, "score": 142, "creation_date": 1434996023, "tags": ["python", "set", "subtraction", "set-difference"], "instruction": "Set difference versus set subtraction\n\nWhat distinguishes - and .difference() on sets? Obviously the syntax is not the same. One is a binary operator, and the other is an instance method. What else? s1 = set([1,2,3]) s2 = set([3,4,5]) >>> s1 - s2 set([1, 2]) >>> s1.difference(s2) set([1, 2])", "output": "set.difference, set.union... can take any iterable as the second arg while both need to be sets to use -, there is no difference in the output. Operation Equivalent Result s.difference(t) s - t new set with elements in s but not in t With .difference you can do things like: s1 = set([1,2,3]) print(s1.difference(*[[3],[4],[5]])) {1, 2} It is also more efficient when creating sets using the *(iterable,iterable) syntax as you don't create intermediary sets, you can see some comparisons here"} +{"question_id": 52335970, "score": 142, "creation_date": 1536942653, "tags": ["python"], "instruction": "How to fix \"SyntaxWarning: invalid escape sequence\" in Python?\n\nI'm getting lots of warnings like this in Python: DeprecationWarning: invalid escape sequence \\A orcid_regex = '\\A[0-9]{4}-[0-9]{4}-[0-9]{4}-[0-9]{3}[0-9X]\\Z' DeprecationWarning: invalid escape sequence \\/ AUTH_TOKEN_PATH_PATTERN = '^\\/api\\/groups' DeprecationWarning: invalid escape sequence \\ \"\"\" DeprecationWarning: invalid escape sequence \\. DOI_PATTERN = re.compile('(https?://(dx\\.)?doi\\.org/)?10\\.[0-9]{4,}[.0-9]*/.*') <unknown>:20: DeprecationWarning: invalid escape sequence \\( <unknown>:21: DeprecationWarning: invalid escape sequence \\( What do they mean? And how can I fix them? 
In Python 3.12+ the error message is changed from a DeprecationWarning to a SyntaxWarning (changelog): SyntaxWarning: invalid escape sequence '\\A'", "output": "\\ is the escape character in Python string literals. For example if you want to put a tab character in a string you may use: >>> print(\"foo \\t bar\") foo bar If you want to put a literal \\ in a string you may use \\\\: >>> print(\"foo \\\\ bar\") foo \\ bar Or you may use a \"raw string\": >>> print(r\"foo \\ bar\") foo \\ bar You can't just go putting backslashes in string literals whenever you want one. A backslash is only allowed when part of one of the valid escape sequences, and it will cause a DeprecationWarning (< 3.12) or a SyntaxWarning (3.12+) otherwise. For example \\A isn't a valid escape sequence: $ python3.6 -Wd -c '\"\\A\"' <string>:1: DeprecationWarning: invalid escape sequence \\A $ python3.12 -c '\"\\A\"' <string>:1: SyntaxWarning: invalid escape sequence '\\A' If your backslash sequence does accidentally match one of Python's escape sequences, but you didn't mean it to, that's even worse because the data is just corrupted without any error or warning. So you should always use raw strings or \\\\. It's important to remember that a string literal is still a string literal even if that string is intended to be used as a regular expression. Python's regular expression syntax supports many special sequences that begin with \\. For example \\A matches the start of a string. But \\A is not valid in a Python string literal! This is invalid: my_regex = \"\\Afoo\" Instead you should do this: my_regex = r\"\\Afoo\" Docstrings are another one to remember: docstrings are string literals too, and invalid \\ sequences are invalid in docstrings too! Use r\"\"\"raw strings\"\"\" for docstrings if they must contain \\."} +{"question_id": 1517347, "score": 142, "creation_date": 1254689280, "tags": ["python", "algorithm", "sorting", "python-internals"], "instruction": "What algorithm does Python's built-in sort() method use?\n\nWhat algorithm is the built in sort() method in Python using? Is it possible to have a look at the code for that method?", "output": "Sure! The code's here: listobject.c, starting with function islt and proceeding for QUITE a while ;-). As the file extension suggests, it's C code. You'll also want to read this for a textual explanation, results, etc etc: listsort.txt If you prefer reading Java code than C code, you could look at Joshua Bloch's implementation of timsort in and for Java (Joshua's also the guy who implemented, in 1997, the modified mergesort that's still used in Java, and one can hope that Java will eventually switch to his recent port of timsort). Some explanation of the Java port of timsort is in this request for enhancement1, the diff is here2 (with pointers to all needed files), the key file is here3 -- FWIW, while I'm a better C programmer than Java programmer, in this case I find Joshua's Java code more readable overall than Tim's C code ;-). Editor's notes Archive link: Bug ID: 6804124 - Replace \"modified mergesort\" in java.util.Arrays.sort with timsort Archive link: jdk7/tl/jdk: changeset 1423:bfd7abda8f79 (6804124) Dead link. 
This may be the modern equivalent but I don't know Java: TimSort.java at master - openjdk/jdk"} +{"question_id": 20105364, "score": 141, "creation_date": 1384976370, "tags": ["python", "matplotlib", "kernel-density", "density-plot"], "instruction": "How can I make a scatter plot colored by density?\n\nI'd like to make a scatter plot where each point is colored by the spatial density of nearby points. I've come across a very similar question, which shows an example of this using R: R Scatter Plot: symbol color represents number of overlapping points What's the best way to accomplish something similar in python using matplotlib?", "output": "In addition to hist2d or hexbin as @askewchan suggested, you can use the same method that the accepted answer in the question you linked to uses. If you want to do that: import numpy as np import matplotlib.pyplot as plt from scipy.stats import gaussian_kde # Generate fake data x = np.random.normal(size=1000) y = x * 3 + np.random.normal(size=1000) # Calculate the point density xy = np.vstack([x,y]) z = gaussian_kde(xy)(xy) fig, ax = plt.subplots() ax.scatter(x, y, c=z, s=100) plt.show() If you'd like the points to be plotted in order of density so that the densest points are always on top (similar to the linked example), just sort them by the z-values. I'm also going to use a smaller marker size here as it looks a bit better: import numpy as np import matplotlib.pyplot as plt from scipy.stats import gaussian_kde # Generate fake data x = np.random.normal(size=1000) y = x * 3 + np.random.normal(size=1000) # Calculate the point density xy = np.vstack([x,y]) z = gaussian_kde(xy)(xy) # Sort the points by density, so that the densest points are plotted last idx = z.argsort() x, y, z = x[idx], y[idx], z[idx] fig, ax = plt.subplots() ax.scatter(x, y, c=z, s=50) plt.show()"} +{"question_id": 11026959, "score": 141, "creation_date": 1339650435, "tags": ["python"], "instruction": "Writing a dict to txt file and reading it back?\n\nI am trying to write a dictionary to a txt file. Then read the dict values by typing the keys with raw_input. I feel like I am just missing one step but I have been looking for a while now. I get this error File \"name.py\", line 24, in reading print whip[name] TypeError: string indices must be integers, not str My code: #!/usr/bin/env python from sys import exit class Person(object): def __init__(self): self.name = \"\" self.address = \"\" self.phone = \"\" self.age = \"\" self.whip = {} def writing(self): self.whip[p.name] = p.age, p.address, p.phone target = open('deed.txt', 'a') target.write(str(self.whip)) print self.whip def reading(self): self.whip = open('deed.txt', 'r').read() name = raw_input(\"> \") if name in self.whip: print self.whip[name] p = Person() while True: print \"Type:\\n\\t*read to read data base\\n\\t*write to write to data base\\n\\t*exit to exit\" action = raw_input(\"\\n> \") if \"write\" in action: p.name = raw_input(\"Name?\\n> \") p.phone = raw_input(\"Phone Number?\\n> \") p.age = raw_input(\"Age?\\n> \") p.address = raw_input(\"Address?\\n>\") p.writing() elif \"read\" in action: p.reading() elif \"exit\" in action: exit(0)", "output": "Your code is almost right! You are right, you are just missing one step. When you read in the file, you are reading it as a string; but you want to turn the string back into a dictionary. The error message you saw was because self.whip was a string, not a dictionary. So you need to convert the string to a dictionary. 
Example Here is the simplest way: feed the string into eval(). Like so: def reading(self): s = open('deed.txt', 'r').read() self.whip = eval(s) You can do it in one line, but I think it looks messy this way: def reading(self): self.whip = eval(open('deed.txt', 'r').read()) But eval() is sometimes not recommended. The problem is that eval() will evaluate any string, and if someone tricked you into running a really tricky string, something bad might happen. In this case, you are just running eval() on your own file, so it should be okay. But because eval() is useful, someone made an alternative to it that is safer. This is called literal_eval and you get it from a Python module called ast. import ast def reading(self): s = open('deed.txt', 'r').read() self.whip = ast.literal_eval(s) ast.literal_eval() will only evaluate strings that turn into the basic Python types, so there is no way that a tricky string can do something bad on your computer. EDIT Actually, best practice in Python is to use a with statement to make sure the file gets properly closed. Rewriting the above to use a with statement: import ast def reading(self): with open('deed.txt', 'r') as f: s = f.read() self.whip = ast.literal_eval(s) In the most popular Python, known as \"CPython\", you usually don't need the with statement as the built-in \"garbage collection\" features will figure out that you are done with the file and will close it for you. But other Python implementations, like \"Jython\" (Python for the Java VM) or \"PyPy\" (a really cool experimental system with just-in-time code optimization) might not figure out to close the file for you. It's good to get in the habit of using with, and I think it makes the code pretty easy to understand."} +{"question_id": 28906859, "score": 141, "creation_date": 1425672784, "tags": ["python", "python-3.x", "urllib"], "instruction": "'module' has no attribute 'urlencode'\n\nWhen I try to follow the Python Wiki's example related to URL encoding: >>> import urllib >>> params = urllib.urlencode({'spam': 1, 'eggs': 2, 'bacon': 0}) >>> f = urllib.urlopen(\"http://www.musi-cal.com/cgi-bin/query\", params) >>> print f.read() An error is raised on the second line: Traceback (most recent call last): File \"<stdin>\", line 1, in <module> AttributeError: 'module' object has no attribute 'urlencode' What am I missing?", "output": "urllib has been split up in Python 3. The urllib.urlencode() function is now urllib.parse.urlencode(), the urllib.urlopen() function is now urllib.request.urlopen()."} +{"question_id": 42982143, "score": 141, "creation_date": 1490287494, "tags": ["python", "ssl", "debian", "python-requests", "debian-based"], "instruction": "Python Requests - How to use system ca-certificates (debian/ubuntu)?\n\nI've installed a self-signed root ca cert into debian's /usr/share/ca-certificates/local and installed them with sudo dpkg-reconfigure ca-certificates. At this point true | gnutls-cli mysite.local is happy, and true | openssl s_client -connect mysite.local:443 is happy, but python2 and python3 requests module insists it is not happy with the cert. 
python2: Traceback (most recent call last): File \"<string>\", line 1, in <module> File \"/usr/local/lib/python2.7/site-packages/requests/api.py\", line 70, in get return request('get', url, params=params, **kwargs) File \"/usr/local/lib/python2.7/site-packages/requests/api.py\", line 56, in request return session.request(method=method, url=url, **kwargs) File \"/usr/local/lib/python2.7/site-packages/requests/sessions.py\", line 488, in request resp = self.send(prep, **send_kwargs) File \"/usr/local/lib/python2.7/site-packages/requests/sessions.py\", line 609, in send r = adapter.send(request, **kwargs) File \"/usr/local/lib/python2.7/site-packages/requests/adapters.py\", line 497, in send raise SSLError(e, request=request) requests.exceptions.SSLError: (\"bad handshake: Error([('SSL routines', 'ssl3_get_server_certificate', 'certificate verify failed')],)\",) python3 Traceback (most recent call last): File \"<string>\", line 1, in <module> File \"/usr/local/bin/python3.5/site-packages/requests/api.py\", line 70, in get return request('get', url, params=params, **kwargs) File \"/usr/local/bin/python3.5/site-packages/requests/api.py\", line 56, in request return session.request(method=method, url=url, **kwargs) File \"/usr/local/bin/python3.5/site-packages/requests/sessions.py\", line 488, in request resp = self.send(prep, **send_kwargs) File \"/usr/local/bin/python3.5/site-packages/requests/sessions.py\", line 609, in send r = adapter.send(request, **kwargs) File \"/usr/local/bin/python3.5/site-packages/requests/adapters.py\", line 497, in send raise SSLError(e, request=request) requests.exceptions.SSLError: (\"bad handshake: Error([('SSL routines', 'ssl3_get_server_certificate', 'certificate verify failed')],)\",) Why does python ignore the system ca-certificates bundle, and how do I integrate it?", "output": "From https://stackoverflow.com/a/33717517/1695680 To make python requests use the system ca-certificates bundle, it needs to be told to use it over its own embedded bundle export REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt Requests embeds its bundles here, for reference: /usr/local/lib/python2.7/site-packages/requests/cacert.pem /usr/lib/python3/dist-packages/requests/cacert.pem Or in newer versions use additional package to obtain certificates from: https://github.com/certifi/python-certifi To verify from which file certificates are loaded, you can try: Python 3.8.5 (default, Jul 28 2020, 12:59:40) >>> import certifi >>> certifi.where() '/etc/ssl/certs/ca-certificates.crt'"} +{"question_id": 44951456, "score": 141, "creation_date": 1499350504, "tags": ["python", "python-3.x", "python-3.6", "pycrypto"], "instruction": "Pip error: Microsoft Visual C++ 14.0 is required\n\nI just ran the following command: pip install -U steem and the installation worked well until it failed to install pycrypto. Afterwards I did the pip install cryptography command because I thought it was the missing package. So my question is, how I can install steem without the pycrypto-error (or the pycrypto-package in addition) and how to uninstall the cryptography-Package which I don't need. (I'm using Windows 7 and Python 3) Requirement already up-to-date: python-dateutil in c:\\users\\***\\appdata\\lo cal\\programs\\python\\python36\\lib\\site-packages (from dateparser->maya->steem) ... 
Installing collected packages: urllib3, idna, chardet, certifi, requests, pycryp to, funcy, w3lib, voluptuous, diff-match-patch, scrypt, prettytable, appdirs, la ngdetect, ruamel.yaml, humanize, tzlocal, regex, dateparser, pytzdata, pendulum, maya, ecdsa, pylibscrypt, ujson, toolz, steem Running setup.py install for pycrypto ... error Complete output from command c:\\users\\***\\appdata\\local\\programs\\pytho n\\python36\\python.exe -u -c \"import setuptools, tokenize;__file__='C:\\\\Users\\\\ ***~1\\\\AppData\\\\Local\\\\Temp\\\\pip-build-k6flhu5k\\\\pycrypto\\\\setup.py';f=getattr( tokenize, 'open', open)(__file__);code=f.read().replace('\\r\\n', '\\n');f.close(); exec(compile(code, __file__, 'exec'))\" install --record C:\\Users\\***N~1\\AppDat a\\Local\\Temp\\pip-igpkll6u-record\\install-record.txt --single-version-externally- managed --compile: running install running build running build_py ... building 'Crypto.Random.OSRNG.winrandom' extension error: Microsoft Visual C++ 14.0 is required. Get it with \"Microsoft Visual C++ Build Tools\": http://landinghub.visualstudio.com/visual-cpp-build-tools ---------------------------------------- Command \"c:\\users\\***\\appdata\\local\\programs\\python\\python36\\python.exe -u -c \"import setuptools, tokenize;__file__='C:\\\\Users\\\\***N~1\\\\AppData\\\\Local\\\\ Temp\\\\pip-build-k6flhu5k\\\\pycrypto\\\\setup.py';f=getattr(tokenize, 'open', open)( __file__);code=f.read().replace('\\r\\n', '\\n');f.close();exec(compile(code, __fil e__, 'exec'))\" install --record C:\\Users\\***N~1\\AppData\\Local\\Temp\\pip-igpkll6 u-record\\install-record.txt --single-version-externally-managed --compile\" faile d with error code 1 in C:\\Users\\***N~1\\AppData\\Local\\Temp\\pip-build- k6flhu5k\\p ycrypto\\", "output": "You need to install Microsoft Visual C++ 14.0 to install pycrypto: error: Microsoft Visual C++ 14.0 is required. Get it with \"Microsoft Visual C++ Build Tools\": http://landinghub.visualstudio.com/visual-cpp-build-tools https://visualstudio.microsoft.com/visual-cpp-build-tools/ In the comments you ask which link to use. Use the link to Visual C++ 2015 Build Tools. That will install Visual C++ 14.0 without installing Visual Studio. Alternate link: https://wiki.python.org/moin/WindowsCompilers#Microsoft_Visual_C.2B-.2B-_14.2_standalone:_Build_Tools_for_Visual_Studio_2019_.28x86.2C_x64.2C_ARM.2C_ARM64.29 In the comments you ask about methods of installing pycrypto that do not require installing a compiler. The binaries in the links appear to be for earlier versions of Python than you are using. One link is to a binary in a DropBox account. I do not recommend downloading binary versions of cryptography libraries provided by third parties. The only way to guarantee that you are getting a version of pycrypto that is compatible with your version of Python and has not been built with any backdoors is to build it from the source. After you have installed Visual C++, simply re-run the original command: pip install -U steem To find out what the various install options mean, run this command: pip help install The help for the -U option says -U, --upgrade Upgrade all specified packages to the newest available version. The handling of dependencies depends on the upgrade-strategy used. 
If you do not already have the steem library installed, you can run the command without the -U option."} +{"question_id": 1885181, "score": 141, "creation_date": 1260493201, "tags": ["python", "string", "escaping"], "instruction": "How to un-escape a backslash-escaped string?\n\nSuppose I have a string which is a backslash-escaped version of another string. Is there an easy way, in Python, to unescape the string? I could, for example, do: >>> escaped_str = '\"Hello,\\\\nworld!\"' >>> raw_str = eval(escaped_str) >>> print raw_str Hello, world! >>> However that involves passing a (possibly untrusted) string to eval() which is a security risk. Is there a function in the standard lib which takes a string and produces a string with no security implications?", "output": "Python 2 >>> print '\"Hello,\\\\nworld!\"'.decode('string_escape') \"Hello, world!\""} +{"question_id": 3674409, "score": 141, "creation_date": 1284015454, "tags": ["python", "arrays", "optimization", "numpy"], "instruction": "How to split/partition a dataset into training and test datasets for, e.g., cross validation?\n\nWhat is a good way to split a NumPy array randomly into training and testing/validation dataset? Something similar to the cvpartition or crossvalind functions in Matlab.", "output": "If you want to split the data set once in two parts, you can use numpy.random.shuffle, or numpy.random.permutation if you need to keep track of the indices (remember to fix the random seed to make everything reproducible): import numpy # x is your dataset x = numpy.random.rand(100, 5) numpy.random.shuffle(x) training, test = x[:80,:], x[80:,:] or import numpy # x is your dataset x = numpy.random.rand(100, 5) indices = numpy.random.permutation(x.shape[0]) training_idx, test_idx = indices[:80], indices[80:] training, test = x[training_idx,:], x[test_idx,:] There are many ways other ways to repeatedly partition the same data set for cross validation. Many of those are available in the sklearn library (k-fold, leave-n-out, ...). sklearn also includes more advanced \"stratified sampling\" methods that create a partition of the data that is balanced with respect to some features, for example to make sure that there is the same proportion of positive and negative examples in the training and test set."} +{"question_id": 29596350, "score": 141, "creation_date": 1428885827, "tags": ["python", "pip", "uninstallation", "conda", "miniconda"], "instruction": "How to uninstall miniconda?\n\nI've installed conda packages as follows: $ wget http://.../miniconda $ bash miniconda $ conda install numpy pandas scipy matplotlib scikit-learn nltk ipython-notebook seaborn I want to uninstall conda because it's messing up my pips and environment. How do I uninstall conda totally? Will it uninstall also my pip managed packages? If so, is there a way to uninstall conda safely without uninstalling packages managed by pip?", "output": "In order to uninstall miniconda, simply remove the miniconda folder, rm -r ~/miniconda/ As for avoiding conflicts between different Python environments, you can use virtual environments. 
In particular, with Miniconda, the following workflow could be used, $ wget https://repo.continuum.io/miniconda/Miniconda3-3.7.0-Linux-x86_64.sh -O ~/miniconda.sh $ bash miniconda $ conda env remove --yes -n new_env # remove the environement new_env if it exists (optional) $ conda create --yes -n new_env pip numpy pandas scipy matplotlib scikit-learn nltk ipython-notebook seaborn python=2 $ activate new_env $ # pip install modules if needed, run python scripts, etc # everything will be installed in the new_env # located in ~/miniconda/envs/new_env $ deactivate"} +{"question_id": 16908236, "score": 141, "creation_date": 1370307782, "tags": ["python", "shell", "inline", "execution"], "instruction": "How to execute Python inline from a bash shell\n\nIs there a Python argument to execute code from the shell without starting up an interactive interpreter or reading from a file? Something similar to: perl -e 'print \"Hi\"'", "output": "This works: python -c 'print(\"Hi\")' Hi From the manual, man python: -c command Specify the command to execute (see next section). This termi- nates the option list (following options are passed as arguments to the command)."} +{"question_id": 6005066, "score": 141, "creation_date": 1305410651, "tags": ["python", "dictionary", "append"], "instruction": "Adding dictionaries together\n\nI have two dictionaries and I'd like to be able to make them one: Something like this pseudo-Python would be nice: dic0 = {'dic0': 0} dic1 = {'dic1': 1} ndic = dic0 + dic1 # ndic would equal {'dic0': 0, 'dic1': 1}", "output": "If you're interested in creating a new dict without using intermediary storage: (this is faster, and in my opinion, cleaner than using dict.items()) dic2 = dict(dic0, **dic1) Or if you're happy to use one of the existing dicts: dic0.update(dic1)"} +{"question_id": 16022915, "score": 141, "creation_date": 1366052734, "tags": ["python", "pytest"], "instruction": "How to execute ipdb.set_trace() at will while running pytest tests\n\nI'm using pytest for my test suite. While catching bugs in complex inter-components test, I would like to place import ipdb; ipdb.set_trace() in the middle of my code to allow me to debug it. However, since pytest traps sys.stdin/sys.stdout ipdb fails. How can I use ipdb while testing with pytest. I'm not interested in jumping to pdb or ipdb after a failure, but to place breaks anywhere in the code and be able to debug it there before the failure occurs.", "output": "The error is raised because pytest captures output by default. You can run pytest with -s option (turn off capture output). For example: py.test -s my_test.py and then in my_test.py: import ipdb; ipdb.set_trace()"} +{"question_id": 8372399, "score": 141, "creation_date": 1322960099, "tags": ["python", "list", "zip"], "instruction": "Zip with list output instead of tuple\n\nWhat is the fastest and most elegant way of doing list of lists from two lists? I have In [1]: a=[1,2,3,4,5,6] In [2]: b=[7,8,9,10,11,12] In [3]: zip(a,b) Out[3]: [(1, 7), (2, 8), (3, 9), (4, 10), (5, 11), (6, 12)] And I'd like to have In [3]: some_method(a,b) Out[3]: [[1, 7], [2, 8], [3, 9], [4, 10], [5, 11], [6, 12]] I was thinking about using map instead of zip, but I don't know if there is some standard library method to put as a first argument. I can def my own function for this, and use map, my question is if there is already implemented something. 
No is also an answer.", "output": "If you are zipping more than 2 lists (or even only 2, for that matter), a readable way would be: [list(a) for a in zip([1,2,3], [4,5,6], [7,8,9])] This uses a list comprehension to apply list to each element (tuple) in the list, converting them into lists."} +{"question_id": 11279331, "score": 141, "creation_date": 1341113212, "tags": ["python", "google-app-engine"], "instruction": "What does the 'u' symbol mean in front of string values?\n\nYes in short i would like to know why am I seeing a u in front of my keys and values. I am rendering a form. The form has check-box for the particular label and one text field for the ip address. I am creating a dictionary with keys being the label which are hardcoded in the list_key and values for the dictionary are taken from the form input (list_value). The dictionary is created but it is preceded by u for some values. here is the sample output for the dictionary: {u'1': {'broadcast': u'on', 'arp': '', 'webserver': '', 'ipaddr': u'', 'dns': ''}} can someone please explain what I am doing wrong. I am not getting the error when i simulate similar method in pyscripter. Any suggestions to improve the code are welcome. Thank you #!/usr/bin/env python import webapp2 import itertools import cgi form =\"\"\" <form method=\"post\"> FIREWALL <br><br> <select name=\"profiles\"> <option value=\"1\">profile 1</option> <option value=\"2\">profile 2</option> <option value=\"3\">profile 3</option> </select> <br><br> Check the box to implement the particular policy <br><br> <label> Allow Broadcast <input type=\"checkbox\" name=\"broadcast\"> </label> <br><br> <label> Allow ARP <input type=\"checkbox\" name=\"arp\"> </label><br><br> <label> Allow Web traffic from external address to internal webserver <input type=\"checkbox\" name=\"webserver\"> </label><br><br> <label> Allow DNS <input type=\"checkbox\" name=\"dns\"> </label><br><br> <label> Block particular Internet Protocol address <input type=\"text\" name=\"ipaddr\"> </label><br><br> <input type=\"submit\"> </form> \"\"\" dictionarymain={} class MainHandler(webapp2.RequestHandler): def get(self): self.response.out.write(form) def post(self): # get the parameters from the form profile = self.request.get('profiles') broadcast = self.request.get('broadcast') arp = self.request.get('arp') webserver = self.request.get('webserver') dns =self.request.get('dns') ipaddr = self.request.get('ipaddr') # Create a dictionary for the above parameters list_value =[ broadcast , arp , webserver , dns, ipaddr ] list_key =['broadcast' , 'arp' , 'webserver' , 'dns' , 'ipaddr' ] #self.response.headers['Content-Type'] ='text/plain' #self.response.out.write(profile) # map two list to a dictionary using itertools adict = dict(zip(list_key,list_value)) self.response.headers['Content-Type'] ='text/plain' self.response.out.write(adict) if profile not in dictionarymain: dictionarymain[profile]= {} dictionarymain[profile]= adict #self.response.headers['Content-Type'] ='text/plain' #self.response.out.write(dictionarymain) def escape_html(s): return cgi.escape(s, quote =True) app = webapp2.WSGIApplication([('/', MainHandler)], debug=True)", "output": "The 'u' in front of the string values means the string is a Unicode string. Unicode is a way to represent more characters than normal ASCII can manage. The fact that you're seeing the u means you're on Python 2 - strings are Unicode by default on Python 3, but on Python 2, the u in front distinguishes Unicode strings. 
The rest of this answer will focus on Python 2. You can create a Unicode string multiple ways: >>> u'foo' u'foo' >>> unicode('foo') # Python 2 only u'foo' But the real reason is to represent something like this (translation here): >>> val = u'\u041e\u0437\u043d\u0430\u043a\u043e\u043c\u044c\u0442\u0435\u0441\u044c \u0441 \u0434\u043e\u043a\u0443\u043c\u0435\u043d\u0442\u0430\u0446\u0438\u0435\u0439' >>> val u'\\u041e\\u0437\\u043d\\u0430\\u043a\\u043e\\u043c\\u044c\\u0442\\u0435\\u0441\\u044c \\u0441 \\u0434\\u043e\\u043a\\u0443\\u043c\\u0435\\u043d\\u0442\\u0430\\u0446\\u0438\\u0435\\u0439' >>> print val \u041e\u0437\u043d\u0430\u043a\u043e\u043c\u044c\u0442\u0435\u0441\u044c \u0441 \u0434\u043e\u043a\u0443\u043c\u0435\u043d\u0442\u0430\u0446\u0438\u0435\u0439 For the most part, Unicode and non-Unicode strings are interoperable on Python 2. There are other symbols you will see, such as the \"raw\" symbol r for telling a string not to interpret backslashes. This is extremely useful for writing regular expressions. >>> 'foo\\\"' 'foo\"' >>> r'foo\\\"' 'foo\\\\\"' Unicode and non-Unicode strings can be equal on Python 2: >>> bird1 = unicode('unladen swallow') >>> bird2 = 'unladen swallow' >>> bird1 == bird2 True but not on Python 3: >>> x = u'asdf' # Python 3 >>> y = b'asdf' # b indicates bytestring >>> x == y False"} +{"question_id": 19976115, "score": 141, "creation_date": 1384427370, "tags": ["python", "django", "django-settings"], "instruction": "What's the difference between `from django.conf import settings` and `import settings` in a Django project\n\nI'm reading up that most people do from django.conf import settings but I don't undertstand the difference to simply doing import settings in a django project file. Can anyone explain the difference?", "output": "import settings will import the first python module named settings.py found in sys.path. Usually (in default django setups) it allows access only to your site defined settings file, which overwrites django default settings (django.conf.global_settings). So, if you try to access a valid django setting not specified in your settings file you will get an error. django.conf.settings is not a file but an object (see source) making an abstraction of the concepts, default settings and your site-specific settings. Django also does other checks when you use from django.conf import settings. You can also find it in the django docs."} +{"question_id": 62528272, "score": 141, "creation_date": 1592892031, "tags": ["python", "python-3.x", "async-await", "python-asyncio"], "instruction": "What does asyncio.create_task() do?\n\nWhat does asyncio.create_task() do? A bit of code that confuses me is this: import asyncio async def counter_loop(x, n): for i in range(1, n + 1): print(f\"Counter {x}: {i}\") await asyncio.sleep(0.5) return f\"Finished {x} in {n}\" async def main(): slow_task = asyncio.create_task(counter_loop(\"Slow\", 4)) fast_coro = counter_loop(\"Fast\", 2) print(\"Awaiting Fast\") fast_val = await fast_coro print(\"Finished Fast\") print(\"Awaiting Slow\") slow_val = await slow_task print(\"Finished Slow\") print(f\"{fast_val}, {slow_val}\") asyncio.run(main()) This outputs: 001 | Awaiting Fast 002 | Counter Fast: 1 003 | Counter Slow: 1 004 | Counter Fast: 2 005 | Counter Slow: 2 006 | Finished Fast 007 | Awaiting Slow 008 | Counter Slow: 3 009 | Counter Slow: 4 010 | Finished Slow 011 | Finished Fast in 2, Finished Slow in 4 I don't understand quite how this is working. 
Shouldn't the slow_task not be able to run until the completion of the fast_coro because it was never used in an asyncio.gather() method? Why do we have to await slow_task? Why is \"Awaiting Slow\" printed after the coroutine appears to have started? What really is a task? I know that what gather is doing is scheduling a task. And create_task supposedly creates a task.", "output": "What does asyncio.create_task() do? It submits the coroutine to run \"in the background\", i.e. concurrently with the current task and all other tasks, switching between them at await points. It returns an awaitable handle called a \"task\" which you can also use to cancel the execution of the coroutine. It's one of the central primitives of asyncio, the asyncio equivalent of starting a thread. (In the same analogy, awaiting the task with await is the equivalent of joining a thread.) Shouldn't the slow_task not be able to run until the completion of the fast_coro No, because you explicitly used create_task to start slow_task in the background. Had you written something like: slow_coro = counter_loop(\"Slow\", 4) fast_coro = counter_loop(\"Fast\", 2) fast_val = await fast_coro ...indeed slow_coro would not run because no one would have yet submitted it to the event loop. But create_task does exactly that: submit it to the event loop for execution concurrently with other tasks, the point of switching being any await. because it was never used in an asyncio.gather method? asyncio.gather is not the only way to achieve concurrency in asyncio. It's just a utility function that makes it easier to wait for a number of coroutines to all complete, and submit them to the event loop at the same time. create_task does just the submitting, it should have probably been called start_coroutine or something like that. Why do we have to await slow_task? We don't have to, it just serves to wait for both coroutines to finish cleanly. The code could have also awaited asyncio.sleep() or something like that. Returning from main() (and the event loop) immediately with some tasks still pending would have worked as well, but it would have printed a warning message indicating a possible bug. Awaiting (or canceling) the task before stopping the event loop is just cleaner. What really is a task? It's an asyncio construct that tracks execution of a coroutine in a concrete event loop. When you call create_task, you submit a coroutine for execution and receive back a handle. You can await this handle when you actually need the result, or you can never await it, if you don't care about the result. This handle is the task, and it inherits from Future, which makes it awaitable and also provides the lower-level callback-based interface, such as add_done_callback."} +{"question_id": 799767, "score": 140, "creation_date": 1240951751, "tags": ["python", "windows", "network-programming"], "instruction": "Getting name of windows computer running python script?\n\nI have a couple Windows computers on my network that will be running a python script. A different set of configuration options should be used in the script depending on which computer is running this script. How would I get that computer name in the python script? 
Let's say the script was running on a computer named DARK-TOWER, I'd like to write something like this: >>> python.library.get_computer_name() 'DARK-TOWER' Is there a standard or third party library I can use?", "output": "It turns out there are three options (including the two already answered earlier): >>> import platform >>> import socket >>> import os >>> platform.node() 'DARK-TOWER' >>> socket.gethostname() 'DARK-TOWER' >>> os.environ['COMPUTERNAME'] # WORK ONLY ON WINDOWS 'DARK-TOWER'"} +{"question_id": 7102754, "score": 140, "creation_date": 1313645049, "tags": ["python", "sqlalchemy", "flask", "flask-sqlalchemy"], "instruction": "jsonify a SQLAlchemy result set in Flask\n\nI'm trying to jsonify a SQLAlchemy result set in Flask/Python. The Flask mailing list suggested the following method http://librelist.com/browser//flask/2011/2/16/jsonify-sqlalchemy-pagination-collection-result/#04a0754b63387f87e59dda564bde426e : return jsonify(json_list = qryresult) However I'm getting the following error back: TypeError: <flaskext.sqlalchemy.BaseQuery object at 0x102c2df90> is not JSON serializable What am I overlooking here? I have found this question: How to serialize SqlAlchemy result to JSON? which seems very similar however I didn't know whether Flask had some magic to make it easier as the mailing list post suggested. Edit: for clarification, this is what my model looks like class Rating(db.Model): __tablename__ = 'rating' id = db.Column(db.Integer, primary_key=True) fullurl = db.Column(db.String()) url = db.Column(db.String()) comments = db.Column(db.Text) overall = db.Column(db.Integer) shipping = db.Column(db.Integer) cost = db.Column(db.Integer) honesty = db.Column(db.Integer) communication = db.Column(db.Integer) name = db.Column(db.String()) ipaddr = db.Column(db.String()) date = db.Column(db.String()) def __init__(self, fullurl, url, comments, overall, shipping, cost, honesty, communication, name, ipaddr, date): self.fullurl = fullurl self.url = url self.comments = comments self.overall = overall self.shipping = shipping self.cost = cost self.honesty = honesty self.communication = communication self.name = name self.ipaddr = ipaddr self.date = date", "output": "It seems that you actually haven't executed your query. Try following: return jsonify(json_list = qryresult.all()) [Edit]: Problem with jsonify is, that usually the objects cannot be jsonified automatically. Even Python's datetime fails ;) What I have done in the past, is adding an extra property (like serialize) to classes that need to be serialized. def dump_datetime(value): \"\"\"Deserialize datetime object into string form for JSON processing.\"\"\" if value is None: return None return [value.strftime(\"%Y-%m-%d\"), value.strftime(\"%H:%M:%S\")] class Foo(db.Model): # ... SQLAlchemy defs here.. def __init__(self, ...): # self.foo = ... pass @property def serialize(self): \"\"\"Return object data in easily serializable format\"\"\" return { 'id' : self.id, 'modified_at': dump_datetime(self.modified_at), # This is an example how to deal with Many2Many relations 'many2many' : self.serialize_many2many } @property def serialize_many2many(self): \"\"\" Return object's relations in easily serializable format. NB! Calls many2many's serialize property. 
\"\"\" return [ item.serialize for item in self.many2many] And now for views I can just do: return jsonify(json_list=[i.serialize for i in qryresult.all()]) [Edit 2019]: In case you have more complex objects or circular references, use a library like marshmallow)."} +{"question_id": 64654805, "score": 140, "creation_date": 1604360224, "tags": ["python", "python-3.x", "windows", "numpy"], "instruction": "How do you fix \"runtimeError: package fails to pass a sanity check\" for numpy and pandas?\n\nThis is the error I am getting and, as far as I can tell, there is nothing useful on the error link to fix this. RuntimeError: The current Numpy installation ('...\\\\venv\\\\lib\\\\site-packages\\\\numpy\\\\__init__.py') fails to pass a sanity check due to a bug in the windows runtime. See this issue for more information: https://developercommunity.visualstudio.com/content/problem/1207405/fmod-after-an-update-to-windows-2004-is-causing-a.html I have tried multiple versions of Python (3.8.6 and 3.9.0) and numpy and pandas. I am currently using PyCharm to do all this.", "output": "This error occurs when using python3.9 and numpy1.19.4 So uninstalling numpy1.19.4 and installing 1.19.3 will work. Edit As of January 5th 2021 numpy version 1.19.5 is out and appears to solve the problem."} +{"question_id": 38080748, "score": 140, "creation_date": 1467128759, "tags": ["python", "apache-spark", "datetime", "pyspark", "apache-spark-sql"], "instruction": "Convert pyspark string to date format\n\nI have a date pyspark dataframe with a string column in the format of MM-dd-yyyy and I am attempting to convert this into a date column. I tried: df.select(to_date(df.STRING_COLUMN).alias('new_date')).show() And I get a string of nulls. Can anyone help?", "output": "Update (1/10/2018): For Spark 2.2+ the best way to do this is probably using the to_date or to_timestamp functions, which both support the format argument. From the docs: >>> from pyspark.sql.functions import to_timestamp >>> df = spark.createDataFrame([('1997-02-28 10:30:00',)], ['t']) >>> df.select(to_timestamp(df.t, 'yyyy-MM-dd HH:mm:ss').alias('dt')).collect() [Row(dt=datetime.datetime(1997, 2, 28, 10, 30))] Original Answer (for Spark < 2.2) It is possible (preferrable?) to do this without a udf: from pyspark.sql.functions import unix_timestamp, from_unixtime df = spark.createDataFrame( [(\"11/25/1991\",), (\"11/24/1991\",), (\"11/30/1991\",)], ['date_str'] ) df2 = df.select( 'date_str', from_unixtime(unix_timestamp('date_str', 'MM/dd/yyy')).alias('date') ) print(df2) #DataFrame[date_str: string, date: timestamp] df2.show(truncate=False) #+----------+-------------------+ #|date_str |date | #+----------+-------------------+ #|11/25/1991|1991-11-25 00:00:00| #|11/24/1991|1991-11-24 00:00:00| #|11/30/1991|1991-11-30 00:00:00| #+----------+-------------------+"} +{"question_id": 898669, "score": 140, "creation_date": 1243008590, "tags": ["python", "file", "binary"], "instruction": "How can I detect if a file is binary (non-text) in Python?\n\nHow can I tell if a file is binary (non-text) in Python? I am searching through a large set of files in Python, and keep getting matches in binary files. This makes the output look incredibly messy. I know I could use grep -I, but I am doing more with the data than what grep allows for. In the past, I would have just searched for characters greater than 0x7f, but utf8 and the like, make that impossible on modern systems. 
Ideally, the solution would be fast.", "output": "You can also use the mimetypes module: import mimetypes ... mime = mimetypes.guess_type(file) It's fairly easy to compile a list of binary mime types. For example Apache distributes with a mime.types file that you could parse into a set of lists, binary and text and then check to see if the mime is in your text or binary list."} +{"question_id": 36950416, "score": 140, "creation_date": 1461987463, "tags": ["python", "django"], "instruction": "When to use get, get_queryset, get_context_data in Django?\n\nI recently learned that you should override the get method when you specifically want to do something other than what the default view does: class ExampleView(generic.ListView): template_name = 'ppm/ppm.html' def get(self, request): manager = request.GET.get('manager', None) if manager: profiles_set = EmployeeProfile.objects.filter(manager=manager) else: profiles_set = EmployeeProfile.objects.all() context = { 'profiles_set': profiles_set, 'title': 'Employee Profiles' } That's simple enough, but when should I use get_queryset or get_context_data over get? To me it seems like they basically do the same thing or am I just missing something? Can I use them together? This is a major source of confusion for me. So to reiterate: In what cases would I use get over get_queryset or get_context_data and vise versa?", "output": "They indeed do different things. get() This is a top-level method, and there's one for each HTTP verb - get(), post(), patch(), etc. You would override it when you want to do something before a request is processed by the view, or after. But this is only called when a form view is loaded for the first time, not when the form is submitted. Basic example in the documentation. By default it will just render the configured template and return the HTML. class MyView(TemplateView): # ... other methods def get(self, *args, **kwargs): print('Processing GET request') resp = super().get(*args, **kwargs) print('Finished processing GET request') return resp get_queryset() Used by ListViews - it determines the list of objects that you want to display. By default, it will just give you all for the model you specify. By overriding this method you can extend or completely replace this logic. Django documentation on the subject. class FilteredAuthorView(ListView): template_name = 'authors.html' model = Author def get_queryset(self): # original qs qs = super().get_queryset() # filter by a variable captured from url, for example return qs.filter(name__startswith=self.kwargs['name']) get_context_data() This method is used to populate a dictionary to use as the template context. For example, ListViews will populate the result from get_queryset() as author_list in the above example. You will probably be overriding this method most often to add things to display in your templates. def get_context_data(self, **kwargs): data = super().get_context_data(**kwargs) data['page_title'] = 'Authors' return data And then in your template, you can reference these variables. <h1>{{ page_title }}</h1> <ul> {% for author in author_list %} <li>{{ author.name }}</li> {% endfor %} </ul> Now to answer your main question, the reason you have so many methods is to let you easily stick your custom logic with pin-point accuracy. It not only allows your code to be more readable and modular, but also more testable. The documentation should explain everything. If still not enough, you may find the sources helpful as well. 
You'll see how everything is implemented with mixins which are only possible because everything is compartmentalized."} +{"question_id": 12201928, "score": 140, "creation_date": 1346346099, "tags": ["python", "file", "file-io", "path", "file-not-found"], "instruction": "open() gives FileNotFoundError / IOError: '[Errno 2] No such file or directory'\n\nI am trying to open the file recentlyUpdated.yaml from my Python script. But when I try using: open('recentlyUpdated.yaml') I get an error that says: IOError: [Errno 2] No such file or directory: 'recentlyUpdated.yaml' Why? How can I fix the problem?", "output": "Let me clarify how Python finds files: An absolute path is a path that starts with your computer's root directory, for example C:\\Python\\scripts if you're on Windows. A relative path is a path that does not start with your computer's root directory, and is instead relative to something called the working directory. If you try to do open('recentlyUpdated.yaml'), Python will see that you are passing it a relative path, so it will search for the file inside the current working directory. To diagnose the problem: Ensure the file exists (and has the right file extension): use os.listdir() to see the list of files in the current working directory. Ensure you're in the expected directory using os.getcwd(). (If you launch your code from an IDE, you may be in a different directory.) You can then either: Call os.chdir(dir) where dir is the directory containing the file. This will change the current working directory. Then, open the file using just its name, e.g. open(\"file.txt\"). Specify an absolute path to the file in your open call. By the way: Use a raw string (r\"\") if your path uses backslashes, like so: dir = r'C:\\Python32' If you don't use raw string, you have to escape every backslash: 'C:\\\\User\\\\Bob\\\\...' Forward-slashes also work on Windows 'C:/Python32' and do not need to be escaped. Example: Let's say file.txt is found in C:\\Folder. To open it, you can do: os.chdir(r'C:\\Folder') open('file.txt') # relative path, looks inside the current working directory or open(r'C:\\Folder\\file.txt') # absolute path"} +{"question_id": 7152497, "score": 140, "creation_date": 1314041106, "tags": ["python", "class", "sorting", "hash", "magic-methods"], "instruction": "Making a python user-defined class sortable, hashable\n\nWhat methods need to be overridden/implemented when making user-defined classes sortable and/or hashable in python? What are the gotchas to watch out for? I type dir({}) into my interpreter to get a list of methods on built-in dicts. Of those, I assume I need to some implement some subset of ['__cmp__', '__eq__', '__ge__', '__gt__', '__hash__', '__le__', '__lt__', '__ne__'] Is there a difference in which methods must be implemented for Python3 as opposed to Python2?", "output": "To make your items sortable, they only need to implement __lt__. That's the only method used by the built in sort. The other comparisons or functools.total_ordering are only needed if you actually want to use the comparison operators with your class. To make your items hashable, you implement __hash__ as others noted. 
You should also implement __eq__ in a compatible way -- items that are equivalent should hash the same."} +{"question_id": 4380879, "score": 140, "creation_date": 1291751022, "tags": ["python", "django", "django-models"], "instruction": "Django Model Field Default Based Off Another Field in Same Model\n\nI have a model that I would like to contain a subjects name and their initials (he data is somewhat anonymized and tracked by initials). Right now, I wrote class Subject(models.Model): name = models.CharField(\"Name\", max_length=30) def subject_initials(self): return ''.join(map(lambda x: '' if len(x)==0 else x[0], self.name.split(' '))) # Next line is what I want to do (or something equivalent), but doesn't work with # NameError: name 'self' is not defined subject_init = models.CharField(\"Subject Initials\", max_length=5, default=self.subject_initials) As indicated by the last line, I would prefer to be able to have the initials actually get stored in the database as a field (independent of name), but that is initialized with a default value based on the name field. However, I am having issues as django models don't seem to have a 'self'. If I change the line to subject_init = models.CharField(\"Subject initials\", max_length=2, default=subject_initials), I can do the syncdb, but can't create new subjects. Is this possible in Django, having a callable function give a default to some field based on the value of another field? (For the curious, the reason I want to separate my store initials separately is in rare cases where weird last names may have different than the ones I am tracking. E.g., someone else decided that Subject 1 Named \"John O'Mallory\" initials are \"JM\" rather than \"JO\" and wants to fix edit it as an administrator.)", "output": "Models certainly do have a \"self\"! It's just that you're trying to define an attribute of a model class as being dependent upon a model instance; that's not possible, as the instance does not (and cannot) exist before your define the class and its attributes. To get the effect you want, override the save() method of the model class. Make any changes you want to the instance necessary, then call the superclass's method to do the actual saving. Here's a quick example. def save(self, *args, **kwargs): if not self.subject_init: self.subject_init = self.subject_initials() super(Subject, self).save(*args, **kwargs) This is covered in Overriding Model Methods in the documentation."} +{"question_id": 23033939, "score": 139, "creation_date": 1397324613, "tags": ["python", "unit-testing", "python-3.x", "python-unittest", "python-asyncio"], "instruction": "How to test Python 3.4 asyncio code?\n\nWhat's the best way to write unit tests for code using the Python 3.4 asyncio library? Assume I want to test a TCP client (SocketConnection): import asyncio import unittest class TestSocketConnection(unittest.TestCase): def setUp(self): self.mock_server = MockServer(\"localhost\", 1337) self.socket_connection = SocketConnection(\"localhost\", 1337) @asyncio.coroutine def test_sends_handshake_after_connect(self): yield from self.socket_connection.connect() self.assertTrue(self.mock_server.received_handshake()) When running this test case with the default test runner, the test will always succeed as the method executes only up until the first yield from instruction, after which it returns before executing any assertions. This causes tests to always succeed. 
Is there a prebuilt test runner that is able to handle asynchronous code like this?", "output": "Since Python 3.8 unittest comes with the IsolatedAsyncioTestCase function, designed for this purpose. from unittest import IsolatedAsyncioTestCase class Test(IsolatedAsyncioTestCase): async def test_functionality(self): result = await functionality() self.assertEqual(expected, result)"} +{"question_id": 2424921, "score": 139, "creation_date": 1268311372, "tags": ["python", "performance", "bash"], "instruction": "Python vs Bash - In which kind of tasks each one outruns the other performance-wise?\n\nObviously Python is more user friendly, a quick search on google shows many results that say that, as Python is byte-compiled is usually faster. I even found this that claims that you can see an improvement of over 2000% on dictionary-based operations. What is your experience on this matter? In which kind of task each one is a clear winner?", "output": "Typical mainframe flow... Input Disk/Tape/User (runtime) --> Job Control Language (JCL) --> Output Disk/Tape/Screen/Printer | ^ v | `--> COBOL Program --------' Typical Linux flow... Input Disk/SSD/User (runtime) --> sh/bash/ksh/zsh/... ----------> Output Disk/SSD/Screen/Printer | ^ v | `--> Python script --------' | ^ v | `--> awk script -----------' | ^ v | `--> sed script -----------' | ^ v | `--> C/C++ program --------' | ^ v | `--- Java program ---------' | ^ v | : : Shells are the glue of Linux Linux shells like sh/ksh/bash/... provide input/output/flow-control designation facilities much like the old mainframe Job Control Language... but on steroids! They are Turing complete languages in their own right while being optimized to efficiently pass data and control to and from other executing processes written in any language the O/S supports. Most Linux applications, regardless what language the bulk of the program is written in, depend on shell scripts and Bash has become the most common. Clicking an icon on the desktop usually runs a short Bash script. That script, either directly or indirectly, knows where all the files needed are and sets variables and command line parameters, finally calling the program. That's a shell's simplest use. Linux as we know it however would hardly be Linux without the thousands of shell scripts that startup the system, respond to events, control execution priorities and compile, configure and run programs. Many of these are quite large and complex. Shells provide an infrastructure that lets us use pre-built components that are linked together at run time rather than compile time. Those components are free-standing programs in their own right that can be used alone or in other combinations without recompiling. The syntax for calling them is indistinguishable from that of a Bash builtin command, and there are in fact numerous builtin commands for which there is also a stand-alone executable on the system, often having additional options. There is no language-wide difference between Python and Bash in performance. It entirely depends on how each is coded and which external tools are called. Any of the well known tools like awk, sed, grep, bc, dc, tr, etc. will leave doing those operations in either language in the dust. Bash then is preferred for anything without a graphical user interface since it is easier and more efficient to call and pass data back from a tool like those with Bash than Python. 
Performance It depends on which programs the Bash shell script calls and their suitability for the subtask they are given whether the overall throughput and/or responsiveness will be better or worse than the equivalent Python. To complicate matters Python, like most languages, can also call other executables, though it is more cumbersome and thus not as often used. User Interface One area where Python is the clear winner is user interface. That makes it an excellent language for building local or client-server applications as it natively supports GTK graphics and is far more intuitive than Bash. Bash only understands text. Other tools must be called for a GUI and data passed back from them. A Python script is one option. Faster but less flexible options are the binaries like YAD, Zenity, and GTKDialog. While shells like Bash work well with GUIs like Yad, GtkDialog (embedded XML-like interface to GTK+ functions), dialog, and xmessage, Python is much more capable and so better for complex GUI windows. Summary Building with shell scripts is like assembling a computer with off-the-shelf components the way desktop PCs are. Building with Python, C++ or most any other language is more like building a computer by soldering the chips (libraries) and other electronic parts together the way smartphones are. The best results are usually obtained by using a combination of languages where each can do what they do best. One developer calls this \"polyglot programming\"."} +{"question_id": 9450656, "score": 139, "creation_date": 1330232197, "tags": ["python", "keyword-argument", "positional-argument"], "instruction": "Positional argument vs keyword argument\n\nBased on this A positional argument is a name that is not followed by an equal sign (=) and default value. A keyword argument is followed by an equal sign and an expression that gives its default value. def rectangleArea(width, height): return width * height print rectangleArea(width=1, height=2) Question. I assume that both width and height are positional arguments. Then why can we also call it with the keyword argument syntax?", "output": "That text you quote seems to be confused about two totally different things: Positional and keyword arguments are a feature of calls to a function (see Python reference section 5.3.4 Calls). Default values are a feature of function definitions, as per section 7.6 Function definitions I suspect the people who put together that course-ware weren't totally familiar with Python :-) Hence that link you provide is not a very good quality one. In your call to your function, you're using the \"keyword argument\" feature (where the argument is named rather than relying on its position). Without that, values are bound to names based on order alone. So, in this example, the two calls below are equivalent: def process_a_and_b(a, b): blah_blah_blah() process_a_and_b(1, 2) process_a_and_b(b=2, a=1) By further way of example, refer to the following definition and calls: def fn(a, b, c=1): # a/b required, c optional. return a * b + c print(fn(1, 2)) # returns 3, positional and default. print(fn(1, 2, 3)) # returns 5, positional. print(fn(c=5, b=2, a=2)) # returns 9, named. print(fn(b=2, a=2)) # returns 5, named and default. print(fn(5, c=2, b=1)) # returns 7, positional and named. 
print(fn(8, b=0)) # returns 1, positional, named and default."} +{"question_id": 5478351, "score": 139, "creation_date": 1301429559, "tags": ["python", "time", "callback"], "instruction": "Python time measure function\n\nI want to create a python function to test the time spent in each function and print its name with its time, how i can print the function name and if there is another way to do so please tell me def measureTime(a): start = time.clock() a() elapsed = time.clock() elapsed = elapsed - start print \"Time spent in (function name) is: \", elapsed", "output": "First and foremost, I highly suggest using a profiler or atleast use timeit. However if you wanted to write your own timing method strictly to learn, here is somewhere to get started using a decorator. Python 2: def timing(f): def wrap(*args): time1 = time.time() ret = f(*args) time2 = time.time() print '%s function took %0.3f ms' % (f.func_name, (time2-time1)*1000.0) return ret return wrap And the usage is very simple, just use the @timing decorator: @timing def do_work(): #code Python 3: def timing(f): def wrap(*args, **kwargs): time1 = time.time() ret = f(*args, **kwargs) time2 = time.time() print('{:s} function took {:.3f} ms'.format(f.__name__, (time2-time1)*1000.0)) return ret return wrap Note I'm calling f.func_name to get the function name as a string(in Python 2), or f.__name__ in Python 3."} +{"question_id": 10665591, "score": 139, "creation_date": 1337434261, "tags": ["python", "list"], "instruction": "How to remove list elements in a for loop in Python?\n\nI have a list a = [\"a\", \"b\", \"c\", \"d\", \"e\"] I want to remove elements in this list in a for loop like below: for item in a: print(item) a.remove(item) But it doesn't work. What can I do?", "output": "You are not permitted to remove elements from the list while iterating over it using a for loop. The best way to rewrite the code depends on what it is you're trying to do. For example, your code is equivalent to: for item in a: print(item) a[:] = [] Alternatively, you could use a while loop: while a: print(a.pop()) I'm trying to remove items if they match a condition. Then I go to next item. You could copy every element that doesn't match the condition into a second list: result = [] for item in a: if condition is False: result.append(item) a = result Alternatively, you could use filter or a list comprehension and assign the result back to a: a = filter(lambda item:... , a) or a = [item for item in a if ...] where ... stands for the condition that you need to check."} +{"question_id": 12309976, "score": 139, "creation_date": 1346975435, "tags": ["python", "string", "list", "spaces"], "instruction": "How do I convert a list into a string with spaces in Python?\n\nHow can I convert a list into a space-separated string in Python? For example, I want to convert this list: my_list = [\"how\", \"are\", \"you\"] into the string \"how are you\". The spaces are important. I don't want to get \"howareyou\".", "output": "\" \".join(my_list) You need to join with a space, not an empty string."} +{"question_id": 26265819, "score": 139, "creation_date": 1412800047, "tags": ["python", "pandas", "dataframe"], "instruction": "How to merge a Series and DataFrame\n\nIf you came here looking for information on how to merge a DataFrame and Series on the index, please look at this answer. The OP's original intention was to ask how to assign series elements as columns to another DataFrame. 
If you are interested in knowing the answer to this, look at the accepted answer by EdChum. Best I can come up with is df = pd.DataFrame({'a':[1, 2], 'b':[3, 4]}) # see EDIT below s = pd.Series({'s1':5, 's2':6}) for name in s.index: df[name] = s[name] a b s1 s2 0 1 3 5 6 1 2 4 5 6 Can anybody suggest better syntax / faster method? My attempts: df.merge(s) AttributeError: 'Series' object has no attribute 'columns' and df.join(s) ValueError: Other Series must have a name EDIT The first two answers posted highlighted a problem with my question, so please use the following to construct df: df = pd.DataFrame({'a':[np.nan, 2, 3], 'b':[4, 5, 6]}, index=[3, 5, 6]) with the final result a b s1 s2 3 NaN 4 5 6 5 2 5 5 6 6 3 6 5 6", "output": "You could construct a dataframe from the series and then merge with the dataframe. So you specify the data as the values but multiply them by the length, set the columns to the index and set params for left_index and right_index to True: In [27]: df.merge(pd.DataFrame(data = [s.values] * len(s), columns = s.index), left_index=True, right_index=True) Out[27]: a b s1 s2 0 1 3 5 6 1 2 4 5 6 EDIT for the situation where you want the index of your constructed df from the series to use the index of the df then you can do the following: df.merge(pd.DataFrame(data = [s.values] * len(df), columns = s.index, index=df.index), left_index=True, right_index=True) This assumes that the indices match the length."} +{"question_id": 11232230, "score": 139, "creation_date": 1340819694, "tags": ["python", "python-3.x", "logging", "logfile", "python-logging"], "instruction": "Logging to two files with different settings\n\nI am already using a basic logging config where all messages across all modules are stored in a single file. However, I need a more complex solution now: Two files: the first remains the same. The second file should have some custom format. I have been reading the docs for the module, bu they are very complex for me at the moment. Loggers, handlers... So, in short: How to log to two files in Python 3, ie: import logging # ... logging.file1.info('Write this to file 1') logging.file2.info('Write this to file 2')", "output": "You can do something like this: import logging formatter = logging.Formatter('%(asctime)s %(levelname)s %(message)s') def setup_logger(name, log_file, level=logging.INFO): \"\"\"To setup as many loggers as you want\"\"\" handler = logging.FileHandler(log_file) handler.setFormatter(formatter) logger = logging.getLogger(name) logger.setLevel(level) logger.addHandler(handler) return logger # first file logger logger = setup_logger('first_logger', 'first_logfile.log') logger.info('This is just info message') # second file logger super_logger = setup_logger('second_logger', 'second_logfile.log') super_logger.error('This is an error message') def another_method(): # using logger defined above also works here logger.info('Inside method')"} +{"question_id": 104983, "score": 139, "creation_date": 1221853999, "tags": ["python", "multithreading", "thread-local"], "instruction": "What is \"thread local storage\" in Python, and why do I need it?\n\nIn Python specifically, how do variables get shared between threads? Although I have used threading.Thread before I never really understood or saw examples of how variables got shared. Are they shared between the main thread and the children or only among the children? When would I need to use thread local storage to avoid this sharing? 
I have seen many warnings about synchronizing access to shared data among threads by using locks but I have yet to see a really good example of the problem.", "output": "In Python, everything is shared, except for function-local variables (because each function call gets its own set of locals, and threads are always separate function calls.) And even then, only the variables themselves (the names that refer to objects) are local to the function; objects themselves are always global, and anything can refer to them. The Thread object for a particular thread is not a special object in this regard. If you store the Thread object somewhere all threads can access (like a global variable) then all threads can access that one Thread object. If you want to atomically modify anything that another thread has access to, you have to protect it with a lock. And all threads must of course share this very same lock, or it wouldn't be very effective. If you want actual thread-local storage, that's where threading.local comes in. Attributes of threading.local are not shared between threads; each thread sees only the attributes it itself placed in there. If you're curious about its implementation, the source is in _threading_local.py in the standard library."} +{"question_id": 14263872, "score": 139, "creation_date": 1357839279, "tags": ["python", "variables", "dictionary", "urllib"], "instruction": "Only add to a dict if a condition is met\n\nI am using urllib.urlencode to build web POST parameters, however there are a few values I only want to be added if a value other than None exists for them. apple = 'green' orange = 'orange' params = urllib.urlencode({ 'apple': apple, 'orange': orange }) That works fine, however if I make the orange variable optional, how can I prevent it from being added to the parameters? Something like this (pseudocode): apple = 'green' orange = None params = urllib.urlencode({ 'apple': apple, if orange: 'orange': orange }) I hope this was clear enough, does anyone know how to solve this?", "output": "You'll have to add the key separately, after the creating the initial dict: params = {'apple': apple} if orange is not None: params['orange'] = orange params = urllib.urlencode(params) Python has no syntax to define a key as conditional; you could use a dict comprehension if you already had everything in a sequence: params = urllib.urlencode({k: v for k, v in (('orange', orange), ('apple', apple)) if v is not None}) but that's not very readable. If you are using Python 3.9 or newer, you could use the new dict merging operator support and a conditional expression: params = urllib.urlencode( {'apple': apple} | ({'orange': orange} if orange is not None else {}) ) but I find readability suffers, and so would probably still use a separate if expression: params = {'apple': apple} if orange is not None: params |= {'orange': orange} params = urllib.urlencode(params) Another option is to use dictionary unpacking, but for a single key that's not all that more readable: params = urllib.urlencode({ 'apple': apple, **({'orange': orange} if orange is not None else {}) }) I personally would never use this, it's too hacky and is not nearly as explicit and clear as using a separate if statement. As the Zen of Python states: Readability counts."} +{"question_id": 3551242, "score": 139, "creation_date": 1282593982, "tags": ["python", "numpy"], "instruction": "Numpy index slice without losing dimension information\n\nI'm using numpy and want to index a row without losing the dimension information. 
import numpy as np X = np.zeros((100,10)) X.shape # >> (100, 10) xslice = X[10,:] xslice.shape # >> (10,) In this example xslice is now 1 dimension, but I want it to be (1,10). In R, I would use X[10,:,drop=F]. Is there something similar in numpy. I couldn't find it in the documentation and didn't see a similar question asked. Thanks!", "output": "It's probably easiest to do x[None, 10, :] or equivalently (but more readable) x[np.newaxis, 10, :]. None or np.newaxis increases the dimension of the array by 1, so that you're back to the original after the slicing eliminates a dimension. As far as why it's not the default, personally, I find that constantly having arrays with singleton dimensions gets annoying very quickly. I'd guess the numpy devs felt the same way. Also, numpy handle broadcasting arrays very well, so there's usually little reason to retain the dimension of the array the slice came from. If you did, then things like: a = np.zeros((100,100,10)) b = np.zeros(100,10) a[0,:,:] = b either wouldn't work or would be much more difficult to implement. (Or at least that's my guess at the numpy dev's reasoning behind dropping dimension info when slicing)"} +{"question_id": 19966707, "score": 139, "creation_date": 1384385974, "tags": ["javascript", "python", "ajax", "cross-domain", "cors"], "instruction": "CORS error on same domain?\n\nI'm running into a weird CORS issue right now. Here's the error message: XMLHttpRequest cannot load http://localhost:8666/routeREST/select?q=[...] Origin http://localhost:8080 is not allowed by Access-Control-Allow-Origin Two servers: localhost:8666/routeREST/ : this is a simple Python Bottle server. localhost:8080/ : Python simpleHTTPserver where I run y Javascript application. This app is executing Ajax requests on the server above. Any thought on what could be the problem? EDIT: And... the port was the problem. Thanks for your answers :) If anyone is using a Python bottle server as well, you can follow the answer given on this post to solve the CORS issue: Bottle Py: Enabling CORS for jQuery AJAX requests", "output": "It is only the same if the scheme, domain and port are identical. Same Origin Policy Clarification http and https are not the same scheme. (By default they also use different ports) example.com and www.example.com are not the same domain. Port 80 and 443 are not the same port. How to enable CORS If you want to enable CORS you must follow Cross-Origin Resource Sharing (cors) by adding headers. Mozilla has examples. In the incoming request you get an Origin header: Origin: https://example.com You need to add Access-Control-Allow-Origin as a header in your response. To allow everyone (you should probably NOT do that): Access-Control-Allow-Origin: * Multiple origins If you need to support multiple origins (for example, both example.com and www.example.com), set the Access-Control-Allow-Origin header in your response to match the Origin header in the request (provided you have verified that the origin is on the whitelist). WHY DO I GET REQUESTS WITH OPTIONS METHOD? Note that some requests send a preflight-request, with an OPTIONS-method, so if you write your own code you must handle those requests too. See Mozilla for examples."} +{"question_id": 42096280, "score": 139, "creation_date": 1486488699, "tags": ["python", "python-3.x", "anaconda"], "instruction": "How is Anaconda related to Python?\n\nI am a beginner and I want to learn computer programming. 
So, for now, I have started learning Python by myself with some knowledge about programming in C and Fortran. Now, I have installed Python version 3.6.0 and I have struggled finding a suitable text for learning Python in this version. Even the online lecture series ask for versions 2.7 and 2.5 . Now that I have got a book which, however, makes codes in version 2 and tries to make it as close as possible in version 3 (according to the author); the author recommends \"downloading Anaconda for Windows\" for installing Python. So, my question is: What is this 'Anaconda'? I saw that it was some open data science platform. What does it mean? Is it some editor or something like Pycharm, IDLE or something? Also, I downloaded my Python (the one that I am using right now) for Windows from Python.org and I didn't need to install any \"open data science platform\". So what is this happening? Please explain in easy language. I don't have too much knowledge about these.", "output": "Anaconda is a commercial python and R distribution. It aims to provide everything you need (Python-wise) for data science \"out of the box\". It includes: The core Python language 100+ Python \"packages\" (libraries) Spyder (IDE/editor - like PyCharm) and Jupyter conda, Anaconda's own package manager, used for updating Anaconda and packages Your course may have recommended it as it comes with these extras but if you don't need them and are getting on fine with vanilla Python that's OK too. Learn more: https://www.anaconda.com/distribution/"} +{"question_id": 27954702, "score": 139, "creation_date": 1421279996, "tags": ["python", "unit-testing", "pytest", "python-unittest"], "instruction": "unittest vs pytest\n\nIn unittest, I can setUp variables in a class, and then the methods of this class can choose whichever variable it wants to use... class test_class(unittest.TestCase): def setUp(self): self.varA = 1 self.varB = 2 self.varC = 3 self.modified_varA = 2 def test_1(self): do_something_with_self.varA, self.varB def test_2(self): do_something_with_self_modified_varA, self.varC So in unittest, it was easy to put a bunch of tests together that could go under one class and then use many different variables (varA and varB) for different methods. In pytest, I created a fixture in conftest.py instead of a class in unittest, like this... @pytest.fixture(scope=\"module\") def input1(): varA = 1 varB = 2 return varA, varB @pytest.fixture(scope=\"module\") def input2(): varA = 2 varC = 3 return varA, varC I feed this input1 and input2 to my functions in a different file (let's say test_this.py) for two different functions. Here are the questions based on information above... Since I can't just declare local variables in conftest.py as I can't simply import this file. Is there a better way of declaring different variables here that can be used in different functions in test_this.py ? I have five different configurations in my actual testing for these variables, defining that many different fixtures in conftest.py and use them as function argument in five different functions in test_this.py sounds painful, I would rather go back to unittest class structure, define my variables and pick and choose what I want. Should I just declare global variables in test_this.py and use them in the functions the way I want? Seems a bit not pythonic. These variables are only used by the functions in this file. Let's say I have test_that.py and test_them.py as well. If I have some shared variables between these different files, how would I declare them ? 
just create a file called variables.py in the directory where all these test files are and do an import whenever I need ? This way I can keep all data in a separate. Is it my impression that pytest discourages using a class to organize your functions? Every example I read online, it all seem to employ bunch of functions with fixtures only. What is a configuration of defining class and methods and organize tests in pytest? I have a test scenario where I have to use result of one function into another. With pytest, I have an assert that is at the end of a function not a return so I won't be able to use this function as a fixture. How do I accomplish this? I know this is not a good practice that my one test relies on another but is there a work around?", "output": "1) First of all, you can declare those fixtures not only in conftest.py, but in every Python module you want. And you can import that module. Also you can use fixtures in the same way as you used setUp method: @pytest.fixture(scope='class') def input(request): request.cls.varA = 1 request.cls.varB = 2 request.cls.varC = 3 request.cls.modified_varA = 2 @pytest.usefixtures('input') class TestClass: def test_1(self): do_something_with_self.varA, self.varB def test_2(self): do_something_with_self_modified_varA, self.varC or you can define separate variables in separate fixtures: def fixture_a(): return varA def fixture_b(): return varB def fixture_c(): return varC def fixture_mod_A(): return modified_varA or make one fixture which returns all the variables (why not?) or even make indirect parametrized fixture which returns variables by your choice (quite confusing way): @pytest.fixture() def parametrized_input(request): vars = {'varA': 1, 'varB': 2, 'varC': 3} var_names = request.param return (vars[var_name] for var_name in var_names) @pytest.mark.parametrize('parametrized_input', [('varA', 'varC')], indirect=True) def test_1(parametrized_input) varA, varC = parametrized_input ... Or even you can make fixture factory which will make fixtures for you on the fly. Sounds curiously when you have only 5 tests and 5 configurations of variables, but when you get hundreds of both, it can be useful. 3) Of course you can. But I recommend you not to import this file directly, but use command line option pointing what file to import. In this case you can choose another file with variables without changing your code. 4) I use classes in my tests because I migrated from nosetest. I didn't mention any problem with using classes in pytest. 5) In that case I propose you to do the following: fist make the function with desired actions: def some_actions(a, b): # some actions here ... return c then use it both in test and fixture: def test(): assert some_actions(1,2) == 10 @pytest.fixture() def some_fixture(): return some_actions(1,2)"} +{"question_id": 37332190, "score": 138, "creation_date": 1463685257, "tags": ["python", "python-3.x", "django", "django-authentication", "django-login"], "instruction": "Django - Login with Email\n\nI want django to authenticate users via email, not via usernames. One way can be providing email value as username value, but I dont want that. Reason being, I've a url /profile/<username>/, hence I cannot have a url /profile/abcd@gmail.com/. Another reason being that all emails are unique, but it happen sometimes that the username is already being taken. Hence I'm auto-creating the username as fullName_ID. How can I just change let Django authenticate with email? This is how I create a user. 
username = `abcd28` user_email = `abcd@gmail.com` user = User.objects.create_user(username, user_email, user_pass) This is how I login. email = request.POST['email'] password = request.POST['password'] username = User.objects.get(email=email.lower()).username user = authenticate(username=username, password=password) login(request, user) Is there any other of of login apart from getting the username first?", "output": "You should write a custom authentication backend. Something like this will work: from django.contrib.auth import get_user_model from django.contrib.auth.backends import ModelBackend class EmailBackend(ModelBackend): def authenticate(self, request, username=None, password=None, **kwargs): UserModel = get_user_model() try: user = UserModel.objects.get(email=username) except UserModel.DoesNotExist: return None else: if user.check_password(password): return user return None Then, set that backend as your auth backend in your settings: AUTHENTICATION_BACKENDS = ['path.to.auth.module.EmailBackend'] Updated. Inherit from ModelBackend as it implements methods like get_user() already. See docs here: https://docs.djangoproject.com/en/3.0/topics/auth/customizing/#writing-an-authentication-backend"} +{"question_id": 19805654, "score": 138, "creation_date": 1383719960, "tags": ["python", "try-except", "try-finally"], "instruction": "Python try finally block returns\n\nThere is the interesting code below: def func1(): try: return 1 finally: return 2 def func2(): try: raise ValueError() except: return 1 finally: return 3 func1() func2() Could please somebody explain, what results will return these two functions and explain why, i.e. describe the order of the execution", "output": "From the Python documentation A finally clause is always executed before leaving the try statement, whether an exception has occurred or not. When an exception has occurred in the try clause and has not been handled by an except clause (or it has occurred in a except or else clause), it is re-raised after the finally clause has been executed. The finally clause is also executed \u201con the way out\u201d when any other clause of the try statement is left via a break, continue or return statement. A more complicated example (having except and finally clauses in the same try statement works as of Python 2.5): So once the try/except block is left using return, which would set the return value to given - finally blocks will always execute, and should be used to free resources etc. while using there another return - overwrites the original one. In your particular case, func1() returns 2 and func2() returns 3, as these are values returned in the finally blocks."} +{"question_id": 5239856, "score": 138, "creation_date": 1299628727, "tags": ["python", "operators"], "instruction": "Asterisk in function call\n\nI'm using itertools.chain to \"flatten\" a list of lists in this fashion: uniqueCrossTabs = list(itertools.chain(*uniqueCrossTabs)) how is this different than saying: uniqueCrossTabs = list(itertools.chain(uniqueCrossTabs))", "output": "* is the \"splat\" operator: It takes an iterable like a list as input, and expands it into actual positional arguments in the function call. So if uniqueCrossTabs were [[1, 2], [3, 4]], then itertools.chain(*uniqueCrossTabs) is the same as saying itertools.chain([1, 2], [3, 4]) This is obviously different from passing in just uniqueCrossTabs. 
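A minimal sketch of that difference, using a made-up nested list:
import itertools
nested = [[1, 2], [3, 4]]
# The splat unpacks the sublists into separate positional arguments...
print(list(itertools.chain(*nested)))   # [1, 2, 3, 4]
# ...whereas passing the list itself gives chain a single iterable, so you just get the sublists back.
print(list(itertools.chain(nested)))    # [[1, 2], [3, 4]]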
In your case, you have a list of lists that you wish to flatten; what itertools.chain() does is return an iterator over the concatenation of all the positional arguments you pass to it, where each positional argument is iterable in its own right. In other words, you want to pass each list in uniqueCrossTabs as an argument to chain(), which will chain them together, but you don't have the lists in separate variables, so you use the * operator to expand the list of lists into several list arguments. chain.from_iterable() is better-suited for this operation, as it assumes a single iterable of iterables to begin with. Your code then becomes simply: uniqueCrossTabs = list(itertools.chain.from_iterable(uniqueCrossTabs))"} +{"question_id": 3906137, "score": 138, "creation_date": 1286799938, "tags": ["python", "io"], "instruction": "Why can't I call read() twice on an open file?\n\nFor an exercise I'm doing, I'm trying to read the contents of a given file twice using the read() method. Strangely, when I call it the second time, it doesn't seem to return the file content as a string? Here's the code f = f.open() # get the year match = re.search(r'Popularity in (\\d+)', f.read()) if match: print match.group(1) # get all the names matches = re.findall(r'<td>(\\d+)</td><td>(\\w+)</td><td>(\\w+)</td>', f.read()) if matches: # matches is always None Of course I know that this is not the most efficient or best way, this is not the point here. The point is, why can't I call read() twice? Do I have to reset the file handle? Or close / reopen the file in order to do that?", "output": "Calling read() reads through the entire file and leaves the read cursor at the end of the file (with nothing more to read). If you are looking to read a certain number of lines at a time you could use readline(), readlines() or iterate through lines with for line in handle:. To answer your question directly, once a file has been read, with read() you can use seek(0) to return the read cursor to the start of the file (docs are here). If you know the file isn't going to be too large, you can also save the read() output to a variable, using it in your findall expressions. Ps. Don't forget to close the file after you are done with it."} +{"question_id": 35180764, "score": 138, "creation_date": 1454512457, "tags": ["python", "image", "opencv", "image-processing", "computer-vision"], "instruction": "OpenCV & Python - Image too big to display\n\nI have an image that is 6400 \u00d7 3200, while my screen is 1280 x 800. Therefore, the image needs to be resized for display only. I am using Python and OpenCV 2.4.9. According to OpenCV Documentation, If you need to show an image that is bigger than the screen resolution, you will need to call namedWindow(\"\", WINDOW_NORMAL) before the imshow. That is what I am doing, but the image is not fitted to the screen, only a portion is shown because it's too big. I've also tried with cv2.resizeWindow, but it doesn't make any difference. import cv2 cv2.namedWindow(\"output\", cv2.WINDOW_NORMAL) # Create window with freedom of dimensions # cv2.resizeWindow(\"output\", 400, 300) # Resize window to specified dimensions im = cv2.imread(\"earth.jpg\") # Read image cv2.imshow(\"output\", im) # Show image cv2.waitKey(0) # Display the image infinitely until any keypress", "output": "Although I was expecting an automatic solution (fitting to the screen automatically), resizing solves the problem as well. 
import cv2 cv2.namedWindow(\"output\", cv2.WINDOW_NORMAL) # Create window with freedom of dimensions im = cv2.imread(\"earth.jpg\") # Read image imS = cv2.resize(im, (960, 540)) # Resize image cv2.imshow(\"output\", imS) # Show image cv2.waitKey(0) # Display the image infinitely until any keypress"} +{"question_id": 10035446, "score": 138, "creation_date": 1333657338, "tags": ["python", "matplotlib", "axes", "subplot"], "instruction": "How can I make a blank subplot in matplotlib?\n\nI am making a group of subplot (say, 3 x 2) in matplotlib, but I have fewer than 6 datasets. How can I make the remaining subplot blank? The arrangement looks like this: +----+----+ | 0,0| 0,1| +----+----+ | 1,0| 1,1| +----+----+ | 2,0| 2,1| +----+----+ This may go on for several pages, but on the final page, there are, for example, 5 datasets to the 2,1 box will be empty. However, I have declared the figure as: cfig,ax = plt.subplots(3,2) So in the space for subplot 2,1 there is a default set of axes with ticks and labels. How can I programatically render that space blank and devoid of axes?", "output": "You could always hide the axes which you do not need. For example, the following code turns off the 6th axes completely: import matplotlib.pyplot as plt hf, ha = plt.subplots(3,2) ha[-1, -1].axis('off') plt.show() and results in the following figure: Alternatively, see the accepted answer to the question Hiding axis text in matplotlib plots for a way of keeping the axes but hiding all the axes decorations (e.g. the tick marks and labels)."} +{"question_id": 39581893, "score": 138, "creation_date": 1474318257, "tags": ["python", "pandas", "statistics", "quantile", "percentile"], "instruction": "Find percentile stats of a given column\n\nI have a pandas data frame my_df, where I can find the mean(), median(), mode() of a given column: my_df['field_A'].mean() my_df['field_A'].median() my_df['field_A'].mode() I am wondering is it possible to find more detailed statistics such as the 90th percentile?", "output": "You can use the pandas.DataFrame.quantile() function. If you look at the API for quantile(), you will see it takes an argument for how to do interpolation. If you want a quantile that falls between two positions in your data: 'linear', 'lower', 'higher', 'midpoint', or 'nearest'. By default, it performs linear interpolation. These interpolation methods are discussed in the Wikipedia article for percentile import pandas as pd import numpy as np # sample data np.random.seed(2023) # for reproducibility data = {'Category': np.random.choice(['hot', 'cold'], size=(10,)), 'field_A': np.random.randint(0, 100, size=(10,)), 'field_B': np.random.randint(0, 100, size=(10,))} df = pd.DataFrame(data) df.field_A.mean() # Same as df['field_A'].mean() # 51.1 df.field_A.median() # 50.0 # You can call `quantile(i)` to get the i'th quantile, # where `i` should be a fractional number. df.field_A.quantile(0.1) # 10th percentile # 15.6 df.field_A.quantile(0.5) # same as median # 50.0 df.field_A.quantile(0.9) # 90th percentile # 88.8 df.groupby('Category').field_A.quantile(0.1) #Category #cold 28.8 #hot 8.6 #Name: field_A, dtype: float64 df Category field_A field_B 0 cold 96 58 1 cold 22 28 2 hot 17 81 3 cold 53 71 4 cold 47 63 5 hot 77 48 6 cold 39 32 7 hot 69 29 8 hot 88 49 9 hot 3 49"} +{"question_id": 17682216, "score": 138, "creation_date": 1373992591, "tags": ["python", "matplotlib"], "instruction": "Scatter plot and Color mapping in Python\n\nI have a range of points x and y stored in numpy arrays. 
Those represent x(t) and y(t) where t=0...T-1 I am plotting a scatter plot using import matplotlib.pyplot as plt plt.scatter(x,y) plt.show() I would like to have a colormap representing the time (therefore coloring the points depending on the index in the numpy arrays) What is the easiest way to do so?", "output": "Here is an example import numpy as np import matplotlib.pyplot as plt x = np.random.rand(100) y = np.random.rand(100) t = np.arange(100) plt.scatter(x, y, c=t) plt.show() Here you are setting the color based on the index, t, which is just an array of [1, 2, ..., 100]. Perhaps an easier-to-understand example is the slightly simpler import numpy as np import matplotlib.pyplot as plt x = np.arange(100) y = x t = x plt.scatter(x, y, c=t) plt.show() Note that the array you pass as c doesn't need to have any particular order or type, i.e. it doesn't need to be sorted or integers as in these examples. The plotting routine will scale the colormap such that the minimum/maximum values in c correspond to the bottom/top of the colormap. Colormaps You can change the colormap by adding import matplotlib.cm as cm plt.scatter(x, y, c=t, cmap=cm.cmap_name) Importing matplotlib.cm is optional as you can call colormaps as cmap=\"cmap_name\" just as well. There is a reference page of colormaps showing what each looks like. Also know that you can reverse a colormap by simply calling it as cmap_name_r. So either plt.scatter(x, y, c=t, cmap=cm.cmap_name_r) # or plt.scatter(x, y, c=t, cmap=\"cmap_name_r\") will work. Examples are \"jet_r\" or cm.plasma_r. Here's an example with the new 1.5 colormap viridis: import numpy as np import matplotlib.pyplot as plt x = np.arange(100) y = x t = x fig, (ax1, ax2) = plt.subplots(1, 2) ax1.scatter(x, y, c=t, cmap='viridis') ax2.scatter(x, y, c=t, cmap='viridis_r') plt.show() Colorbars You can add a colorbar by using plt.scatter(x, y, c=t, cmap='viridis') plt.colorbar() plt.show() Note that if you are using figures and subplots explicitly (e.g. fig, ax = plt.subplots() or ax = fig.add_subplot(111)), adding a colorbar can be a bit more involved. Good examples can be found here for a single subplot colorbar and here for 2 subplots 1 colorbar."} +{"question_id": 1701211, "score": 138, "creation_date": 1257776072, "tags": ["python", "list", "indexing"], "instruction": "Return the index of the first element of a list which makes a passed function true\n\nThe list.index(x) function returns the index in the list of the first item whose value is x. Is there a function, list_func_index(), similar to the index() function that has a function, f(), as a parameter. The function, f() is run on every element, e, of the list until f(e) returns True. Then list_func_index() returns the index of e. Codewise: >>> def list_func_index(lst, func): for i in range(len(lst)): if func(lst[i]): return i raise ValueError('no element making func True') >>> l = [8,10,4,5,7] >>> def is_odd(x): return x % 2 != 0 >>> list_func_index(l,is_odd) 3 Is there a more elegant solution? (and a better name for the function)", "output": "You could do that in a one-liner using generators: next(i for i,v in enumerate(l) if is_odd(v)) The nice thing about generators is that they only compute up to the requested amount. So requesting the first two indices is (almost) just as easy: y = (i for i,v in enumerate(l) if is_odd(v)) x1 = next(y) x2 = next(y) Though, expect a StopIteration exception after the last index (that is how generators work). 
This is also convenient in your \"take-first\" approach, to know that no such value was found --- the list.index() function would raise ValueError here."} +{"question_id": 1095543, "score": 138, "creation_date": 1247013047, "tags": ["python", "stack-trace", "introspection"], "instruction": "Get __name__ of calling function's module in Python\n\nSuppose myapp/foo.py contains: def info(msg): caller_name = ???? print '[%s] %s' % (caller_name, msg) And myapp/bar.py contains: import foo foo.info('Hello') # => [myapp.bar] Hello I want caller_name to be set to the __name__ attribute of the calling functions' module (which is 'myapp.foo') in this case. How can this be done?", "output": "Check out the inspect module: inspect.stack() will return the stack information. Inside a function, inspect.stack()[1] will return your caller's stack. From there, you can get more information about the caller's function name, module, etc. See the docs for details: http://docs.python.org/library/inspect.html Also, Doug Hellmann has a nice writeup of the inspect module in his PyMOTW series: http://pymotw.com/2/inspect/index.html#module-inspect EDIT: Here's some code which does what you want, I think: import inspect def info(msg): frm = inspect.stack()[1] mod = inspect.getmodule(frm[0]) print '[%s] %s' % (mod.__name__, msg)"} +{"question_id": 7029993, "score": 138, "creation_date": 1313082354, "tags": ["python", "random", "random-seed"], "instruction": "Differences between numpy.random and random.random in Python\n\nI have a big script in Python. I inspired myself in other people's code so I ended up using the numpy.random module for some things (for example for creating an array of random numbers taken from a binomial distribution) and in other places I use the module random.random. Can someone please tell me the major differences between the two? Looking at the doc webpage for each of the two it seems to me that numpy.random just has more methods, but I am unclear about how the generation of the random numbers is different. The reason why I am asking is because I need to seed my main program for debugging purposes. But it doesn't work unless I use the same random number generator in all the modules that I am importing, is this correct? Also, I read here, in another post, a discussion about NOT using numpy.random.seed(), but I didn't really understand why this was such a bad idea. I would really appreciate if someone explain me why this is the case.", "output": "You have made many correct observations already! Unless you'd like to seed both of the random generators, it's probably simpler in the long run to choose one generator or the other. But if you do need to use both, then yes, you'll also need to seed them both, because they generate random numbers independently of each other. For numpy.random.seed(), the main difficulty is that it is not thread-safe - that is, it's not safe to use if you have many different threads of execution, because it's not guaranteed to work if two different threads are executing the function at the same time. If you're not using threads, and if you can reasonably expect that you won't need to rewrite your program this way in the future, numpy.random.seed() should be fine. If there's any reason to suspect that you may need threads in the future, it's much safer in the long run to do as suggested, and to make a local instance of the numpy.random.Random class. As far as I can tell, random.seed() is thread-safe (or at least, I haven't found any evidence to the contrary). 
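If you do end up using both modules, a minimal sketch of seeding them independently could look like this (the seed values are arbitrary, and np.random.default_rng is used here as the modern route to the kind of local NumPy generator instance suggested above, rather than touching the global module-level state):
import random
import numpy as np
random.seed(42)                  # seeds only the stdlib generator
rng = np.random.default_rng(42)  # a local NumPy generator, independent of numpy.random.seed()
print(random.random())
print(rng.binomial(n=10, p=0.5, size=3))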
The numpy.random library contains a few extra probability distributions commonly used in scientific research, as well as a couple of convenience functions for generating arrays of random data. The random.random library is a little more lightweight, and should be fine if you're not doing scientific research or other kinds of work in statistics. Otherwise, they both use the Mersenne twister sequence to generate their random numbers, and they're both completely deterministic - that is, if you know a few key bits of information, it's possible to predict with absolute certainty what number will come next. For this reason, neither numpy.random nor random.random is suitable for any serious cryptographic uses. But because the sequence is so very very long, both are fine for generating random numbers in cases where you aren't worried about people trying to reverse-engineer your data. This is also the reason for the necessity to seed the random value - if you start in the same place each time, you'll always get the same sequence of random numbers! As a side note, if you do need cryptographic level randomness, you should use the secrets module, or something like Crypto.Random if you're using a Python version earlier than Python 3.6."} +{"question_id": 10893374, "score": 138, "creation_date": 1338881690, "tags": ["python", "python-3.x"], "instruction": "Python: confusions with urljoin\n\nI am trying to form URLs from different pieces, and having trouble understanding the behavior of this method. For example: Python 3.x from urllib.parse import urljoin >>> urljoin('some', 'thing') 'thing' >>> urljoin('http://some', 'thing') 'http://some/thing' >>> urljoin('http://some/more', 'thing') 'http://some/thing' >>> urljoin('http://some/more/', 'thing') # just a tad / after 'more' 'http://some/more/thing' urljoin('http://some/more/', '/thing') 'http://some/thing' Can you explain the exact behavior of this method?", "output": "The best way (for me) to think of this is the first argument, base is like the page you are on in your browser. The second argument url is the href of an anchor on that page. The result is the final url to which you will be directed should you click. >>> urljoin('some', 'thing') 'thing' This one makes sense given my description. Though one would hope base includes a scheme and domain. >>> urljoin('http://some', 'thing') 'http://some/thing' If you are on a vhost some, and there is an anchor like <a href='thing'>Foo</a> then the link will take you to http://some/thing >>> urljoin('http://some/more', 'thing') 'http://some/thing' We are on some/more here, so a relative link of thing will take us to /some/thing >>> urljoin('http://some/more/', 'thing') # just a tad / after 'more' 'http://some/more/thing' Here, we aren't on some/more, we are on some/more/ which is different. Now, our relative link will take us to some/more/thing >>> urljoin('http://some/more/', '/thing') 'http://some/thing' And lastly. 
If on some/more/ and the href is to /thing, you will be linked to some/thing."} +{"question_id": 13872533, "score": 137, "creation_date": 1355458415, "tags": ["python", "pandas", "matplotlib"], "instruction": "How to plot different groups of data from a dataframe into a single figure\n\nI have a temperature file with many years of temperature records in the format below: 2012-04-12,16:13:09,20.6 2012-04-12,17:13:09,20.9 2012-04-12,18:13:09,20.6 2007-05-12,19:13:09,5.4 2007-05-12,20:13:09,20.6 2007-05-12,20:13:09,20.6 2005-08-11,11:13:09,20.6 2005-08-11,11:13:09,17.5 2005-08-13,07:13:09,20.6 2006-04-13,01:13:09,20.6 Every year has different numbers of time of records, so the pandas datetimeindices are all different. I want to plot the different year's data in the same figure for comparison: The X-axis is datetimeindices from Jan to Dec The Y-axis is the temperature How should I go about doing this?", "output": "Chang's answer shows how to plot a different DataFrame on the same axes. In this case, all of the data is in the same dataframe, so it's better to use groupby and unstack. Alternatively, pandas.DataFrame.pivot_table can be used. dfp = df.pivot_table(index='Month', columns='Year', values='value', aggfunc='mean') When using pandas.read_csv, names= creates column headers when there are none in the file. The 'date' column must be parsed into datetime64[ns] Dtype so the .dt extractor can be used to extract the month and year. import pandas as pd # given the data in a file as shown in the op df = pd.read_csv('temp.csv', names=['date', 'time', 'value'], parse_dates=['date']) # create additional month and year columns for convenience df['Year'] = df.date.dt.year df['Month'] = df.date.dt.month # groupby the month a year and aggreate mean on the value column dfg = df.groupby(['Month', 'Year'])['value'].mean().unstack() # display(dfg) Year 2005 2006 2007 2012 Month 4 NaN 20.6 NaN 20.7 5 NaN NaN 15.533333 NaN 8 19.566667 NaN NaN NaN Now it's easy to plot each year as a separate line. The OP only has one observation for each year, so only a marker is displayed. ax = dfg.plot(figsize=(9, 7), marker='.', xticks=dfg.index)"} +{"question_id": 43983622, "score": 137, "creation_date": 1494862841, "tags": ["python", "pandas", "dataframe"], "instruction": "Remove Unnamed columns in pandas dataframe\n\nI have a data file from columns A-G like below but when I am reading it with pd.read_csv('data.csv') it prints an extra unnamed column at the end for no reason. colA ColB colC colD colE colF colG Unnamed: 7 44 45 26 26 40 26 46 NaN 47 16 38 47 48 22 37 NaN 19 28 36 18 40 18 46 NaN 50 14 12 33 12 44 23 NaN 39 47 16 42 33 48 38 NaN I have seen my data file various times but I have no extra data in any other column. How I should remove this extra column while reading ? Thanks", "output": "df = df.loc[:, ~df.columns.str.contains('^Unnamed')] In [162]: df Out[162]: colA ColB colC colD colE colF colG 0 44 45 26 26 40 26 46 1 47 16 38 47 48 22 37 2 19 28 36 18 40 18 46 3 50 14 12 33 12 44 23 4 39 47 16 42 33 48 38 NOTE: very often there is only one unnamed column Unnamed: 0, which is the first column in the CSV file. 
This is the result of the following steps: a DataFrame is saved into a CSV file using parameter index=True, which is the default behaviour we read this CSV file into a DataFrame using pd.read_csv() without explicitly specifying index_col=0 (default: index_col=None) The easiest way to get rid of this column is to specify the parameter pd.read_csv(..., index_col=0): df = pd.read_csv('data.csv', index_col=0)"} +{"question_id": 45600579, "score": 137, "creation_date": 1502312506, "tags": ["python", "python-asyncio", "python-3.5"], "instruction": "\"Asyncio Event Loop is Closed\" when getting loop\n\nWhen trying to run the asyncio hello world code example given in the docs: import asyncio async def hello_world(): print(\"Hello World!\") loop = asyncio.get_event_loop() # Blocking call which returns when the hello_world() coroutine is done loop.run_until_complete(hello_world()) loop.close() I get the error: RuntimeError: Event loop is closed I am using python 3.5.3.", "output": "You have already called loop.close() before you ran that sample piece of code, on the global event loop: >>> import asyncio >>> asyncio.get_event_loop().close() >>> asyncio.get_event_loop().is_closed() True >>> asyncio.get_event_loop().run_until_complete(asyncio.sleep(1)) Traceback (most recent call last): File \"<stdin>\", line 1, in <module> File \"/.../lib/python3.6/asyncio/base_events.py\", line 443, in run_until_complete self._check_closed() File \"/.../lib/python3.6/asyncio/base_events.py\", line 357, in _check_closed raise RuntimeError('Event loop is closed') RuntimeError: Event loop is closed You need to create a new loop: loop = asyncio.new_event_loop() You can set that as the new global loop with: asyncio.set_event_loop(asyncio.new_event_loop()) and then just use asyncio.get_event_loop() again. Alternatively, just restart your Python interpreter, the first time you try to get the global event loop you get a fresh new one, unclosed. As of Python 3.7, the process of creating, managing, then closing the loop (as well as a few other resources) is handled for you when use asyncio.run(). It should be used instead of loop.run_until_complete(), and there is no need any more to first get or set the loop."} +{"question_id": 37332434, "score": 137, "creation_date": 1463686172, "tags": ["python", "apache-spark", "pyspark", "apache-spark-sql"], "instruction": "Concatenate two PySpark dataframes\n\nI'm trying to concatenate two PySpark dataframes with some columns that are only on one of them: from pyspark.sql.functions import randn, rand df_1 = sqlContext.range(0, 10) +--+ |id| +--+ | 0| | 1| | 2| | 3| | 4| | 5| | 6| | 7| | 8| | 9| +--+ df_2 = sqlContext.range(11, 20) +--+ |id| +--+ | 10| | 11| | 12| | 13| | 14| | 15| | 16| | 17| | 18| | 19| +--+ df_1 = df_1.select(\"id\", rand(seed=10).alias(\"uniform\"), randn(seed=27).alias(\"normal\")) df_2 = df_2.select(\"id\", rand(seed=10).alias(\"uniform\"), randn(seed=27).alias(\"normal_2\")) and now I want to generate a third dataframe. 
I would like something like pandas concat: df_1.show() +---+--------------------+--------------------+ | id| uniform| normal| +---+--------------------+--------------------+ | 0| 0.8122802274304282| 1.2423430583597714| | 1| 0.8642043127063618| 0.3900018344856156| | 2| 0.8292577771850476| 1.8077401259195247| | 3| 0.198558705368724| -0.4270585782850261| | 4|0.012661361966674889| 0.702634599720141| | 5| 0.8535692890157796|-0.42355804115129153| | 6| 0.3723296190171911| 1.3789648582622995| | 7| 0.9529794127670571| 0.16238718777444605| | 8| 0.9746632635918108| 0.02448061333761742| | 9| 0.513622008243935| 0.7626741803250845| +---+--------------------+--------------------+ df_2.show() +---+--------------------+--------------------+ | id| uniform| normal_2| +---+--------------------+--------------------+ | 11| 0.3221262660507942| 1.0269298899109824| | 12| 0.4030672316912547| 1.285648175568798| | 13| 0.9690555459609131|-0.22986601831364423| | 14|0.011913836266515876| -0.678915153834693| | 15| 0.9359607054250594|-0.16557488664743034| | 16| 0.45680471157575453| -0.3885563551710555| | 17| 0.6411908952297819| 0.9161177183227823| | 18| 0.5669232696934479| 0.7270125277020573| | 19| 0.513622008243935| 0.7626741803250845| +---+--------------------+--------------------+ #do some concatenation here, how? df_concat.show() | id| uniform| normal| normal_2 | +---+--------------------+--------------------+------------+ | 0| 0.8122802274304282| 1.2423430583597714| None | | 1| 0.8642043127063618| 0.3900018344856156| None | | 2| 0.8292577771850476| 1.8077401259195247| None | | 3| 0.198558705368724| -0.4270585782850261| None | | 4|0.012661361966674889| 0.702634599720141| None | | 5| 0.8535692890157796|-0.42355804115129153| None | | 6| 0.3723296190171911| 1.3789648582622995| None | | 7| 0.9529794127670571| 0.16238718777444605| None | | 8| 0.9746632635918108| 0.02448061333761742| None | | 9| 0.513622008243935| 0.7626741803250845| None | | 11| 0.3221262660507942| None | 0.123 | | 12| 0.4030672316912547| None |0.12323 | | 13| 0.9690555459609131| None |0.123 | | 14|0.011913836266515876| None |0.18923 | | 15| 0.9359607054250594| None |0.99123 | | 16| 0.45680471157575453| None |0.123 | | 17| 0.6411908952297819| None |1.123 | | 18| 0.5669232696934479| None |0.10023 | | 19| 0.513622008243935| None |0.916332123 | +---+--------------------+--------------------+------------+ Is that possible?", "output": "Maybe you can try creating the unexisting columns and calling union (unionAll for Spark 1.6 or lower): from pyspark.sql.functions import lit cols = ['id', 'uniform', 'normal', 'normal_2'] df_1_new = df_1.withColumn(\"normal_2\", lit(None)).select(cols) df_2_new = df_2.withColumn(\"normal\", lit(None)).select(cols) result = df_1_new.union(df_2_new) # To remove the duplicates: result = result.dropDuplicates()"} +{"question_id": 23113494, "score": 137, "creation_date": 1397661281, "tags": ["python", "progress-bar"], "instruction": "Double Progress Bar in Python\n\nIs there a way to create a double progress bar in Python? I want to run two loops inside each other. For each loop I want to have a progress bar. My program looks like: import time for i1 in range(5): for i2 in range(300): # do something, e.g. 
sleep time.sleep(0.01) # update upper progress bar # update lower progress bar The output somewhere in the middle should look something like 50%|############################ |ETA: 0:00:02 80%|################################################## |ETA: 0:00:04 The already existing really cool progressbar module doesn't seem to support that.", "output": "Use the nested progress bars feature of tqdm, an extremely low overhead, very customisable progress bar library: $ pip install -U tqdm Then: from tqdm import tqdm # from tqdm.auto import tqdm # notebook compatible import time for i1 in tqdm(range(5)): for i2 in tqdm(range(300), leave=False): # do something, e.g. sleep time.sleep(0.01) (The leave=False is optional - needed to discard the nested bars upon completion.) You can also use from tqdm import trange and then replace tqdm(range(...)) with trange(...). You can also get it working in a notebook. Alternatively if you want just one bar to monitor everything, you can use tqdm's version of itertools.product: from tqdm.contrib import itertools import time for i1, i2 in itertools.product(range(5), range(300)): # do something, e.g. sleep time.sleep(0.01)"} +{"question_id": 40700039, "score": 137, "creation_date": 1479608680, "tags": ["python", "conda"], "instruction": "How can you \"clone\" a conda environment into the base (root) environment?\n\nI'd like the base (root) environment of conda to copy all of the packages in another environment. How can this be done?", "output": "There are options to copy dependency names/urls/versions to files. Recommendation Normally it is safer to work from a new environment rather than changing root. However, consider backing up your existing environments before attempting changes. Verify the desired outcome by testing these commands in a demo environment. To backup your root env for example: \u03bb conda activate root \u03bb conda env export > environment_root.yml \u03bb conda list --explicit > spec_file_root.txt Options Option 1 - YAML file Within the second environment (e.g. myenv), export names+ to a yaml file: \u03bb activate myenv \u03bb conda env export > environment.yml then update the first environment+ (e.g. root) with the yaml file: \u03bb conda env update --name root --file environment.yml Option 2 - Cloning an environment Use the --clone flag to clone environments (see @DevC's answer): \u03bb conda create --name myclone --clone root This basically creates a direct copy of an environment. Option 3 - Spec file Make a spec-file++ to append dependencies from an env (see @Ormetrom): \u03bb activate myenv \u03bb conda list --explicit > spec_file.txt \u03bb conda install --name root --file spec_file.txt Alternatively, replicate a new environment (recommended): \u03bb conda create --name myenv2 --file spec_file.txt See Also conda env for more details on the env sub-commands. Anaconada Navigator desktop program for a more graphical experience. Docs on updated commands. With older conda versions use activate (Windows) and source activate (Linux/Mac OS). Newer versions of conda can use conda activate (this may require some setup with your shell configuration via conda init). Discussion on keeping conda env Extras There appears to be an undocumented conda run option to help execute commands in specific environments. # New command \u03bb conda run --name myenv conda list --explicit > spec_file.txt The latter command is effective at running commands in environments without the activation/deactivation steps. 
See the equivalent command below: # Equivalent \u03bb activate myenv \u03bb conda list --explicit > spec_file.txt \u03bb deactivate Note, this is likely an experimental feature, so this may not be appropriate in production until official adoption into the public API. + Conda docs have changed since the original post; links updated. ++ Spec-files only work with environments created on the same OS. Unlike the first two options, spec-files only capture links to conda dependencies; pip dependencies are not included."} +{"question_id": 44033670, "score": 137, "creation_date": 1495050801, "tags": ["python", "django", "django-rest-framework"], "instruction": "Python Django Rest Framework UnorderedObjectListWarning\n\nI upgraded from Django 1.10.4 to 1.11.1 and all of a sudden I'm getting a ton of these messages when I run my tests: lib/python3.5/site-packages/rest_framework/pagination.py:208: UnorderedObjectListWarning: Pagination may yield inconsistent results with an unordered object_list: <QuerySet [<Group: Requester>]> paginator = self.django_paginator_class(queryset, page_size) I've traced that back to the Django Pagination module: https://github.com/django/django/blob/master/django/core/paginator.py#L100 It seems to be related to my queryset code: return get_user_model().objects.filter(id=self.request.user.id) How can I find more details on this warning? It seems to be that I need to add a order_by(id) on the end of every filter, but I can't seem to find which code needs the order_by added (because the warning doesn't return a stack trace and so it happens randomly during my test run). Thanks! Edit: So by using @KlausD. verbosity tip, I looked at a test causing this error: response = self.client.get('/api/orders/') This goes to OrderViewSet but none of the things in get_queryset cause it and nothing in serializer class causes it. I have other tests that use the same code to get /api/orders and those don't cause it.... What does DRF do after get_queryset? https://github.com/encode/django-rest-framework/blob/master/rest_framework/pagination.py#L166 If I put a traceback into pagination then I get a whole bunch of stuff related to django rest framework but nothing that points back to which of my queries is triggering the order warning.", "output": "So in order to fix this I had to find all of the all, offset, filter, and limit clauses and add a order_by clause to them. Some I fixed by adding a default ordering: class Meta: ordering = ['-id'] In the ViewSets for Django Rest Framework (app/apiviews.py) I had to update all of the get_queryset methods as adding a default ordering didn't seem to work."} +{"question_id": 71718167, "score": 137, "creation_date": 1648907775, "tags": ["python", "compiler-errors", "jinja2", "pydash"], "instruction": "ImportError: cannot import name 'escape' from 'jinja2'\n\nI am getting the error ImportError: cannot import name 'escape' from 'jinja2' When trying to run code using the following requirements.txt: chart_studio==1.1.0 dash==2.1.0 dash_bootstrap_components==1.0.3 dash_core_components==2.0.0 dash_html_components==2.0.0 dash_renderer==1.9.1 dash_table==5.0.0 Flask==1.1.2 matplotlib==3.4.3 numpy==1.20.3 pandas==1.3.4 plotly==5.5.0 PyYAML==6.0 scikit_learn==1.0.2 scipy==1.7.1 seaborn==0.11.2 statsmodels==0.12.2 urllib3==1.26.7 Tried pip install jinja2 But the requirement is already satisfied. 
Running this code on a Windows system.", "output": "Jinja is a dependency of Flask and Flask V1.X.X uses the escape module from Jinja, however, recently support for the escape module was dropped in newer versions of Jinja. To fix this issue, simply update to the newer version of Flask V2.X.X in your requirements.txt where Flask no longer uses the escape module from Jinja. Flask>=2.2.2 Also, do note that Flask V1.X.X is no longer supported by the team. If you want to continue to use this older version, this GitHub issue may help."} +{"question_id": 60987997, "score": 137, "creation_date": 1585818739, "tags": ["python", "pytorch"], "instruction": "Why `torch.cuda.is_available()` returns False even after installing pytorch with cuda?\n\nOn a Windows 10 PC with an NVidia GeForce 820M I installed CUDA 9.2 and cudnn 7.1 successfully, and then installed PyTorch using the instructions at pytorch.org: pip install torch==1.4.0+cu92 torchvision==0.5.0+cu92 -f https://download.pytorch.org/whl/torch_stable.html But I get: >>> import torch >>> torch.cuda.is_available() False", "output": "Your graphics card does not support CUDA 9.2. Since I've seen a lot of questions that refer to issues like this I'm writing a broad answer on how to check if your system is compatible with CUDA, specifically targeted at using PyTorch with CUDA support. Various circumstance-dependent options for resolving issues are described in the last section of this answer. The system requirements to use PyTorch with CUDA are as follows: Your graphics card must support the required version of CUDA Your graphics card driver must support the required version of CUDA The PyTorch binaries must be built with support for the compute capability of your graphics card Note: If you install pre-built binaries (using either pip or conda) then you do not need to install the CUDA toolkit or runtime on your system before installing PyTorch with CUDA support. This is because PyTorch, unless compiled from source, is always delivered with a copy of the CUDA library. 1. How to check if your GPU/graphics card supports a particular CUDA version First, identify the model of your graphics card. Before moving forward ensure that you've got an NVIDIA graphics card. AMD and Intel graphics cards do not support CUDA. NVIDIA doesn't do a great job of providing CUDA compatibility information in a single location. The best resource is probably this section on the CUDA Wikipedia page. To determine which versions of CUDA are supported Locate your graphics card model in the big table and take note of the compute capability version. For example, the GeForce 820M compute capability is 2.1. In the bullet list preceding the table, check to see if the required CUDA version is supported by the compute capability of your graphics card. For example, CUDA 9.2 is not supported for compute capability 2.1. If your card doesn't support the required CUDA version then see the options in section 4 of this answer. Note: Compute capability refers to the computational features supported by your graphics card. Newer versions of the CUDA library rely on newer hardware features, which is why we need to determine the compute capability in order to determine the supported versions of CUDA. 2. How to check if your GPU/graphics driver supports a particular CUDA version The graphics driver is the software that allows your operating system to communicate with your graphics card.
Since CUDA relies on low-level communication with the graphics card you need to have an up-to-date driver in order use the latest versions of CUDA. First, make sure you have an NVIDIA graphics driver installed on your system. You can acquire the newest driver for your system from NVIDIA's website. If you've installed the latest driver version then your graphics driver probably supports every CUDA version compatible with your graphics card (see section 1). To verify, you can check Table 2 in the CUDA release notes. In rare cases I've heard of the latest recommended graphics drivers not supporting the latest CUDA releases. You should be able to get around this by installing the CUDA toolkit for the required CUDA version and selecting the option to install compatible drivers, though this usually isn't required. If you can't, or don't want to upgrade the graphics driver then you can check to see if your current driver supports the specific CUDA version as follows: On Windows Determine your current graphics driver version (Source https://www.nvidia.com/en-gb/drivers/drivers-faq/) Right-click on your desktop and select NVIDIA Control Panel. From the NVIDIA Control Panel menu, select Help > System Information. The driver version is listed at the top of the Details window. For more advanced users, you can also get the driver version number from the Windows Device Manager. Right-click on your graphics device under display adapters and then select Properties. Select the Driver tab and read the Driver version. The last 5 digits are the NVIDIA driver version number. Visit the CUDA release notes and scroll down to Table 2. Use this table to verify your graphics driver is new enough to support the required version of CUDA. On Linux/OS X Run the following command in a terminal window nvidia-smi This should result in something like the following Sat Apr 4 15:31:57 2020 +-----------------------------------------------------------------------------+ | NVIDIA-SMI 435.21 Driver Version: 435.21 CUDA Version: 10.1 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | |===============================+======================+======================| | 0 GeForce RTX 206... Off | 00000000:01:00.0 On | N/A | | 0% 35C P8 16W / 175W | 502MiB / 7974MiB | 1% Default | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: GPU Memory | | GPU PID Type Process name Usage | |=============================================================================| | 0 1138 G /usr/lib/xorg/Xorg 300MiB | | 0 2550 G /usr/bin/compiz 189MiB | | 0 5735 G /usr/lib/firefox/firefox 5MiB | | 0 7073 G /usr/lib/firefox/firefox 5MiB | +-----------------------------------------------------------------------------+ Driver Version: ###.## is your graphic driver version. In the example above the driver version is 435.21. CUDA Version: ##.# is the latest version of CUDA supported by your graphics driver. In the example above the graphics driver supports CUDA 10.1 as well as all compatible CUDA versions before 10.1. Note: The CUDA Version displayed in this table does not indicate that the CUDA toolkit or runtime are actually installed on your system. This just indicates the latest version of CUDA your graphics driver is compatible with. 
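As a quick cross-check once a CUDA-enabled PyTorch build is installed, you can also ask the binary itself what it was built against and what it can see. A minimal sketch using only documented PyTorch calls (the device name printed will of course depend on your own GPU):

import torch

print(torch.version.cuda)               # CUDA version the installed PyTorch binary was built against
print(torch.cuda.is_available())        # True only if the binary, the driver and the GPU all line up
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))         # your GPU model
    print(torch.cuda.get_device_capability(0))   # compute capability as a (major, minor) tuple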
To be extra sure that your driver supports the desired CUDA version you can visit Table 2 on the CUDA release notes page. 3. How to check if a particular version of PyTorch is compatible with your GPU/graphics card compute capability Even if your graphics card supports the required version of CUDA then it's possible that the pre-compiled PyTorch binaries were not compiled with support for your compute capability. For example, in PyTorch 0.3.1 support for compute capability <= 5.0 was dropped. First, verify that your graphics card and driver both support the required CUDA version (see Sections 1 and 2 above), the information in this section assumes that this is the case. The easiest way to check if PyTorch supports your compute capability is to install the desired version of PyTorch with CUDA support and run the following from a python interpreter >>> import torch >>> torch.zeros(1).cuda() If you get an error message that reads Found GPU0 XXXXX which is of cuda capability #.#. PyTorch no longer supports this GPU because it is too old. then that means PyTorch was not compiled with support for your compute capability. If this runs without issue then you should be good to go. Update If you're installing an old version of PyTorch on a system with a newer GPU then it's possible that the old PyTorch release wasn't compiled with support for your compute capability. Assuming your GPU supports the version of CUDA used by PyTorch, then you should be able to rebuild PyTorch from source with the desired CUDA version or upgrade to a more recent version of PyTorch that was compiled with support for the newer compute capabilities. 4. Conclusion If your graphics card and driver support the required version of CUDA (section 1 and 2) but the PyTorch binaries don't support your compute capability (section 3) then your options are Compile PyTorch from source with support for your compute capability (see here) Install PyTorch without CUDA support (CPU-only) Install an older version of the PyTorch binaries that support your compute capability (not recommended as PyTorch 0.3.1 is very outdated at this point). AFAIK compute capability older than 3.X has never been supported in the pre-built binaries Upgrade your graphics card If your graphics card doesn't support the required version of CUDA (section 1) then your options are Install PyTorch without CUDA support (CPU-only) Install an older version of PyTorch that supports a CUDA version supported by your graphics card (still may require compiling from source if the binaries don't support your compute capability) Upgrade your graphics card"} +{"question_id": 1593019, "score": 137, "creation_date": 1256024454, "tags": ["python", "time-complexity", "benchmarking", "space-complexity"], "instruction": "Is there any simple way to benchmark Python script?\n\nUsually I use shell command time. My purpose is to test if data is small, medium, large or very large set, how much time and memory usage will be. Any tools for Linux or just Python to do this?", "output": "Have a look at timeit, the python profiler and pycallgraph. Also make sure to have a look at the comment below by nikicc mentioning \"SnakeViz\". It gives you yet another visualisation of profiling data which can be helpful. 
timeit def test(): \"\"\"Stupid test function\"\"\" lst = [] for i in range(100): lst.append(i) if __name__ == '__main__': import timeit print(timeit.timeit(\"test()\", setup=\"from __main__ import test\")) # For Python>=3.5 one can also write: print(timeit.timeit(\"test()\", globals=locals())) Essentially, you pass it Python code as a string parameter, and it will run it the specified number of times and print the execution time. The important bits from the docs: timeit.timeit(stmt='pass', setup='pass', timer=<default timer>, number=1000000, globals=None) Create a Timer instance with the given statement, setup code and timer function and run its timeit method with number executions. The optional globals argument specifies a namespace in which to execute the code. ... and: Timer.timeit(number=1000000) Time number executions of the main statement. This executes the setup statement once, and then returns the time it takes to execute the main statement a number of times, measured in seconds as a float. The argument is the number of times through the loop, defaulting to one million. The main statement, the setup statement and the timer function to be used are passed to the constructor. Note: By default, timeit temporarily turns off garbage collection during the timing. The advantage of this approach is that it makes independent timings more comparable. The disadvantage is that GC may be an important component of the performance of the function being measured. If so, GC can be re-enabled as the first statement in the setup string. For example: timeit.Timer('for i in xrange(10): oct(i)', 'gc.enable()').timeit() Profiling Profiling will give you a much more detailed idea about what's going on. Here's the \"instant example\" from the official docs: import cProfile import re cProfile.run('re.compile(\"foo|bar\")') Which will give you: 197 function calls (192 primitive calls) in 0.002 seconds Ordered by: standard name ncalls tottime percall cumtime percall filename:lineno(function) 1 0.000 0.000 0.001 0.001 <string>:1(<module>) 1 0.000 0.000 0.001 0.001 re.py:212(compile) 1 0.000 0.000 0.001 0.001 re.py:268(_compile) 1 0.000 0.000 0.000 0.000 sre_compile.py:172(_compile_charset) 1 0.000 0.000 0.000 0.000 sre_compile.py:201(_optimize_charset) 4 0.000 0.000 0.000 0.000 sre_compile.py:25(_identityfunction) 3/1 0.000 0.000 0.000 0.000 sre_compile.py:33(_compile) Both of these modules should give you an idea about where to look for bottlenecks. Also, to get to grips with the output of profile, have a look at this post pycallgraph NOTE pycallgraph has been officially abandoned since Feb. 2018. As of Dec. 2020 it was still working on Python 3.6 though. As long as there are no core changes in how python exposes the profiling API it should remain a helpful tool though. This module uses graphviz to create callgraphs like the following: You can easily see which paths used up the most time by colour. You can either create them using the pycallgraph API, or using a packaged script: pycallgraph graphviz -- ./mypythonscript.py The overhead is quite considerable though. So for already long-running processes, creating the graph can take some time."} +{"question_id": 43109355, "score": 137, "creation_date": 1490852130, "tags": ["python", "logging", "python-logging"], "instruction": "Logging setLevel is being ignored\n\nThe below code is copied from the documentation. I am supposed to be able to see all the info logs. But I don't. I am only able to see the warn and above even though I've set setLevel to INFO.
Why is this happening? foo.py: import logging logger = logging.getLogger(__name__) logger.setLevel(logging.DEBUG) logger.debug('debug message') logger.info('info message') logger.warn('warn message') logger.error('error message') logger.critical('critical message') Output: workingDirectory$ python foo.py warn message error message critical message Where did the info and debug messages go??", "output": "As pointed out by some users, using: logging.basicConfig(level=logging.DEBUG, format='%(message)s') as written in the accepted answer is not a good option because it sets the log level for the root logger, so it may lead to unexpected behaviours (e.g. third party libraries may start to log debug messages if you set loglevel=logging.DEBUG) In my opinion the best solution is to set log level just for your logger, like this: import logging logger = logging.getLogger('MyLogger') handler = logging.StreamHandler() formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s') handler.setFormatter(formatter) logger.addHandler(handler) logger.setLevel(logging.DEBUG) Not a really intuitive solution, but it is necessary if you want to set log level only for 'MyLogger' and leave the root logger untouched. So, why is logging.basicConfig(level=logging.DEBUG, format='%(message)s') setting the log level globally? Well, actually it doesn't. As said, it's just changing the configuration of the root logger and, as described in the Python documentation: Loggers should NEVER be instantiated directly, but always through the module-level function logging.getLogger(name). Multiple calls to getLogger() with the same name will always return a reference to the same Logger object. So, logging.basicConfig is creating a StreamHandler with a default Formatter and adding it to the root logger. The point is that if any other library is using the \"root logger\", you're going to set that log level for that library too so it can happen that you start to see debug logs from third party libraries. This is why I think it's better to create your own logger and set your own formatters and handlers, so you can leave the root logger untouched."} +{"question_id": 41849718, "score": 137, "creation_date": 1485341686, "tags": ["python", "python-2.7", "anaconda", "spyder"], "instruction": "how to update spyder on anaconda\n\nI have Anaconda installed (Python 2.7.11 |Anaconda custom (64-bit)| (default, Feb 16 2016, 09:58:36) [MSC v.1500 64 bit (AMD64)] on win32) and I am using Spyder 2.3.8 Would like to update Spyder to the latest version, so I went through the commands: conda update conda conda update anaconda conda update spyder They all ran without errors, but the spyder version didn't change - this is the command I'm using to launch: C:\\Anaconda2\\pythonw.exe C:\\Anaconda2\\cwp.py C:\\Anaconda2 \"C:/Anaconda2/pythonw.exe\" \"C:/Anaconda2/Scripts/spyder-script.py\" --new-instance Am I missing something?", "output": "To expand on juanpa.arrivillaga's comment: If you want to update Spyder in the root environment, then conda update spyder works for me. If you want to update Spyder for a virtual environment you have created (e.g., for a different version of Python), then conda update -n $ENV_NAME spyder where $ENV_NAME is your environment name. EDIT: In case conda update spyder isn't working, this post indicates you might need to run conda update anaconda before updating spyder.
Also note that you can specify an exact spyder version if you want."} +{"question_id": 15034151, "score": 137, "creation_date": 1361571557, "tags": ["python", "shutil", "copytree"], "instruction": "Copy directory contents into a directory with python\n\nI have a directory /a/b/c that has files and subdirectories. I need to copy the /a/b/c/* in the /x/y/z directory. What python methods can I use? I tried shutil.copytree(\"a/b/c\", \"/x/y/z\"), but python tries to create /x/y/z and raises an error \"Directory exists\".", "output": "I found this code working which is part of the standard library: from distutils.dir_util import copy_tree # copy subdirectory example from_directory = \"/a/b/c\" to_directory = \"/x/y/z\" copy_tree(from_directory, to_directory) Reference: Python 2: https://docs.python.org/2/distutils/apiref.html#distutils.dir_util.copy_tree Python 3: https://docs.python.org/3/distutils/apiref.html#distutils.dir_util.copy_tree"} +{"question_id": 9090079, "score": 137, "creation_date": 1328064690, "tags": ["python", "import"], "instruction": "How to import from filename that starts with a number\n\nBasically there is a file called 8puzzle.py and I want to import the file into another file (in the same folder and I cannot change the file name as the file is provided). Is there any way to do this in Python? I tried usual way from 8puzzle import *, it gives me an error. Error is: >>> import 8puzzle File \"<input>\", line 1 import 8puzzle ^ SyntaxError: invalid syntax >>>", "output": "You could do puzzle = __import__('8puzzle') Very interesting problem. I'll remember not to name anything with a number. If you'd like to import * -- you should check out this question and answer. 12 years later... See answer below (use importlib.import_module from Python 3.1) for interoperability and understandability. https://docs.python.org/3/library/importlib.html#importlib.import_module This provides an implementation of import which is portable to any Python interpreter. This also provides an implementation which is easier to comprehend than one implemented in a programming language other than Python. .... The import_module() function acts as a simplifying wrapper around importlib.__import__(). This means all semantics of the function are derived from importlib.__import__(). The most important difference between these two functions is that import_module() returns the specified package or module (e.g. pkg.mod), while __import__() returns the top-level package or module (e.g. pkg)."} +{"question_id": 13440875, "score": 137, "creation_date": 1353247469, "tags": ["python", "redis", "celery", "scheduled-tasks", "python-rq"], "instruction": "Pros and cons to use Celery vs. RQ\n\nCurrently I'm working on python project that requires implement some background jobs (mostly for email sending and heavily database updates). I use Redis for task broker. So in this point I have two candidates: Celery and RQ. I had some experience with these job queues, but I want to ask you guys to share you experience of using this tools. So. What pros and cons to use Celery vs. RQ. Any examples of projects/task suitable to use Celery vs. RQ. Celery looks pretty complicated but it's full featured solution. Actually I don't think that I need all these features. From other side RQ is very simple (e.g configuration, integration), but it seems that it lacks some useful features (e.g task revoking, code auto-reloading)", "output": "Here is what I have found while trying to answer this exact same question. 
It's probably not comprehensive, and may even be inaccurate on some points. In short, RQ is designed to be simpler all around. Celery is designed to be more robust. They are both excellent. Documentation. RQ's documentation is comprehensive without being complex, and mirrors the project's overall simplicity - you never feel lost or confused. Celery's documentation is also comprehensive, but expect to be re-visiting it quite a lot when you're first setting things up as there are too many options to internalize Monitoring. Celery's Flower and the RQ dashboard are both very simple to setup and give you at least 90% of all information you would ever want Broker support. Celery is the clear winner, RQ only supports Redis. This means less documentation on \"what is a broker\", but also means you cannot switch brokers in the future if Redis no longer works for you. For example, Instagram considered both Redis and RabbitMQ with Celery. This is important because different brokers have different guarantees e.g. Redis cannot (as of writing) guarantee 100% that your messages are delivered. Priority queues. RQs priority queue model is simple and effective - workers read from queues in order. Celery requires spinning up multiple workers to consume from different queues. Both approaches work OS Support. Celery is the clear winner here, as RQ only runs on systems that support fork e.g. Unix systems Language support. RQ only supports Python, whereas Celery lets you send tasks from one language to a different language API. Celery is extremely flexible (multiple result backends, nice config format, workflow canvas support) but naturally this power can be confusing. By contrast, the RQ api is simple. Subtask support. Celery supports subtasks (e.g. creating new tasks from within existing tasks). I don't know if RQ does Community and Stability. Celery is probably more established, but they are both active projects. As of writing, Celery has ~3500 stars on Github while RQ has ~2000 and both projects show active development In my opinion, Celery is not as complex as its reputation might lead you to believe, but you will have to RTFM. So, why would anyone be willing to trade the (arguably more full-featured) Celery for RQ? In my mind, it all comes down to the simplicity. By restricting itself to Redis+Unix, RQ provides simpler documentation, simpler codebase, and a simpler API. This means you (and potential contributors to your project) can focus on the code you care about, instead of having to keep details about the task queue system in your working memory. We all have a limit on how many details can be in our head at once, and by removing the need to keep task queue details in there RQ lets get back to the code you care about. That simplicity comes at the expense of features like inter-language task queues, wide OS support, 100% reliable message guarantees, and ability to switch message brokers easily."} +{"question_id": 13092267, "score": 137, "creation_date": 1351275493, "tags": ["python", "python-3.x", "iterator", "generator", "typeerror"], "instruction": "If range() is a generator in Python 3.3, why can I not call next() on a range?\n\nPerhaps I've fallen victim to misinformation on the web, but I think it's more likely just that I've misunderstood something. Based on what I've learned so far, range() is a generator, and generators can be used as iterators. However, this code: myrange = range(10) print(next(myrange)) gives me this error: TypeError: 'range' object is not an iterator What am I missing here? 
I was expecting this to print 0, and to advance to the next value in myrange. I'm new to Python, so please accept my apologies for the rather basic question, but I couldn't find a good explanation anywhere else.", "output": "range is a class of immutable iterable objects. Their iteration behavior can be compared to lists: you can't call next directly on them; you have to get an iterator by using iter. So no, range is not a generator. You may be thinking, \"why didn't they make it an iterator\"? Well, ranges have some useful properties that wouldn't be possible that way: They are immutable, so they can be used as dictionary keys. They have the start, stop and step attributes (since Python 3.3), count and index methods and they support in, len and __getitem__ operations. You can iterate over the same range multiple times. >>> myrange = range(1, 21, 2) >>> myrange.start 1 >>> myrange.step 2 >>> myrange.index(17) 8 >>> myrange.index(18) Traceback (most recent call last): File \"<stdin>\", line 1, in <module> ValueError: 18 is not in range >>> it = iter(myrange) >>> it <range_iterator object at 0x7f504a9be960> >>> next(it) 1 >>> next(it) 3 >>> next(it) 5"} +{"question_id": 5124376, "score": 136, "creation_date": 1298682492, "tags": ["python", "numpy", "nan"], "instruction": "convert nan value to zero\n\nI have a 2D numpy array. Some of the values in this array are NaN. I want to perform certain operations using this array. For example consider the array: [[ 0. 43. 67. 0. 38.] [ 100. 86. 96. 100. 94.] [ 76. 79. 83. 89. 56.] [ 88. NaN 67. 89. 81.] [ 94. 79. 67. 89. 69.] [ 88. 79. 58. 72. 63.] [ 76. 79. 71. 67. 56.] [ 71. 71. NaN 56. 100.]] I am trying to take each row, one at a time, sort it in reversed order to get max 3 values from the row and take their average. The code I tried is: # nparr is a 2D numpy array for entry in nparr: sortedentry = sorted(entry, reverse=True) highest_3_values = sortedentry[:3] avg_highest_3 = float(sum(highest_3_values)) / 3 This does not work for rows containing NaN. My question is, is there a quick way to convert all NaN values to zero in the 2D numpy array so that I have no problems with sorting and other things I am trying to do.", "output": "This should work: from numpy import * a = array([[1, 2, 3], [0, 3, NaN]]) where_are_NaNs = isnan(a) a[where_are_NaNs] = 0 In the above case where_are_NaNs is: In [12]: where_are_NaNs Out[12]: array([[False, False, False], [False, False, True]], dtype=bool) A complement about efficiency. The examples below were run with numpy 1.21.2 >>> aa = np.random.random(1_000_000) >>> a = np.where(aa < 0.15, np.nan, aa) >>> %timeit a[np.isnan(a)] = 0 536 \u00b5s \u00b1 8.11 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 1,000 loops each) >>> a = np.where(aa < 0.15, np.nan, aa) >>> %timeit np.where(np.isnan(a), 0, a) 2.38 ms \u00b1 27.8 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 100 loops each) >>> a = np.where(aa < 0.15, np.nan, aa) >>> %timeit np.nan_to_num(a, copy=True) 8.11 ms \u00b1 401 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 100 loops each) >>> a = np.where(aa < 0.15, np.nan, aa) >>> %timeit np.nan_to_num(a, copy=False) 3.8 ms \u00b1 70.2 \u00b5s per loop (mean \u00b1 std. dev. 
of 7 runs, 100 loops each) In consequence a[np.isnan(a)] = 0 is faster."} +{"question_id": 9567069, "score": 136, "creation_date": 1330952431, "tags": ["python", "html", "selenium", "webdriver"], "instruction": "Checking if an element exists with Python Selenium\n\nI have a problem; I am using the Selenium (Firefox) web driver to open a webpage, click a few links, etc., and then capture a screenshot. My script runs fine from the CLI, but when run via a cron job it is not getting past the first find_element() test. I need to add some debug, or something to help me figure out why it is failing. Basically, I have to click a 'log in' anchor before going to the login page. The construct of the element is: <a class=\"lnk\" rel=\"nofollow\" href=\"/login.jsp?destination=/secure/Dash.jspa\">log in</a> I am using the find_element By LINK_TEXT method: login = driver.find_element(By.LINK_TEXT, \"log in\").click() A) How do I check that the link is actually being picked up by Python? Should I use try/catch block? B) Is there a better/more reliable way to locate the DOM element than by LINK_TEXT? E.g., in jQuery, you can use a more specific selector, $('a.lnk:contains(log in)').do_something(); I have solved the main problem and it was just finger trouble. I was calling the script with incorrect parameters - a simple mistake. I'd still like some pointers on how to check whether an element exists. Also, an example/explanation of implicit / explicit Waits instead of using a crappy time.sleep() call.", "output": "A) Yes. The easiest way to check if an element exists is to simply call find_element inside a try/catch. B) Yes, I always try to identify elements without using their text for two reasons: the text is more likely to change and; if it is important to you, you won't be able to run your tests against localized builds. The solution is either: You can use XPath to find a parent or ancestor element that has an ID or some other unique identifier and then find its child/descendant that matches or; you could request an ID or name or some other unique identifier for the link itself. For the follow-up questions, using try/catch is how you can tell if an element exists or not and good examples of waits can be found here: http://seleniumhq.org/docs/04_webdriver_advanced.html"} +{"question_id": 11509721, "score": 136, "creation_date": 1342460910, "tags": ["python", "list", "dictionary"], "instruction": "How do I initialize a dictionary of empty lists in Python?\n\nMy attempt to programmatically create a dictionary of lists is failing to allow me to individually address dictionary keys. Whenever I create the dictionary of lists and try to append to one key, all of them are updated. Here's a very simple test case: data = {} data = data.fromkeys(range(2),[]) data[1].append('hello') print data Actual result: {0: ['hello'], 1: ['hello']} Expected result: {0: [], 1: ['hello']} Here's what works data = {0:[],1:[]} data[1].append('hello') print data Actual and Expected Result: {0: [], 1: ['hello']} Why is the fromkeys method not working as expected?", "output": "When [] is passed as the second argument to dict.fromkeys(), all values in the resulting dict will be the same list object. 
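A minimal sketch that makes the sharing visible (checking object identity with is):

data = dict.fromkeys(range(2), [])
print(data[0] is data[1])   # True - every key refers to the exact same list object
data[0].append('hello')
print(data)                 # {0: ['hello'], 1: ['hello']} - one append shows up under every key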
In Python 2.7 or above, use a dict comprehension instead: data = {k: [] for k in range(2)} In earlier versions of Python, there is no dict comprehension, but a list comprehension can be passed to the dict constructor instead: data = dict([(k, []) for k in range(2)]) In 2.4-2.6, it is also possible to pass a generator expression to dict, and the surrounding parentheses can be dropped: data = dict((k, []) for k in range(2))"} +{"question_id": 8920643, "score": 136, "creation_date": 1326943283, "tags": ["python", "django", "celery", "message-passing"], "instruction": "Cancel an already executing task with Celery?\n\nI have been reading the doc and searching but cannot seem to find a straight answer: Can you cancel an already executing task? (as in the task has started, takes a while, and half way through it needs to be cancelled) I found this from the doc at Celery FAQ >>> result = add.apply_async(args=[2, 2], countdown=120) >>> result.revoke() But I am unclear if this will cancel queued tasks or if it will kill a running process on a worker. Thanks for any light you can shed!", "output": "revoke cancels the task execution. If a task is revoked, the workers ignore the task and do not execute it. If you don't use persistent revokes your task can be executed after worker's restart. https://docs.celeryq.dev/en/stable/userguide/workers.html#worker-persistent-revokes revoke has an terminate option which is False by default. If you need to kill the executing task you need to set terminate to True. >>> from celery.task.control import revoke >>> revoke(task_id, terminate=True) https://docs.celeryq.dev/en/stable/userguide/workers.html#revoke-revoking-tasks"} +{"question_id": 23256536, "score": 136, "creation_date": 1398292814, "tags": ["python", "apache-spark", "pyspark"], "instruction": "importing pyspark in python shell\n\nThis is a copy of someone else's question on another forum that was never answered, so I thought I'd re-ask it here, as I have the same issue. (See http://geekple.com/blogs/feeds/Xgzu7/posts/351703064084736) I have Spark installed properly on my machine and am able to run python programs with the pyspark modules without error when using ./bin/pyspark as my python interpreter. However, when I attempt to run the regular Python shell, when I try to import pyspark modules I get this error: from pyspark import SparkContext and it says \"No module named pyspark\". How can I fix this? Is there an environment variable I need to set to point Python to the pyspark headers/libraries/etc.? If my spark installation is /spark/, which pyspark paths do I need to include? Or can pyspark programs only be run from the pyspark interpreter?", "output": "Assuming one of the following: Spark is downloaded on your system and you have an environment variable SPARK_HOME pointing to it You have ran pip install pyspark Here is a simple method (If you don't bother about how it works!!!) Use findspark Go to your python shell pip install findspark import findspark findspark.init() import the necessary modules from pyspark import SparkContext from pyspark import SparkConf Done!!!"} +{"question_id": 17716624, "score": 136, "creation_date": 1374130735, "tags": ["python", "django", "django-cookies"], "instruction": "Django CSRF Cookie Not Set\n\nI have some problem for a while now, I'm experiencing CSRF Cookie not set. 
Please look at the code below: views.py: def deposit(request, account_num): if request.method == 'POST': account = get_object_or_404(account_info, acct_number=account_num) form_ = AccountForm(request.POST or None, instance=account) form = BalanceForm(request.POST) info = str(account_info.objects.filter(acct_number=account_num)) inf = info.split() if form.is_valid(): # cd=form.cleaned_data now = datetime.datetime.now() cmodel = form.save() cmodel.acct_number = account_num # RepresentsInt(cmodel.acct_number) cmodel.bal_change = \"%0.2f\" % float(cmodel.bal_change) cmodel.total_balance = \"%0.2f\" % (float(inf[1]) + float(cmodel.bal_change)) account.balance = \"%0.2f\" % float(cmodel.total_balance) cmodel.total_balance = \"%0.2f\" % float(cmodel.total_balance) # cmodel.bal_change=cmodel.bal_change cmodel.issued = now.strftime(\"%m/%d/%y %I:%M:%S %p\") account.recent_change = cmodel.issued cmodel.save() account.save() return HttpResponseRedirect(\"/history/\" + account_num + \"/\") else: return render_to_response('history.html', {'account_form': form}, context_instance=RequestContext(request)) Template file: <form action=\"/deposit/{{ account_num }}/\" method=\"post\"> <table> <tr> {{ account_form.bal_change }}   <input type=\"submit\" value=\"Deposit\"/> </tr> {% csrf_token %} </table> </form> I'n stuck, I already cleared the cookie, used other browser but still csrf cookie not set.", "output": "This can also occur if CSRF_COOKIE_SECURE = True is set and you are accessing the site non-securely or if CSRF_COOKIE_HTTPONLY = True is set as stated here and here"} +{"question_id": 30239152, "score": 136, "creation_date": 1431611510, "tags": ["python", "git", "pip"], "instruction": "Specify extras_require with pip install -e\n\nHow can one manage to install extras_requires with pip when installing from a git repository ? I know that you can do pip install project[extra] when the project is on pypi. And you have to do pip install -e git+https://github.com/user/project.git#egg=project for a git repo but I didn't manage to find how to link these two options together.", "output": "This should work, per examples #6 and #7 For remote repos: pip install -e git+https://github.com/user/project.git#egg=project[extra] And this for local ones (thanks to @Kurt-Bourbaki): pip install -e .[extra] As per @Kurt-Bourbaki: If you are using zsh you need to escape square brackets or use quotes: pip install -e .\\[extra\\] # or pip install -e \".[extra]\" As per @Epoc: Windows Powershell will also require quoting the brackets."} +{"question_id": 15304229, "score": 136, "creation_date": 1362780808, "tags": ["python", "xml", "marshalling", "elementtree"], "instruction": "Convert Python ElementTree to string\n\nWhenever I call ElementTree.tostring(e), I get the following error message: AttributeError: 'Element' object has no attribute 'getroot' Is there any other way to convert an ElementTree object into an XML string? TraceBack: Traceback (most recent call last): File \"Development/Python/REObjectSort/REObjectResolver.py\", line 145, in <module> cm = integrateDataWithCsv(cm, csvm) File \"Development/Python/REObjectSort/REObjectResolver.py\", line 137, in integrateDataWithCsv xmlstr = ElementTree.tostring(et.getroot(),encoding='utf8',method='xml') AttributeError: 'Element' object has no attribute 'getroot'", "output": "Element objects have no .getroot() method. 
Drop that call, and the .tostring() call works: xmlstr = ElementTree.tostring(et, encoding='utf8') You only need to use .getroot() if you have an ElementTree instance. Other notes: This produces a bytestring, which in Python 3 is the bytes type. If you must have a str object, you have two options: Decode the resulting bytes value, from UTF-8: xmlstr.decode(\"utf8\") Use encoding='unicode'; this avoids an encode / decode cycle: xmlstr = ElementTree.tostring(et, encoding='unicode') If you wanted the UTF-8 encoded bytestring value or are using Python 2, take into account that ElementTree doesn't properly detect utf8 as the standard XML encoding, so it'll add a <?xml version='1.0' encoding='utf8'?> declaration. Use utf-8 or UTF-8 (with a dash) if you want to prevent this. When using encoding=\"unicode\" no declaration header is added."} +{"question_id": 5980042, "score": 136, "creation_date": 1305212499, "tags": ["python", "command-line-arguments"], "instruction": "How to implement the --verbose or -v option into a script?\n\nI know the --verbose or -v from several tools and I'd like to implement this into some of my own scripts and tools. I thought of placing: if verbose: print ... through my source code, so that if a user passes the -v option, the variable verbose will be set to True and the text will be printed. Is this the right approach or is there a more common way? Addition: I am not asking for a way to implement the parsing of arguments. That I know how it is done. I am only interested specially in the verbose option.", "output": "My suggestion is to use a function. But rather than putting the if in the function, which you might be tempted to do, do it like this: if verbose: def verboseprint(*args): # Print each argument separately so caller doesn't need to # stuff everything to be printed into a single string for arg in args: print arg, print else: verboseprint = lambda *a: None # do-nothing function (Yes, you can define a function in an if statement, and it'll only get defined if the condition is true!) If you're using Python 3, where print is already a function (or if you're willing to use print as a function in 2.x using from __future__ import print_function) it's even simpler: verboseprint = print if verbose else lambda *a, **k: None This way, the function is defined as a do-nothing if verbose mode is off (using a lambda), instead of constantly testing the verbose flag. If the user could change the verbosity mode during the run of your program, this would be the wrong approach (you'd need the if in the function), but since you're setting it with a command-line flag, you only need to make the decision once. You then use e.g. verboseprint(\"look at all my verbosity!\", object(), 3) whenever you want to print a \"verbose\" message. If you are willing and able to use the Python -O flag to turn verbosity on and off when launching the \"production\" version of the script (or set the PYTHONOPTIMIZE environment variable) then the better way is to test the __debug__ flag everywhere you want to print a verbose output: if __debug__: print(\"Verbosity enabled\") When run without optimization, these statements are executed (and the if is stripped out, leaving only the body of the statement). When run with optimization, Python actually strips those statements out entirely. They have no performance impact whatsoever! 
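A tiny self-contained illustration (verbose_demo.py is just a hypothetical file name); save it and compare python verbose_demo.py with python -O verbose_demo.py:

# verbose_demo.py
if __debug__:
    print('verbose diagnostics enabled')   # stripped out entirely when run with -O
print('normal output')                     # always printed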
See my blog post on __debug__ and -O for a more in-depth discussion."} +{"question_id": 52676020, "score": 136, "creation_date": 1538803897, "tags": ["python", "opencv"], "instruction": "imread returns None, violating assertion !_src.empty() in function 'cvtColor' error\n\nI am trying to do a basic colour conversion in python however I can't seem to get past the below error. I have re-installed python, opencv and tried on both python 3.4.3 (latest) and python 2.7 (which is on my Mac). I installed opencv using python's package manager opencv-python. Here is the code that fails: frame = cv2.imread('frames/frame%d.tiff' % count) frame_HSV= cv2.cvtColor(frame,cv2.COLOR_RGB2HSV) This is the error message: cv2.error: OpenCV(3.4.3) /Users/travis/build/skvark/opencv-python/opencv/modules/imgproc/src/color.cpp:181: error: (-215:Assertion failed) !_src.empty() in function 'cvtColor'", "output": "This error happened because the image didn't load properly. So you have a problem with the previous line cv2.imread. My suggestion is : check if the image exists in the path you give check if the count variable has a valid number"} +{"question_id": 12053236, "score": 136, "creation_date": 1345545968, "tags": ["php", "python", "implode"], "instruction": "Python equivalent for PHP's implode?\n\nIs there an equivalent for PHP's implode in Python? I've read in and split up a set of delimited words, and now I want to sort them out in random orders and print the words out with spaces in between. implode \u2014 Join array elements with a string http://php.net/manual/en/function.implode.php", "output": "Use the strings join-method. print(' '.join(['word1', 'word2', 'word3'])) You can join any iterable (not only the list used here) and of course you can use any string (not only ' ') as the delimiter. If you want a random order like you said in your question use shuffle. In the comment there was the question why Python throws an error if you do \"glue\".join([\"startString\", 123, \"endString\"]). join operates on an iterable of strings. There is no implicit type conversion in Python. But of course there is a solution. Just do the conversion yourself. \"glue\".join(map(str, [\"startString\",123,\"endString\"]))"} +{"question_id": 55749206, "score": 136, "creation_date": 1555601100, "tags": ["python", "pycharm", "virtualenv", "python-3.6"], "instruction": "ModuleNotFoundError: No module named 'distutils.core'\n\nThis question asks: \"I do everything right, but nothing happens!\" See this question for getting technical info about how to solve the direct cause of this error message. I've recently upgraded from Ubuntu 18.04 to 19.04 which has Python 3.7. But I work on many projects using Python 3.6. Now when I try to create a virtualenv with Python 3.6 in PyCharm, it raises: ModuleNotFoundError: No module named 'distutils.core' I can't figure out what to do. I tried to install distutils: milano@milano-PC:~$ sudo apt-get install python3-distutils Reading package lists... Done Building dependency tree Reading state information... Done python3-distutils is already the newest version (3.7.3-1ubuntu1). 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. But as you can see I have the newest version. Do you know what to do?", "output": "Python base interpreter does require some additional modules. Those are not installed with e.g. Ubuntu 18.04 as default. To solve this we need to first find the python version you're running. 
If you have only installed one python version on your system (and you are sure about it) you can skip this step. # from your project interpreter run # your_project_python --version $ python3 --version Python 3.7.8 You now need to install for this precise python interpreter the distutils. So here the solution for this example would be: sudo apt install python3.7-distutils # sudo apt install python3-distutils # would just update default python intrpreter Keep in mind, that just running python from any command line might be an other version of python then you're running in your project! If this hasn't helped, look for the following possibilities. This will bring you the binary which resolved from the alias in the command line. $ which python /usr/bin/python $ ls -lach /usr/bin/python lrwxrwxrwx 1 root root 9 Jun 8 2018 /usr/bin/python -> python2.7 original source: refer to this article For this answer I've also merged, summarized, ordered and explained some of the content which has been provided by Neo, Andrei, Mostafa and Wolfgang. As a side note for sorcerer's apprentice: You might be tempted to uninstall python interpreters. For proposed solution not necessary at all!! How ever, keep in mind that there is one python interpreter which your whole OS depends on. So this default one, you don't want to uninstall. If you do so, you're in a certain mess in finding your desktop taskbar and basically everything."} +{"question_id": 17909294, "score": 136, "creation_date": 1375022237, "tags": ["python", "argparse"], "instruction": "argparse mutual exclusive group\n\nWhat I need is: pro [-a xxx | [-b yyy -c zzz]] I tried this but it doesn't work: group= parser.add_argument_group('Model 2') group_ex = group.add_mutually_exclusive_group() group_ex.add_argument(\"-a\", type=str, action = \"store\", default = \"\", help=\"test\") group_ex_2 = group_ex.add_argument_group(\"option 2\") group_ex_2.add_argument(\"-b\", type=str, action = \"store\", default = \"\", help=\"test\") group_ex_2.add_argument(\"-c\", type=str, action = \"store\", default = \"\", help=\"test\")", "output": "add_mutually_exclusive_group doesn't make an entire group mutually exclusive. It makes options within the group mutually exclusive. What you're looking for is subcommands. Instead of prog [ -a xxxx | [-b yyy -c zzz]], you'd have: prog command 1 -a: ... command 2 -b: ... -c: ... To invoke with the first set of arguments: prog command_1 -a xxxx To invoke with the second set of arguments: prog command_2 -b yyyy -c zzzz You can also set the sub command arguments as positional. prog command_1 xxxx Kind of like git or svn: git commit -am git merge develop Working Example # create the top-level parser parser = argparse.ArgumentParser(prog='PROG') parser.add_argument('--foo', action='store_true', help='help for foo arg.') subparsers = parser.add_subparsers(help='help for subcommand', dest=\"subcommand\") # create the parser for the \"command_1\" command parser_a = subparsers.add_parser('command_1', help='command_1 help') parser_a.add_argument('a', type=str, help='help for bar, positional') # create the parser for the \"command_2\" command parser_b = subparsers.add_parser('command_2', help='help for command_2') parser_b.add_argument('-b', type=str, help='help for b') parser_b.add_argument('-c', type=str, action='store', default='', help='test') Test it >>> parser.print_help() usage: PROG [-h] [--foo] {command_1,command_2} ... 
positional arguments: {command_1,command_2} help for subcommand command_1 command_1 help command_2 help for command_2 optional arguments: -h, --help show this help message and exit --foo help for foo arg. >>> >>> parser.parse_args(['command_1', 'working']) Namespace(subcommand='command_1', a='working', foo=False) >>> parser.parse_args(['command_1', 'wellness', '-b x']) usage: PROG [-h] [--foo] {command_1,command_2} ... PROG: error: unrecognized arguments: -b x Good luck."} +{"question_id": 33785755, "score": 135, "creation_date": 1447865362, "tags": ["python"], "instruction": "Getting \"Could not find function xmlCheckVersion in library libxml2. Is libxml2 installed?\" when installing lxml through pip\n\nI'm getting an error Could not find function xmlCheckVersion in library libxml2. Is libxml2 installed? when trying to install lxml through pip. c:\\users\\f\\appdata\\local\\temp\\xmlXPathInitqjzysz.c(1) : fatal error C1083: Cannot open include file: 'libxml/xpath.h': No such file or directory ********************************************************************************* Could not find function xmlCheckVersion in library libxml2. Is libxml2 installed? ********************************************************************************* error: command 'C:\\\\Users\\\\f\\\\AppData\\\\Local\\\\Programs\\\\Common\\\\Microsoft\\\\Visual C++ for Python\\\\9.0\\\\VC\\\\Bin\\\\cl.exe' failed with exit status 2 I don't find any libxml2 dev packages to install via pip. Using Python 2.7 and Python 3.x on x86 in a virtualenv under Windows 10.", "output": "Install lxml from http://www.lfd.uci.edu/~gohlke/pythonlibs/#lxml for your python version. It's a precompiled WHL with required modules/dependencies. The site lists several packages, when e.g. using Win32 Python 3.11, use lxml\u20114.9.0\u2011cp311\u2011cp311\u2011win32.whl. Download the file, and then install with: pip install C:\\path\\to\\downloaded\\file\\lxml\u20114.9.0\u2011cp311\u2011cp311\u2011win32.whl"} +{"question_id": 21606987, "score": 135, "creation_date": 1391700385, "tags": ["python", "pandas", "whitespace"], "instruction": "How can I strip the whitespace from Pandas DataFrame headers?\n\nI am parsing data from an Excel file that has extra white space in some of the column headings. When I check the columns of the resulting dataframe, with df.columns, I see: Index(['Year', 'Month ', 'Value']) ^ # Note the unwanted trailing space on 'Month ' Consequently, I can't do: df[\"Month\"] Because it will tell me the column is not found, as I asked for \"Month\", not \"Month \". My question, then, is how can I strip out the unwanted white space from the column headings?", "output": "You can give functions to the rename method. The str.strip() method should do what you want: In [5]: df Out[5]: Year Month Value 0 1 2 3 [1 rows x 3 columns] In [6]: df.rename(columns=lambda x: x.strip()) Out[6]: Year Month Value 0 1 2 3 [1 rows x 3 columns] Note: that this returns a DataFrame object and it's shown as output on screen, but the changes are not actually set on your columns. 
To make the changes, either use this in a method chain or re-assign the df variabe: df = df.rename(columns=lambda x: x.strip())"} +{"question_id": 44026548, "score": 135, "creation_date": 1495028351, "tags": ["python", "python-3.x", "django", "django-models", "django-2.0"], "instruction": "Getting TypeError: __init__() missing 1 required positional argument: 'on_delete' when trying to add parent table after child table with entries\n\nI have two classes in my sqlite database, a parent table named Categorie and the child table called Article. I created first the child table class and addes entries. So first I had this: class Article(models.Model): titre=models.CharField(max_length=100) auteur=models.CharField(max_length=42) contenu=models.TextField(null=True) date=models.DateTimeField( auto_now_add=True, auto_now=False, verbose_name=\"Date de parution\" ) def __str__(self): return self.titre And after I have added parent table, and now my models.py looks like this: from django.db import models # Create your models here. class Categorie(models.Model): nom = models.CharField(max_length=30) def __str__(self): return self.nom class Article(models.Model): titre=models.CharField(max_length=100) auteur=models.CharField(max_length=42) contenu=models.TextField(null=True) date=models.DateTimeField( auto_now_add=True, auto_now=False, verbose_name=\"Date de parution\" ) categorie = models.ForeignKey('Categorie') def __str__(self): return self.titre So when I run python manage.py makemigrations <my_app_name>, I get this error: Traceback (most recent call last): File \"manage.py\", line 15, in <module> execute_from_command_line(sys.argv) File \"C:\\Users\\lislis\\AppData\\Local\\Programs\\Python\\Python35-32\\lib\\site-packages\\django-2.0-py3.5.egg\\django\\core\\management\\__init__.py\", line 354, in execute_from_command_line utility.execute() File \"C:\\Users\\lislis\\AppData\\Local\\Programs\\Python\\Python35-32\\lib\\site-packages\\django-2.0-py3.5.egg\\django\\core\\management\\__init__.py\", line 330, in execute django.setup() File \"C:\\Users\\lislis\\AppData\\Local\\Programs\\Python\\Python35-32\\lib\\site-packages\\django-2.0-py3.5.egg\\django\\__init__.py\", line 24, in setup apps.populate(settings.INSTALLED_APPS) File \"C:\\Users\\lislis\\AppData\\Local\\Programs\\Python\\Python35-32\\lib\\site-packages\\django-2.0-py3.5.egg\\django\\apps\\registry.py\", line 112, in populate app_config.import_models() File \"C:\\Users\\lislis\\AppData\\Local\\Programs\\Python\\Python35-32\\lib\\site-packages\\django-2.0-py3.5.egg\\django\\apps\\config.py\", line 198, in import_models self.models_module = import_module(models_module_name) File \"C:\\Users\\lislis\\AppData\\Local\\Programs\\Python\\Python35-32\\lib\\importlib\\__init__.py\", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File \"<frozen importlib._bootstrap>\", line 986, in _gcd_import File \"<frozen importlib._bootstrap>\", line 969, in _find_and_load File \"<frozen importlib._bootstrap>\", line 958, in _find_and_load_unlocked File \"<frozen importlib._bootstrap>\", line 673, in _load_unlocked File \"<frozen importlib._bootstrap_external>\", line 665, in exec_module File \"<frozen importlib._bootstrap>\", line 222, in _call_with_frames_removed File \"C:\\Users\\lislis\\Django\\mon_site\\blog\\models.py\", line 6, in <module> class Article(models.Model): File \"C:\\Users\\lislis\\Django\\mon_site\\blog\\models.py\", line 16, in Article categorie = models.ForeignKey('Categorie') TypeError: __init__() 
missing 1 required positional argument: 'on_delete' I've seen some similar issues in stackoverflow, but it seems to not be the same problem: __init__() missing 1 required positional argument: 'quantity'", "output": "You can change the property categorie of the class Article like this: categorie = models.ForeignKey( 'Categorie', on_delete=models.CASCADE, ) and the error should disappear. Eventually you might need another option for on_delete, check the documentation for more details: Arguments -- Model field reference -- Django documentation As you stated in your comment, that you don't have any special requirements for on_delete, you could use the option DO_NOTHING: # ... on_delete=models.DO_NOTHING, # ..."} +{"question_id": 72773206, "score": 135, "creation_date": 1656337711, "tags": ["python", "selenium", "google-chrome", "selenium-webdriver", "selenium-chromedriver"], "instruction": "Selenium - Python - AttributeError: 'WebDriver' object has no attribute 'find_element_by_name'\n\nI am trying to get Selenium working with Chrome, but I keep running into this error message (and others like it): AttributeError: 'WebDriver' object has no attribute 'find_element_by_name' The same problem occurs with find_element_by_id(), find_element_by_class(), etc. I also could not call send_keys(). I am just running the test code provided at ChromeDriver - WebDriver for Chrome - Getting started. import time from selenium import webdriver driver = webdriver.Chrome(\"C:/Program Files/Chrome Driver/chromedriver.exe\") # Path to where I installed the web driver driver.get('http://www.google.com/'); time.sleep(5) # Let the user actually see something! search_box = driver.find_element_by_name('q') search_box.send_keys('ChromeDriver') search_box.submit() time.sleep(5) # Let the user actually see something! driver.quit() I am using Google Chrome version 103.0.5060.53 and downloaded ChromeDriver 103.0.5060.53 from Downloads. When running the code, Chrome opens and navigates to google.com, but it receives the following output: C:\\Users\\Admin\\Programming Projects\\Python Projects\\Clock In\\clock_in.py:21: DeprecationWarning: executable_path has been deprecated, please pass in a Service object driver = webdriver.Chrome(\"C:/Program Files/Chrome Driver/chromedriver.exe\") # Optional argument, if not specified will search path. DevTools listening on ws://127.0.0.1:58397/devtools/browser/edee940d-61e0-4cc3-89e1-2aa08ab16432 [9556:21748:0627/083741.135:ERROR:device_event_log_impl.cc(214)] [08:37:41.131] USB: usb_service_win.cc:415 Could not read device interface GUIDs: The system cannot find the file specified. (0x2) [9556:21748:0627/083741.149:ERROR:device_event_log_impl.cc(214)] [08:37:41.148] USB: usb_device_handle_win.cc:1048 Failed to read descriptor from node connection: A device attached to the system is not functioning. (0x1F) [9556:21748:0627/083741.156:ERROR:device_event_log_impl.cc(214)] [08:37:41.155] USB: usb_device_handle_win.cc:1048 Failed to read descriptor from node connection: A device attached to the system is not functioning. (0x1F) [9556:21748:0627/083741.157:ERROR:device_event_log_impl.cc(214)] [08:37:41.156] USB: usb_device_handle_win.cc:1048 Failed to read descriptor from node connection: A device attached to the system is not functioning. (0x1F) [9556:21748:0627/083741.157:ERROR:device_event_log_impl.cc(214)] [08:37:41.156] USB: usb_device_handle_win.cc:1048 Failed to read descriptor from node connection: A device attached to the system is not functioning. 
(0x1F) Traceback (most recent call last): File \"C:\\[REDACTED]\", line 27, in <module> search_box = driver.find_element_by_name('q') AttributeError: 'WebDriver' object has no attribute 'find_element_by_name' [21324:19948:0627/083937.892:ERROR:gpu_init.cc(486)] Passthrough is not supported, GL is disabled, ANGLE is Note: I replaced the file path for this post. I don't think that the DevTools listening section is related to the issue, but I thought I would include it, just in case.", "output": "Selenium just removed that method in version 4.3.0. See the CHANGES: https://github.com/SeleniumHQ/selenium/blob/a4995e2c096239b42c373f26498a6c9bb4f2b3e7/py/CHANGES Selenium 4.3.0 * Deprecated find_element_by_* and find_elements_by_* are now removed (#10712) * Deprecated Opera support has been removed (#10630) * Fully upgraded from python 2x to 3.7 syntax and features (#10647) * Added a devtools version fallback mechanism to look for an older version when mismatch occurs (#10749) * Better support for co-operative multi inheritance by utilising super() throughout * Improved type hints throughout You now need to use: driver.find_element(\"name\", \"q\") In your example, it would become: search_box = driver.find_element(\"name\", \"q\") search_box.send_keys('ChromeDriver') search_box.submit() For improved reliability, you should consider using WebDriverWait in combination with element_to_be_clickable."} +{"question_id": 24849998, "score": 135, "creation_date": 1405855457, "tags": ["python", "bash", "subprocess"], "instruction": "How to catch exception output from Python subprocess.check_output()?\n\nI'm trying to do a Bitcoin payment from within Python. In bash I would normally do this: bitcoin sendtoaddress <bitcoin address> <amount> So for example: bitcoin sendtoaddress 1HoCUcbK9RbVnuaGQwiyaJGGAG6xrTPC9y 1.4214 If it is successful I get a transaction id as output, but if I try to transfer an amount larger than my bitcoin balance, I get the following output: error: {\"code\":-4,\"message\":\"Insufficient funds\"} In my Python program I now try to do the payment as follows: import subprocess try: output = subprocess.check_output(['bitcoin', 'sendtoaddress', address, str(amount)]) except: print \"Unexpected error:\", sys.exc_info() If there's enough balance it works fine, but if there's not enough balance sys.exc_info() prints out this: (<class 'subprocess.CalledProcessError'>, CalledProcessError(), <traceback object at 0x7f339599ac68>) It doesn't include the error which I get on the command line though. So my question is; how can I get the outputted error ({\"code\":-4,\"message\":\"Insufficient funds\"}) from within Python?", "output": "According to the subprocess.check_output() docs, the exception raised on error has an output attribute that you can use to access the error details: try: subprocess.check_output(...) except subprocess.CalledProcessError as e: print(e.output) You should then be able to analyse this string and parse the error details with the json module: if e.output.startswith('error: {'): error = json.loads(e.output[7:]) # Skip \"error: \" print(error['code']) print(error['message'])"} +{"question_id": 44943646, "score": 135, "creation_date": 1499329837, "tags": ["python", "anaconda", "jupyter-notebook"], "instruction": "Jupyter notebook not trusted\n\nI am using Anaconda to work on a Jupyter notebook which displays \"Not Trusted\" (see on screenshot below). What does it mean? Is it a problem? 
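As a companion to the subprocess.check_output answer above, a minimal end-to-end sketch of the same pattern. It assumes the bitcoin CLI from that question is on PATH and prints its JSON error as shown; the address and amount are placeholders.

    import json
    import subprocess

    try:
        out = subprocess.check_output(
            ["bitcoin", "sendtoaddress", "SOME_ADDRESS", "1.4214"],  # placeholder arguments
            stderr=subprocess.STDOUT,  # fold stderr into the captured output, in case the error is printed there
            text=True,
        )
        print("transaction id:", out.strip())
    except subprocess.CalledProcessError as e:
        msg = e.output.strip()
        if msg.startswith("error: {"):
            details = json.loads(msg[len("error: "):])  # skip the "error: " prefix before parsing
            print(details["code"], details["message"])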
How can I solve it?", "output": "It is a security feature to disable the execution of arbitrary code from untrusted notebooks, without the user's consent. There is a doc page about it: https://jupyter-server.readthedocs.io/en/stable/operators/security.html#security-in-notebook-documents It won't prevent you from manually running the code, that's actually one way of \"trusting\" the outputs."} +{"question_id": 30483977, "score": 135, "creation_date": 1432733397, "tags": ["python", "date", "datetime"], "instruction": "Get Yesterday's date as a string in YYYY-MM-DD format\n\nAs an input to an API request I need to get yesterday's date as a string in the format YYYY-MM-DD. My working version seems inelegant: yesterday = datetime.date.fromordinal(datetime.date.today().toordinal()-1) report_date = str(yesterday.year) + \\ ('-' if len(str(yesterday.month)) == 2 else '-0') + str(yesterday.month) + \\ ('-' if len(str(yesterday.day)) == 2 else '-0') + str(yesterday.day)", "output": "You just need to subtract one day from today's date. In Python datetime.timedelta object lets you create specific spans of time as a timedelta object. datetime.timedelta(1) gives you the duration of \"one day\" and is subtractable from a datetime object. After you subtracted the objects you can use datetime.strftime to convert the result --which is a date object-- to string format based on your format of choice: >>> from datetime import datetime, timedelta >>> yesterday = datetime.now() - timedelta(1) >>> type(yesterday) >>> datetime.datetime >>> datetime.strftime(yesterday, '%Y-%m-%d') '2015-05-26' Note that instead of calling the datetime.strftime function, you can also directly use strftime method of datetime objects: >>> (datetime.now() - timedelta(1)).strftime('%Y-%m-%d') '2015-05-26' As a function: from datetime import datetime, timedelta def yesterday(frmt='%Y-%m-%d', string=True): yesterday = datetime.now() - timedelta(1) if string: return yesterday.strftime(frmt) return yesterday example: In [10]: yesterday() Out[10]: '2022-05-13' In [11]: yesterday(string=False) Out[11]: datetime.datetime(2022, 5, 13, 12, 34, 31, 701270)"} +{"question_id": 6750017, "score": 135, "creation_date": 1311090222, "tags": ["python", "sql", "model", "sqlalchemy"], "instruction": "How to query database by id using SqlAlchemy?\n\nI need to query a SQLAlchemy database by its id something similar to User.query.filter_by(username='peter') but for id. How do I do this? [Searching over Google and SO didn't help]", "output": "Query has a get function that supports querying by the primary key of the table, which I assume that id is. For example, to query for an object with ID of 23: User.query.get(23) Note: As a few other commenters and answers have mentioned, this is not simply shorthand for \"Perform a query filtering on the primary key\". Depending on the state of the SQLAlchemy session, running this code may query the database and return a new instance, or it may return an instance of an object queried earlier in your code without actually querying the database. If you have not already done so, consider reading the documentation on the SQLAlchemy Session to understand the ramifications."} +{"question_id": 49684951, "score": 135, "creation_date": 1522986336, "tags": ["python", "pandas", "csv"], "instruction": "Pandas read_csv dtype read all columns but few as string\n\nI'm using Pandas to read a bunch of CSVs. 
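Building on the Query.get() answer above: in SQLAlchemy 1.4+ the same primary-key lookup is spelled Session.get(), which is the form the 2.0-style API prefers. A self-contained sketch, assuming SQLAlchemy 2.0 and a throwaway in-memory SQLite database:

    from sqlalchemy import Integer, String, create_engine
    from sqlalchemy.orm import DeclarativeBase, Mapped, Session, mapped_column

    class Base(DeclarativeBase):
        pass

    class User(Base):
        __tablename__ = "users"
        id: Mapped[int] = mapped_column(Integer, primary_key=True)
        username: Mapped[str] = mapped_column(String(80))

    engine = create_engine("sqlite://")  # in-memory database, just for the sketch
    Base.metadata.create_all(engine)

    with Session(engine) as session:
        session.add(User(id=23, username="peter"))
        session.commit()
        print(session.get(User, 23).username)  # primary-key lookup, same semantics as Query.get(23)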
Passing an options json to dtype parameter to tell pandas which columns to read as string instead of the default: dtype_dic= { 'service_id':str, 'end_date':str, ... } feedArray = pd.read_csv(feedfile , dtype = dtype_dic) In my scenario, all the columns except a few specific ones are to be read as strings. So instead of defining several columns as str in dtype_dic, I'd like to set just my chosen few as int or float. Is there a way to do that? It's a loop cycling through various CSVs with differing columns, so a direct column conversion after having read the whole csv as string (dtype=str), would not be easy as I would not immediately know which columns that csv is having. (I'd rather spend that effort in defining all the columns in the dtype json!) Edit: But if there's a way to process the list of column names to be converted to number without erroring out if that column isn't present in that csv, then yes that'll be a valid solution, if there's no other way to do this at csv reading stage itself. Note: this sounds like a previously asked question but the answers there went down a very different path (bool related) which doesn't apply to this question. Pls don't mark as duplicate!", "output": "For Pandas 1.5.0+, there's an easy way to do this. If you use a defaultdict instead of a normal dict for the dtype argument, any columns which aren't explicitly listed in the dictionary will use the default as their type. E.g. from collections import defaultdict types = defaultdict(lambda: str, A=\"int\", B=\"float\") df = pd.read_csv(\"/path/to/file.csv\", dtype=types, keep_default_na=False) (I haven't tested this, but I assume you still need keep_default_na=False) For older versions of Pandas: You can read the entire csv as strings then convert your desired columns to other types afterwards like this: df = pd.read_csv('/path/to/file.csv', dtype=str, keep_default_na=False) # example df; yours will be from pd.read_csv() above df = pd.DataFrame({'A': ['1', '3', '5'], 'B': ['2', '4', '6'], 'C': ['x', 'y', 'z']}) types_dict = {'A': int, 'B': float} for col, col_type in types_dict.items(): df[col] = df[col].astype(col_type) keep_default_na=False is necessary if some of the columns are empty strings or something like NA which pandas convert to NA of type float by default, which would make you end up with a mixed datatype of str/float Another approach, if you really want to specify the proper types for all columns when reading the file in and not change them after: read in just the column names (no rows), then use those to fill in which columns should be strings col_names = pd.read_csv('file.csv', nrows=0).columns types_dict = {'A': int, 'B': float} types_dict.update({col: str for col in col_names if col not in types_dict}) pd.read_csv('file.csv', dtype=types_dict)"} +{"question_id": 30515456, "score": 135, "creation_date": 1432841146, "tags": ["python", "jinja2"], "instruction": "Split a string into a list in Jinja\n\nI have some variables in a Jinja 2 template which are strings separated by a ';'. I need to use these strings separately in the code. I.e., the variable is variable1 = \"green;blue\" {% list1 = {{ variable1 }}.split(';') %} The grass is {{ list1[0] }} and the boat is {{ list1[1] }} I can split them up before rendering the template, but since it is sometimes up to 10 strings inside the string, this gets messy. 
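One way to make the "convert after reading everything as str" approach from the answer above tolerate columns that a particular CSV doesn't have: filter the wish list against df.columns first. A small sketch with made-up column names; pd.to_numeric stands in for astype here.

    from io import StringIO
    import pandas as pd

    csv_text = "service_id,end_date,count\nA1,2018-01-01,3\n"  # stand-in for one of the real CSVs
    df = pd.read_csv(StringIO(csv_text), dtype=str, keep_default_na=False)

    wanted_numeric = ["count", "amount"]  # hypothetical wish list; "amount" is absent from this file
    present = [c for c in wanted_numeric if c in df.columns]
    df[present] = df[present].apply(pd.to_numeric)  # convert only the columns this CSV actually has
    print(df.dtypes)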
I had a JSP part before where I did: <% String[] list1 = val.get(\"variable1\").split(\";\");%> The grass is <%= list1[0] %> and the boat is <%= list1[1] %> It works with: {% set list1 = variable1.split(';') %} The grass is {{ list1[0] }} and the boat is {{ list1[1] }}", "output": "A string variable can be split into a list by using the split function (it can contain similar values, set is for the assignment). I haven't found this function in the official documentation, but it works similar to normal Python. The items can be called via an index, used in a loop or, like Dave suggested, if you know the values, it can set variables like a tuple. {% set list1 = variable1.split(';') %} The grass is {{ list1[0] }} and the boat is {{ list1[1] }} or {% set list1 = variable1.split(';') %} {% for item in list1 %} <p>{{ item }}<p/> {% endfor %} or {% set item1, item2 = variable1.split(';') %} The grass is {{ item1 }} and the boat is {{ item2 }} (This is after coming back to my own question after 5 years and seeing so many people found this useful, a little update.)"} +{"question_id": 49733699, "score": 135, "creation_date": 1523279104, "tags": ["python", "python-typing", "mypy"], "instruction": "Python type hints and context managers\n\nHow should a context manager be annotated with Python type hints? import typing @contextlib.contextmanager def foo() -> ???: yield The documentation on contextlib doesn't mention types much. The documentation on typing.ContextManager is not all that helpful either. There's also typing.Generator, which at least has an example. Does that mean I should use typing.Generator[None, None, None] and not typing.ContextManager? import typing @contextlib.contextmanager def foo() -> typing.Generator[None, None, None]: yield", "output": "Whenever I'm not 100% sure what types a function accepts, I like to consult typeshed, which is the canonical repository of type hints for Python. Mypy directly bundles and uses typeshed to help it perform its typechecking, for example. We can find the stubs for contextlib here: https://github.com/python/typeshed/blob/master/stdlib/contextlib.pyi if sys.version_info >= (3, 2): class GeneratorContextManager(ContextManager[_T], Generic[_T]): def __call__(self, func: Callable[..., _T]) -> Callable[..., _T]: ... def contextmanager(func: Callable[..., Iterator[_T]]) -> Callable[..., GeneratorContextManager[_T]]: ... else: def contextmanager(func: Callable[..., Iterator[_T]]) -> Callable[..., ContextManager[_T]]: ... It's a little overwhelming, but the line we care about is this one: def contextmanager(func: Callable[..., Iterator[_T]]) -> Callable[..., ContextManager[_T]]: ... It states that the decorator takes in a Callable[..., Iterator[_T]] -- a function with arbitrary arguments returning some iterator. So in conclusion, it would be fine to do: @contextlib.contextmanager def foo() -> Iterator[None]: yield So, why does using Generator[None, None, None] also work, as suggested by the comments? It's because Generator is a subtype of Iterator -- we can again check this for ourselves by consulting typeshed. So, if our function returns a generator, it's still compatible with what contextmanager expects so mypy accepts it without an issue."} +{"question_id": 25735153, "score": 135, "creation_date": 1410223505, "tags": ["python", "numpy", "scipy", "fft"], "instruction": "Plotting a fast Fourier transform in Python\n\nI have access to NumPy and SciPy and want to create a simple FFT of a data set. 
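To round out the contextlib typing answer above, a runnable sketch in which the context manager actually yields a value, so the Iterator type argument is something other than None; the timer itself is only illustrative.

    import contextlib
    import time
    from typing import Iterator

    @contextlib.contextmanager
    def timed(label: str) -> Iterator[float]:
        """Yield the start time and print the elapsed time on exit."""
        start = time.monotonic()
        try:
            yield start
        finally:
            print(f"{label}: {time.monotonic() - start:.3f}s")

    with timed("sleep") as started:
        time.sleep(0.1)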
I have two lists, one that is y values and the other is timestamps for those y values. What is the simplest way to feed these lists into a SciPy or NumPy method and plot the resulting FFT? I have looked up examples, but they all rely on creating a set of fake data with some certain number of data points, and frequency, etc. and don't really show how to do it with just a set of data and the corresponding timestamps. I have tried the following example: from scipy.fftpack import fft # Number of samplepoints N = 600 # Sample spacing T = 1.0 / 800.0 x = np.linspace(0.0, N*T, N) y = np.sin(50.0 * 2.0*np.pi*x) + 0.5*np.sin(80.0 * 2.0*np.pi*x) yf = fft(y) xf = np.linspace(0.0, 1.0/(2.0*T), N/2) import matplotlib.pyplot as plt plt.plot(xf, 2.0/N * np.abs(yf[0:N/2])) plt.grid() plt.show() But when I change the argument of fft to my data set and plot it, I get extremely odd results, and it appears the scaling for the frequency may be off. I am unsure. Here is a pastebin of the data I am attempting to FFT http://pastebin.com/0WhjjMkb http://pastebin.com/ksM4FvZS When I use fft() on the whole thing it just has a huge spike at zero and nothing else. Here is my code: ## Perform FFT with SciPy signalFFT = fft(yInterp) ## Get power spectral density signalPSD = np.abs(signalFFT) ** 2 ## Get frequencies corresponding to signal PSD fftFreq = fftfreq(len(signalPSD), spacing) ## Get positive half of frequencies i = fftfreq>0 ## plt.figurefigsize = (8, 4) plt.plot(fftFreq[i], 10*np.log10(signalPSD[i])); #plt.xlim(0, 100); plt.xlabel('Frequency [Hz]'); plt.ylabel('PSD [dB]') Spacing is just equal to xInterp[1]-xInterp[0].", "output": "So I run a functionally equivalent form of your code in an IPython notebook: %matplotlib inline import numpy as np import matplotlib.pyplot as plt import scipy.fftpack # Number of samplepoints N = 600 # sample spacing T = 1.0 / 800.0 x = np.linspace(0.0, N*T, N) y = np.sin(50.0 * 2.0*np.pi*x) + 0.5*np.sin(80.0 * 2.0*np.pi*x) yf = scipy.fftpack.fft(y) xf = np.linspace(0.0, 1.0/(2.0*T), N//2) fig, ax = plt.subplots() ax.plot(xf, 2.0/N * np.abs(yf[:N//2])) plt.show() I get what I believe to be very reasonable output. It's been longer than I care to admit since I was in engineering school thinking about signal processing, but spikes at 50 and 80 are exactly what I would expect. So what's the issue? In response to the raw data and comments being posted The problem here is that you don't have periodic data. You should always inspect the data that you feed into any algorithm to make sure that it's appropriate. import pandas import matplotlib.pyplot as plt #import seaborn %matplotlib inline # the OP's data x = pandas.read_csv('http://pastebin.com/raw.php?i=ksM4FvZS', skiprows=2, header=None).values y = pandas.read_csv('http://pastebin.com/raw.php?i=0WhjjMkb', skiprows=2, header=None).values fig, ax = plt.subplots() ax.plot(x, y)"} +{"question_id": 54633657, "score": 134, "creation_date": 1549898239, "tags": ["python", "ubuntu", "pip"], "instruction": "How can I Install pip for python 3.7 on Ubuntu 18?\n\nI've installed Python 3.7 on my Ubuntu 18.04 machine. 
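A compact variant of the FFT plot above using NumPy's real-input helpers, which return only the positive-frequency half so the manual slicing disappears; the 50 Hz / 80 Hz test signal is the same as in the answer.

    import matplotlib.pyplot as plt
    import numpy as np

    N, T = 600, 1.0 / 800.0          # number of samples and sample spacing, as in the example
    t = np.arange(N) * T
    y = np.sin(50.0 * 2 * np.pi * t) + 0.5 * np.sin(80.0 * 2 * np.pi * t)

    yf = np.fft.rfft(y)              # FFT of a real signal: positive frequencies only
    xf = np.fft.rfftfreq(N, d=T)     # matching frequency axis in Hz

    plt.plot(xf, 2.0 / N * np.abs(yf))
    plt.xlabel("Frequency [Hz]")
    plt.show()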
Following this instructions in case it's relevant: Download : Python 3.7 from Python Website [1] ,on Desktop and manually unzip it, on Desktop Installation : Open Terminal (ctrl +shift+T) Go to the Extracted folder $ cd ~/Desktop/Python-3.7.0 $ ./configure $ make $ sudo make install Making Python 3.7 default Python : $ sudo vim ~/.bashrc press i on the last and new line - Type alias python= python3.7 press Esc type - to save and exit vim :wq now type $ source ~/.bashrc From here: https://www.quora.com/How-can-I-upgrade-Python-3-6-to-3-7-in-Ubuntu-18-04 I've downloaded several modules through pip install module but when I try to import them, I get a ModuleNotFoundError: No module names 'xx' So I did some research and apparently when used pip to install, it installed in the modules in previous version of Python. Somewhere (probably a question in SO) I found a suggestion to install the module using python3.7 -m pip install module but then I get /usr/local/bin/python3.7: no module named pip. Now I'm stuck, pip is installed, but apparently not for Python 3.7. I'm assuming that if I can install pip for Python 3.7, I can run the pip install command and get the modules I need. If that is the case, how can I install pip for python 3.7, since it's already installed? This is the best I have come up with: I have installed python 3.7 successfully and I can install modules using pip (or pip3) but those modules are installed in Python 3.6 (Comes with ubuntu). Therefore I can't import those modules in python 3.7 (get a module not found) Python 3.7 doesn't recognize pip/pip3, so I can't install through pip/pip3. I need python 3.7.", "output": "In general, don't do this: pip install package because, as you have correctly noticed, it's not clear what Python version you're installing package for. Instead, if you want to install package for Python 3.7, do this: python3.7 -m pip install package Replace package with the name of whatever you're trying to install. Took me a surprisingly long time to figure it out, too. The docs about it are here. Your other option is to set up a virtual environment. Once your virtual environment is active, executable names like python and pip will point to the correct ones."} +{"question_id": 63816790, "score": 134, "creation_date": 1599673099, "tags": ["python", "python-3.x", "download", "youtube", "youtube-dl"], "instruction": "Youtube_dl : ERROR : YouTube said: Unable to extract video data\n\nI'm making a little graphic interface with Python 3 which should download a youtube video with its URL. I used the youtube_dl module for that. This is my code : import youtube_dl # Youtube_dl is used for download the video ydl_opt = {\"outtmpl\" : \"/videos/%(title)s.%(ext)s\", \"format\": \"bestaudio/best\"} # Here we give some advanced settings. outtmpl is used to define the path of the video that we are going to download def operation(link): \"\"\" Start the download operation \"\"\" try: with youtube_dl.YoutubeDL(ydl_opt) as yd: # The method YoutubeDL() take one argument which is a dictionary for changing default settings video = yd.download([link]) # Start the download result.set(\"Your video has been downloaded !\") except Exception: result.set(\"Sorry, we got an error.\") operation(\"https://youtube.com/watch?v=...\") When I execute my code, I get this error: ERROR: YouTube said: Unable to extract video data I saw here that it was because it doesn't find any video info, how can I resolve this problem?", "output": "Updating youtube-dl helped me. 
Depending on the way you installed it, here are the commands: youtube-dl --update (self-update) pip install -U youtube-dl (via python) brew upgrade youtube-dl (macOS + homebrew) choco upgrade youtube-dl (Windows + Chocolatey)"} +{"question_id": 54802616, "score": 134, "creation_date": 1550738368, "tags": ["python", "django", "python-3.x", "django-models", "enums"], "instruction": "How can one use enums as a choice field in a Django model?\n\nI have a model class in which I want two fields to be choice fields, so to populate those choices I am using an enum as listed below: #models.py class Transaction(models.Model): transaction_status = models.CharField(max_length=255, choices=TransactionStatus.choices()) transaction_type = models.CharField(max_length=255, choices=TransactionType.choices()) #enums.py class TransactionType(Enum): IN = \"IN\", OUT = \"OUT\" @classmethod def choices(cls): print(tuple((i.name, i.value) for i in cls)) return tuple((i.name, i.value) for i in cls) class TransactionStatus(Enum): INITIATED = \"INITIATED\", PENDING = \"PENDING\", COMPLETED = \"COMPLETED\", FAILED = \"FAILED\" ERROR = \"ERROR\" @classmethod def choices(cls): print(tuple((i.name, i.value) for i in cls)) return tuple((i.name, i.value) for i in cls) However, when I am trying to access this model through the Django Admin I am getting the following error: Django Version: 1.11 Exception Type: ValueError Exception Value: too many values to unpack (expected 2) I followed two articles that described how to use enums: https://hackernoon.com/using-enum-as-model-field-choice-in-django-92d8b97aaa63 https://blog.richard.do/2014/02/18/how-to-use-enums-for-django-field-choices/", "output": "django > 3.0 - have built-in support for Enums from django.utils.translation import gettext_lazy as _ class Student(models.Model): class YearInSchool(models.TextChoices): FRESHMAN = \"FR\", _(\"Freshman\") SOPHOMORE = \"SO\", _(\"Sophomore\") JUNIOR = \"JR\", _(\"Junior\") SENIOR = \"SR\", _(\"Senior\") GRADUATE = \"GR\", _(\"Graduate\") year_in_school = models.CharField( max_length=2, choices=YearInSchool.choices, default=YearInSchool.FRESHMAN, ) def is_upperclass(self): return self.year_in_school in { self.YearInSchool.JUNIOR, self.YearInSchool.SENIOR, } Problem in your code is that INITIATED = \"INITIATED\", a comma after INITIATED option and other options. when we add comma after any string it will become a tuple. See an example below s = 'my str' print(type(s)) # output: str s = 'my str', print(type(s)) # output: tuple #models.py class Transaction(models.Model): trasaction_status = models.CharField(max_length=255, choices=TransactionStatus.choices()) transaction_type = models.CharField(max_length=255, choices=TransactionType.choices()) #enums.py class TransactionType(Enum): IN = \"IN\" OUT = \"OUT\" @classmethod def choices(cls): print(tuple((i.name, i.value) for i in cls)) return tuple((i.name, i.value) for i in cls) class TransactionStatus(Enum): INITIATED = \"INITIATED\" PENDING = \"PENDING\" COMPLETED = \"COMPLETED\" FAILED = \"FAILED\" ERROR = \"ERROR\" @classmethod def choices(cls): print(tuple((i.value, i.name) for i in cls)) return tuple((i.value, i.name) for i in cls) For django > 3.0 https://docs.djangoproject.com/en/4.0/ref/models/fields/#field-choices-enum-types"} +{"question_id": 57505071, "score": 134, "creation_date": 1565845570, "tags": ["python", "python-typing", "nameerror"], "instruction": "NameError: name 'List' is not defined\n\nI'm really unsure why this isn't working. 
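The trailing-comma pitfall called out in the Django choices answer above is easy to reproduce with plain Python, no Django required; the corrected choices() helper below shows the values staying as strings.

    from enum import Enum

    class TransactionType(Enum):
        IN = "IN"
        OUT = "OUT"  # no trailing commas, so each value stays a plain string

        @classmethod
        def choices(cls):
            return tuple((member.name, member.value) for member in cls)

    print(TransactionType.choices())  # (('IN', 'IN'), ('OUT', 'OUT'))
    print(type("IN"), type(("IN",)))  # a trailing comma would have turned the value into a tuple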
Here is the important part of the code (it's from a leetcode challenge). The first line throws the NameError. def totalFruit(self, tree: List[int]) -> int: pass If I try importing List first I get an error No module named 'List'. I'm using Python 3.7.3 from Anaconda.", "output": "To be able to annotate what types your list should accept, you need to use typing.List from typing import List So did you import List? If you're using Python >= 3.9, see @Adam.Er8's answer for the new syntax."} +{"question_id": 42704283, "score": 134, "creation_date": 1489089259, "tags": ["python", "machine-learning", "pytorch", "loss-function", "regularized"], "instruction": "L1/L2 regularization in PyTorch\n\nHow do I add L1/L2 regularization in PyTorch without manually computing it?", "output": "See the documentation. Add a weight_decay parameter to the optimizer for L2 regularization."} +{"question_id": 5812960, "score": 134, "creation_date": 1303958086, "tags": ["python", "matplotlib"], "instruction": "Change figure window title in pylab\n\nHow can I set a figure window's title in pylab/python? fig = figure(9) # 9 is now the title of the window fig.set_title(\"Test\") #doesn't work fig.title = \"Test\" #doesn't work", "output": "If you want to actually change the window you can do: fig = pylab.gcf() fig.canvas.manager.set_window_title('Test')"} +{"question_id": 10738919, "score": 134, "creation_date": 1337867823, "tags": ["python", "virtualenv"], "instruction": "How do I add a path to PYTHONPATH in virtualenv\n\nI am trying to add a path to the PYTHONPATH environment variable, that would be only visible from a particular virtualenv environment. I tried SET PYTHONPATH=... under a virtualenv command prompt, but that sets the variable for the whole environment. How do I achieve that?", "output": "You can usually avoid having to do anything with PYTHONPATH by using .pth files. Just put a file with a .pth extension (any basename works) in your virtualenv's site-packages folder, e.g. lib/python3.13/site-packages or (on Windows) lib\\site-packages. The file should contain just a path to the directory containing your package; it can be either an absolute path or one relative to the .pth file."} +{"question_id": 43252542, "score": 134, "creation_date": 1491474576, "tags": ["python", "python-3.x", "server", "network-programming", "python-requests"], "instruction": "How to measure server response time for Python requests POST-request\n\nI create POST requests with requests as follows, with a specified timeout threshold: response = requests.post(url, data=post_fields, timeout=timeout) However, to determine a \"good\" threshold, I need to benchmark the server response times. How do I compute the minimum and maximum response times for the server?", "output": "The Response object returned by requests.post() (and requests.get() etc.) has a property called elapsed, which provides the time delta between the Request was sent and the Response was received. To get the delta in seconds, use the total_seconds() method: response = requests.post(url, data=post_fields, timeout=timeout) print(response.elapsed.total_seconds()) Note that requests.post() is a synchronous operation, which means that it blocks until the Response is received."} +{"question_id": 17182656, "score": 134, "creation_date": 1371614357, "tags": ["python", "python-2.7", "alphabet"], "instruction": "How do I iterate through the alphabet?\n\nIn Python, could I simply ++ a char? What is an efficient way of doing this? 
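Fleshing out the weight_decay answer above: L2 is handed straight to the optimizer, while L1 is typically added to the loss by hand. A minimal sketch with an arbitrary linear model and made-up regularization strengths:

    import torch
    from torch import nn

    model = nn.Linear(10, 1)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01, weight_decay=1e-4)  # L2 via the optimizer

    x, y = torch.randn(8, 10), torch.randn(8, 1)
    loss = nn.functional.mse_loss(model(x), y)

    l1_lambda = 1e-4
    loss = loss + l1_lambda * sum(p.abs().sum() for p in model.parameters())  # manual L1 penalty

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()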
I want to iterate through URLs and generate them in the following way: www.website.com/term/# www.website.com/term/a www.website.com/term/b www.website.com/term/c www.website.com/term/d ... www.website.com/term/z", "output": "You can use string.ascii_lowercase which is simply a convenience string of lowercase letters, Python 2 Example: from string import ascii_lowercase for c in ascii_lowercase: # append to your url Python 3 Example: #!/usr/bin/env python3 # -*- coding: utf-8 -*- from string import ascii_lowercase as alc for i in alc: print(f\"www.website.com/term/{i}\") # Result # www.website.com/term/a # www.website.com/term/b # www.website.com/term/c # ... # www.website.com/term/x # www.website.com/term/y # www.website.com/term/z Or if you want to keep nesting you can do like so: #!/usr/bin/env python3 # -*- coding: utf-8 -*- for i in alc: print(f\"www.website.com/term/{i}\") for j in alc: print(f\"www.website.com/term/{i}{j}\") # Result # www.website.com/term/a # www.website.com/term/aa # www.website.com/term/ab # www.website.com/term/ac # ... # www.website.com/term/ax # www.website.com/term/ay # www.website.com/term/az # www.website.com/term/b # www.website.com/term/ba # www.website.com/term/bb # www.website.com/term/bc # ... # www.website.com/term/bx # www.website.com/term/by # www.website.com/term/bz # www.website.com/term/c # www.website.com/term/ca # www.website.com/term/cb # www.website.com/term/cc # ... # ... # ... # www.website.com/term/z # www.website.com/term/za # www.website.com/term/zb # www.website.com/term/zc # www.website.com/term/zd # ... # www.website.com/term/zx # www.website.com/term/zy # www.website.com/term/zz"} +{"question_id": 36459969, "score": 134, "creation_date": 1459968908, "tags": ["python", "dictionary"], "instruction": "How to convert a list to a dictionary with indexes as values?\n\nI am trying to convert the following list: l = ['A', 'B', 'C'] To a dictionary like: d = {'A': 0, 'B': 1, 'C': 2} I have tried answers from other posts but none is working for me. I have the following code for now: d = {l[i]: i for i in range(len(l))} Which gives me this error: unhashable type: 'list'", "output": "You can get the indices of a list from the built-in enumerate. You just need to reverse the index-value map and use a dictionary comprehension to create a dictionary: >>> lst = ['A', 'B', 'C'] >>> {k: v for v, k in enumerate(lst)} {'A': 0, 'C': 2, 'B': 1}"} +{"question_id": 37726830, "score": 134, "creation_date": 1465477477, "tags": ["python", "numpy", "types"], "instruction": "How to determine if a number is any type of int (core or numpy, signed or not)?\n\nI need to test whether a variable is of type int, or any of np.int*, np.uint*, preferably using a single condition (i.e. no or). After some tests, I guess that: isinstance(n, int) will only match int and np.int32 (or np.int64 depending on plateform), np.issubdtype(type(n), int) seems to match all int and np.int*, but doesn\u2019t match np.uint*. This leads to two questions: will np.issubdtype match any kind of signed ints? Can determine in a single check whether a number is any kind of signed or unsigned int? This is about testing for integers, the test should return False for float-likes.", "output": "NumPy provides base classes that you can/should use for subtype-checking, rather than the Python types. Use np.integer to check for any instance of either signed or unsigned integers. Use np.signedinteger and np.unsignedinteger to check for signed types or unsigned types. 
>>> np.issubdtype(np.uint32, np.integer) True >>> np.issubdtype(np.uint32, np.signedinteger) False >>> np.issubdtype(int, np.integer) True >>> np.issubdtype(np.array([1, 2, 3]).dtype, np.integer) True All floating or complex number types will return False when tested. np.issubdtype(np.uint*, int) will always be False because the Python int is a signed type. A useful reference showing the relationship between all of these base classes is found in the documentation here."} +{"question_id": 20744277, "score": 134, "creation_date": 1387804159, "tags": ["python", "postgresql", "sqlalchemy", "flask-sqlalchemy"], "instruction": "SQLAlchemy create_all() does not create tables\n\nI'm trying to integrate PostgreSQL and SQLAlchemy but SQLAlchemy.create_all() is not creating any tables from my models. My code: from flask import Flask from flask.ext.sqlalchemy import SQLAlchemy app = Flask(__name__) app.config['SQLALCHEMY_DATABASE_URI'] = 'postgresql+psycopg2://login:pass@localhost/flask_app' db = SQLAlchemy(app) db.create_all() db.session.commit() class User(db.Model): id = db.Column(db.Integer, primary_key=True) username = db.Column(db.String(80), unique=True) email = db.Column(db.String(120), unique=True) def __init__(self, username, email): self.username = username self.email = email def __repr__(self): return '<User %r>' % self.username admin = User('admin', 'admin@example.com') guest = User('guest', 'guest@example.com') db.session.add(admin) db.session.add(guest) db.session.commit() users = User.query.all() print users But I get this error: sqlalchemy.exc.ProgrammingError: (ProgrammingError) relation \"user\" does not exist How can I fix this?", "output": "You should put your model class before create_all() call, like this: from flask import Flask from flask_sqlalchemy import SQLAlchemy app = Flask(__name__) app.config['SQLALCHEMY_DATABASE_URI'] = 'postgresql+psycopg2://login:pass@localhost/flask_app' db = SQLAlchemy(app) class User(db.Model): id = db.Column(db.Integer, primary_key=True) username = db.Column(db.String(80), unique=True) email = db.Column(db.String(120), unique=True) def __init__(self, username, email): self.username = username self.email = email def __repr__(self): return '<User %r>' % self.username with app.app_context(): db.create_all() db.session.add(User('admin', 'admin@example.com')) db.session.add(User('guest', 'guest@example.com')) db.session.commit() users = User.query.all() print(users) If your models are declared in a separate module, import them before calling create_all(). Say, the User model is in a file called models.py, from flask import Flask from flask_sqlalchemy import SQLAlchemy app = Flask(__name__) app.config['SQLALCHEMY_DATABASE_URI'] = 'postgresql+psycopg2://login:pass@localhost/flask_app' db = SQLAlchemy(app) # See important note below from models import User with app.app_context(): db.create_all() db.session.add(User('admin', 'admin@example.com')) db.session.add(User('guest', 'guest@example.com')) db.session.commit() users = User.query.all() print(users) Important note: It is important that you import your models after initializing the db object since, in your models.py you also need to import the db object from this module."} +{"question_id": 357785, "score": 134, "creation_date": 1228947045, "tags": ["python", "vim", "folding"], "instruction": "What is the recommended way to use Vim folding for Python code?\n\nI am interested in enabling code folding in Vim for Python code. I have noticed multiple ways to do so. 
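A tiny helper that wraps the np.issubdtype check above so Python ints, NumPy integer scalars and integer-dtype arrays can all be tested the same way; the function name is just for illustration.

    import numpy as np

    def is_any_integer(value) -> bool:
        """True for Python ints, any np.int*/np.uint* scalar, or an integer-dtype array."""
        dtype = value.dtype if hasattr(value, "dtype") else type(value)
        return bool(np.issubdtype(dtype, np.integer))

    print(is_any_integer(3), is_any_integer(np.uint8(3)), is_any_integer(np.array([1, 2])))  # True True True
    print(is_any_integer(3.0), is_any_integer(np.array([1.0])))  # False False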
Does anyone have a preferred way to do Python code folding in Vim? I.e, Do you have a particular Vim plugin that you use and like? Do you use manual folding or do you place markers in comments? Any other recommended ways to do code folding for Python in Vim?", "output": "Personally I can't convince myself to litter my code with the markers. I've become pretty used to (and efficient) at using indent-folding. Together with my mapping of space bar (see below) to open/close folds and the zR and zM commands, I'm right at home. Perfect for Python! set foldmethod=indent nnoremap <space> za vnoremap <space> zf This maps the spacebar to open/close the current indented fold in normal mode. It also maps the spacebar to create a manual fold in visual mode. However, this doesn't work if foldmethod is set to indent so its purpose is a mystery."} +{"question_id": 29590931, "score": 134, "creation_date": 1428851339, "tags": ["python", "templates", "file-extension", "jinja2"], "instruction": "Is there an idiomatic file extension for Jinja templates?\n\nI need to programatically distinguish between Jinja template files, other template files (such as ERB), and template-less plain text files. According to Jinja documentation: A Jinja template doesn\u2019t need to have a specific extension: .html, .xml, or any other extension is just fine. But what should I use when an explicit extension is required? .py is misleading, and any search including the words \"jinja\" and \"extension\" are badly searchwashed by discussion around Jinja Extensions. I could easily dictate a project-wide convention (.jnj or .ja come to mind) but this is for open source so I don't want to buck the trend if there's already established practice somewhere. EDIT 1: Again, I understand that the Jinja project \u2014 purposefully \u2014 does not define a default file extension. I'm asking if there are any unofficial conventions that have emerged for circumstances where one is desired for some project-specific reason. EDIT 2: Clarification: This is not for HTML content.", "output": "2021 update:: Jinja now officially recommends using the extension .jinja. See docs 2020 update: Things changed since I wrote this answer, .jinja2 and .j2 are trending. Jinja Authors did not define a default extension. Most of Jinja template editors like TextMate extension, Emacs extension, and PyCharm mention no default extension to enforce Jinja highlighting. Django already had a request for setting such a default extension, which ended up as a wontfix issue after some debate. I quote from the closing message: Filetype detection based on extension is flawed for the very reasons described in these comments, so you have to do some internal inspection, just like MIME type detection works. I suggest that you should use your own since there is no common one."} +{"question_id": 23289547, "score": 133, "creation_date": 1398419008, "tags": ["python", "list", "sorting", "shuffle"], "instruction": "Shuffle two list at once with same order\n\nI'm using the nltk library's movie_reviews corpus which contains a large number of documents. My task is get predictive performance of these reviews with pre-processing of the data and without pre-processing. But there is problem, in lists documents and documents2 I have the same documents and I need shuffle them in order to keep same order in both lists. I cannot shuffle them separately because each time I shuffle the list, I get other results. 
That is why I need to shuffle the at once with same order because I need compare them in the end (it depends on order). I'm using python 2.7 Example (in real are strings tokenized, but it is not relative): documents = [(['plot : two teen couples go to a church party , '], 'neg'), (['drink and then drive . '], 'pos'), (['they get into an accident . '], 'neg'), (['one of the guys dies'], 'neg')] documents2 = [(['plot two teen couples church party'], 'neg'), (['drink then drive . '], 'pos'), (['they get accident . '], 'neg'), (['one guys dies'], 'neg')] And I need get this result after shuffle both lists: documents = [(['one of the guys dies'], 'neg'), (['they get into an accident . '], 'neg'), (['drink and then drive . '], 'pos'), (['plot : two teen couples go to a church party , '], 'neg')] documents2 = [(['one guys dies'], 'neg'), (['they get accident . '], 'neg'), (['drink then drive . '], 'pos'), (['plot two teen couples church party'], 'neg')] I have this code: def cleanDoc(doc): stopset = set(stopwords.words('english')) stemmer = nltk.PorterStemmer() clean = [token.lower() for token in doc if token.lower() not in stopset and len(token) > 2] final = [stemmer.stem(word) for word in clean] return final documents = [(list(movie_reviews.words(fileid)), category) for category in movie_reviews.categories() for fileid in movie_reviews.fileids(category)] documents2 = [(list(cleanDoc(movie_reviews.words(fileid))), category) for category in movie_reviews.categories() for fileid in movie_reviews.fileids(category)] random.shuffle( and here shuffle documents and documents2 with same order) # or somehow", "output": "You can do it as: import random a = ['a', 'b', 'c'] b = [1, 2, 3] c = list(zip(a, b)) random.shuffle(c) a, b = zip(*c) print a print b [OUTPUT] ['a', 'c', 'b'] [1, 3, 2] Of course, this was an example with simpler lists, but the adaptation will be the same for your case."} +{"question_id": 30229231, "score": 133, "creation_date": 1431577712, "tags": ["python", "image", "download", "urllib"], "instruction": "python save image from url\n\nI got a problem when I am using python to save an image from url either by urllib2 request or urllib.urlretrieve. That is the url of the image is valid. I could download it manually using the explorer. However, when I use python to download the image, the file cannot be opened. I use Mac OS preview to view the image. Thank you! UPDATE: The code is as follow def downloadImage(self): request = urllib2.Request(self.url) pic = urllib2.urlopen(request) print \"downloading: \" + self.url print self.fileName filePath = localSaveRoot + self.catalog + self.fileName + Picture.postfix # urllib.urlretrieve(self.url, filePath) with open(filePath, 'wb') as localFile: localFile.write(pic.read()) The image URL that I want to download is http://site.meishij.net/r/58/25/3568808/a3568808_142682562777944.jpg This URL is valid and I can save it through the browser but the python code would download a file that cannot be opened. The Preview says \"It may be damaged or use a file format that Preview doesn't recognize.\" I compare the image that I download by Python and the one that I download manually through the browser. The size of the former one is several byte smaller. 
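An alternative to the zip-based shuffle above that draws one index permutation and applies it to both lists; the same idea carries over to NumPy arrays via fancy indexing. The documents below are shortened stand-ins for the corpus in the question.

    import random

    documents = [("plot : two teen couples", "neg"), ("drink and then drive", "pos"), ("one of the guys dies", "neg")]
    documents2 = [("plot two teen couples", "neg"), ("drink then drive", "pos"), ("one guys dies", "neg")]

    order = list(range(len(documents)))
    random.shuffle(order)  # one permutation shared by both lists keeps them aligned
    documents = [documents[i] for i in order]
    documents2 = [documents2[i] for i in order]
    print(documents)
    print(documents2)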
So it seems that the file is uncompleted, but I don't know why python cannot completely download it.", "output": "A sample code that works for me on Windows: import requests with open('pic1.jpg', 'wb') as handle: response = requests.get(pic_url, stream=True) if not response.ok: print(response) for block in response.iter_content(1024): if not block: break handle.write(block)"} +{"question_id": 22520932, "score": 133, "creation_date": 1395274612, "tags": ["python", "regex"], "instruction": "Python, remove all non-alphabet chars from string\n\nI am writing a python MapReduce word count program. Problem is that there are many non-alphabet chars strewn about in the data, I have found this post Stripping everything but alphanumeric chars from a string in Python which shows a nice solution using regex, but I am not sure how to implement it def mapfn(k, v): print v import re, string pattern = re.compile('[\\W_]+') v = pattern.match(v) print v for w in v.split(): yield w, 1 I'm afraid I am not sure how to use the library re or even regex for that matter. I am not sure how to apply the regex pattern to the incoming string (line of a book) v properly to retrieve the new line without any non-alphanumeric chars. Suggestions?", "output": "Use re.sub import re regex = re.compile('[^a-zA-Z]') #First parameter is the replacement, second parameter is your input string regex.sub('', 'ab3d*E') #Out: 'abdE' Alternatively, if you only want to remove a certain set of characters (as an apostrophe might be okay in your input...) regex = re.compile('[,\\.!?]') #etc."} +{"question_id": 7546050, "score": 133, "creation_date": 1316960044, "tags": ["python", "python-3.x", "tkinter", "frame"], "instruction": "Switch between two frames in tkinter?\n\nI have built my first few scripts with a nice little GUI on them, as the tutorials have shown me, but none of them address what to do for a more complex program. If you have something with a 'start menu', for your opening screen, and upon user selection you move to a different section of the program and redraw the screen appropriately, what is the elegant way of doing this? Does one just .destroy() the 'start menu' frame and then create a new one filled with the widgets for another part? And reverse this process when they press the back button?", "output": "One way is to stack the frames on top of each other, then you can simply raise one above the other in the stacking order. The one on top will be the one that is visible. This works best if all the frames are the same size, but with a little work you can get it to work with any sized frames. Note: for this to work, all of the widgets for a page must have that page (ie: self) or a descendant as a parent (or master, depending on the terminology you prefer). 
Here's a bit of a contrived example to show you the general concept: try: import tkinter as tk # python 3 from tkinter import font as tkfont # python 3 except ImportError: import Tkinter as tk # python 2 import tkFont as tkfont # python 2 class SampleApp(tk.Tk): def __init__(self, *args, **kwargs): tk.Tk.__init__(self, *args, **kwargs) self.title_font = tkfont.Font(family='Helvetica', size=18, weight=\"bold\", slant=\"italic\") # the container is where we'll stack a bunch of frames # on top of each other, then the one we want visible # will be raised above the others container = tk.Frame(self) container.pack(side=\"top\", fill=\"both\", expand=True) container.grid_rowconfigure(0, weight=1) container.grid_columnconfigure(0, weight=1) self.frames = {} for F in (StartPage, PageOne, PageTwo): page_name = F.__name__ frame = F(parent=container, controller=self) self.frames[page_name] = frame # put all of the pages in the same location; # the one on the top of the stacking order # will be the one that is visible. frame.grid(row=0, column=0, sticky=\"nsew\") self.show_frame(\"StartPage\") def show_frame(self, page_name): '''Show a frame for the given page name''' frame = self.frames[page_name] frame.tkraise() class StartPage(tk.Frame): def __init__(self, parent, controller): tk.Frame.__init__(self, parent) self.controller = controller label = tk.Label(self, text=\"This is the start page\", font=controller.title_font) label.pack(side=\"top\", fill=\"x\", pady=10) button1 = tk.Button(self, text=\"Go to Page One\", command=lambda: controller.show_frame(\"PageOne\")) button2 = tk.Button(self, text=\"Go to Page Two\", command=lambda: controller.show_frame(\"PageTwo\")) button1.pack() button2.pack() class PageOne(tk.Frame): def __init__(self, parent, controller): tk.Frame.__init__(self, parent) self.controller = controller label = tk.Label(self, text=\"This is page 1\", font=controller.title_font) label.pack(side=\"top\", fill=\"x\", pady=10) button = tk.Button(self, text=\"Go to the start page\", command=lambda: controller.show_frame(\"StartPage\")) button.pack() class PageTwo(tk.Frame): def __init__(self, parent, controller): tk.Frame.__init__(self, parent) self.controller = controller label = tk.Label(self, text=\"This is page 2\", font=controller.title_font) label.pack(side=\"top\", fill=\"x\", pady=10) button = tk.Button(self, text=\"Go to the start page\", command=lambda: controller.show_frame(\"StartPage\")) button.pack() if __name__ == \"__main__\": app = SampleApp() app.mainloop() If you find the concept of creating instance in a class confusing, or if different pages need different arguments during construction, you can explicitly call each class separately. The loop serves mainly to illustrate the point that each class is identical. For example, to create the classes individually you can remove the loop (for F in (StartPage, ...) with this: self.frames[\"StartPage\"] = StartPage(parent=container, controller=self) self.frames[\"PageOne\"] = PageOne(parent=container, controller=self) self.frames[\"PageTwo\"] = PageTwo(parent=container, controller=self) self.frames[\"StartPage\"].grid(row=0, column=0, sticky=\"nsew\") self.frames[\"PageOne\"].grid(row=0, column=0, sticky=\"nsew\") self.frames[\"PageTwo\"].grid(row=0, column=0, sticky=\"nsew\") Over time people have asked other questions using this code (or an online tutorial that copied this code) as a starting point. You might want to read the answers to these questions: Understanding parent and controller in Tkinter __init__ Tkinter! 
Understanding how to switch frames How to get variable data from a class Calling functions from a Tkinter Frame to another How to access variables from different classes in tkinter? How would I make a method which is run every time a frame is shown in tkinter Tkinter Frame Resize Tkinter have code for pages in separate files Refresh a tkinter frame on button press"} +{"question_id": 38088652, "score": 133, "creation_date": 1467162148, "tags": ["python", "pandas", "series", "categorical-data", "binning"], "instruction": "Pandas: convert categories to numbers\n\nSuppose I have a dataframe with countries that goes as: cc | temp US | 37.0 CA | 12.0 US | 35.0 AU | 20.0 I know that there is a pd.get_dummies function to convert the countries to 'one-hot encodings'. However, I wish to convert them to indices instead such that I will get cc_index = [1,2,1,3] instead. I'm assuming that there is a faster way than using the get_dummies along with a numpy where clause as shown below: [np.where(x) for x in df.cc.get_dummies().values] This is somewhat easier to do in R using 'factors' so I'm hoping pandas has something similar.", "output": "First, change the type of the column: df.cc = pd.Categorical(df.cc) Now the data look similar but are stored categorically. To capture the category codes: df['code'] = df.cc.codes Now you have: cc temp code 0 US 37.0 2 1 CA 12.0 1 2 US 35.0 2 3 AU 20.0 0 If you don't want to modify your DataFrame but simply get the codes: df.cc.astype('category').codes Or use the categorical column as an index: df2 = pd.DataFrame(df.temp) df2.index = pd.CategoricalIndex(df.cc)"} +{"question_id": 23969619, "score": 133, "creation_date": 1401536341, "tags": ["python", "oop", "matplotlib", "seaborn"], "instruction": "Plotting with seaborn using the matplotlib object-oriented interface\n\nI strongly prefer using matplotlib in OOP style: f, axarr = plt.subplots(2, sharex=True) axarr[0].plot(...) axarr[1].plot(...) This makes it easier to keep track of multiple figures and subplots. Question: How to use seaborn this way? Or, how to change this example to OOP style? How to tell seaborn plotting functions like lmplot which Figure or Axes it plots to?", "output": "It depends a bit on which seaborn function you are using. The plotting functions in seaborn are broadly divided into two types: \"Axes-level\" functions, including regplot, boxplot, kdeplot, and many others \"Figure-level\" functions, including relplot, catplot, displot, pairplot, jointplot and one or two others The first group is identified by taking an explicit ax argument and returning an Axes object. As this suggests, you can use them in an \"object oriented\" style by passing your Axes to them: f, (ax1, ax2) = plt.subplots(2) sns.regplot(x, y, ax=ax1) sns.kdeplot(x, ax=ax2) Axes-level functions will only draw onto an Axes and won't otherwise mess with the figure, so they can coexist perfectly happily in an object-oriented matplotlib script. The second group of functions (Figure-level) are distinguished by the fact that the resulting plot can potentially include several Axes which are always organized in a \"meaningful\" way. That means that the functions need to have total control over the figure, so it isn't possible to plot, say, an lmplot onto one that already exists. Calling the function always initializes a figure and sets it up for the specific plot it's drawing. However, once you've called lmplot, it will return an object of the type FacetGrid. 
This object has some methods for operating on the resulting plot that know a bit about the structure of the plot. It also exposes the underlying figure and array of axes at the FacetGrid.fig and FacetGrid.axes arguments. The jointplot function is very similar, but it uses a JointGrid object. So you can still use these functions in an object-oriented context, but all of your customization has to come after you've called the function."} +{"question_id": 8381193, "score": 133, "creation_date": 1323061134, "tags": ["python", "json", "python-3.x"], "instruction": "Handle JSON Decode Error when nothing returned\n\nI am parsing json data. I don't have an issue with parsing and I am using simplejson module. But some api requests returns empty value. Here is my example: { \"all\" : { \"count\" : 0, \"questions\" : [ ] } } This is the segment of my code where I parse the json object: qByUser = byUsrUrlObj.read() qUserData = json.loads(qByUser).decode('utf-8') questionSubjs = qUserData[\"all\"][\"questions\"] As I mentioned for some requests I get the following error: Traceback (most recent call last): File \"YahooQueryData.py\", line 164, in <module> qUserData = json.loads(qByUser) File \"/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/simplejson/__init__.py\", line 385, in loads return _default_decoder.decode(s) File \"/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/simplejson/decoder.py\", line 402, in decode obj, end = self.raw_decode(s, idx=_w(s, 0).end()) File \"/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/simplejson/decoder.py\", line 420, in raw_decode raise JSONDecodeError(\"No JSON object could be decoded\", s, idx) simplejson.decoder.JSONDecodeError: No JSON object could be decoded: line 1 column 0 (char 0) What would be the best way to handle this error?", "output": "There is a rule in Python programming called \"it is Easier to Ask for Forgiveness than for Permission\" (in short: EAFP). It means that you should catch exceptions instead of checking values for validity. Thus, try the following: try: qByUser = byUsrUrlObj.read() qUserData = json.loads(qByUser).decode('utf-8') questionSubjs = qUserData[\"all\"][\"questions\"] except ValueError: # includes simplejson.decoder.JSONDecodeError print('Decoding JSON has failed') EDIT: Since simplejson.decoder.JSONDecodeError actually inherits from ValueError (proof here), I simplified the catch statement by just using ValueError."} +{"question_id": 3995034, "score": 133, "creation_date": 1287735697, "tags": ["python", "regex", "word-boundaries"], "instruction": "Do regular expressions from the re module support word boundaries (\\b)?\n\nWhile trying to learn a little more about regular expressions, a tutorial suggested that you can use the \\b to match a word boundary. However, the following snippet in the Python interpreter does not work as expected: >>> x = 'one two three' >>> y = re.search(\"\\btwo\\b\", x) It should have been a match object if anything was matched, but it is None. 
Is the \\b expression not supported in Python or am I using it wrong?", "output": "You should be using raw strings in your code >>> x = 'one two three' >>> y = re.search(r\"\\btwo\\b\", x) >>> y <_sre.SRE_Match object at 0x100418a58> >>> Also, why don't you try word = 'two' re.compile(r'\\b%s\\b' % word, re.I) Output: >>> word = 'two' >>> k = re.compile(r'\\b%s\\b' % word, re.I) >>> x = 'one two three' >>> y = k.search( x) >>> y <_sre.SRE_Match object at 0x100418850>"} +{"question_id": 1423804, "score": 133, "creation_date": 1252960910, "tags": ["python", "character-encoding", "sqlalchemy", "connection-string"], "instruction": "Writing a connection string when password contains special characters\n\nI'm using SQLalchemy for a Python project, and I want to have a tidy connection string to access my database. So for example: engine = create_engine('postgresql://user:pass@host/database') The problem is my password contains a sequence of special characters that get interpreted as delimiters when I try to connect. I realize that I could just use engine.URL.create() and then pass my credentials like this: import sqlalchemy as sa connection_url = sa.engine.URL.create( drivername=\"postgresql\", username=\"user\", password=\"p@ss\", host=\"host\", database=\"database\", ) print(connection_url) # postgresql://user:p%40ss@host/database But I'd much rather use a connection string if this is possible. So to be clear, is it possible to encode my connection string, or the password part of the connection string - so that it can be properly parsed?", "output": "You need to URL-encode the password portion of the connect string: from urllib.parse import quote_plus from sqlalchemy.engine import create_engine engine = create_engine(\"postgres://user:%s@host/database\" % quote_plus(\"p@ss\")) If you look at the implementation of the class used in SQLAlchemy to represent database connection URLs (in sqlalchemy/engine/url.py), you can see that they use the same method to escape passwords when converting the URL instances into strings."} +{"question_id": 12848605, "score": 133, "creation_date": 1349989956, "tags": ["python", "django", "django-models", "django-forms", "modelform"], "instruction": "Django ModelForm: What is save(commit=False) used for?\n\nWhy would I ever use save(commit=False) instead of just creating a form object from the ModelForm subclass and running is_valid() to validate both the form and model? In other words, what is save(commit=False) for? Can you provide hypothetical situations where this might be useful?", "output": "That's useful when you get most of your model data from a form, but you need to populate some null=False fields with non-form data. Saving with commit=False gets you a model object, then you can add your extra data and save it. This is a good example of that situation. Here's the documentation on the save method. Note that if your form includes many-to-many fields, you'll also want to call form.save_m2m() after saving the model instance."} +{"question_id": 7404116, "score": 133, "creation_date": 1315926277, "tags": ["python", "matplotlib", "colormap"], "instruction": "Defining the midpoint of a colormap in matplotlib\n\nI want to set the middle point of a colormap, i.e., my data goes from -5 to 10 and I want zero to be the middle point. 
I think the way to do it is by subclassing normalize and using the norm, but I didn't find any example, and it is not clear to me what exactly I have to implement.", "output": "With matplotlib version 3.4 or later, the perhaps simplest solution is to use the new CenteredNorm. Example using CenteredNorm and one of the diverging colormaps: import matplotlib.pyplot as plt import matplotlib as mpl plt.pcolormesh(data_to_plot, norm=mpl.colors.CenteredNorm(), cmap='coolwarm') Being simple, CenteredNorm is symmetrical, so that if the data goes from -5 to 10, the colormap will be stretched from -10 to 10. If you want a different mapping on either side of the center, so that the colormap ranges from -5 to 10, use the TwoSlopeNorm as described in @macKaiver's answer."} +{"question_id": 17602878, "score": 133, "creation_date": 1373574599, "tags": ["python"], "instruction": "How to handle both `with open(...)` and `sys.stdout` nicely?\n\nOften I need to output data either to a file or, if no file is specified, to stdout. I use the following snippet: if target: with open(target, 'w') as h: h.write(content) else: sys.stdout.write(content) I would like to rewrite it and handle both targets uniformly. In the ideal case it would be: with open(target, 'w') as h: h.write(content) but this will not work well because sys.stdout would be closed when leaving the with block and I don't want that. Nor do I want to do stdout = open(target, 'w') ... because I would need to remember to restore the original stdout. Related: Redirect stdout to a file in Python? Handling Exceptions - interesting article about handling exceptions in Python, as compared to C++ Edit I know that I can wrap target, define a separate function or use a context manager. I am looking for a simple, elegant, idiomatic solution that wouldn't require more than 5 lines", "output": "Just thinking outside of the box here, how about a custom open() method? import sys import contextlib @contextlib.contextmanager def smart_open(filename=None): if filename and filename != '-': fh = open(filename, 'w') else: fh = sys.stdout try: yield fh finally: if fh is not sys.stdout: fh.close() Use it like this: # For Python 2 you need this line from __future__ import print_function # writes to some_file with smart_open('some_file') as fh: print('some output', file=fh) # writes to stdout with smart_open() as fh: print('some output', file=fh) # writes to stdout with smart_open('-') as fh: print('some output', file=fh)"} +{"question_id": 43397162, "score": 133, "creation_date": 1492100095, "tags": ["python", "linux", "matplotlib", "windows-subsystem-for-linux"], "instruction": "Show matplotlib plots (and other GUI) in Ubuntu (WSL1 & WSL2)\n\nSo it seems on ubuntu for windows (windows subsystem for linux) people are suggesting we need to use the Agg backend and just save images, not show plots. import matplotlib matplotlib.use('Agg') # no UI backend import matplotlib.pyplot as plt import numpy as np t = np.arange(0.0, 2.0, 0.01) s = 1 + np.sin(2*np.pi*t) plt.plot(t, s) plt.title('About as simple as it gets, folks') #plt.show() plt.savefig(\"matplotlib.png\") #savefig, don't show How could we get it to where plt.show() would actually show us an image? My current option is to override plot.show() to instead just savefig a plot-148123456.png under /mnt/c/Users/james/plots/ in windows and just have an explorer window open viewing the images. I suppose I could host that folder and use a browser. 
My goal is to be able to run simple examples like the code above without changing the code to ftp the images somewhere etc. I just want the plot to show up in a window. Has anyone figured out a decent way to do it?", "output": "Ok, so I got it working as follows. I have Ubuntu on windows, with anaconda python 3.6 installed. Download and install VcXsrv or Xming (X11 for Windows) from sourceforge(see edit below) sudo apt-get update sudo apt-get install python3.6-tk (you may have to install a different python*-tk depnding on the python version you're using) pip install matplotlib (for matplotlib. but many other things now work too) export DISPLAY=localhost:0.0 (add to ~/.bashrc to make permanent. see WSL2 below) Anyways, after all that, this code running in ubuntu on wsl worked as is: import matplotlib.pyplot as plt import numpy as np t = np.arange(0.0, 2.0, 0.01) s = 1 + np.sin(2*np.pi*t) plt.plot(t, s) plt.title('About as simple as it gets, folks') plt.show() result: Maybe this is better done through a Jupyter notebook or something, but it's nice to have basic command-line python matplotlib functionality in Ubuntu for Windows on Subsystem for Linux, and this makes many other gui apps work too. For example you can install xeyes, and it will say to install x11-apps and installing that will install GTK which a lot of GUI apps use. But the point is once you have your DISPLAY set correctly, and your x server on windows, then most things that would work on a native ubuntu will work for the WSL. Edit 2019-09-04 : Today I was having issues with 'unable to get screen resources' after upgrading some libraries. So I installed VcXsrv and used that instead of Xming. Just install from https://sourceforge.net/projects/vcxsrv/ and run xlaunch.exe, select multiple windows, next next next ok. Then everything worked. Edit for WSL 2 users 2020-06-23 WSL2 (currently insider fast ring) has GPU/docker support so worth upgrade. However it runs in vm. For WSL 2, follow same steps 1-4 then: the ip is not localhost. it's in resolv.conf so run this instead (and include in ~/.bashrc): export DISPLAY=`grep -oP \"(?<=nameserver ).+\" /etc/resolv.conf`:0.0 Now double-check firewall: Windows Security -> Firewall & network protection -> Allow an app through firewall -> make sure VcXsrv has both public and private checked. (When Launching xlaunch first time, you might get a prompt to allow through firewall. This works too. Also, if VcXsrv is not in list of apps, you can manually add it, eg from 'C:\\program files\\vcxsrv\\vcxsrv.exe') Launch VcXsrv with \"Disable access control\" ticked Note: a few WSL2 users got error like couldn't connect to display \"172.x.x.x:0\". If that's you try to check the IP address stored in DISPLAY with this command: echo $DISPLAY. If the showed IP seems to be wrong (i.e. \"8.8.8.8\" or another not working IP address) you need to change the code in ~/.bashrc showed in the point 5 to something that will get your instance's ip address. One user said this worked: export DISPLAY=$(ifconfig | grep inet | awk '{print $2}' | head -n 1 | awk '{print $0\":0\"}'). However for some others it did not work. YMMV, but just find your IP and use if for DISPLAY. For most WSL2 users, the command in #5 works. Edit for Windows 11 : if MS convinced you to throw out your old computer and buy one with a TPM and so you got Windows 11, you get GUI for free. 
I hope they add upgrade path to do that on Windows 10 because Win10 will be like XP and last a long time since MS decided you need recent computer even though Win11 would work fine on old computers."} +{"question_id": 23831510, "score": 133, "creation_date": 1400854341, "tags": ["python", "abstract"], "instruction": "Abstract attribute (not property)?\n\nWhat's the best practice to define an abstract instance attribute, but not as a property? I would like to write something like: class AbstractFoo(metaclass=ABCMeta): @property @abstractmethod def bar(self): pass class Foo(AbstractFoo): def __init__(self): self.bar = 3 Instead of: class Foo(AbstractFoo): def __init__(self): self._bar = 3 @property def bar(self): return self._bar @bar.setter def setbar(self, bar): self._bar = bar @bar.deleter def delbar(self): del self._bar Properties are handy, but for simple attribute requiring no computation they are an overkill. This is especially important for abstract classes which will be subclassed and implemented by the user (I don't want to force someone to use @property when he just could have written self.foo = foo in the __init__). Abstract attributes in Python question proposes as only answer to use @property and @abstractmethod: it doesn't answer my question. The ActiveState recipe for an abstract class attribute via AbstractAttribute may be the right way, but I am not sure. It also only works with class attributes and not instance attributes.", "output": "If you really want to enforce that a subclass define a given attribute, you can use metaclasses: class AbstractFooMeta(type): def __call__(cls, *args, **kwargs): \"\"\"Called when you call Foo(*args, **kwargs) \"\"\" obj = type.__call__(cls, *args, **kwargs) obj.check_bar() return obj class AbstractFoo(object): __metaclass__ = AbstractFooMeta bar = None def check_bar(self): if self.bar is None: raise NotImplementedError('Subclasses must define bar') class GoodFoo(AbstractFoo): def __init__(self): self.bar = 3 class BadFoo(AbstractFoo): def __init__(self): pass Basically the meta class redefine __call__ to make sure check_bar is called after the init on an instance. GoodFoo() # ok BadFoo () # yield NotImplementedError"} +{"question_id": 28331512, "score": 133, "creation_date": 1423084211, "tags": ["python", "datetime", "iso8601", "rfc3339"], "instruction": "How to convert Python's .isoformat() string back into datetime object\n\nSo in Python 3, you can generate an ISO 8601 date with .isoformat(), but you can't convert a string created by isoformat() back into a datetime object because Python's own datetime directives don't match properly. That is, %z = 0500 instead of 05:00 (which is produced by .isoformat()). For example: >>> strDate = d.isoformat() >>> strDate '2015-02-04T20:55:08.914461+00:00' >>> objDate = datetime.strptime(strDate,\"%Y-%m-%dT%H:%M:%S.%f%z\") Traceback (most recent call last): File \"<stdin>\", line 1, in <module> File \"C:\\Python34\\Lib\\_strptime.py\", line 500, in _strptime_datetime tt, fraction = _strptime(data_string, format) File \"C:\\Python34\\Lib\\_strptime.py\", line 337, in _strptime (data_string, format)) ValueError: time data '2015-02-04T20:55:08.914461+00:00' does not match format '%Y-%m-%dT%H:%M:%S.%f%z' From Python's strptime documentation: (https://docs.python.org/2/library/datetime.html#strftime-strptime-behavior) %z UTC offset in the form +HHMM or -HHMM (empty string if the the object is naive). 
(empty), +0000, -0400, +1030 So, in short, Python does not even adhere to its own string formatting directives. I know datetime is already terrible in Python, but this really goes beyond unreasonable into the land of plain stupidity. Tell me this isn't true.", "output": "Python 3.7+ As of Python 3.7 there is a method datetime.fromisoformat() which is exactly the reverse for isoformat(). Older Python If you have older Python, then this is the current best \"solution\" to this question: pip install python-dateutil Then... import datetime import dateutil def getDateTimeFromISO8601String(s): d = dateutil.parser.parse(s) return d"} +{"question_id": 48750199, "score": 133, "creation_date": 1518450254, "tags": ["python", "machine-learning", "gpu", "ram", "google-colaboratory"], "instruction": "Google Colaboratory: misleading information about its GPU (only 5% RAM available to some users)\n\nupdate: this question is related to Google Colab's \"Notebook settings: Hardware accelerator: GPU\". This question was written before the \"TPU\" option was added. Reading multiple excited announcements about Google Colaboratory providing free Tesla K80 GPU, I tried to run fast.ai lesson on it for it to never complete - quickly running out of memory. I started investigating of why. The bottom line is that \u201cfree Tesla K80\u201d is not \"free\" for all - for some only a small slice of it is \"free\". I connect to Google Colab from West Coast Canada and I get only 0.5GB of what supposed to be a 24GB GPU RAM. Other users get access to 11GB of GPU RAM. Clearly 0.5GB GPU RAM is insufficient for most ML/DL work. If you're not sure what you get, here is little debug function I scraped together (only works with the GPU setting of the notebook): # memory footprint support libraries/code !ln -sf /opt/bin/nvidia-smi /usr/bin/nvidia-smi !pip install gputil !pip install psutil !pip install humanize import psutil import humanize import os import GPUtil as GPU GPUs = GPU.getGPUs() # XXX: only one GPU on Colab and isn\u2019t guaranteed gpu = GPUs[0] def printm(): process = psutil.Process(os.getpid()) print(\"Gen RAM Free: \" + humanize.naturalsize( psutil.virtual_memory().available ), \" | Proc size: \" + humanize.naturalsize( process.memory_info().rss)) print(\"GPU RAM Free: {0:.0f}MB | Used: {1:.0f}MB | Util {2:3.0f}% | Total {3:.0f}MB\".format(gpu.memoryFree, gpu.memoryUsed, gpu.memoryUtil*100, gpu.memoryTotal)) printm() Executing it in a jupyter notebook before running any other code gives me: Gen RAM Free: 11.6 GB | Proc size: 666.0 MB GPU RAM Free: 566MB | Used: 10873MB | Util 95% | Total 11439MB The lucky users who get access to the full card will see: Gen RAM Free: 11.6 GB | Proc size: 666.0 MB GPU RAM Free: 11439MB | Used: 0MB | Util 0% | Total 11439MB Do you see any flaw in my calculation of the GPU RAM availability, borrowed from GPUtil? Can you confirm that you get similar results if you run this code on Google Colab notebook? If my calculations are correct, is there any way to get more of that GPU RAM on the free box? update: I'm not sure why some of us get 1/20th of what other users get. e.g. the person who helped me to debug this is from India and he gets the whole thing! note: please don't send any more suggestions on how to kill the potentially stuck/runaway/parallel notebooks that might be consuming parts of the GPU. 
No matter how you slice it, if you are in the same boat as I and were to run the debug code you'd see that you still get a total of 5% of GPU RAM (as of this update still).", "output": "So to prevent another dozen of answers suggesting invalid in the context of this thread suggestion to !kill -9 -1, let's close this thread: The answer is simple: As of this writing Google simply gives only 5% of GPU to some of us, whereas 100% to the others. Period. dec-2019 update: The problem still exists - this question's upvotes continue still. mar-2019 update: A year later a Google employee @AmiF commented on the state of things, stating that the problem doesn't exist, and anybody who seems to have this problem needs to simply reset their runtime to recover memory. Yet, the upvotes continue, which to me this tells that the problem still exists, despite @AmiF's suggestion to the contrary. dec-2018 update: I have a theory that Google may have a blacklist of certain accounts, or perhaps browser fingerprints, when its robots detect a non-standard behavior. It could be a total coincidence, but for quite some time I had an issue with Google Re-captcha on any website that happened to require it, where I'd have to go through dozens of puzzles before I'd be allowed through, often taking me 10+ min to accomplish. This lasted for many months. All of a sudden as of this month I get no puzzles at all and any google re-captcha gets resolved with just a single mouse click, as it used to be almost a year ago. And why I'm telling this story? Well, because at the same time I was given 100% of the GPU RAM on Colab. That's why my suspicion is that if you are on a theoretical Google black list then you aren't being trusted to be given a lot of resources for free. I wonder if any of you find the same correlation between the limited GPU access and the Re-captcha nightmare. As I said, it could be totally a coincidence as well."} +{"question_id": 14327036, "score": 132, "creation_date": 1358198965, "tags": ["python", "django", "performance", "count", "django-queryset"], "instruction": "Count() vs len() on a Django QuerySet\n\nIn Django, given that I have a QuerySet that I am going to iterate over and print the results of, what is the best option for counting the objects? len(qs) or qs.count()? (Also given that counting the objects in the same iteration is not an option.)", "output": "Choosing between len() and count() depends on the situation and it's worth to deeply understand how they work to use them correctly. Let me provide you with a few scenarios: (most crucial) When you only want to know the number of elements and you do not plan to process them in any way it's crucial to use count(): DO: queryset.count() - this will perform single SELECT COUNT(*) FROM some_table query, all computation is carried on RDBMS side, Python just needs to retrieve the result number with fixed cost of O(1) DON'T: len(queryset) - this will perform SELECT * FROM some_table query, fetching whole table O(N) and requiring additional O(N) memory for storing it. 
This is the worst that can be done When you intend to fetch the queryset anyway it's slightly better to use len() which won't cause an extra database query as count() would len() (one db query) len(queryset) # SELECT * fetching all the data - NO extra cost - data would be fetched anyway in the for loop for obj in queryset: # data is already fetched by len() - using cache pass count() (two db queries!): queryset.count() # First db query SELECT COUNT(*) for obj in queryset: # Second db query (fetching data) SELECT * pass Reverted 2nd case (when queryset has already been fetched): for obj in queryset: # iteration fetches the data len(queryset) # using already cached data - O(1) no extra cost queryset.count() # using cache - O(1) no extra db query len(queryset) # the same O(1) queryset.count() # the same: no query, O(1) Everything will be clear once you take a glance \"under the hood\": class QuerySet(object): def __init__(self, model=None, query=None, using=None, hints=None): # (...) self._result_cache = None def __len__(self): self._fetch_all() return len(self._result_cache) def _fetch_all(self): if self._result_cache is None: self._result_cache = list(self.iterator()) if self._prefetch_related_lookups and not self._prefetch_done: self._prefetch_related_objects() def count(self): if self._result_cache is not None: return len(self._result_cache) return self.query.get_count(using=self.db) Good references in Django docs: When QuerySets are evaluated count()"} +{"question_id": 38152389, "score": 132, "creation_date": 1467399814, "tags": ["python", "pandas", "numpy", "dataframe"], "instruction": "Coalesce values from 2 columns into a single column in a pandas dataframe\n\nI'm looking for a method that behaves similarly to coalesce in T-SQL. I have 2 columns (column A and B) that are sparsely populated in a pandas dataframe. I'd like to create a new column using the following rules: If the value in column A is not null, use that value for the new column C If the value in column A is null, use the value in column B for the new column C Like I mentioned, this can be accomplished in MS SQL Server via the coalesce function. I haven't found a good pythonic method for this; does one exist?", "output": "use combine_first(): In [16]: df = pd.DataFrame(np.random.randint(0, 10, size=(10, 2)), columns=list('ab')) In [17]: df.loc[::2, 'a'] = np.nan In [18]: df Out[18]: a b 0 NaN 0 1 5.0 5 2 NaN 8 3 2.0 8 4 NaN 3 5 9.0 4 6 NaN 7 7 2.0 0 8 NaN 6 9 2.0 5 In [19]: df['c'] = df.a.combine_first(df.b) In [20]: df Out[20]: a b c 0 NaN 0 0.0 1 5.0 5 5.0 2 NaN 8 8.0 3 2.0 8 2.0 4 NaN 3 3.0 5 9.0 4 9.0 6 NaN 7 7.0 7 2.0 0 2.0 8 NaN 6 6.0 9 2.0 5 2.0"} +{"question_id": 40555930, "score": 132, "creation_date": 1478897089, "tags": ["python", "selenium", "selenium-chromedriver"], "instruction": "selenium - chromedriver executable needs to be in PATH\n\nError message: 'chromedriver' executable needs to be in PATH I was trying to code a script using selenium in pycharm, however the error above occured. I have already linked my selenium to pycharm as seen here (fresh and up to date). I am new to selenium, isn't chromedriver in the folder \"selenium.\" If it isn't, where can I find it and add it to the path? By the way, I tried typing \"chromedriver\" in cmd, however, it wasn't recognized as an internal or external command. 
error shown below: Traceback (most recent call last): File \"C:\\Users\\sebastian\\AppData\\Local\\Programs\\Python\\Python35-32\\lib\\site-packages\\selenium\\webdriver\\common\\service.py\", line 64, in start stdout=self.log_file, stderr=self.log_file) File \"C:\\Users\\sebastian\\AppData\\Local\\Programs\\Python\\Python35-32\\lib\\subprocess.py\", line 947, in __init__ restore_signals, start_new_session) File \"C:\\Users\\sebastian\\AppData\\Local\\Programs\\Python\\Python35-32\\lib\\subprocess.py\", line 1224, in _execute_child startupinfo) PermissionError: [WinError 5] Permission denied During handling of the above exception, another exception occurred: Traceback (most recent call last): File \"C:/Users/sebastian/PycharmProjects/web/bot.py\", line 10, in <module> browser = webdriver.Chrome(\"C:/Users/sebastian/desktop/selenium-3.0.1\") File \"C:\\Users\\sebastian\\AppData\\Local\\Programs\\Python\\Python35-32\\lib\\site-packages\\selenium\\webdriver\\chrome\\webdriver.py\", line 62, in __init__ self.service.start() File \"C:\\Users\\sebastian\\AppData\\Local\\Programs\\Python\\Python35-32\\lib\\site-packages\\selenium\\webdriver\\common\\service.py\", line 76, in start os.path.basename(self.path), self.start_error_message) selenium.common.exceptions.WebDriverException: Message: 'selenium-3.0.1' executable may have wrong permissions. Please see https://sites.google.com/a/chromium.org/chromedriver/home Exception ignored in: <bound method Service.__del__ of <selenium.webdriver.chrome.service.Service object at 0x01EDEAF0>> Traceback (most recent call last): File \"C:\\Users\\sebastian\\AppData\\Local\\Programs\\Python\\Python35-32\\lib\\site-packages\\selenium\\webdriver\\common\\service.py\", line 163, in __del__ self.stop() File \"C:\\Users\\sebastian\\AppData\\Local\\Programs\\Python\\Python35-32\\lib\\site-packages\\selenium\\webdriver\\common\\service.py\", line 135, in stop if self.process is None: AttributeError: 'Service' object has no attribute 'process'", "output": "You can download ChromeDriver here: https://sites.google.com/chromium.org/driver/ Then you have multiple options: add it to your system path put it in the same directory as your python script specify the location directly via executable_path driver = webdriver.Chrome(executable_path='C:/path/to/chromedriver.exe')"} +{"question_id": 14959217, "score": 132, "creation_date": 1361282050, "tags": ["python", "django", "django-urls"], "instruction": "Django URL Redirect\n\nHow can I redirect traffic that doesn't match any of my other URLs back to the home page? urls.py: urlpatterns = patterns('', url(r'^$', 'macmonster.views.home'), #url(r'^macmon_home$', 'macmonster.views.home'), url(r'^macmon_output/$', 'macmonster.views.output'), url(r'^macmon_about/$', 'macmonster.views.about'), url(r'^.*$', 'macmonster.views.home'), ) As it stands, the last entry sends all \"other\" traffic to the home page but I want to redirect via either an HTTP 301 or 302.", "output": "You can try the Class Based View called RedirectView from django.views.generic.base import RedirectView urlpatterns = patterns('', url(r'^$', 'macmonster.views.home'), #url(r'^macmon_home$', 'macmonster.views.home'), url(r'^macmon_output/$', 'macmonster.views.output'), url(r'^macmon_about/$', 'macmonster.views.about'), url(r'^.*$', RedirectView.as_view(url='<url_to_home_view>', permanent=False), name='index') ) Notice how as url in the <url_to_home_view> you need to actually specify the url. 
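For illustration, a minimal filled-in sketch for recent Django versions (the '/' path and the 'home' URL name are assumptions, not part of the original answer):
from django.urls import reverse_lazy
from django.views.generic.base import RedirectView
# send any unmatched URL to a hard-coded path...
url(r'^.*$', RedirectView.as_view(url='/', permanent=False), name='index')
# ...or resolve the target lazily from a named URL pattern
url(r'^.*$', RedirectView.as_view(url=reverse_lazy('home'), permanent=False), name='index')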
permanent=False will return HTTP 302, while permanent=True will return HTTP 301. Alternatively you can use django.shortcuts.redirect Update for Django 2+ versions With Django 2+, url() is deprecated and replaced by re_path(). Usage is exactly the same as url() with regular expressions. For replacements without the need of regular expression, use path(). from django.urls import re_path re_path(r'^.*$', RedirectView.as_view(url='<url_to_home_view>', permanent=False), name='index')"} +{"question_id": 7099290, "score": 132, "creation_date": 1313614130, "tags": ["python", "hidden-files"], "instruction": "How to ignore hidden files using os.listdir()?\n\nMy python script executes an os.listdir(path) where the path is a queue containing archives that I need to treat one by one. The problem is that I'm getting the list in an array and then I just do a simple array.pop(0). It was working fine until I put the project in subversion. Now I get the .svn folder in my array and of course it makes my application crash. So here is my question: is there a function that ignores hidden files when executing an os.listdir() and if not what would be the best way?", "output": "You can write one yourself: import os def listdir_nohidden(path): for f in os.listdir(path): if not f.startswith('.'): yield f Or you can use a glob: import glob import os def listdir_nohidden(path): return glob.glob(os.path.join(path, '*')) Either of these will ignore all filenames beginning with '.'."} +{"question_id": 9878020, "score": 132, "creation_date": 1332788221, "tags": ["python", "flask"], "instruction": "How do I get the user agent with Flask?\n\nI'm trying to get access to the user agent with Flask, but I either can't find the documentation on it, or it doesn't tell me.", "output": "from flask import request request.headers.get('User-Agent') You can also use the request.user_agent object which contains the following attributes which are created based on the useragent string: platform (windows, linux, macos, etc.) browser (chrome, firefox, msie, etc.) version language string (== request.headers.get('User-Agent')) Note: As of werkzeug 2.0, the parsed data of request.user_agent has been deprecated; if you want to keep getting details you need to use a custom UserAgent implementation and set it as user_agent_class on a custom Request subclass, which is set as request_class on the Flask instance (or a subclass). Here's an example implementation that uses ua-parser: from ua_parser import user_agent_parser from werkzeug.user_agent import UserAgent from werkzeug.utils import cached_property class ParsedUserAgent(UserAgent): @cached_property def _details(self): return user_agent_parser.Parse(self.string) @property def platform(self): return self._details['os']['family'] @property def browser(self): return self._details['user_agent']['family'] @property def version(self): return '.'.join( part for key in ('major', 'minor', 'patch') if (part := self._details['user_agent'][key]) is not None )"} +{"question_id": 2477117, "score": 132, "creation_date": 1269001042, "tags": ["python", "pip", "pypi"], "instruction": "pip requirements.txt with alternative index\n\nI want to put all the requirements of a repoze Zope2 install in a pip requirements file. Most of the repoze packages don't seem to be on PyPi, but there's an alternative PyPi index for them here. But I can't figure out how to tell pip to use that index together with a requirements file. 
For single packages, it's easy pip install zopelib -i http://dist.repoze.org/zope2/2.10/simple/ I tried the following pip install -r requirements.txt -i http://dist.repoze.org/zope2/2.10/simple/ or in my requirements.txt all kind or permutations of these: zopelib -i http://dist.repoze.org/zope2/2.10/simple/ zopelib --index http://dist.repoze.org/zope2/2.10/simple/ -i http://dist.repoze.org/zope2/2.10/simple/ zopelib or (because the documentation says \"Note that all these options must be on a line of their own.\") --index http://dist.repoze.org/zope2/2.10/simple/ zopelib So, what's the correct way of telling pip to use http://dist.repoze.org/zope2/2.10/simple/ as index?", "output": "requirements.txt: -i http://dist.repoze.org/zope2/2.10/simple zopelib Example: $ pip install -r requirements.txt ... Successfully installed zopelib Note that -i takes effect for the entire requirements file. If you want to use different indexes, you can have multiple requirements files. https://pip.pypa.io/en/stable/reference/requirements-file-format/#supported-options"} +{"question_id": 3371136, "score": 132, "creation_date": 1280490619, "tags": ["python", "virtualenv"], "instruction": "Revert the `--no-site-packages` option with virtualenv\n\nI have created a virtualenv using the --no-site-packages option and installed lots of libraries. Now I would like to revert the --no-site-packages option and use also the global packages. Can I do that without recreating the virtualenv? More precisely: I wonder what exactly happens when creating a virtualenv using the --no-site-packages option as opposed to not using that option. If I know what happens then I can figure out how to undo it.", "output": "Try removing (or renaming) the file no-global-site-packages.txt in your Lib folder under your virtual environment. Where venv is the name of your virtual environment, and python3.4 corresponds to whichever version of python involved, for example: $ rm venv/lib/python3.4/no-global-site-packages.txt And if you change your mind and want to put it back: $ touch venv/lib/python3.4/no-global-site-packages.txt Note: If you don't see the above file, then you have a newer version of virtualenv. You'll want to follow this answer instead"} +{"question_id": 8960777, "score": 132, "creation_date": 1327232415, "tags": ["python", "fabric"], "instruction": "Pass parameter to fabric task\n\nHow can I pass a parameter to a fabric task when calling \"fab\" from the command line? For example: def task(something=''): print \"You said %s\" % something $ fab task \"hello\" You said hello Done. Is it possible to do this without prompting with fabric.operations.prompt?", "output": "Fabric 2 task arguments documentation: http://docs.pyinvoke.org/en/latest/concepts/invoking-tasks.html#task-command-line-arguments Fabric 1.X uses the following syntax for passing arguments to tasks: fab task:'hello world' fab task:something='hello' fab task:foo=99,bar=True fab task:foo,bar You can read more about it in Fabric docs."} +{"question_id": 6866600, "score": 132, "creation_date": 1311892561, "tags": ["python", "parsing", "yaml"], "instruction": "How to parse/read a YAML file into a Python object?\n\nHow to parse/read a YAML file into a Python object? 
For example, this YAML: Person: name: XYZ To this Python class: class Person(yaml.YAMLObject): yaml_tag = 'Person' def __init__(self, name): self.name = name I am using PyYAML by the way.", "output": "If your YAML file looks like this: # tree format treeroot: branch1: name: Node 1 branch1-1: name: Node 1-1 branch2: name: Node 2 branch2-1: name: Node 2-1 And you've installed PyYAML like this: pip install PyYAML And the Python code looks like this: import yaml with open('tree.yaml') as f: # use safe_load instead load dataMap = yaml.safe_load(f) The variable dataMap now contains a dictionary with the tree data. If you print dataMap using PrettyPrint, you will get something like: { 'treeroot': { 'branch1': { 'branch1-1': { 'name': 'Node 1-1' }, 'name': 'Node 1' }, 'branch2': { 'branch2-1': { 'name': 'Node 2-1' }, 'name': 'Node 2' } } } So, now we have seen how to get data into our Python program. Saving data is just as easy: with open('newtree.yaml', \"w\") as f: yaml.dump(dataMap, f) You have a dictionary, and now you have to convert it to a Python object: class Struct: def __init__(self, **entries): self.__dict__.update(entries) Then you can use: >>> args = your YAML dictionary >>> s = Struct(**args) >>> s <__main__.Struct instance at 0x01D6A738> >>> s... and follow \"Convert Python dict to object\". For more information you can look at pyyaml.org and this."} +{"question_id": 14225676, "score": 132, "creation_date": 1357686745, "tags": ["python", "pandas", "openpyxl"], "instruction": "Save list of DataFrames to multisheet Excel spreadsheet\n\nHow can I export a list of DataFrames into one Excel spreadsheet? The docs for to_excel state: Notes If passing an existing ExcelWriter object, then the sheet will be added to the existing workbook. This can be used to save different DataFrames to one workbook writer = ExcelWriter('output.xlsx') df1.to_excel(writer, 'sheet1') df2.to_excel(writer, 'sheet2') writer.save() Following this, I thought I could write a function which saves a list of DataFrames to one spreadsheet as follows: from openpyxl.writer.excel import ExcelWriter def save_xls(list_dfs, xls_path): writer = ExcelWriter(xls_path) for n, df in enumerate(list_dfs): df.to_excel(writer,'sheet%s' % n) writer.save() However (with a list of two small DataFrames, each of which can save to_excel individually), an exception is raised (Edit: traceback removed): AttributeError: 'str' object has no attribute 'worksheets' Presumably I am not calling ExcelWriter correctly, how should I be in order to do this?", "output": "You should be using pandas own ExcelWriter class: from pandas import ExcelWriter # from pandas.io.parsers import ExcelWriter Then the save_xls function works as expected: def save_xls(list_dfs, xls_path): with ExcelWriter(xls_path) as writer: for n, df in enumerate(list_dfs): df.to_excel(writer,'sheet%s' % n)"} +{"question_id": 905189, "score": 132, "creation_date": 1243221044, "tags": ["python", "python-2.6"], "instruction": "Why does sys.exit() not exit when called inside a thread in Python?\n\nI am confused as to why the following code snippet would not exit when called in the thread, but would exit when called in the main thread. import sys, time from threading import Thread def testexit(): time.sleep(5) sys.exit() print \"post thread exit\" t = Thread(target = testexit) t.start() t.join() print \"pre main exit, post thread exit\" sys.exit() print \"post main exit\" The docs for sys.exit() state that the call should exit from Python. 
I can see from the output of this program that \"post thread exit\" is never printed, but the main thread just keeps on going even after the thread calls exit. Is a separate instance of the interpreter being created for each thread, and the call to exit() is just exiting that separate instance? If so, how does the threading implementation manage access to shared resources? What if I did want to exit the program from the thread (not that I actually want to, but just so I understand)?", "output": "sys.exit() raises the SystemExit exception, as does thread.exit(). So, when sys.exit() raises that exception inside that thread, it has the same effect as calling thread.exit(), which is why only the thread exits."} +{"question_id": 7116889, "score": 132, "creation_date": 1313727488, "tags": ["python"], "instruction": "Is module __file__ attribute absolute or relative?\n\nI'm having trouble understanding __file__. From what I understand, __file__ returns the absolute path from which the module was loaded. I'm having problem producing this: I have a abc.py with one statement print __file__, running from /d/projects/ python abc.py returns abc.py. running from /d/ returns projects/abc.py. Any reasons why?", "output": "__file__ is guaranteed to be an absolute path in Python 3.9+. In Python 3.4 (changelog) Module __file__ attributes (and related values) should now always contain absolute paths by default, with the sole exception of __main__.__file__ when a script has been executed directly using a relative path. In Python 3.9 (changelog): ... the __file__ attribute of the __main__ module became an absolute path From the documentation: The pathname of the file from which the module was loaded, if it was loaded from a file. The __file__ attribute may be missing for certain types of modules, such as C modules that are statically linked into the interpreter. For extension modules loaded dynamically from a shared library, it's the pathname of the shared library file. From the mailing list thread linked by @kindall in a comment to the question: I haven't tried to repro this particular example, but the reason is that we don't want to have to call getpwd() on every import nor do we want to have some kind of in-process variable to cache the current directory. (getpwd() is relatively slow and can sometimes fail outright, and trying to cache it has a certain risk of being wrong.) What we do instead, is code in site.py that walks over the elements of sys.path and turns them into absolute paths. However this code runs before '' is inserted in the front of sys.path, so that the initial value of sys.path is ''. For the rest of this, consider sys.path not to include ''. So, if you are outside the part of sys.path that contains the module, you'll get an absolute path. If you are inside the part of sys.path that contains the module, you'll get a relative path. If you load a module in the current directory, and the current directory isn't in sys.path, you'll get an absolute path. If you load a module in the current directory, and the current directory is in sys.path, you'll get a relative path."} +{"question_id": 41417679, "score": 132, "creation_date": 1483293635, "tags": ["python", "python-typing"], "instruction": "How to annotate a type that's a class object (instead of a class instance)?\n\nWhat is the proper way to annotate a function argument that expects a class object instead of an instance of that class? 
In the example below, some_class argument is expected to be a type instance (which is a class), but the problem here is that type is too broad: def construct(some_class: type, related_data:Dict[str, Any]) -> Any: ... In the case where some_class expects a specific set of types objects, using type does not help at all. The typing module might be in need of a Class generic that does this: def construct(some_class: Class[Union[Foo, Bar, Baz]], related_data:Dict[str, Any]) -> Union[Foo, Bar, Baz]: ... In the example above, some_class is the Foo, Bar or Faz class, not an instance of it. It should not matter their positions in the class tree because some_class: Class[Foo] should also be a valid case. Therefore, # classes are callable, so it is OK inst = some_class(**related_data) or # instances does not have __name__ clsname = some_class.__name__ or # an operation that only Foo, Bar and Baz can perform. some_class.a_common_classmethod() should be OK to mypy, pytype, PyCharm, etc. How can this be done with current implementation (Python 3.6 or earlier)?", "output": "To annotate an object that is a class, use typing.Type. For example, this would tell the type checker that some_class is class Foo or any of its subclasses: from typing import Type class Foo: ... class Bar(Foo): ... class Baz: ... some_class: Type[Foo] some_class = Foo # ok some_class = Bar # ok some_class = Baz # error some_class = Foo() # error Note that Type[Union[Foo, Bar, Baz]] and Union[Type[Foo], Type[Bar], Type[Baz]] are completely equivalent. If some_class could be any of a number of classes, you may want to make them all inherit from the same base class, and use Type[BaseClass]. Note that the inheritance must be non-virtual for now (mypy support for virtual inheritance is being discussed)."} +{"question_id": 8820276, "score": 131, "creation_date": 1326289372, "tags": ["python"], "instruction": "Docstring for variable\n\nIs it posible to use docstring for plain variable? For example I have module called t def f(): \"\"\"f\"\"\" l = lambda x: x \"\"\"l\"\"\" and I do >>> import t >>> t.f.__doc__ 'f' but >>> t.l.__doc__ >>> Example is similar to PEP 258's (search for \"this is g\").", "output": "Use typing.Annotated to provide a docstring for variables. I originally wrote an answer (see below) where I said this wasn't possible. That was true back in 2012 but Python has moved on. Today you can provide the equivalent of a docstring for a global variable or an attribute of a class or instance. You will need to be running at least Python 3.9 for this to work: from __future__ import annotations from typing import Annotated Feet = Annotated[float, \"feet\"] Seconds = Annotated[float, \"seconds\"] MilesPerHour = Annotated[float, \"miles per hour\"] day: Seconds = 86400 legal_limit: Annotated[MilesPerHour, \"UK national limit for single carriageway\"] = 60 current_speed: MilesPerHour def speed(distance: Feet, time: Seconds) -> MilesPerHour: \"\"\"Calculate speed as distance over time\"\"\" fps2mph = 3600 / 5280 # Feet per second to miles per hour return distance / time * fps2mph You can access the annotations at run time using typing.get_type_hints(): Python 3.9.1 (default, Jan 19 2021, 09:36:39) [Clang 10.0.1 (clang-1001.0.46.4)] on darwin Type \"help\", \"copyright\", \"credits\" or \"license\" for more information. 
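>>> # (assumption: the annotated module shown above has been saved as calc.py)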
>>> import calc >>> from typing import get_type_hints >>> hints = get_type_hints(calc, include_extras=True) >>> hints {'day': typing.Annotated[float, 'seconds'], 'legal_limit': typing.Annotated[float, 'miles per hour', 'UK national limit for single carriageway'], 'current_speed': typing.Annotated[float, 'miles per hour']} Extract information about variables using the hints for the module or class where they were declared. Notice how the annotations combine when you nest them: >>> hints['legal_limit'].__metadata__ ('miles per hour', 'UK national limit for single carriageway') >>> hints['day'] typing.Annotated[float, 'seconds'] It even works for variables that have type annotations but have not been assigned a value. If I tried to reference calc.current_speed I would get an attribute error but I can still access its metadata: >>> hints['current_speed'].__metadata__ ('miles per hour',) The type hints for a module only include the global variables, to drill down you need to call get_type_hints() again on functions or classes: >>> get_type_hints(calc.speed, include_extras=True) {'distance': typing.Annotated[float, 'feet'], 'time': typing.Annotated[float, 'seconds'], 'return': typing.Annotated[float, 'miles per hour']} I only know of one tool so far that can use typing.Annotated to store documentation about a variable and that is Pydantic. It is slightly more complicated than just storing a docstring though it actually expects an instance of pydantic.Field. Here's an example: from typing import Annotated import typing_extensions from pydantic import Field from pydantic.main import BaseModel from datetime import date # TypeAlias is in typing_extensions for Python 3.9: FirstName: typing_extensions.TypeAlias = Annotated[str, Field( description=\"The subject's first name\", example=\"Linus\" )] class Subject(BaseModel): # Using an annotated type defined elsewhere: first_name: FirstName = \"\" # Documenting a field inline: last_name: Annotated[str, Field( description=\"The subject's last name\", example=\"Torvalds\" )] = \"\" # Traditional method without using Annotated # Field needs an extra argument for the default value date_of_birth: date = Field( ..., description=\"The subject's date of birth\", example=\"1969-12-28\", ) Using the model class: >>> guido = Subject(first_name='Guido', last_name='van Rossum', date_of_birth=date(1956, 1, 31)) >>> print(guido) first_name='Guido' last_name='van Rossum' date_of_birth=datetime.date(1956, 1, 31) Pydantic models can give you a JSON schema: >>> from pprint import pprint >>> pprint(Subject.schema()) {'properties': {'date_of_birth': {'description': \"The subject's date of birth\", 'example': '1969-12-28', 'format': 'date', 'title': 'Date Of Birth', 'type': 'string'}, 'first_name': {'default': '', 'description': \"The subject's first name\", 'example': 'Linus', 'title': 'First Name', 'type': 'string'}, 'last_name': {'default': '', 'description': \"The subject's last name\", 'example': 'Torvalds', 'title': 'Last Name', 'type': 'string'}}, 'required': ['date_of_birth'], 'title': 'Subject', 'type': 'object'} >>> If you use this class in a FastAPI application the OpenApi specification has example and description for all three of these taken from the relevant Field. And here's the original answer which was true back then but hasn't stood the test of time: No, it is not possible and it wouldn't be useful if you could. The docstring is always an attribute of an object (module, class or function), not tied to a specific variable. 
That means if you could do: t = 42 t.__doc__ = \"something\" # this raises AttributeError: '__doc__' is read-only you would be setting the documentation for the integer 42 not for the variable t. As soon as you rebind t you lose the docstring. Immutable objects such as numbers of strings sometimes have a single object shared between different users, so in this example you would probably actually have set the docstring for all occurences of 42 throughout your program. print(42 .__doc__) # would print \"something\" if the above worked! For mutable objects it wouldn't necessarily be harmful but would still be of limited use if you rebind the object. If you want to document an attribute of a class then use the class's docstring to describe it."} +{"question_id": 18717877, "score": 131, "creation_date": 1378812847, "tags": ["python", "matplotlib", "jupyter-notebook", "figures"], "instruction": "prevent plot from showing in jupyter notebook\n\nHow can I prevent a specific plot to be shown in Jupyter notebook? I have several plots in a notebook but I want a subset of them to be saved to a file and not shown on the notebook as this slows considerably. A minimal working example for a Jupyter notebook is: %matplotlib inline from numpy.random import randn from matplotlib.pyplot import plot, figure a=randn(3) b=randn(3) for i in range(10): fig=figure() plot(b) fname='s%03d.png'%i fig.savefig(fname) if(i%5==0): figure() plot(a) As you can see I have two types of plots, a and b. I want a's to be plotted and shown and I don't want the b plots to be shown, I just want them them to be saved in a file. Hopefully this will speed things a bit and won't pollute my notebook with figures I don't need to see. Thank you for your time", "output": "Perhaps just clear the axis, for example: fig = plt.figure() plt.plot(range(10)) fig.savefig(\"save_file_name.pdf\") plt.close() This will not plot the output in inline mode. I can't work out if it is really clearing the data though."} +{"question_id": 25754405, "score": 131, "creation_date": 1410300889, "tags": ["python", "datetime", "twitter", "tweepy"], "instruction": "How can I extract hours and minutes from a datetime.datetime object?\n\nI am required to extract the time of the day from the datetime.datetime object returned by the created_at attribute, but how can I do that? This is my code for getting the datetime.datetime object. 
from datetime import * import tweepy consumer_key = '' consumer_secret = '' access_token = '' access_secret = '' auth = tweepy.OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_secret) api = tweepy.API(auth) tweets = tweepy.Cursor(api.home_timeline).items(limit = 2) t1 = datetime.strptime('Wed Jun 01 12:53:42 +0000 2011', '%a %b %d %H:%M:%S +0000 %Y') for tweet in tweets: print (tweet.created_at - t1) t1 = tweet.created_at I need to only extract the hour and minutes from t1.", "output": "I don't know how you want to format it, but you can do: print(\"Created at %s:%s\" % (t1.hour, t1.minute)) for example."} +{"question_id": 15138614, "score": 131, "creation_date": 1362063356, "tags": ["python"], "instruction": "How can I read the contents of an URL with Python?\n\nThe following works when I paste it on the browser: http://www.somesite.com/details.pl?urn=2344 But when I try reading the URL with Python nothing happens: link = 'http://www.somesite.com/details.pl?urn=2344' f = urllib.urlopen(link) myfile = f.readline() print myfile Do I need to encode the URL, or is there something I'm not seeing?", "output": "To answer your question: import urllib.request link = \"http://www.somesite.com/details.pl?urn=2344\" f = urllib.request.urlopen(link) myfile = f.read() print(myfile) You need to read(), not readline() Also see answers by Martin Thoma or i.n.n.m within this question: Python 2/3 compat, Python 3 Alternatively, with requests, import requests link = \"http://www.somesite.com/details.pl?urn=2344\" f = requests.get(link) print(f.text)"} +{"question_id": 22860085, "score": 131, "creation_date": 1396606954, "tags": ["python", "html", "flask", "jinja2"], "instruction": "How do I include a HTML file in a Jinja2 template?\n\nI am using Flask micro-framework for my server which uses Jinja templates. I have a parent template.html and some children templates called child1.html and child2.html, some of these children templates are pretty large HTML files and I would like to somehow split them for better lucidity over my work. Contents of my main.py script: from flask import Flask, request, render_template app = Flask(__name__) @app.route('/') @app.route('/<task>') def home(task=''): return render_template('child1.html', task=task) app.run() The simplified template.html: <!DOCTYPE html> <html> <head></head> <body> <div class=\"container\"> {% block content %}{% endblock %} </div> </body> </html> The magic is in child1.html: {% extends 'template.html' %} {% block content %} {% if task == 'content1' %} <!-- include content1.html --> {% endif %} {% if task == 'content2' %} <!-- include content2.html --> {% endif %} {% endblock %} Instead of the comments: <!-- include content1.html --> I have a lot of html text, and it is very hard to keep track of changes and not to make some mistakes, which are then pretty hard to find and correct. I'd like to just load the content1.html instead of writing it all in child1.html. I came across this question, but I had problems implementing it. I think Jinja2 might have a better tool for that. NOTE: The code above might not be working properly, I just wrote it to illustrate the problem.", "output": "Use the jinja2 {% include %} directive. 
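{# a sketch of the reworked child1.html; content1.html and content2.html are assumed to sit on the template search path #}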
{% extends 'template.html' %} {% block content %} {% if task == 'content1' %} {% include 'content1.html' %} {% endif %} {% if task == 'content2' %} {% include 'content2.html' %} {% endif %} {% endblock %} This will include the content from the correct content-file."} +{"question_id": 9010222, "score": 131, "creation_date": 1327525186, "tags": ["python", "hash", "dictionary", "set", "equality"], "instruction": "Why can a Python dict have multiple keys with the same hash?\n\nI am trying to understand the Python hash function under the hood. I created a custom class where all instances return the same hash value. class C: def __hash__(self): return 42 I just assumed that only one instance of the above class can be in a dict at any time, but in fact a dict can have multiple elements with the same hash. c, d = C(), C() x = {c: 'c', d: 'd'} print(x) # {<__main__.C object at 0x7f0824087b80>: 'c', <__main__.C object at 0x7f0823ae2d60>: 'd'} # note that the dict has 2 elements I experimented a little more and found that if I override the __eq__ method such that all the instances of the class compare equal, then the dict only allows one instance. class D: def __hash__(self): return 42 def __eq__(self, other): return True p, q = D(), D() y = {p: 'p', q: 'q'} print(y) # {<__main__.D object at 0x7f0823a9af40>: 'q'} # note that the dict only has 1 element So I am curious to know how a dict can have multiple elements with the same hash.", "output": "For a detailed description of how Python's hashing works see my answer to Why is early return slower than else? Basically it uses the hash to pick a slot in the table. If there is a value in the slot and the hash matches, it compares the items to see if they are equal. If the hash matches but the items aren't equal, then it tries another slot. There's a formula to pick this (which I describe in the referenced answer), and it gradually pulls in unused parts of the hash value; but once it has used them all up, it will eventually work its way through all slots in the hash table. That guarantees eventually we either find a matching item or an empty slot. When the search finds an empty slot, it inserts the value or gives up (depending whether we are adding or getting a value). The important thing to note is that there are no lists or buckets: there is just a hash table with a particular number of slots, and each hash is used to generate a sequence of candidate slots."} +{"question_id": 44620013, "score": 131, "creation_date": 1497823776, "tags": ["python", "matplotlib", "patch"], "instruction": "Title for matplotlib legend\n\nI know it seems fairly redundant to have a title for a legend, but is it possible using matplotlib? Here's a snippet of the code I have: import matplotlib.patches as mpatches import matplotlib.pyplot as plt one = mpatches.Patch(facecolor='#f3f300', label='label1', linewidth = 0.5, edgecolor = 'black') two = mpatches.Patch(facecolor='#ff9700', label = 'label2', linewidth = 0.5, edgecolor = 'black') three = mpatches.Patch(facecolor='#ff0000', label = 'label3', linewidth = 0.5, edgecolor = 'black') legend = plt.legend(handles=[one, two, three], loc = 4, fontsize = 'small', fancybox = True) frame = legend.get_frame() #sets up for color, edge, and transparency frame.set_facecolor('#b4aeae') #color of legend frame.set_edgecolor('black') #edge color of legend frame.set_alpha(1) #deals with transparency plt.show() I would want the title of the legend above label1. 
For reference, this is the output:", "output": "Add the title parameter to this line: legend = plt.legend(handles=[one, two, three], title=\"title\", loc=4, fontsize='small', fancybox=True) See also the official docs for the legend constructor."} +{"question_id": 33626623, "score": 131, "creation_date": 1447147435, "tags": ["python", "list", "performance"], "instruction": "The most efficient way to remove the first N elements from a list\n\nI need to remove the first N elements from a list of objects. Is there an easy way, without using loops?", "output": "You can use list slicing to achieve your goal. Remove the first 5 elements: n = 5 mylist = [1,2,3,4,5,6,7,8,9] newlist = mylist[n:] print(newlist) Outputs: [6, 7, 8, 9] Or del if you only want to use one list: n = 5 mylist = [1,2,3,4,5,6,7,8,9] del mylist[:n] print(mylist) Outputs: [6, 7, 8, 9]"} +{"question_id": 18595686, "score": 131, "creation_date": 1378220935, "tags": ["python", "sorting", "operator-keyword"], "instruction": "How do operator.itemgetter() and sort() work?\n\nI have the following code: # initialize a = [] # create the table (name, age, job) a.append([\"Nick\", 30, \"Doctor\"]) a.append([\"John\", 8, \"Student\"]) a.append([\"Paul\", 22, \"Car Dealer\"]) a.append([\"Mark\", 66, \"Retired\"]) # sort the table by age import operator a.sort(key=operator.itemgetter(1)) # print the table print(a) It creates a 4x3 table and then it sorts it by age. My question is: what exactly does key=operator.itemgetter(1) do? Does the operator.itemgetter function return the item's value? Why can't I just type something like key=a[x][1] there? Or can I? How could I use operator to print a certain value of the table, like the one at [3,2], which is 22? How exactly does Python sort the table? Can I reverse-sort it? How can I sort it based on two columns, like first by age, and then, if the age is the same, by name? How could I do it without operator?", "output": "Looks like you're a little bit confused about all that stuff. operator is a built-in module providing a set of convenient operators. In two words, operator.itemgetter(n) constructs a callable that assumes an iterable object (e.g. list, tuple, set) as input, and fetches the n-th element out of it. So, you can't use key=a[x][1] there, because python has no idea what x is. Instead, you could use a lambda function (elem is just a variable name, no magic there): a.sort(key=lambda elem: elem[1]) Or just an ordinary function: def get_second_elem(iterable): return iterable[1] a.sort(key=get_second_elem) So, here's an important note: in python functions are first-class citizens, so you can pass them to other functions as a parameter. Other questions: Yes, you can reverse sort, just add reverse=True: a.sort(key=..., reverse=True) To sort by more than one column you can use itemgetter with multiple indices: operator.itemgetter(1,2), or with lambda: lambda elem: (elem[1], elem[2]). This way, iterables are constructed on the fly for each item in the list, which are then compared against each other in lexicographic order (first elements compared, if equal - second elements compared, etc) You can fetch the value at [3,2] using a[2][1] (indices are zero-based). Using operator... It's possible, but not as clean as just indexing. 
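For illustration, a small sketch reusing the table a from the question (plain indexing remains the cleaner choice):
import operator
row = operator.itemgetter(2)(a) # ['Paul', 22, 'Car Dealer']
age = operator.itemgetter(1)(row) # 22, the same value as a[2][1]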
Refer to the documentation for details: operator.itemgetter explained Sorting list by custom key in Python"} +{"question_id": 23248017, "score": 131, "creation_date": 1398264900, "tags": ["python", "pycharm"], "instruction": "Cannot find reference 'xxx' in __init__.py\n\nI have a project in PyCharm organized as follows: -- Sources |--__init__.py |--Calculators |--__init__.py |--Filters.py |--Controllers |--__init__.py |--FiltersController.py |--Viewers |--__init__.py |--DataVisualization.py |--Models |--__init__.py |--Data All of my __init__.py, except for the one right above Sources are blank files. I am receiving a lot of warnings of the kind: Cannot find reference 'xxx' in __init__.py For example, my FiltersController.py has this piece of code: import numpy.random as npr bootstrap = npr.choice(image_base.data[max(0, x-2):x+3, max(0, y-2):y+3].flatten(), size=(3, 3), replace=True) And I get this warning: Cannot find reference 'choice' in __init__.py I'm googling wondering what does this mean and what should I do to code properly in Python.", "output": "This is a bug in pycharm. PyCharm seems to be expecting the referenced module to be included in an __all__ = [] statement. For proper coding etiquette, should you include the __all__ statement from your modules? ..this is actually the question we hear young Spock answering while he was being tested, to which he responded: \"It is morally praiseworthy but not morally obligatory.\" To get around it, you can simply disable that (extremely non-critical) (highly useful) inspection globally, or suppress it for the specific function or statement. To do so: put the caret over the erroring text ('choice', from your example above) Bring up the intention menu (alt-enter by default, mine is set to alt-backspace) hit the right arrow to open the submenu, and select the relevant action PyCharm has its share of small bugs like this, but in my opinion its benefits far outweigh its drawbacks. If you'd like to try another good IDE, there's also Spyder/Spyderlib. Edited: Originally, I thought that this was specific to checking __all__, but it looks like it's the more general 'Unresolved References' check, which can be very useful. It's probably best to use statement-level disabling of the feature, either by using the menu as mentioned above, or by specifying # noinspection PyUnresolvedReferences on the line preceding the statement."} +{"question_id": 13270877, "score": 131, "creation_date": 1352295295, "tags": ["python", "installation", "pip", "growl"], "instruction": "How to manually install a pypi module without pip/easy_install?\n\nI want to use the gntp module to display toaster-like notifications for C/C++ software. I want to package all the dependencies for the software to be self-executable on another computer. The gntp module is only available through the pip installer, which cannot be used (the computer running the software does not have an internet connection) How can I install it from source? I would prefer not to force the user to install easy_install/pip and manually add the pip path to the %PATH. PS: I'm using Python 2.7 on a Windows machine.", "output": "Download the package unzip it if it is zipped cd into the directory containing setup.py If there are any installation instructions contained in documentation, read and follow the instructions OTHERWISE type in python setup.py install You may need administrator privileges for step 5. What you do here depends on your operating system. 
For example in Ubuntu you would say sudo python setup.py install EDIT- thanks to kwatford (see first comment) To bypass the need for administrator privileges during step 5 above you may be able to make use of the --user flag. This way you can install the package only for the current user. The docs say: Files will be installed into subdirectories of site.USER_BASE (written as userbase hereafter). This scheme installs pure Python modules and extension modules in the same location (also known as site.USER_SITE). More details can be found here: http://docs.python.org/2.7/install/index.html"} +{"question_id": 41567895, "score": 131, "creation_date": 1484048243, "tags": ["python", "tensorflow", "scikit-learn", "k-means", "neuraxle"], "instruction": "Will scikit-learn utilize GPU?\n\nReading implementation of scikit-learn in TensorFlow: http://learningtensorflow.com/lesson6/ and scikit-learn: http://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html I'm struggling to decide which implementation to use. scikit-learn is installed as part of the tensorflow docker container so can use either implementation. Reason to use scikit-learn : scikit-learn contains less boilerplate than the tensorflow implementation. Reason to use tensorflow : If running on Nvidia GPU the algorithm will be run against in parallel , I'm not sure if scikit-learn will utilize all available GPUs? Reading https://www.quora.com/What-are-the-main-differences-between-TensorFlow-and-SciKit-Learn TensorFlow is more low-level; basically, the Lego bricks that help you to implement machine learning algorithms whereas scikit-learn offers you off-the-shelf algorithms, e.g., algorithms for classification such as SVMs, Random Forests, Logistic Regression, and many, many more. TensorFlow shines if you want to implement deep learning algorithms, since it allows you to take advantage of GPUs for more efficient training. This statement re-enforces my assertion that \"scikit-learn contains less boilerplate than the tensorflow implementation\" but also suggests scikit-learn will not utilize all available GPUs?", "output": "Tensorflow only uses GPU if it is built against Cuda and CuDNN. By default it does not use GPU, especially if it is running inside Docker, unless you use nvidia-docker and an image with a built-in support. Scikit-learn is not intended to be used as a deep-learning framework and it does not provide any GPU support. Why is there no support for deep or reinforcement learning / Will there be support for deep or reinforcement learning in scikit-learn? Deep learning and reinforcement learning both require a rich vocabulary to define an architecture, with deep learning additionally requiring GPUs for efficient computing. However, neither of these fit within the design constraints of scikit-learn; as a result, deep learning and reinforcement learning are currently out of scope for what scikit-learn seeks to achieve. Extracted from http://scikit-learn.org/stable/faq.html#why-is-there-no-support-for-deep-or-reinforcement-learning-will-there-be-support-for-deep-or-reinforcement-learning-in-scikit-learn Will you add GPU support in scikit-learn? No, or at least not in the near future. The main reason is that GPU support will introduce many software dependencies and introduce platform specific issues. scikit-learn is designed to be easy to install on a wide variety of platforms. 
Outside of neural networks, GPUs don\u2019t play a large role in machine learning today, and much larger gains in speed can often be achieved by a careful choice of algorithms. Extracted from http://scikit-learn.org/stable/faq.html#will-you-add-gpu-support"} +{"question_id": 19019720, "score": 131, "creation_date": 1380171540, "tags": ["python", "opencv", "dll", "path"], "instruction": "ImportError: DLL load failed: %1 is not a valid Win32 application. But the DLL's are there\n\nI have a situation very much like the one at Error \"ImportError: DLL load failed: %1 is not a valid Win32 application\", but the answer there isn't working for me. My Python code says: import cv2 But that line throws the error shown in the title of this question. I have OpenCV installed in C:\\lib\\opencv on this 64-bit machine. I'm using 64-bit Python. My PYTHONPATH variable: PYTHONPATH=C:\\lib\\opencv\\build\\python\\2.7. This folder contains cv2.pyd and that's all. My PATH variable: Path=%OPENCV_DIR%\\bin;... This folder contains 39 DLL files such as opencv_core246d.dll. OPENCV_DIR has this value: OPENCV_DIR=C:\\lib\\opencv\\build\\x64\\vc11. The solution at Error \"ImportError: DLL load failed: %1 is not a valid Win32 application\" says to add \"the new opencv binaries path (C:\\opencv\\build\\bin\\Release) to the Windows PATH environment variable\". But as shown above, I already have the OpenCV binaries folder (C:\\lib\\opencv\\build\\x64\\vc11\\bin) in my PATH. And my OpenCV installation doesn't have any Release folders (except for an empty one under build/java). What's going wrong? Can I tell Python to verbosely trace the loading process? Exactly what DLL files is it looking for? I noticed that, according to http://www.dependencywalker.com/, the cv2.pyd in C:\\lib\\opencv\\build\\python\\2.7 is 32-bit, whereas the machine and the Python I'm running are 64-bit. Could that be the problem? And if so, where can I find a 64-bit version of cv2.pyd?", "output": "Unofficial Windows Binaries for Python Extension Packages You can find any Python libraries from here."} +{"question_id": 69440494, "score": 131, "creation_date": 1633370696, "tags": ["python", "python-typing", "mypy", "python-3.10"], "instruction": "Python 3.10+: Optional[Type] or Type | None\n\nNow that Python 3.10 has been released, is there any preference when indicating that a parameter or returned value might be optional, i.e., can be None. So what is preferred: Option 1: def f(parameter: Optional[int]) -> Optional[str]: Option 2: def f(parameter: int | None) -> str | None: Also, is there any preference between Type | None and None | Type?", "output": "PEP 604 covers these topics in the specification section. The existing typing.Union and | syntax should be equivalent. int | str == typing.Union[int, str] The order of the items in the Union should not matter for equality. (int | str) == (str | int) (int | str | float) == typing.Union[str, float, int] Optional values should be equivalent to the new union syntax None | t == typing.Optional[t] As @jonrsharpe comments, Union and Optional are not deprecated, so the Union and | syntax are acceptable. 
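As a small, self-contained illustration (Python 3.10+ only; the function names are invented for the example), the two spellings can be mixed freely and compare equal at runtime:
from typing import Optional, Union

def find_user(user_id: int | None) -> str | None:
    # identical in meaning to the Optional[...] spelling below
    return None if user_id is None else f'user-{user_id}'

def find_user_legacy(user_id: Optional[int]) -> Optional[str]:
    return find_user(user_id)

assert (int | None) == Optional[int]
assert (int | str) == Union[int, str]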
\u0141ukasz Langa, a Python core developer, replied on a YouTube live related to the Python 3.10 release that Type | None is preferred over Optional[Type] for Python 3.10+."} +{"question_id": 41457612, "score": 131, "creation_date": 1483511634, "tags": ["python", "pip", "requirements.txt", "freetype"], "instruction": "How to fix error with freetype while installing all packages from a requirements.txt file?\n\nI ran the following command to install dependencies for a Python project? # pip install requirements.txt Collecting requirements.txt Could not find a version that satisfies the requirement requirements.txt (from versions: ) No matching distribution found for requirements.txt I searched on Google and found this post: python pip trouble installing from requirements.txt but I don't quite understand what the solution was in that post. Here is my requirements.txt file: # cat requirements.txt ordereddict==1.1 argparse==1.2.1 python-dateutil==2.2 matplotlib==1.3.1 nose==1.3.0 numpy==1.8.0 pymongo==3.3.0 psutil>=2.0 I then tried to do pip3 install -r requirements.txt and here is the output: # pip3 install -r requirements.txt Requirement already satisfied: ordereddict==1.1 in /usr/local/lib/python3.5/dist-packages (from -r requirements.txt (line 1)) Collecting argparse==1.2.1 (from -r requirements.txt (line 2)) Using cached argparse-1.2.1.tar.gz Collecting python-dateutil==2.2 (from -r requirements.txt (line 3)) Using cached python-dateutil-2.2.tar.gz Collecting matplotlib==1.3.1 (from -r requirements.txt (line 4)) Using cached matplotlib-1.3.1.tar.gz Complete output from command python setup.py egg_info: ============================================================================ Edit setup.cfg to change the build options BUILDING MATPLOTLIB matplotlib: yes [1.3.1] python: yes [3.5.2 (default, Nov 17 2016, 17:05:23) [GCC 5.4.0 20160609]] platform: yes [linux] REQUIRED DEPENDENCIES AND EXTENSIONS numpy: yes [version 1.11.3] dateutil: yes [using dateutil version 2.6.0] tornado: yes [tornado was not found. It is required for the WebAgg backend. pip/easy_install may attempt to install it after matplotlib.] pyparsing: yes [using pyparsing version 2.1.10] pycxx: yes [Official versions of PyCXX are not compatible with Python 3.x. Using local copy] libagg: yes [pkg-config information for 'libagg' could not be found. Using local copy.] freetype: no [The C/C++ header for freetype2 (ft2build.h) could not be found. You may need to install the development package.] png: yes [pkg-config information for 'libpng' could not be found. Using unknown version.] OPTIONAL SUBPACKAGES sample_data: yes [installing] toolkits: yes [installing] tests: yes [using nose version 1.3.7] OPTIONAL BACKEND EXTENSIONS macosx: no [Mac OS-X only] qt4agg: no [PyQt4 not found] gtk3agg: no [gtk3agg backend does not work on Python 3] gtk3cairo: no [Requires cairo to be installed.] gtkagg: no [Requires pygtk] tkagg: no [TKAgg requires Tkinter.] 
wxagg: no [requires wxPython] gtk: no [Requires pygtk] agg: yes [installing] cairo: no [cairo not found] windowing: no [Microsoft Windows only] OPTIONAL LATEX DEPENDENCIES dvipng: no ghostscript: no latex: no pdftops: no ============================================================================ * The following required packages can not be built: * freetype ---------------------------------------- Command \"python setup.py egg_info\" failed with error code 1 in /tmp/pip-build-don4ne_2/matplotlib/ I have already installed libfreetype6-dev but the pip command still reports missing this dependency. # apt-get install libfreetype6-dev Reading package lists... Done Building dependency tree Reading state information... Done libfreetype6-dev is already the newest version (2.6.1-0.1ubuntu2). 0 upgraded, 0 newly installed, 0 to remove and 4 not upgraded. Is there an easy way to install all required dependencies for this python project?", "output": "If you are using Linux OS: Remove matplotlib==1.3.1 from requirements.txt Try to install matplotlib with sudo apt-get install python-matplotlib Install all packages again from requirements.txt For Python 2: run pip install -r requirements.txt For Python 3: run pip3 install -r requirements.txt pip freeze > requirements.txt If you are using Windows OS: python -m pip install -U pip setuptools python -m pip install matplotlib"} +{"question_id": 55050988, "score": 131, "creation_date": 1551985103, "tags": ["python", "google-colaboratory"], "instruction": "Can I run a Google Colab (free edition) script and then shut down my computer?\n\nCan I run a google colab (free edition) script and then shut down my computer? I am training several deeplearning models with crossvalidation, and therefore I would like to know if I can close the window or the computer with the training running at the same time in the cloud.", "output": "2024 Update: At the time of writing the original answer, Google Colaboratory was an entirely free service and there was no official support for background execution of a notebook as the question describes. While we previously arrived at some estimates for notebook lifetimes through anecdotal experience, these estimates are no longer reliable and you will find that Colaboratory prunes your notebook runtimes much more aggressively now. The answer to the question now would be simply no, you may not reliably have a Google Colab notebook running in the background without shelling out for a Colab Pro+ subscription, or using a workaround that infringes on Colab's usage policies. If you're in a position where you do not have access to GPUs, nor the funds to pay for access to them but need them for long-running tasks, you may take a look at this non-exhaustive list of alternatives. However, they all come with usage restrictions, or involve a learning curve to use: Saturn Cloud Free Plan The Saturn Cloud platform provides 150 free compute hours per month Amazon SageMaker As part of the AWS Free Tier, you get 250 hours of usage of an ML instance for the first two months Colab Pro+ If you bite the bullet and sign up for a Colab Pro+ subscription, you may run a notebook in the background for up to 24 hours as long as you have available compute units. Previous answer circa 2019: With the browser closed, a Colabs instance will run for at most 12 hours 90 minutes before your runtime is considered idle and is recycled. 
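If you do leave a job running, a common safeguard is to mount Google Drive and write checkpoints there, so a recycled runtime does not wipe your progress. A rough sketch (the model object, the Keras-style save call and the file names are hypothetical):
from google.colab import drive
drive.mount('/content/drive')

import os
os.makedirs('/content/drive/MyDrive/checkpoints', exist_ok=True)

# ... after, or periodically during, training ...
model.save('/content/drive/MyDrive/checkpoints/model_latest.keras')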
At the same time, it would be good practice to save your model weights periodically to avoid losing work."} +{"question_id": 29208984, "score": 131, "creation_date": 1427109727, "tags": ["android", "python", "html", "webview"], "instruction": "Cannot display HTML string\n\nI am struggling with display string HTML in Android WebView. On the server side, I downloaded a web page and escape HTML characters and quotes (I used Python): my_string = html.escape(my_string, True) On the Android client side: strings are unescaped by: myString = StringEscapeUtils.unescapeHtml4(myString) webview.loadData( myString, \"text/html\", \"encoding\"); However webview just display them as literal strings. Here are the result: Edit: I add original string returned from server side: \"<!DOCTYPE html> <html lang="en"> <head> <meta charset="utf-8"> <meta http-equiv="X-UA-Compatible" content="IE=edge"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <meta name="description" content=""> <title>Saulify</title> <!-- All the Favicons... --> <link rel="shortcut icon" href="/static/favicon/favicon.ico"> <link rel="apple-touch-icon" sizes="57x57" href="/static/favicon/apple-touch-icon-57x57.png"> <link rel="apple-touch-icon" sizes="114x114" href="/static/favicon/apple-touch-icon-114x114.png"> <link rel="apple-touch-icon" sizes="72x72" href="/static/favicon/apple-touch-icon-72x72.png"> <link rel="apple-touch-icon" sizes="144x144" href="/static/favicon/apple-touch-icon-144x144.png"> <link rel="apple-touch-icon" sizes="60x60" href="/static/favicon/apple-touch-icon-60x60.png"> <link rel="apple-touch-icon" sizes="120x120" href="/static/favicon/apple-touch-icon-120x120.png"> <link rel="apple-touch-icon" sizes="76x76" href="/static/favicon/apple-touch-icon-76x76.png"> <link rel="apple-touch-icon" sizes="152x152" href="/static/favicon/apple-touch-icon-152x152.png"> <link rel="apple-touch-icon" sizes="180x180" href="/static/favicon/apple-touch-icon-180x180.png"> <link rel="icon" type="image/png" href="/static/favicon/favicon-192x192.png" sizes="192x192"> <link rel="icon" type="image/png" href="/static/favicon/favicon-160x160.png" sizes="160x160"> <link rel="icon" type="image/png" href="/static/favicon/favicon-96x96.png" sizes="96x96"> <link rel="icon" type="image/png" href="/static/favicon/favicon-16x16.png" sizes="16x16"> <link rel="icon" type="image/png" href="/static/favicon/favicon-32x32.png" sizes="32x32"> <meta name="msapplication-TileColor" content="#da532c"> <meta name="msapplication-TileImage" content="/static/favicon/mstile-144x144.png"> <meta name="msapplication-config" content="/static/favicon/browserconfig.xml"> <!-- External CSS --> <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.2.0/css/bootstrap.min.css"> <!-- External Fonts --> <link href="//maxcdn.bootstrapcdn.com/font-awesome/4.2.0/css/font-awesome.min.css" rel="stylesheet"> <link href='//fonts.googleapis.com/css?family=Open+Sans:300,600' rel='stylesheet' type='text/css'> <link href='//fonts.googleapis.com/css?family=Lora:400,700' rel='stylesheet' type='text/css'> <!--[if lt IE 9]> <script src="//cdnjs.cloudflare.com/ajax/libs/html5shiv/3.7.2/html5shiv.min.js"></script> <script src="//cdnjs.cloudflare.com/ajax/libs/respond.js/1.4.2/respond.min.js"></script> <![endif]--> <!-- Site CSS --> <link rel="stylesheet" type="text/css" href="/static/css/style.css"> <link rel="stylesheet" type="text/css" href="/static/css/glyphicon.css"> </head> <body> <div class="container article-page"> <div class="row"> <div class="col-md-8 
col-md-offset-2"> <h2><a href="http://www.huffingtonpost.com/2015/03/22/ted-cruz-climate-change_n_6919002.html">Gov. Jerry Brown Says Ted Cruz Is &#39;Absolutely Unfit&#39; To Run For Office Because Of Climate Change Views</a></h2> <h4>Sam Levine</h4> <div class="article"> <p>California Gov. Jerry Brown (D) said on Sunday that Texas Sen. Ted Cruz (R-Texas) is "absolutely unfit to be running for office" because of his position on climate change.</p> <p>"I just came back from New Hampshire, where there's snow and ice everywhere. My view on this is simple: Debates on this should follow science and should follow data, and many of the alarmists on global warming, they have a problem because the science doesn't back them up," Cruz <a href="https://www.youtube.com/watch?v=m0UJ_Sc0Udk">said</a> on "Late Night with Seth Meyers" last week.</p> <p>To back up his claim, Cruz cited satellite data that has shown a lack of significant warming over the last 17 years. But Cruz's reasoning <a href="http://www.politifact.com/truth-o-meter/statements/2015/mar/20 /ted-cruz/ted-cruzs-worlds-fire-not-last-17-years/">has been debunked by Politifact</a>, which has shown that scientists have ample evidence to believe that the climate will continue to warm.</p> <p>"What he said is absolutely false,\u201d Brown said on <a href="http://www.nbcnews.com/meet-the-press/california-governor-ted-cruz- unfit-be-running-n328046">NBC's "Meet the Press."</a> He added that <a href="http://climate.nasa.gov/scientific-consensus/">over 90 percent</a> of scientists who study the climate agree that climate change is caused by human activity. "That man betokens such a level of ignorance and a direct falsification of existing scientific data. It's shocking, and I think that man has rendered himself absolutely unfit to be running for office," Brown said.</p> <p>Brown added that climate change has <a href="http://www.huffingtonpost.com/2015/03/06/california-drought-february- record_n_6820704.html?utm_hp_ref=california-drought">caused droughts in his state</a>, as well as severe cold and storms on the east coast.</p> <p>While Cruz may have seen snow and ice everywhere in New Hampshire, data shows that the country is actually experiencing a <a href="http://www.huffingtonpost.com/2015/02/19/cold-weather- winter_n_6713104.html">warmer than average</a> winter.</p> <p>Brown\u2019s criticism of Cruz comes one day before the Texas senator is set to announce a <a href="http://www.huffingtonpost.com/2015/03/22 /ted-cruz-2016_n_6917824.html">presidential campaign</a>. 
</p> </div> <div class="original"> <a href="http://www.huffingtonpost.com/2015/03/22/ted-cruz-climate-change_n_6919002.html">VIEW ORIGINAL</a> </div> </div> </div> </div> <script src="//code.jquery.com/jquery-latest.js"></script> <script src="/static/js/modal.js"></script> <script src="/static/js/bootbox.min.js"></script> <script src="/static/js/site.js"></script> <script> (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){ (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o), m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m) })(window,document,'script','//www.google-analytics.com/analytics.js','ga'); ga('create', 'UA-56257533-1', 'auto'); ga('send', 'pageview'); </script> </body> </html>\"", "output": "I have modified the code here: public class test extends Activity { private WebView wv; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.test); wv = (WebView) findViewById(R.id.wv); String s = \"<!DOCTYPE html> <html lang="en"> <head> <meta charset="utf-8"> <meta http-equiv="X-UA-Compatible" content="IE=edge"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <meta name="description" content=""> <title>Saulify</title> <!-- All the Favicons... --> <link rel="shortcut icon" href="/static/favicon/favicon.ico"> <link rel="apple-touch-icon" sizes="57x57" href="/static/favicon/apple-touch-icon-57x57.png"> <link rel="apple-touch-icon" sizes="114x114" href="/static/favicon/apple-touch-icon-114x114.png"> <link rel="apple-touch-icon" sizes="72x72" href="/static/favicon/apple-touch-icon-72x72.png"> <link rel="apple-touch-icon" sizes="144x144" href="/static/favicon/apple-touch-icon-144x144.png"> <link rel="apple-touch-icon" sizes="60x60" href="/static/favicon/apple-touch-icon-60x60.png"> <link rel="apple-touch-icon" sizes="120x120" href="/static/favicon/apple-touch-icon-120x120.png"> <link rel="apple-touch-icon" sizes="76x76" href="/static/favicon/apple-touch-icon-76x76.png"> <link rel="apple-touch-icon" sizes="152x152" href="/static/favicon/apple-touch-icon-152x152.png"> <link rel="apple-touch-icon" sizes="180x180" href="/static/favicon/apple-touch-icon-180x180.png"> <link rel="icon" type="image/png" href="/static/favicon/favicon-192x192.png" sizes="192x192"> <link rel="icon" type="image/png" href="/static/favicon/favicon-160x160.png" sizes="160x160"> <link rel="icon" type="image/png" href="/static/favicon/favicon-96x96.png" sizes="96x96"> <link rel="icon" type="image/png" href="/static/favicon/favicon-16x16.png" sizes="16x16"> <link rel="icon" type="image/png" href="/static/favicon/favicon-32x32.png" sizes="32x32"> <meta name="msapplication-TileColor" content="#da532c"> <meta name="msapplication-TileImage" content="/static/favicon/mstile-144x144.png"> <meta name="msapplication-config" content="/static/favicon/browserconfig.xml"> <!-- External CSS --> <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.2.0/css/bootstrap.min.css"> <!-- External Fonts --> <link href="//maxcdn.bootstrapcdn.com/font-awesome/4.2.0/css/font-awesome.min.css" rel="stylesheet"> <link href='//fonts.googleapis.com/css?family=Open+Sans:300,600' rel='stylesheet' type='text/css'> <link href='//fonts.googleapis.com/css?family=Lora:400,700' rel='stylesheet' type='text/css'> <!--[if lt IE 9]> <script src="//cdnjs.cloudflare.com/ajax/libs/html5shiv/3.7.2/html5shiv.min.js"></script> <script 
src="//cdnjs.cloudflare.com/ajax/libs/respond.js/1.4.2/respond.min.js"></script> <![endif]--> <!-- Site CSS --> <link rel="stylesheet" type="text/css" href="/static/css/style.css"> <link rel="stylesheet" type="text/css" href="/static/css/glyphicon.css"> </head> <body> <div class="container article-page"> <div class="row"> <div class="col-md-8 col-md-offset-2"> <h2><a href="http://www.huffingtonpost.com/2015/03/22/ted-cruz-climate-change_n_6919002.html">Gov. Jerry Brown Says Ted Cruz Is &#39;Absolutely Unfit&#39; To Run For Office Because Of Climate Change Views</a></h2> <h4>Sam Levine</h4> <div class="article"> <p>California Gov. Jerry Brown (D) said on Sunday that Texas Sen. Ted Cruz (R-Texas) is "absolutely unfit to be running for office" because of his position on climate change.</p> <p>"I just came back from New Hampshire, where there's snow and ice everywhere. My view on this is simple: Debates on this should follow science and should follow data, and many of the alarmists on global warming, they have a problem because the science doesn't back them up," Cruz <a href="https://www.youtube.com/watch?v=m0UJ_Sc0Udk">said</a> on "Late Night with Seth Meyers" last week.</p> <p>To back up his claim, Cruz cited satellite data that has shown a lack of significant warming over the last 17 years. But Cruz's reasoning <a href="http://www.politifact.com/truth-o-meter/statements/2015/mar/20 /ted-cruz/ted-cruzs-worlds-fire-not-last-17-years/">has been debunked by Politifact</a>, which has shown that scientists have ample evidence to believe that the climate will continue to warm.</p> <p>"What he said is absolutely false,\u201d Brown said on <a href="http://www.nbcnews.com/meet-the-press/california-governor-ted-cruz- unfit-be-running-n328046">NBC's "Meet the Press."</a> He added that <a href="http://climate.nasa.gov/scientific-consensus/">over 90 percent</a> of scientists who study the climate agree that climate change is caused by human activity. "That man betokens such a level of ignorance and a direct falsification of existing scientific data. It's shocking, and I think that man has rendered himself absolutely unfit to be running for office," Brown said.</p> <p>Brown added that climate change has <a href="http://www.huffingtonpost.com/2015/03/06/california-drought-february- record_n_6820704.html?utm_hp_ref=california-drought">caused droughts in his state</a>, as well as severe cold and storms on the east coast.</p> <p>While Cruz may have seen snow and ice everywhere in New Hampshire, data shows that the country is actually experiencing a <a href="http://www.huffingtonpost.com/2015/02/19/cold-weather- winter_n_6713104.html">warmer than average</a> winter.</p> <p>Brown\u2019s criticism of Cruz comes one day before the Texas senator is set to announce a <a href="http://www.huffingtonpost.com/2015/03/22 /ted-cruz-2016_n_6917824.html">presidential campaign</a>. 
</p> </div> <div class="original"> <a href="http://www.huffingtonpost.com/2015/03/22/ted-cruz-climate-change_n_6919002.html">VIEW ORIGINAL</a> </div> </div> </div> </div> <script src="//code.jquery.com/jquery-latest.js"></script> <script src="/static/js/modal.js"></script> <script src="/static/js/bootbox.min.js"></script> <script src="/static/js/site.js"></script> <script> (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){ (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o), m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m) })(window,document,'script','//www.google-analytics.com/analytics.js','ga'); ga('create', 'UA-56257533-1', 'auto'); ga('send', 'pageview'); </script> </body> </html>\"; wv.loadData(stripHtml(s), \"text/html\", \"UTF-8\"); } public String stripHtml(String html) { return Html.fromHtml(html).toString(); } }"} +{"question_id": 1970807, "score": 130, "creation_date": 1262024480, "tags": ["python", "python-imaging-library"], "instruction": "Center-/middle-align text with PIL?\n\nHow would I center-align (and middle-vertical-align) text when using PIL?", "output": "Deprecation Warning: textsize is deprecated and will be removed in Pillow 10 (2023-07-01). Use textbbox or textlength instead. Code using textbbox instead of textsize. from PIL import Image, ImageDraw, ImageFont def create_image(size, bgColor, message, font, fontColor): W, H = size image = Image.new('RGB', size, bgColor) draw = ImageDraw.Draw(image) _, _, w, h = draw.textbbox((0, 0), message, font=font) draw.text(((W-w)/2, (H-h)/2), message, font=font, fill=fontColor) return image myFont = ImageFont.truetype('Roboto-Regular.ttf', 16) myMessage = 'Hello World' myImage = create_image((300, 200), 'yellow', myMessage, myFont, 'black') myImage.save('hello_world.png', \"PNG\") Result Use Draw.textsize method to calculate text size and re-calculate position accordingly. Here is an example: from PIL import Image, ImageDraw W, H = (300,200) msg = \"hello\" im = Image.new(\"RGBA\",(W,H),\"yellow\") draw = ImageDraw.Draw(im) w, h = draw.textsize(msg) draw.text(((W-w)/2,(H-h)/2), msg, fill=\"black\") im.save(\"hello.png\", \"PNG\") and the result: If your fontsize is different, include the font like this: myFont = ImageFont.truetype(\"my-font.ttf\", 16) draw.textsize(msg, font=myFont)"} +{"question_id": 16249736, "score": 130, "creation_date": 1367049547, "tags": ["python", "mongodb", "pandas", "pymongo"], "instruction": "How to import data from mongodb to pandas?\n\nI have a large amount of data in a collection in mongodb which I need to analyze. How do i import that data to pandas? I am new to pandas and numpy. EDIT: The mongodb collection contains sensor values tagged with date and time. The sensor values are of float datatype. 
Sample Data: { \"_cls\" : \"SensorReport\", \"_id\" : ObjectId(\"515a963b78f6a035d9fa531b\"), \"_types\" : [ \"SensorReport\" ], \"Readings\" : [ { \"a\" : 0.958069536790466, \"_types\" : [ \"Reading\" ], \"ReadingUpdatedDate\" : ISODate(\"2013-04-02T08:26:35.297Z\"), \"b\" : 6.296118156595, \"_cls\" : \"Reading\" }, { \"a\" : 0.95574014778624, \"_types\" : [ \"Reading\" ], \"ReadingUpdatedDate\" : ISODate(\"2013-04-02T08:27:09.963Z\"), \"b\" : 6.29651468650064, \"_cls\" : \"Reading\" }, { \"a\" : 0.953648289182713, \"_types\" : [ \"Reading\" ], \"ReadingUpdatedDate\" : ISODate(\"2013-04-02T08:27:37.545Z\"), \"b\" : 7.29679823731148, \"_cls\" : \"Reading\" }, { \"a\" : 0.955931884300997, \"_types\" : [ \"Reading\" ], \"ReadingUpdatedDate\" : ISODate(\"2013-04-02T08:28:21.369Z\"), \"b\" : 6.29642922525632, \"_cls\" : \"Reading\" }, { \"a\" : 0.95821381, \"_types\" : [ \"Reading\" ], \"ReadingUpdatedDate\" : ISODate(\"2013-04-02T08:41:20.801Z\"), \"b\" : 7.28956613, \"_cls\" : \"Reading\" }, { \"a\" : 4.95821335, \"_types\" : [ \"Reading\" ], \"ReadingUpdatedDate\" : ISODate(\"2013-04-02T08:41:36.931Z\"), \"b\" : 6.28956574, \"_cls\" : \"Reading\" }, { \"a\" : 9.95821341, \"_types\" : [ \"Reading\" ], \"ReadingUpdatedDate\" : ISODate(\"2013-04-02T08:42:09.971Z\"), \"b\" : 0.28956488, \"_cls\" : \"Reading\" }, { \"a\" : 1.95667927, \"_types\" : [ \"Reading\" ], \"ReadingUpdatedDate\" : ISODate(\"2013-04-02T08:43:55.463Z\"), \"b\" : 0.29115237, \"_cls\" : \"Reading\" } ], \"latestReportTime\" : ISODate(\"2013-04-02T08:43:55.463Z\"), \"sensorName\" : \"56847890-0\", \"reportCount\" : 8 }", "output": "pymongo might give you a hand, followings is some code I'm using: import pandas as pd from pymongo import MongoClient def _connect_mongo(host, port, username, password, db): \"\"\" A util for making a connection to mongo \"\"\" if username and password: mongo_uri = 'mongodb://%s:%s@%s:%s/%s' % (username, password, host, port, db) conn = MongoClient(mongo_uri) else: conn = MongoClient(host, port) return conn[db] def read_mongo(db, collection, query={}, host='localhost', port=27017, username=None, password=None, no_id=True): \"\"\" Read from Mongo and Store into DataFrame \"\"\" # Connect to MongoDB db = _connect_mongo(host=host, port=port, username=username, password=password, db=db) # Make a query to the specific DB and Collection cursor = db[collection].find(query) # Expand the cursor and construct the DataFrame df = pd.DataFrame(list(cursor)) # Delete the _id if no_id: del df['_id'] return df"} +{"question_id": 1058712, "score": 130, "creation_date": 1246286542, "tags": ["python", "arrays", "random"], "instruction": "How do I select a random element from an array in Python?\n\nThe first examples that I googled didn't work. This should be trivial, right?", "output": "import random mylist=['Pinky', 'Blinky', 'Inky', 'Clyde'] random.choice (mylist)"} +{"question_id": 18659858, "score": 130, "creation_date": 1378476702, "tags": ["python", "random"], "instruction": "Generating a list of random numbers, summing to 1\n\nThis question is not a duplicate of Getting N random numbers whose sum is M because: Most answers there are about theory, not a specific coding solution in python to answer this question The accepted answer here is 5 years older than the one answer in the duplicate that answers this question. The duplicate accepted answer does not answer this question How would I make a list of N (say 100) random numbers, so that their sum is 1? 
I can make a list of random numbers with r = [ran.random() for i in range(1,100)] How would I modify this so that the list sums to 1 (this is for a probability simulation).", "output": "The simplest solution is indeed to take N random values and divide by the sum. A more generic solution is to use the Dirichlet distribution which is available in numpy. By changing the parameters of the distribution you can change the \"randomness\" of individual numbers >>> import numpy as np, numpy.random >>> print np.random.dirichlet(np.ones(10),size=1) [[ 0.01779975 0.14165316 0.01029262 0.168136 0.03061161 0.09046587 0.19987289 0.13398581 0.03119906 0.17598322]] >>> print np.random.dirichlet(np.ones(10)/1000.,size=1) [[ 2.63435230e-115 4.31961290e-209 1.41369771e-212 1.42417285e-188 0.00000000e+000 5.79841280e-143 0.00000000e+000 9.85329725e-005 9.99901467e-001 8.37460207e-246]] >>> print np.random.dirichlet(np.ones(10)*1000.,size=1) [[ 0.09967689 0.10151585 0.10077575 0.09875282 0.09935606 0.10093678 0.09517132 0.09891358 0.10206595 0.10283501]] Depending on the main parameter the Dirichlet distribution will either give vectors where all the values are close to 1./N where N is the length of the vector, or give vectors where most of the values of the vectors will be ~0 , and there will be a single 1, or give something in between those possibilities. EDIT (5 years after the original answer): Another useful fact about the Dirichlet distribution is that you naturally get it, if you generate a Gamma-distributed set of random variables and then divide them by their sum."} +{"question_id": 36571560, "score": 130, "creation_date": 1460459446, "tags": ["python", "python-3.x", "file"], "instruction": "Directing print output to a .txt file\n\nIs there a way to save all of the print output to a txt file in python? Lets say I have the these two lines in my code and I want to save the print output to a file named output.txt. print (\"Hello stackoverflow!\") print (\"I have a question.\") I want the output.txt file to to contain Hello stackoverflow! I have a question.", "output": "Give print a file keyword argument, where the value of the argument is a file stream. The best practice is to open the file with the open function using a with block, which will ensure that the file gets closed for you at the end of the block: with open(\"output.txt\", \"a\") as f: print(\"Hello stackoverflow!\", file=f) print(\"I have a question.\", file=f) From the Python documentation about print: The file argument must be an object with a write(string) method; if it is not present or None, sys.stdout will be used. And the documentation for open: Open file and return a corresponding file object. If the file cannot be opened, an OSError is raised. The \"a\" as the second argument of open means \"append\" - in other words, the existing contents of the file won't be overwritten. If you want the file to be overwritten instead at the beginning of the with block, use \"w\". The with block is useful because, otherwise, you'd need to remember to close the file yourself like this: f = open(\"output.txt\", \"a\") print(\"Hello stackoverflow!\", file=f) print(\"I have a question.\", file=f) f.close()"} +{"question_id": 1377446, "score": 130, "creation_date": 1252043675, "tags": ["python", "html", "django", "pdf", "pdf-generation"], "instruction": "Render HTML to PDF in Django site\n\nFor my django powered site, I am looking for an easy solution to convert dynamic html pages to pdf. 
Pages include HTML and charts from Google visualization API (which is javascript based, yet including those graphs is a must).", "output": "Try the solution from Reportlab. Download it and install it as usual with python setup.py install You will also need to install the following modules: xhtml2pdf, html5lib, pypdf with easy_install. Here is an usage example: First define this function: import cStringIO as StringIO from xhtml2pdf import pisa from django.template.loader import get_template from django.template import Context from django.http import HttpResponse from cgi import escape def render_to_pdf(template_src, context_dict): template = get_template(template_src) context = Context(context_dict) html = template.render(context) result = StringIO.StringIO() pdf = pisa.pisaDocument(StringIO.StringIO(html.encode(\"ISO-8859-1\")), result) if not pdf.err: return HttpResponse(result.getvalue(), content_type='application/pdf') return HttpResponse('We had some errors<pre>%s</pre>' % escape(html)) Then you can use it like this: def myview(request): #Retrieve data or whatever you need return render_to_pdf( 'mytemplate.html', { 'pagesize':'A4', 'mylist': results, } ) The template: <!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 4.01 Transitional//EN\" \"http://www.w3.org/TR/html4/loose.dtd\"> <html> <head> <title>My Title
</title> </head> <body> {% for item in mylist %} RENDER MY CONTENT {% endfor %}
{%block page_foot%} Page <pdf:pagenumber> {%endblock%} </body> </html>
"} +{"question_id": 62658215, "score": 130, "creation_date": 1593522500, "tags": ["python", "machine-learning", "scikit-learn", "logistic-regression"], "instruction": "ConvergenceWarning: lbfgs failed to converge (status=1): STOP: TOTAL NO. of ITERATIONS REACHED LIMIT\n\nI have a dataset consisting of both numeric and categorical data and I want to predict adverse outcomes for patients based on their medical characteristics. I defined a prediction pipeline for my dataset like so: X = dataset.drop(columns=['target']) y = dataset['target'] # define categorical and numeric transformers numeric_transformer = Pipeline(steps=[ ('knnImputer', KNNImputer(n_neighbors=2, weights=\"uniform\")), ('scaler', StandardScaler())]) categorical_transformer = Pipeline(steps=[ ('imputer', SimpleImputer(strategy='constant', fill_value='missing')), ('onehot', OneHotEncoder(handle_unknown='ignore'))]) # dispatch object columns to the categorical_transformer and remaining columns to numerical_transformer preprocessor = ColumnTransformer(transformers=[ ('num', numeric_transformer, selector(dtype_exclude=\"object\")), ('cat', categorical_transformer, selector(dtype_include=\"object\")) ]) # Append classifier to preprocessing pipeline. # Now we have a full prediction pipeline. clf = Pipeline(steps=[('preprocessor', preprocessor), ('classifier', LogisticRegression())]) X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2) clf.fit(X_train, y_train) print(\"model score: %.3f\" % clf.score(X_test, y_test)) However, when running this code, I get the following warning message: ConvergenceWarning: lbfgs failed to converge (status=1): STOP: TOTAL NO. of ITERATIONS REACHED LIMIT. Increase the number of iterations (max_iter) or scale the data as shown in: https://scikit-learn.org/stable/modules/preprocessing.html Please also refer to the documentation for alternative solver options: https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG) model score: 0.988 Can someone explain to me what this warning means? I am new to machine learning so am a little lost as to what I can do to improve the prediction model. As you can see from the numeric_transformer, I scaled the data through standardisation. I am also confused as to how the model score is quite high and whether this is a good or bad thing.", "output": "The warning means what it says: it suggests things to try to make the solver (the algorithm) converge. lbfgs stands for \"Limited-memory Broyden\u2013Fletcher\u2013Goldfarb\u2013Shanno Algorithm\". It is one of the solver algorithms provided by the scikit-learn library. The term limited-memory simply means it stores only a few vectors that implicitly represent the gradient approximation. It has better convergence on relatively small datasets. But what is algorithm convergence? In simple words: if the error of the solution stays within a very small range (i.e., it is almost not changing), then the algorithm has reached a solution (not necessarily the best solution, as it might be stuck at a so-called \"local optimum\"). On the other hand, if the error is still varying noticeably (even if the error is relatively small [like in your case, where the score was good], the differences between the errors per iteration are greater than some tolerance), then we say the algorithm did not converge.
Now, you need to know that Scikit-Learn API sometimes provides the user the option to specify the maximum number of iterations the algorithm should take while it's searching for the solution in an iterative manner: LogisticRegression(... solver='lbfgs', max_iter=100 ...) As you can see, the default solver in LogisticRegression is 'lbfgs' and the maximum number of iterations is 100 by default. Final words, please, however, note that increasing the maximum number of iterations does not necessarily guarantee convergence, but it certainly helps! Update: Based on your comment below, some tips to try (out of many) that might help the algorithm to converge are: Increase the number of iterations: As in this answer; Try a different optimizer: Look here; Scale your data: Look here; Add engineered features: Look here; Data pre-processing: Look here - use case and here; Add more data: Look here."} +{"question_id": 2537471, "score": 130, "creation_date": 1269862063, "tags": ["python", "sqlalchemy"], "instruction": "method of iterating over sqlalchemy model's defined columns?\n\nI've been trying to figure out how to iterate over the list of columns defined in a SQLAlchemy model. I want it for writing some serialization and copy methods to a couple of models. I can't just iterate over the obj.__dict__ since it contains a lot of SA specific items. Anyone know of a way to just get the id and desc names from the following? class JobStatus(Base): __tablename__ = 'jobstatus' id = Column(Integer, primary_key=True) desc = Column(Unicode(20)) In this small case I could easily create a: def logme(self): return {'id': self.id, 'desc': self.desc} but I'd prefer something that auto-generates the dict (for larger objects).", "output": "You could use the following function: def __unicode__(self): return \"[%s(%s)]\" % ( self.__class__.__name__, \", \".join( \"%s=%s\" % (k, self.__dict__[k]) for k in sorted(self.__dict__) if \"_sa_\" != k[:4] ), ) It will exclude SA magic attributes, but will not exclude the relations. So basically it might load the dependencies, parents, children etc, which is definitely not desirable. But it is actually much easier because if you inherit from Base, you have a __table__ attribute, so that you can do: for c in JobStatus.__table__.columns: print c for c in JobStatus.__table__.foreign_keys: print c See How to discover table properties from SQLAlchemy mapped object - similar question. Edit by Mike: Please see functions such as Mapper.c and Mapper.mapped_table. If using 0.8 and higher also see Mapper.attrs and related functions. Example for Mapper.attrs: from sqlalchemy import inspect mapper = inspect(JobStatus) for column in mapper.attrs: print column.key"} +{"question_id": 25169297, "score": 130, "creation_date": 1407355779, "tags": ["python", "math", "numpy", "logarithm"], "instruction": "NumPy: Logarithm with base n\n\nFrom the numpy documentation on logarithms, I have found functions to take the logarithm with base e, 2, and 10: import numpy as np np.log(np.e**3) #3.0 np.log2(2**3) #3.0 np.log10(10**3) #3.0 However, how do I take the logarithm with base n (e.g. 
42) in numpy?", "output": "If you have numpy 1.23 or later, you can use np.emath.logn: import numpy as np array = np.array([74088, 3111696]) # = [42^3, 42^4] base = 42 exponent = np.emath.logn(base, array) # = [3, 4] If your version of numpy is older: To get the logarithm with a custom base using math.log: import math number = 74088 # = 42^3 base = 42 exponent = math.log(number, base) # = 3 To get the logarithm with a custom base using numpy.log: import numpy as np array = np.array([74088, 3111696]) # = [42^3, 42^4] base = 42 exponent = np.log(array) / np.log(base) # = [3, 4] Which uses the logarithm base change rule:"} +{"question_id": 37872171, "score": 130, "creation_date": 1466130468, "tags": ["python", "scipy", "interpolation"], "instruction": "How can I perform two-dimensional interpolation using scipy?\n\nThis Q&A is intended as a canonical(-ish) concerning two-dimensional (and multi-dimensional) interpolation using scipy. There are often questions concerning the basic syntax of various multidimensional interpolation methods, I hope to set these straight too. I have a set of scattered two-dimensional data points, and I would like to plot them as a nice surface, preferably using something like contourf or plot_surface in matplotlib.pyplot. How can I interpolate my two-dimensional or multidimensional data to a mesh using scipy? I've found the scipy.interpolate sub-package, but I keep getting errors when using interp2d or bisplrep or griddata or RBFInterpolator (or the older Rbf). What is the proper syntax of these methods?", "output": "Disclaimer: I'm mostly writing this post with syntactical considerations and general behaviour in mind. I'm not familiar with the memory and CPU aspect of the methods described, and I aim this answer at those who have reasonably small sets of data, such that the quality of the interpolation can be the main aspect to consider. I am aware that when working with very large data sets, the better-performing methods (namely griddata and RBFInterpolator without a neighbors keyword argument) might not be feasible. Note that this answer uses the new RBFInterpolator class introduced in SciPy 1.7.0. For the legacy Rbf class see the previous version of this answer. I'm going to compare three kinds of multi-dimensional interpolation methods (interp2d/splines, griddata and RBFInterpolator). I will subject them to two kinds of interpolation tasks and two kinds of underlying functions (points from which are to be interpolated). The specific examples will demonstrate two-dimensional interpolation, but the viable methods are applicable in arbitrary dimensions. Each method provides various kinds of interpolation; in all cases I will use cubic interpolation (or something close1). It's important to note that whenever you use interpolation you introduce bias compared to your raw data, and the specific methods used affect the artifacts that you will end up with. Always be aware of this, and interpolate responsibly. 
The two interpolation tasks will be upsampling (input data is on a rectangular grid, output data is on a denser grid) interpolation of scattered data onto a regular grid The two functions (over the domain [x, y] in [-1, 1]x[-1, 1]) will be a smooth and friendly function: cos(pi*x)*sin(pi*y); range in [-1, 1] an evil (and in particular, non-continuous) function: x*y / (x^2 + y^2) with a value of 0.5 near the origin; range in [-0.5, 0.5] Here's how they look: I will first demonstrate how the three methods behave under these four tests, then I'll detail the syntax of all three. If you know what you should expect from a method, you might not want to waste your time learning its syntax (looking at you, interp2d). Test data For the sake of explicitness, here is the code with which I generated the input data. While in this specific case I'm obviously aware of the function underlying the data, I will only use this to generate input for the interpolation methods. I use numpy for convenience (and mostly for generating the data), but scipy alone would suffice too. import numpy as np import scipy.interpolate as interp # auxiliary function for mesh generation def gimme_mesh(n): minval = -1 maxval = 1 # produce an asymmetric shape in order to catch issues with transpositions return np.meshgrid(np.linspace(minval, maxval, n), np.linspace(minval, maxval, n + 1)) # set up underlying test functions, vectorized def fun_smooth(x, y): return np.cos(np.pi*x) * np.sin(np.pi*y) def fun_evil(x, y): # watch out for singular origin; function has no unique limit there return np.where(x**2 + y**2 > 1e-10, x*y/(x**2+y**2), 0.5) # sparse input mesh, 6x7 in shape N_sparse = 6 x_sparse, y_sparse = gimme_mesh(N_sparse) z_sparse_smooth = fun_smooth(x_sparse, y_sparse) z_sparse_evil = fun_evil(x_sparse, y_sparse) # scattered input points, 10^2 altogether (shape (100,)) N_scattered = 10 rng = np.random.default_rng() x_scattered, y_scattered = rng.random((2, N_scattered**2))*2 - 1 z_scattered_smooth = fun_smooth(x_scattered, y_scattered) z_scattered_evil = fun_evil(x_scattered, y_scattered) # dense output mesh, 20x21 in shape N_dense = 20 x_dense, y_dense = gimme_mesh(N_dense) Smooth function and upsampling Let's start with the easiest task. Here's how an upsampling from a mesh of shape [6, 7] to one of [20, 21] works out for the smooth test function: Even though this is a simple task, there are already subtle differences between the outputs. At a first glance all three outputs are reasonable. There are two features to note, based on our prior knowledge of the underlying function: the middle case of griddata distorts the data most. Note the y == -1 boundary of the plot (nearest the x label): the function should be strictly zero (since y == -1 is a nodal line for the smooth function), yet this is not the case for griddata. Also note the x == -1 boundary of the plots (behind, to the left): the underlying function has a local maximum (implying zero gradient near the boundary) at [-1, -0.5], yet the griddata output shows clearly non-zero gradient in this region. The effect is subtle, but it's a bias none the less. Evil function and upsampling A bit harder task is to perform upsampling on our evil function: Clear differences are starting to show among the three methods. Looking at the surface plots, there are clear spurious extrema appearing in the output from interp2d (note the two humps on the right side of the plotted surface). 
While griddata and RBFInterpolator seem to produce similar results at first glance, producing local minima near [0.4, -0.4] that is absent from the underlying function. However, there is one crucial aspect in which RBFInterpolator is far superior: it respects the symmetry of the underlying function (which is of course also made possible by the symmetry of the sample mesh). The output from griddata breaks the symmetry of the sample points, which is already weakly visible in the smooth case. Smooth function and scattered data Most often one wants to perform interpolation on scattered data. For this reason I expect these tests to be more important. As shown above, the sample points were chosen pseudo-uniformly in the domain of interest. In realistic scenarios you might have additional noise with each measurement, and you should consider whether it makes sense to interpolate your raw data to begin with. Output for the smooth function: Now there's already a bit of a horror show going on. I clipped the output from interp2d to between [-1, 1] exclusively for plotting, in order to preserve at least a minimal amount of information. It's clear that while some of the underlying shape is present, there are huge noisy regions where the method completely breaks down. The second case of griddata reproduces the shape fairly nicely, but note the white regions at the border of the contour plot. This is due to the fact that griddata only works inside the convex hull of the input data points (in other words, it doesn't perform any extrapolation). I kept the default NaN value for output points lying outside the convex hull.2 Considering these features, RBFInterpolator seems to perform best. Evil function and scattered data And the moment we've all been waiting for: It's no huge surprise that interp2d gives up. In fact, during the call to interp2d you should expect some friendly RuntimeWarnings complaining about the impossibility of the spline to be constructed. As for the other two methods, RBFInterpolator seems to produce the best output, even near the borders of the domain where the result is extrapolated. So let me say a few words about the three methods, in decreasing order of preference (so that the worst is the least likely to be read by anybody). scipy.interpolate.RBFInterpolator The RBF in the name of the RBFInterpolator class stands for \"radial basis functions\". To be honest I've never considered this approach until I started researching for this post, but I'm pretty sure I'll be using these in the future. Just like the spline-based methods (see later), usage comes in two steps: first one creates a callable RBFInterpolator class instance based on the input data, and then calls this object for a given output mesh to obtain the interpolated result. 
Example from the smooth upsampling test: import scipy.interpolate as interp sparse_points = np.stack([x_sparse.ravel(), y_sparse.ravel()], -1) # shape (N, 2) in 2d dense_points = np.stack([x_dense.ravel(), y_dense.ravel()], -1) # shape (N, 2) in 2d zfun_smooth_rbf = interp.RBFInterpolator(sparse_points, z_sparse_smooth.ravel(), smoothing=0, kernel='cubic') # explicit default smoothing=0 for interpolation z_dense_smooth_rbf = zfun_smooth_rbf(dense_points).reshape(x_dense.shape) # not really a function, but a callable class instance zfun_evil_rbf = interp.RBFInterpolator(sparse_points, z_sparse_evil.ravel(), smoothing=0, kernel='cubic') # explicit default smoothing=0 for interpolation z_dense_evil_rbf = zfun_evil_rbf(dense_points).reshape(x_dense.shape) # not really a function, but a callable class instance Note that we had to do some array building gymnastics to make the API of RBFInterpolator happy. Since we have to pass the 2d points as arrays of shape (N, 2), we have to flatten the input grid and stack the two flattened arrays. The constructed interpolator also expects query points in this format, and the result will be a 1d array of shape (N,) which we have to reshape back to match our 2d grid for plotting. Since RBFInterpolator makes no assumptions about the number of dimensions of the input points, it supports arbitrary dimensions for interpolation. So, scipy.interpolate.RBFInterpolator produces well-behaved output even for crazy input data supports interpolation in higher dimensions extrapolates outside the convex hull of the input points (of course extrapolation is always a gamble, and you should generally not rely on it at all) creates an interpolator as a first step, so evaluating it in various output points is less additional effort can have output point arrays of arbitrary shape (as opposed to being constrained to rectangular meshes, see later) more likely to preserving the symmetry of the input data supports multiple kinds of radial functions for keyword kernel: multiquadric, inverse_multiquadric, inverse_quadratic, gaussian, linear, cubic, quintic, thin_plate_spline (the default). As of SciPy 1.7.0 the class doesn't allow passing a custom callable due to technical reasons, but this is likely to be added in a future version. can give inexact interpolations by increasing the smoothing parameter One drawback of RBF interpolation is that interpolating N data points involves inverting an N x N matrix. This quadratic complexity very quickly blows up memory need for a large number of data points. However, the new RBFInterpolator class also supports a neighbors keyword parameter that restricts computation of each radial basis function to k nearest neighbours, thereby reducing memory need. scipy.interpolate.griddata My former favourite, griddata, is a general workhorse for interpolation in arbitrary dimensions. It doesn't perform extrapolation beyond setting a single preset value for points outside the convex hull of the nodal points, but since extrapolation is a very fickle and dangerous thing, this is not necessarily a con. Usage example: sparse_points = np.stack([x_sparse.ravel(), y_sparse.ravel()], -1) # shape (N, 2) in 2d z_dense_smooth_griddata = interp.griddata(sparse_points, z_sparse_smooth.ravel(), (x_dense, y_dense), method='cubic') # default method is linear Note that the same array transformations were necessary for the input arrays as for RBFInterpolator. 
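For large point sets, a sketch of that memory-saving variant, reusing the arrays defined earlier (the value 20 is an arbitrary choice):
# evaluate each query point using only its 20 nearest data points, keeping memory bounded
zfun_smooth_rbf_local = interp.RBFInterpolator(sparse_points, z_sparse_smooth.ravel(),
                                               neighbors=20, kernel='thin_plate_spline')
z_dense_smooth_rbf_local = zfun_smooth_rbf_local(dense_points).reshape(x_dense.shape)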
The input points have to be specified in an array of shape [N, D] in D dimensions, or alternatively as a tuple of 1d arrays: z_dense_smooth_griddata = interp.griddata((x_sparse.ravel(), y_sparse.ravel()), z_sparse_smooth.ravel(), (x_dense, y_dense), method='cubic') The output point arrays can be specified as a tuple of arrays of arbitrary dimensions (as in both above snippets), which gives us some more flexibility. In a nutshell, scipy.interpolate.griddata produces well-behaved output even for crazy input data supports interpolation in higher dimensions does not perform extrapolation, a single value can be set for the output outside the convex hull of the input points (see fill_value) computes the interpolated values in a single call, so probing multiple sets of output points starts from scratch can have output points of arbitrary shape supports nearest-neighbour and linear interpolation in arbitrary dimensions, cubic in 1d and 2d. Nearest-neighbour and linear interpolation use NearestNDInterpolator and LinearNDInterpolator under the hood, respectively. 1d cubic interpolation uses a spline, 2d cubic interpolation uses CloughTocher2DInterpolator to construct a continuously differentiable piecewise-cubic interpolator. might violate the symmetry of the input data scipy.interpolate.interp2d/scipy.interpolate.bisplrep The only reason I'm discussing interp2d and its relatives is that it has a deceptive name, and people are likely to try using it. Spoiler alert: don't use it. interp2d was deprecated in SciPy version 1.10, and will be removed in SciPy 1.12. See this mailing list discussion for details. It's also more special than the previous subjects in that it's specifically used for two-dimensional interpolation, but I suspect this is by far the most common case for multivariate interpolation. As far as syntax goes, interp2d is similar to RBFInterpolator in that it first needs constructing an interpolation instance, which can be called to provide the actual interpolated values. There's a catch, however: the output points have to be located on a rectangular mesh, so inputs going into the call to the interpolator have to be 1d vectors which span the output grid, as if from numpy.meshgrid: # reminder: x_sparse and y_sparse are of shape [6, 7] from numpy.meshgrid zfun_smooth_interp2d = interp.interp2d(x_sparse, y_sparse, z_sparse_smooth, kind='cubic') # default kind is 'linear' # reminder: x_dense and y_dense are of shape (20, 21) from numpy.meshgrid xvec = x_dense[0,:] # 1d array of unique x values, 20 elements yvec = y_dense[:,0] # 1d array of unique y values, 21 elements z_dense_smooth_interp2d = zfun_smooth_interp2d(xvec, yvec) # output is (20, 21)-shaped array One of the most common mistakes when using interp2d is putting your full 2d meshes into the interpolation call, which leads to explosive memory consumption, and hopefully to a hasty MemoryError. Now, the greatest problem with interp2d is that it often doesn't work. In order to understand this, we have to look under the hood. It turns out that interp2d is a wrapper for the lower-level functions bisplrep + bisplev, which are in turn wrappers for FITPACK routines (written in Fortran). 
The equivalent call to the previous example would be kind = 'cubic' if kind == 'linear': kx = ky = 1 elif kind == 'cubic': kx = ky = 3 elif kind == 'quintic': kx = ky = 5 # bisplrep constructs a spline representation, bisplev evaluates the spline at given points bisp_smooth = interp.bisplrep(x_sparse.ravel(), y_sparse.ravel(), z_sparse_smooth.ravel(), kx=kx, ky=ky, s=0) z_dense_smooth_bisplrep = interp.bisplev(xvec, yvec, bisp_smooth).T # note the transpose Now, here's the thing about interp2d: (in scipy version 1.7.0) there is a nice comment in interpolate/interpolate.py for interp2d: if not rectangular_grid: # TODO: surfit is really not meant for interpolation! self.tck = fitpack.bisplrep(x, y, z, kx=kx, ky=ky, s=0.0) and indeed in interpolate/fitpack.py, in bisplrep there's some setup and ultimately tx, ty, c, o = _fitpack._surfit(x, y, z, w, xb, xe, yb, ye, kx, ky, task, s, eps, tx, ty, nxest, nyest, wrk, lwrk1, lwrk2) And that's it. The routines underlying interp2d are not really meant to perform interpolation. They might suffice for sufficiently well-behaved data, but under realistic circumstances you will probably want to use something else. Just to conclude, interpolate.interp2d can lead to artifacts even with well-tempered data is specifically for bivariate problems (although there's the limited interpn for input points defined on a grid) performs extrapolation creates an interpolator as a first step, so evaluating it in various output points is less additional effort can only produce output over a rectangular grid, for scattered output you would have to call the interpolator in a loop supports linear, cubic and quintic interpolation might violate the symmetry of the input data 1I'm fairly certain that the cubic and linear kind of basis functions of RBFInterpolator do not exactly correspond to the other interpolators of the same name. 2These NaNs are also the reason for why the surface plot seems so odd: matplotlib historically has difficulties with plotting complex 3d objects with proper depth information. The NaN values in the data confuse the renderer, so parts of the surface that should be in the back are plotted to be in the front. This is an issue with visualization, and not interpolation."} +{"question_id": 33161448, "score": 130, "creation_date": 1444961367, "tags": ["python", "list", "iterable-unpacking"], "instruction": "Getting only element from a single-element list in Python?\n\nWhen a Python list is known to always contain a single item, is there a way to access it other than: mylist[0] You may ask, 'Why would you want to?'. Curiosity alone. 
There seems to be an alternative way to do everything in Python.", "output": "Raises exception if not exactly one item: Sequence unpacking: singleitem, = mylist # Identical in behavior (byte code produced is the same), # but arguably more readable since a lone trailing comma could be missed: [singleitem] = mylist # Not a fan of this one, as the comma still feels missable but the parentheses do # draw attention (they're completely superfluous), so it's less likely to be missed # Still 100% identical in behavior, with identical byte code (singleitem,) = mylist Rampant insanity, unpack the input to the identity lambda function: # The only even semi-reasonable way to retrieve a single item and raise an exception on # failure for too many, not just too few, elements as an expression, rather than a # statement, without resorting to defining/importing functions elsewhere to do the work singleitem = (lambda x: x)(*mylist) All others silently ignore spec violation for too many items, producing first or last item: Explicit use of iterator protocol: singleitem = next(iter(mylist)) Destructive pop: singleitem = mylist.pop() Negative index: singleitem = mylist[-1] Set via single iteration for (because the loop variable remains available with its last value when a loop terminates): for singleitem in mylist: break There are many others (combining or varying bits of the above, or otherwise relying on implicit iteration), but you get the idea."} +{"question_id": 44761748, "score": 130, "creation_date": 1498486411, "tags": ["python", "emscripten", "webassembly"], "instruction": "Compiling Python to WebAssembly\n\nI have read that it is possible to convert Python 2.7 code to Web Assembly, but I cannot find a definitive guide on how to to so. So far I have compiled a C program to Web Assembly using Emscripten and all its necessary components, so I know it is working (guide used: http://webassembly.org/getting-started/developers-guide/) What are the steps I must take in order to do this on an Ubuntu machine? Do I have to convert the python code to LLVM bitcode then compile it using Emscripten? If so, how would I achieve this?", "output": "WebAssembly vs asm.js First, let's take a look how, in principle, WebAssembly is different from asm.js, and whether there's potential to reuse existing knowledge and tooling. The following gives pretty good overview: Why create a new standard when there is already asm.js? What is the difference between asm.js and web assembly? Why WebAssembly is Faster Than asm.js Let's recapitulate, WebAssembly (MVP, as there's more on its roadmap, roughly): is a binary format of AST with static typing, which can be executed by existing JavaScript engines (and thus JIT-able or compiled AOT), it's 10-20% more compact (gzipped comparison) and an order of magnitude faster to parse than JavaScript, it can express more low-level operation that won't fit into JavaScript syntax, read asm.js (e.g. 64-bit integers, special CPU instructions, SIMD, etc) is convertible (to some extent) to/from asm.js. Thus, currently WebAssembly is an iteration on asm.js and targets only C/C++ (and similar languages). Python on the Web It doesn't look like GC is the only thing that stops Python code from targeting WebAssembly/asm.js. Both represent low-level statically typed code, in which Python code can't (realistically) be represented. As current toolchain of WebAssembly/asm.js is based on LLVM, a language that can be easily compiled to LLVM IR can be converted to WebAssembly/asm.js. 
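Statically typed languages map naturally onto that pipeline; a rough, purely illustrative sketch of why Python does not is how little a compiler can pin down about even a trivial function ahead of time:
def add(a, b):
    return a + b  # int addition? string concatenation? list concatenation? a custom __add__?

print(add(1, 2))      # 3
print(add("a", "b"))  # ab
print(add([1], [2]))  # [1, 2]
# The meaning of "+" is only known at run time, so there is no fixed set of machine
# types that a static LLVM-style pipeline could lower this function to.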
But alas, Python is too dynamic to fit into it as well, as proven by Unladen Swallow and several attempts of PyPy. This asm.js presentation has slides about the state of dynamic languages. What it means is that currently it's only possible to compile whole VM (language implementation in C/C++) to WebAssembly/asm.js and interpret (with JIT where possible) original sources. For Python there're several existing projects: PyPy: PyPy.js (author's talk at PyCon). Here's release repo. Main JS file, pypyjs.vm.js, is 13 MB (2MB after gzip -6) + Python stdlib + other stuff. CPython: pyodide, EmPython, CPython-Emscripten, EmCPython, etc. empython.js is 5.8 MB (2.1 MB after gzip -6), no stdlib. Micropython: this fork. There was no built JS file there, so I was able to build it with trzeci/emscripten/, a ready-made Emscripten toolchain. Something like: git clone https://github.com/matthewelse/micropython.git cd micropython docker run --rm -it -v $(pwd):/src trzeci/emscripten bash apt-get update && apt-get install -y python3 cd emscripten make -j # to run REPL: npm install && nodejs server.js It produces micropython.js of 1.1 MB (225 KB after gzip -d). The latter is already something to consider, if you need only very compliant implementation without stdlib. To produce WebAssembly build you can change line 13 of the Makefile to CC = emcc -s RESERVED_FUNCTION_POINTERS=20 -s WASM=1 Then make -j produces: 113 KB micropython.js 240 KB micropython.wasm You can look at HTML output of emcc hello.c -s WASM=1 -o hello.html, to see how to use these files. This way you can also potentially build PyPy and CPython in WebAssembly to interpret your Python application in a compliant browser. Another potentially interesting thing here is Nuitka, a Python to C++ compiler. Potentially it can be possible to build your Python app to C++ and then compile it along with CPython with Emscripten. But practically I've no idea how to do it. Solutions For the time being, if you're building a conventional web site or web app where download several-megabyte JS file is barely an option, take a look at Python-to-JavaScript transpilers (e.g. Transcrypt) or JavaScript Python implementations (e.g. Brython). Or try your luck with others from list of languages that compile to JavaScript. Otherwise, if download size is not an issue, and you're ready to tackle a lot of rough edges, choose between the three above. Q3 2020 update JavaScript port was integrated into MicroPython. It lives in ports/javascript. The port is available as a npm package called MicroPython.js. You can try it out in RunKit. There's an actively developed Python implementation in Rust, called RustPython. Because Rust officially supports WebAssembly as compile target, no surprise there's demo link right in the top of the readme. Though, it's early. Their disclaimer follows. RustPython is in a development phase and should not be used in production or a fault intolerant setting. Our current build supports only a subset of Python syntax. Q1 2023 update Python 3.11 recognises two WebAssembly \"platforms\" in its documentation and documents its API availability on them among other platforms like Linux and Unix (see this PR for more details) and also recommends Pyodide (from Mozilla) and another port, PyScript (from Anaconda), based on it: The WebAssembly platforms wasm32-emscripten (Emscripten) and wasm32-wasi (WASI) provide a subset of POSIX APIs. WebAssembly runtimes and browsers are sandboxed and have limited access to the host and external resources. 
Any Python standard library module that uses processes, threading, networking, signals, or other forms of inter-process communication (IPC), is either not available or may not work as on other Unix-like systems. [...] For Python in the browser, users should consider Pyodide or PyScript. PyScript is built on top of Pyodide, which itself is built on top of CPython and Emscripten. Pyodide provides access to browsers\u2019 JavaScript and DOM APIs as well as limited networking capabilities with JavaScript\u2019s XMLHttpRequest and Fetch APIs."} +{"question_id": 2060628, "score": 129, "creation_date": 1263420208, "tags": ["python", "audio", "wav", "wave"], "instruction": "Reading *.wav files in Python\n\nI need to analyze sound written in a .wav file. For that I need to transform this file into set of numbers (arrays, for example). I think I need to use the wave package. However, I do not know how exactly it works. For example I did the following: import wave w = wave.open('/usr/share/sounds/ekiga/voicemail.wav', 'r') for i in range(w.getnframes()): frame = w.readframes(i) print frame As a result of this code I expected to see sound pressure as function of time. In contrast I see a lot of strange, mysterious symbols (which are not hexadecimal numbers). Can anybody, pleas, help me with that?", "output": "Per the documentation, scipy.io.wavfile.read(somefile) returns a tuple of two items: the first is the sampling rate in samples per second, the second is a numpy array with all the data read from the file: from scipy.io import wavfile samplerate, data = wavfile.read('./output/audio.wav')"} +{"question_id": 27671748, "score": 129, "creation_date": 1419716792, "tags": ["python", "matplotlib", "plot", "label", "axis-labels"], "instruction": "How to show Y axis label horizontally\n\nI'm creating very simple charts with matplotlib / pylab Python module. The letter \"y\" that labels the Y axis is on its side. You would expect this if the label was longer, such as a word, so as not to extend the outside of the graph to the left too much. But for a one-letter label, this doesn't make sense; the label should be upright. How can I show the \"y\" horizontally?", "output": "It is very simple. After plotting the label, you can simply change the rotation: import matplotlib.pyplot as plt plt.ion() plt.plot([1, 2, 3]) plt.ylabel(\"y\", rotation=0) # or # h = plt.ylabel(\"y\") # h.set_rotation(0) plt.draw()"} +{"question_id": 8685809, "score": 129, "creation_date": 1325297187, "tags": ["python", "csv", "dictionary"], "instruction": "Writing a dictionary to a csv file with one line for every 'key: value'\n\nI've got a dictionary: mydict = {key1: value_a, key2: value_b, key3: value_c} I want to write the data to a file dict.csv, in this style: key1: value_a key2: value_b key3: value_c I wrote: import csv f = open('dict.csv','wb') w = csv.DictWriter(f,mydict.keys()) w.writerow(mydict) f.close() But now I have all keys in one row and all values in the next row.. When I manage to write a file like this, I also want to read it back to a new dictionary. Just to explain my code, the dictionary contains values and bools from textctrls and checkboxes (using wxpython). I want to add \"Save settings\" and \"Load settings\" buttons. Save settings should write the dictionary to the file in the mentioned way (to make it easier for the user to edit the csv file directly), load settings should read from the file and update the textctrls and checkboxes.", "output": "The DictWriter doesn't work the way you expect. 
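It treats each dict as one row, with the fieldnames forming a header row, which is why you ended up with all keys on one line and all values on the next. A small sketch of what it actually produces (the demo filename is made up):
import csv

mydict = {'key1': 'value_a', 'key2': 'value_b', 'key3': 'value_c'}
with open('dictwriter_demo.csv', 'w', newline='') as f:
    w = csv.DictWriter(f, fieldnames=mydict.keys())
    w.writeheader()     # key1,key2,key3
    w.writerow(mydict)  # value_a,value_b,value_c
To get one key: value pair per line instead, iterate over the items yourself with a plain csv.writer: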
with open('dict.csv', 'w') as csv_file: writer = csv.writer(csv_file) for key, value in mydict.items(): writer.writerow([key, value]) To read it back: with open('dict.csv') as csv_file: reader = csv.reader(csv_file) mydict = dict(reader) which is quite compact, but it assumes you don't need to do any type conversion when reading."} +{"question_id": 76616042, "score": 129, "creation_date": 1688507881, "tags": ["python", "python-imaging-library"], "instruction": "AttributeError: module 'PIL.Image' has no attribute 'ANTIALIAS'\n\nI am trying to have images in my Tkinter GUI, hence I am using PIL. Image.ANTIALIAS is not working. However, Image.BILINEAR works. Here's some sample code: import tkinter as tk from PIL import Image, ImageTk window = tk.Tk() image = Image.open(r\"VC.png\") image = image.resize((20, 20), Image.ANTIALIAS) tk_image = ImageTk.PhotoImage(image) image_label = tk.Label(window, image=tk_image) image_label.pack() window.mainloop() Here's the error: Traceback (most recent call last): File \"\", line 19, in <module> AttributeError: module 'PIL.Image' has no attribute 'ANTIALIAS' I tried reinstalling pip and Pillow. It didn't work. I asked ChatGPT about this, and it advised me to upgrade to Pillow's latest version. I am on the latest version (10.0.0).", "output": "The problem is with Pillow 10.0. Trying to uninstall Pillow might give some errors. Just put this in cmd: pip install Pillow==9.5.0"} +{"question_id": 5598524, "score": 129, "creation_date": 1302282872, "tags": ["python", "html", "beautifulsoup"], "instruction": "Can I remove script tags with BeautifulSoup?\n\nCan <script> tags and all of their contents be removed from HTML with BeautifulSoup?", "output": "soup = BeautifulSoup('<script>a</script>baba<script>b</script>', 'html.parser') for s in soup.select('script'): s.extract() print(soup) baba"} +{"question_id": 66557543, "score": 129, "creation_date": 1615343603, "tags": ["python", "pandas", "plotly", "nbformat"], "instruction": "ValueError: Mime type rendering requires nbformat>=4.2.0 but it is not installed\n\nI was trying to print a plotly plot in Visual Studio Code and caught this error: --------------------------------------------------------------------------- ValueError Traceback (most recent call last) in 30 31 fig.update_layout(height=nrows*500) ---> 32 fig.show() C:\\Python38\\lib\\site-packages\\plotly\\basedatatypes.py in show(self, *args, **kwargs) 3147 import plotly.io as pio 3148 -> 3149 return pio.show(self, *args, **kwargs) 3150 3151 def to_json(self, *args, **kwargs): C:\\Python38\\lib\\site-packages\\plotly\\io\\_renderers.py in show(fig, renderer, validate, **kwargs) 383 384 if not nbformat or LooseVersion(nbformat.__version__) < LooseVersion(\"4.2.0\"): --> 385 raise ValueError( 386 \"Mime type rendering requires nbformat>=4.2.0 but it is not installed\" 387 ) ValueError: Mime type rendering requires nbformat>=4.2.0 but it is not installed The code I used: import plotly.graph_objects as go from plotly.subplots import make_subplots import plotly.express as px df = df[df['Data']>0] df['Timestamp'] = pd.to_datetime(df['Timestamp']) df = df[(df['Id'] ==1)|(df['Id'] ==6)] dfp = pd.pivot_table(df, values='Data', index=['Timestamp'], columns=['Id'], ) nrows = len(dfp.columns) fig = make_subplots(rows=nrows, cols=1, subplot_titles=['Id '+str(c) for c in dfp.columns]) # add traces x = 1 for i, col in enumerate(dfp.columns): fig.add_trace(go.Scatter(x=dfp.index, y=dfp[col].values, name = 'Id '+str(col), mode = 'lines', ), row=i+1, col=1) fig.update_layout(height=nrows*500) fig.show() I tried pip install nbformat in the console following this feed on GitHub and this question on stackoverflow but it did not work.
However, it seems the code could run with the last 2 rows removed: fig.update_layout(height=nrows*500) fig.show()", "output": "Method 1 reinstall ipykernel via pip install ipykernel Method 2 pip install --upgrade nbformat And restart your kernel, extremely important."} +{"question_id": 2233204, "score": 129, "creation_date": 1265756841, "tags": ["python", "iterator"], "instruction": "How does zip(*[iter(s)]*n) work in Python?\n\ns = [1,2,3,4,5,6,7,8,9] n = 3 list(zip(*[iter(s)]*n)) # returns [(1,2,3),(4,5,6),(7,8,9)] How does zip(*[iter(s)]*n) work? What would it look like if it was written with more verbose code? This is a technique used for splitting a list into chunks of equal size - see that question for a general overview of the problem.", "output": "iter() is an iterator over a sequence. [x] * n produces a list containing n quantity of x, i.e. a list of length n, where each element is x. *arg unpacks a sequence into arguments for a function call. Therefore you're passing the same iterator 3 times to zip(), and it pulls an item from the iterator each time. x = iter([1,2,3,4,5,6,7,8,9]) print(list(zip(x, x, x)))"} +{"question_id": 32550487, "score": 129, "creation_date": 1442153248, "tags": ["python", "flask"], "instruction": "How to print from Flask @app.route to python console\n\nI would like to simply print a \"hello world\" to the python console after /button is called by the user. This is my naive approach: @app.route('/button/') def button_clicked(): print 'Hello world!' return redirect('/') Background: I would like to execute other python commands from flask (not shell). \"print\" should be the easiest case. I believe I have not understood a basic twist here.", "output": "An easy way to do this is by printing to stderr. You can do that like this: from __future__ import print_function # In python 2.7 import sys @app.route('/button/') def button_clicked(): print('Hello world!', file=sys.stderr) return redirect('/') Flask will display things printed to stderr in the console. For other ways of printing to stderr, see this stackoverflow post"} +{"question_id": 61544854, "score": 129, "creation_date": 1588344824, "tags": ["python", "python-typing"], "instruction": "from __future__ import annotations\n\nPython doc __future__ In the python docs about __future__ there is a table where it shows that annotations are \"optional in\" 3.7.0b1 and \"mandatory in\" 4.0 but I am still able to use annotations in 3.8.2 without importing annotations. Given that, what is the use of it? >>> def add_int(a:int, b:int) -> int: ... return a + b >>> add_int.__annotations__ {'a': , 'b': , 'return': } I doubt I clearly understand the meaning of \"optional in\" and \"mandatory in\" here", "output": "Mandatory is an interesting word choice. I guess it means that it's by default in the language. You don't have to enable it with from __future__ import annotations The annotations feature are referring to the PEP 563: Postponed evaluation of annotations. It's an enhancement to the existing annotations feature which was initially introduced in python 3.0 and redefined as type hints in python 3.5, that's why your code works under python 3.8. 
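One observable difference once the import is in place is that annotations are no longer evaluated at definition time but stored as plain strings (PEP 563); a hypothetical module mirroring the question's function:
from __future__ import annotations

def add_int(a: int, b: int) -> int:
    return a + b

print(add_int.__annotations__)
# {'a': 'int', 'b': 'int', 'return': 'int'}  -- strings now, not the int type object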
Here's what optional from __future__ import annotations changes in python 3.7+: class A: def f(self) -> A: # NameError: name 'A' is not defined pass but this works from __future__ import annotations class A: def f(self) -> A: pass See this chapter in python 3.7 what's new about postponed annotations: Since this change breaks compatibility, the new behavior needs to be enabled on a per-module basis in Python 3.7 using a __future__ import: from __future__ import annotations It will become the default in Python 3.10*. * it was announced to be default in 3.10 (when python3.7 was released), but it was now moved to a later release"} +{"question_id": 15017072, "score": 129, "creation_date": 1361508655, "tags": ["python", "pandas", "csv", "csv-import"], "instruction": "pandas read_csv and filter columns with usecols\n\nI have a csv file which isn't coming in correctly with pandas.read_csv when I filter the columns with usecols and use multiple indexes. import pandas as pd csv = r\"\"\"dummy,date,loc,x bar,20090101,a,1 bar,20090102,a,3 bar,20090103,a,5 bar,20090101,b,1 bar,20090102,b,3 bar,20090103,b,5\"\"\" f = open('foo.csv', 'w') f.write(csv) f.close() df1 = pd.read_csv('foo.csv', header=0, names=[\"dummy\", \"date\", \"loc\", \"x\"], index_col=[\"date\", \"loc\"], usecols=[\"dummy\", \"date\", \"loc\", \"x\"], parse_dates=[\"date\"]) print df1 # Ignore the dummy columns df2 = pd.read_csv('foo.csv', index_col=[\"date\", \"loc\"], usecols=[\"date\", \"loc\", \"x\"], # <----------- Changed parse_dates=[\"date\"], header=0, names=[\"dummy\", \"date\", \"loc\", \"x\"]) print df2 I expect that df1 and df2 should be the same except for the missing dummy column, but the columns come in mislabeled. Also the date is getting parsed as a date. In [118]: %run test.py dummy x date loc 2009-01-01 a bar 1 2009-01-02 a bar 3 2009-01-03 a bar 5 2009-01-01 b bar 1 2009-01-02 b bar 3 2009-01-03 b bar 5 date date loc a 1 20090101 3 20090102 5 20090103 b 1 20090101 3 20090102 5 20090103 Using column numbers instead of names give me the same problem. I can workaround the issue by dropping the dummy column after the read_csv step, but I'm trying to understand what is going wrong. I'm using pandas 0.10.1. edit: fixed bad header usage.", "output": "The solution lies in understanding these two keyword arguments: names is only necessary when there is no header row in your file and you want to specify other arguments (such as usecols) using column names rather than integer indices. usecols is supposed to provide a filter before reading the whole DataFrame into memory; if used properly, there should never be a need to delete columns after reading. So because you have a header row, passing header=0 is sufficient and additionally passing names appears to be confusing pd.read_csv. 
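By contrast, names plus usecols is exactly the right combination when the file has no header row at all; a tiny illustration with made-up header-less data:
import pandas as pd
from io import StringIO

raw = StringIO("bar,20090101,a,1\nbar,20090102,a,3\n")  # hypothetical file without a header
df = pd.read_csv(raw, header=None,
                 names=["dummy", "date", "loc", "x"],  # needed only because there is no header row
                 usecols=["date", "loc", "x"],         # filters columns before they reach the frame
                 parse_dates=["date"])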
Removing names from the second call gives the desired output: import pandas as pd from StringIO import StringIO csv = r\"\"\"dummy,date,loc,x bar,20090101,a,1 bar,20090102,a,3 bar,20090103,a,5 bar,20090101,b,1 bar,20090102,b,3 bar,20090103,b,5\"\"\" df = pd.read_csv(StringIO(csv), header=0, index_col=[\"date\", \"loc\"], usecols=[\"date\", \"loc\", \"x\"], parse_dates=[\"date\"]) Which gives us: x date loc 2009-01-01 a 1 2009-01-02 a 3 2009-01-03 a 5 2009-01-01 b 1 2009-01-02 b 3 2009-01-03 b 5"} +{"question_id": 54441424, "score": 129, "creation_date": 1548853870, "tags": ["python", "python-asyncio", "aiohttp"], "instruction": "Learning asyncio: \"coroutine was never awaited\" warning error\n\nI am trying to learn to use asyncio in Python to optimize scripts. My example returns a coroutine was never awaited warning, can you help to understand and find how to solve it? import time import datetime import random import asyncio import aiohttp import requests def requete_bloquante(num): print(f'Get {num}') uid = requests.get(\"https://httpbin.org/uuid\").json()['uuid'] print(f\"Res {num}: {uid}\") def faire_toutes_les_requetes(): for x in range(10): requete_bloquante(x) print(\"Bloquant : \") start = datetime.datetime.now() faire_toutes_les_requetes() exec_time = (datetime.datetime.now() - start).seconds print(f\"Pour faire 10 requ\u00eates, \u00e7a prend {exec_time}s\\n\") async def requete_sans_bloquer(num, session): print(f'Get {num}') async with session.get(\"https://httpbin.org/uuid\") as response: uid = (await response.json()['uuid']) print(f\"Res {num}: {uid}\") async def faire_toutes_les_requetes_sans_bloquer(): loop = asyncio.get_event_loop() with aiohttp.ClientSession() as session: futures = [requete_sans_bloquer(x, session) for x in range(10)] loop.run_until_complete(asyncio.gather(*futures)) loop.close() print(\"Fin de la boucle !\") print(\"Non bloquant : \") start = datetime.datetime.now() faire_toutes_les_requetes_sans_bloquer() exec_time = (datetime.datetime.now() - start).seconds print(f\"Pour faire 10 requ\u00eates, \u00e7a prend {exec_time}s\\n\") The first classic part of the code runs correctly, but the second half only produces: synchronicite.py:43: RuntimeWarning: coroutine 'faire_toutes_les_requetes_sans_bloquer' was never awaited", "output": "You made faire_toutes_les_requetes_sans_bloquer an awaitable function, a coroutine, by using async def. When you call an awaitable function, you create a new coroutine object. The code inside the function won't run until you then await on the function or run it as a task: >>> async def foo(): ... print(\"Running the foo coroutine\") ... >>> foo() >>> import asyncio >>> asyncio.run(foo()) Running the foo coroutine You want to keep that function synchronous, because you don't start the loop until inside that function: def faire_toutes_les_requetes_sans_bloquer(): loop = asyncio.get_event_loop() # ... loop.close() print(\"Fin de la boucle !\") However, you are also trying to use a aiohttp.ClientSession() object, and that's an asynchronous context manager, you are expected to use it with async with, not just with, and so has to be run in aside an awaitable task. If you use with instead of async with a TypeError(\"Use async with instead\") exception will be raised. 
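In sketch form, leaving the question's naming aside, the session handling has to end up inside a coroutine, roughly like this (illustrative only; the full corrected version follows below):
import asyncio
import aiohttp

async def main():
    async with aiohttp.ClientSession() as session:  # asynchronous context manager
        async with session.get("https://httpbin.org/uuid") as response:
            print(await response.json())

asyncio.run(main())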
That all means you need to move the loop.run_until_complete() call out of your faire_toutes_les_requetes_sans_bloquer() function, so you can keep that as the main task to be run; you can call and await on asycio.gather() directly then: async def faire_toutes_les_requetes_sans_bloquer(): async with aiohttp.ClientSession() as session: futures = [requete_sans_bloquer(x, session) for x in range(10)] await asyncio.gather(*futures) print(\"Fin de la boucle !\") print(\"Non bloquant : \") start = datetime.datetime.now() asyncio.run(faire_toutes_les_requetes_sans_bloquer()) exec_time = (datetime.datetime.now() - start).seconds print(f\"Pour faire 10 requ\u00eates, \u00e7a prend {exec_time}s\\n\") I used the new asyncio.run() function (Python 3.7 and up) to run the single main task. This creates a dedicated loop for that top-level coroutine and runs it until complete. Next, you need to move the closing ) parenthesis on the await resp.json() expression: uid = (await response.json())['uuid'] You want to access the 'uuid' key on the result of the await, not the coroutine that response.json() produces. With those changes your code works, but the asyncio version finishes in sub-second time; you may want to print microseconds: exec_time = (datetime.datetime.now() - start).total_seconds() print(f\"Pour faire 10 requ\u00eates, \u00e7a prend {exec_time:.3f}s\\n\") On my machine, the synchronous requests code in about 4-5 seconds, and the asycio code completes in under .5 seconds."} +{"question_id": 13650293, "score": 128, "creation_date": 1354297154, "tags": ["python", "operators"], "instruction": "Understanding the \"is\" operator\n\nThe is operator does not match the values of the variables, but the instances themselves. What does it really mean? I declared two variables named x and y assigning the same values in both variables, but it returns false when I use the is operator. I need a clarification. Here is my code: x = [1, 2, 3] y = [1, 2, 3] print(x is y) # False", "output": "You misunderstood what the is operator tests. It tests if two variables point the same object, not if two variables have the same value. From the documentation for the is operator: The operators is and is not test for object identity: x is y is true if and only if x and y are the same object. Use the == operator instead: print(x == y) This prints True. x and y are two separate lists: x[0] = 4 print(y) # prints [1, 2, 3] print(x == y) # prints False If you use the id() function you'll see that x and y have different identifiers: >>> id(x) 4401064560 >>> id(y) 4401098192 but if you were to assign y to x then both point to the same object: >>> x = y >>> id(x) 4401064560 >>> id(y) 4401064560 >>> x is y True and is shows both are the same object, it returns True. Remember that in Python, names are just labels referencing values; you can have multiple names point to the same object. is tells you if two names point to one and the same object. == tells you if two names refer to objects that have the same value."} +{"question_id": 13825278, "score": 128, "creation_date": 1355246522, "tags": ["python", "authentication", "curl", "access-token"], "instruction": "Python request with authentication (access_token)\n\nI am trying to use an API query in Python. From the command line I can use curl like so: curl --header \"Authorization:access_token myToken\" https://website.example/id This gives some JSON output. myToken is a hexadecimal variable that remains constant throughout. 
I would like to make this call from python so that I can loop through different ids and analyze the output. Before authentication was needed I had done that with urllib2. I have also taken a look at the requests module but couldn't figure out how to authenticate with it.", "output": "The requests package has a very nice API for HTTP requests, adding a custom header works like this (source: official docs): >>> import requests >>> response = requests.get( ... 'https://website.example/id', headers={'Authorization': 'access_token myToken'}) If you don't want to use an external dependency, the same thing using urllib2 of the Python 2 standard library looks like this (source: official docs): >>> import urllib2 >>> response = urllib2.urlopen( ... urllib2.Request('https://website.example/id', headers={'Authorization': 'access_token myToken'}) For Python 3, simply use urllib instead of urllib2"} +{"question_id": 53073411, "score": 128, "creation_date": 1540936446, "tags": ["python", "google-chrome", "selenium-webdriver", "selenium-chromedriver"], "instruction": "Selenium: WebDriverException:Chrome failed to start: crashed as google-chrome is no longer running so ChromeDriver is assuming that Chrome has crashed\n\nRecently I switched computers and since then I can't launch chrome with selenium. I've also tried Firefox but the browser instance just doesn't launch. from selenium import webdriver d = webdriver.Chrome('/home/PycharmProjects/chromedriver') d.get('https://www.google.nl/') I get the following error: selenium.common.exceptions.WebDriverException: Message: unknown error: Chrome failed to start: crashed (unknown error: DevToolsActivePort file doesn't exist) (The process started from chrome location /opt/google/chrome/google-chrome is no longer running, so ChromeDriver is assuming that Chrome has crashed.) (Driver info: chromedriver=2.43.600233, platform=Linux 4.15.0-38-generic x86_64) I have the latest chrome version and chromedriver installed. EDIT: After trying @b0sss solution I am getting the following error. selenium.common.exceptions.WebDriverException: Message: unknown error: Chrome failed to start: crashed (chrome not reachable) (The process started from chrome location /opt/google/chrome/google-chrome is no longer running, so chromedriver is assuming that Chrome has crashed.) (Driver info: chromedriver=2.43.600233 (523efee95e3d68b8719b3a1c83051aa63aa6b10d),platform=Linux 4.15.0-38-generic x86_64)", "output": "Try to download HERE and use this latest chrome driver version: https://sites.google.com/chromium.org/driver/ Try this: from selenium import webdriver from selenium.webdriver.chrome.options import Options chrome_options = Options() chrome_options.add_argument('--no-sandbox') chrome_options.add_argument('--headless') chrome_options.add_argument('--disable-dev-shm-usage') d = webdriver.Chrome('/home//chromedriver',chrome_options=chrome_options) d.get('https://www.google.nl/')"} +{"question_id": 1982770, "score": 128, "creation_date": 1262210640, "tags": ["python", "matplotlib", "axis"], "instruction": "Changing the color of an axis\n\nIs there a way to change the color of an axis (not the ticks) in matplotlib? I have been looking through the docs for Axes, Axis, and Artist, but no luck; the matplotlib gallery also has no hint. 
Any idea?", "output": "When using figures, you can easily change the spine color with: ax.spines['bottom'].set_color('#dddddd') ax.spines['top'].set_color('#dddddd') ax.spines['right'].set_color('red') ax.spines['left'].set_color('red') Use the following to change only the ticks: which=\"both\" changes both the major and minor tick colors ax.tick_params(axis='x', colors='red') ax.tick_params(axis='y', colors='red') And the following to change only the label: ax.yaxis.label.set_color('red') ax.xaxis.label.set_color('red') And finally the title: ax.title.set_color('red')"} +{"question_id": 30808430, "score": 128, "creation_date": 1434128119, "tags": ["python", "python-2.7", "pandas"], "instruction": "How to select columns from dataframe by regex\n\nI have a dataframe in python pandas. The structure of the dataframe is as the following: a b c d1 d2 d3 10 14 12 44 45 78 I would like to select the columns which begin with d. Is there a simple way to achieve this in python .", "output": "You can use DataFrame.filter this way: import pandas as pd df = pd.DataFrame(np.array([[2,4,4],[4,3,3],[5,9,1]]),columns=['d','t','didi']) >> d t didi 0 2 4 4 1 4 3 3 2 5 9 1 df.filter(regex=(\"d.*\")) >> d didi 0 2 4 1 4 3 2 5 1 The idea is to select columns by regex"} +{"question_id": 21303224, "score": 128, "creation_date": 1390466541, "tags": ["python", "list", "iterator"], "instruction": "Iterate over all pairs of consecutive items in a list\n\nGiven a list l = [1, 7, 3, 5] I want to iterate over all pairs of consecutive list items (1,7), (7,3), (3,5), i.e. for i in xrange(len(l) - 1): x = l[i] y = l[i + 1] # do something I would like to do this in a more compact way, like for x, y in someiterator(l): ... Is there a way to do do this using builtin Python iterators? I'm sure the itertools module should have a solution, but I just can't figure it out.", "output": "Just use zip >>> l = [1, 7, 3, 5] >>> for first, second in zip(l, l[1:]): ... print(first, second) ... 1 7 7 3 3 5 If you use Python 2 (not suggested) you might consider using the izip function in itertools for very long lists where you don't want to create a new list. import itertools for first, second in itertools.izip(l, l[1:]): ..."} +{"question_id": 12570859, "score": 128, "creation_date": 1348511844, "tags": ["python", "image", "python-imaging-library"], "instruction": "How to show PIL images on the screen?\n\nI am doing some image editing with the PIL libary. The point is, that I don't want to save the image each time on my HDD to view it in Explorer. Is there a small module that simply enables me to set up a window and display the image?", "output": "From near the beginning of the PIL Tutorial: Once you have an instance of the Image class, you can use the methods defined by this class to process and manipulate the image. For example, let's display the image we just loaded: >>> im.show() Update: Nowadays the Image.show() method is formally documented in the Pillow fork of PIL along with an explanation of how it's implemented on different OSs."} +{"question_id": 1619514, "score": 128, "creation_date": 1256427710, "tags": ["python", "set"], "instruction": "How to extract the member from single-member set in python?\n\nI recently encountered a scenario in which if a set only contained a single element, I wanted to do something with that element. To get the element, I settled on this approach: element = list(myset)[0] But this isn't very satisfying, as it creates an unnecessary list. 
It could also be done with iteration, but iteration seems unnatural as well, since there is only a single element. Am I missing something simple?", "output": "Tuple unpacking works, and verifies the assumption that the set contains exactly one element (raising ValueError if it has too many or too few elements). (element,) = myset # Or equivalently, without requiring trailing comma: [element] = myset (By the way, python-dev has explored but rejected the addition of myset.get() to return an arbitrary element from a set. Discussion here, Guido van Rossum answers 1 and 2.) My personal favorite for getting an arbitrary element is (when you have an unknown number, but also works if you have just one): element = next(iter(myset)) \u00b9 1: in Python 2.5 and before, you have to use iter(myset).next()"} +{"question_id": 34671217, "score": 128, "creation_date": 1452236459, "tags": ["python", "flask", "pagination", "get", "request"], "instruction": "In Flask, what is \"request.args\" and how is it used?\n\nAs a Flask beginner, I can't understand how request.args is used. I read somewhere that it is used to return values of query string (correct me if I'm wrong) and how many parameters request.args.get() takes. I know that when I have to store submitted form data, I can use fname = request.form.get(\"firstname\"). Here, only one parameter is passed, whereas the code below takes two parameters. @app.route(\"/\") def home(): cnx = db_connect() cur = cnx.cursor() output = [] page = request.args.get('page', 1) try: page = int(page) skip = (page-1)*4 except: abort(404) stmt_select = \"select * from posts limit %s, 4;\" values=[skip] cur.execute(stmt_select,values) x=cur.fetchall() for row in reversed(x): data = { \"uid\":row[0], \"pid\":row[1], \"subject\":row[2], \"post_content\":row[3], \"date\":datetime.fromtimestamp(row[4]), } output.append(data) next = page + 1 previous = page-1 if previous<1: previous=1 return render_template(\"home.html\", persons=output, next=next, previous=previous) Please explain why it takes two parameters, and then what its use is.", "output": "According to the flask.Request.args documents. flask.Request.args A MultiDict with the parsed contents of the query string. (The part in the URL after the question mark). So the args.get() is method get() for MultiDict, whose prototype is as follows: get(key, default=None, type=None) In newer version of flask (v1.0.x and v1.1.x), flask.Request.args is an ImmutableMultiDict(an immutable MultiDict), so the prototype and specific method above are still valid."} +{"question_id": 589284, "score": 128, "creation_date": 1235627965, "tags": ["python", "mysql"], "instruction": "Imploding a list for use in a Python MySQL IN clause\n\nI know how to map a list to a string: foostring = \",\".join( map(str, list_of_ids) ) And I know that I can use the following to get that string into an IN clause: cursor.execute(\"DELETE FROM foo.bar WHERE baz IN ('%s')\" % (foostring)) How can I accomplish the same thing safely (avoiding SQL injection) using a MySQL database? In the above example, because foostring is not passed as an argument to execute, it is vulnerable. I also have to quote and escape outside of the MySQL library. 
(There is a related Stack Overflow question, but the answers listed there either do not work for MySQL database or are vulnerable to SQL injection.)", "output": "Use the list_of_ids directly: format_strings = ','.join(['%s'] * len(list_of_ids)) cursor.execute(\"DELETE FROM foo.bar WHERE baz IN (%s)\" % format_strings, tuple(list_of_ids)) That way you avoid having to quote yourself, and avoid all kinds of SQL injection. Note that the data (list_of_ids) is going directly to MySQL's driver, as a parameter (not in the query text) so there isn't any injection. You can leave any characters you want in the string; there isn't any need to remove or quote characters."} +{"question_id": 15026698, "score": 128, "creation_date": 1361544231, "tags": ["python", "csv", "pandas", "dataframe", "whitespace"], "instruction": "How to make separator in pandas read_csv more flexible wrt whitespace, for irregular separators?\n\nI need to create a data frame by reading in data from a file, using read_csv method. However, the separators are not very regular: some columns are separated by tabs (\\t), other are separated by spaces. Moreover, some columns can be separated by 2 or 3 or more spaces or even by a combination of spaces and tabs (for example 3 spaces, two tabs and then 1 space). Is there a way to tell pandas to treat these files properly? By the way, I do not have this problem if I use Python. I use: for line in file(file_name): fld = line.split() And it works perfect. It does not care if there are 2 or 3 spaces between the fields. Even combinations of spaces and tabs do not cause any problem. Can pandas do the same?", "output": "From the documentation, you can use either a regex or delim_whitespace: >>> import pandas as pd >>> for line in open(\"whitespace.csv\"): ... print repr(line) ... 'a\\t b\\tc 1 2\\n' 'd\\t e\\tf 3 4\\n' >>> pd.read_csv(\"whitespace.csv\", header=None, delimiter=r\"\\s+\") 0 1 2 3 4 0 a b c 1 2 1 d e f 3 4 >>> pd.read_csv(\"whitespace.csv\", header=None, delim_whitespace=True) 0 1 2 3 4 0 a b c 1 2 1 d e f 3 4"} +{"question_id": 40309098, "score": 128, "creation_date": 1477671134, "tags": ["python", "jupyter-notebook"], "instruction": "Is it possible to copy a cell from one jupyter notebook to another?\n\nI am trying to copy cells from one jupyter notebook to another. How this is possible?", "output": "This feature has been introduced in Jupyter Notebook 5.0. Use pip install notebook --upgrade to upgrade to the latest release. The old shortcuts C, V and X work only for copying and pasting inside one notebook. In order to do it across tabs, you should use Ctrl-C and Ctrl-V (Cmd-C and Cmd-V on Mac). Note on multiple cells: currently (jupyter 6.0.0) on Mac+chrome using shift-click to select the cells then cmd-C does NOT work, while using the keyboard with shift-down-arrow does! (thanks drevicko for pointing this out)"} +{"question_id": 33045222, "score": 127, "creation_date": 1444416479, "tags": ["python", "python-typing", "type-alias"], "instruction": "How do you alias a type in Python?\n\nIn some (mostly functional) languages you can do something like this: type row = list(datum) or type row = [datum] So that we can build things like this: type row = [datum] type table = [row] type database = [table] Is there a way to do this in Python? 
You could do it using classes, but Python has quite some functional aspects so I was wondering if it could be done an easier way.", "output": "Python 3.12+ Python 3.12 contains the implementation of PEP 695: Type Parameter Syntax which provides a new way to declare type aliases using the type statement (similar to TypeScript). type Point = tuple[float, float] Type aliases can also be generic: type Point[T] = tuple[T, T] The values of type aliases created through the type statement are lazily evaluated. Quoting from the documentation: The values of type aliases created through the type statement are lazily evaluated. The same applies to the bounds and constraints of type variables created through the type parameter syntax. This means that they are not evaluated when the type alias or type variable is created. Instead, they are only evaluated when doing so is necessary to resolve an attribute access. Example: >>> type Alias = 1/0 >>> Alias.__value__ Traceback (most recent call last): ... ZeroDivisionError: division by zero >>> def func[T: 1/0](): pass >>> T = func.__type_params__[0] >>> T.__bound__ Traceback (most recent call last): ... ZeroDivisionError: division by zero Here the exception is raised only when the __value__ attribute of the type alias or the __bound__ attribute of the type variable is accessed. This behavior is primarily useful for references to types that have not yet been defined when the type alias or type variable is created. For example, lazy evaluation enables creation of mutually recursive type aliases: from typing import Literal type SimpleExpr = int | Parenthesized type Parenthesized = tuple[Literal[\"(\"], Expr, Literal[\")\"]] type Expr = SimpleExpr | tuple[SimpleExpr, Literal[\"+\", \"-\"], Expr] Lazily evaluated values are evaluated in annotation scope, which means that names that appear inside the lazily evaluated value are looked up as if they were used in the immediately enclosing scope. Please note that support for PEP 695 in mypy is still under active development. But pyright did support this syntax by specifying the --pythonversion 3.12 option. For example, running pyright on the following snippet using pyright myscript.py --pythonversion 3.12 # myscript.py type vector = list[float] def foo(items: vector): pass foo([\"foo\", \"bar\"]) will produce the following type error: myscript.py:6:5 - error: Argument of type \"list[str]\" cannot be assigned to parameter \"items\" of type \"vector\" in function \"foo\" \"Literal['foo']\" is incompatible with \"float\" \"Literal['bar']\" is incompatible with \"float\" (reportGeneralTypeIssues) 1 error, 0 warnings, 0 informations EDIT: Mypy now does support this syntax."} +{"question_id": 31385363, "score": 127, "creation_date": 1436795774, "tags": ["python", "apache-spark", "dataframe", "apache-spark-sql", "export-to-csv"], "instruction": "How to export a table dataframe in PySpark to csv?\n\nI am using Spark 1.3.1 (PySpark) and I have generated a table using a SQL query. I now have an object that is a DataFrame. I want to export this DataFrame object (I have called it \"table\") to a csv file so I can manipulate it and plot the columns. 
How do I export the DataFrame \"table\" to a csv file?", "output": "If data frame fits in a driver memory and you want to save to local files system you can convert Spark DataFrame to local Pandas DataFrame using toPandas method and then simply use to_csv: df.toPandas().to_csv('mycsv.csv') Otherwise you can use spark-csv: Spark 1.3 df.save('mycsv.csv', 'com.databricks.spark.csv') Spark 1.4+ df.write.format('com.databricks.spark.csv').save('mycsv.csv') In Spark 2.0+ you can use csv data source directly: df.write.csv('mycsv.csv')"} +{"question_id": 1906717, "score": 127, "creation_date": 1260875340, "tags": ["python"], "instruction": "How to split an integer into a list of digits?\n\nSuppose I have an input integer 12345. How can I split it into a list like [1, 2, 3, 4, 5]?", "output": "Convert the number to a string so you can iterate over it, then convert each digit (character) back to an int inside a list-comprehension: >>> [int(i) for i in str(12345)] [1, 2, 3, 4, 5]"} +{"question_id": 646286, "score": 127, "creation_date": 1237050917, "tags": ["python", "python-imaging-library"], "instruction": "How to write PNG image to string with the PIL?\n\nI have generated an image using PIL. How can I save it to a string in memory? The Image.save() method requires a file. I'd like to have several such images stored in dictionary.", "output": "You can use the BytesIO class to get a wrapper around strings that behaves like a file. The BytesIO object provides the same interface as a file, but saves the contents just in memory: import io with io.BytesIO() as output: image.save(output, format=\"GIF\") contents = output.getvalue() You have to explicitly specify the output format with the format parameter, otherwise PIL will raise an error when trying to automatically detect it. If you loaded the image from a file it has a format property that contains the original file format, so in this case you can use format=image.format. In old Python 2 versions before introduction of the io module you would have used the StringIO module instead."} +{"question_id": 6148207, "score": 127, "creation_date": 1306474342, "tags": ["python", "numpy", "matplotlib", "linear-regression", "curve-fitting"], "instruction": "Linear regression with matplotlib / numpy\n\nI'm trying to generate a linear regression on a scatter plot I have generated, however my data is in list format, and all of the examples I can find of using polyfit require using arange. arange doesn't accept lists though. I have searched high and low about how to convert a list to an array and nothing seems clear. Am I missing something? Following on, how best can I use my list of integers as inputs to the polyfit? Here is the polyfit example I am following: import numpy as np import matplotlib.pyplot as plt x = np.arange(data) y = np.arange(data) m, b = np.polyfit(x, y, 1) plt.plot(x, y, 'yo', x, m*x+b, '--k') plt.show()", "output": "arange generates lists (well, numpy arrays); type help(np.arange) for the details. You don't need to call it on existing lists. 
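Put differently, arange generates evenly spaced values rather than converting anything; wrapping an existing list (which polyfit will happily do for you anyway) is np.asarray's job. A tiny, purely illustrative comparison:
import numpy as np

print(np.arange(4))              # [0 1 2 3] -- generated from a range, ignores your data
print(np.asarray([1, 2, 3, 4]))  # [1 2 3 4] -- wraps an existing list; polyfit does this internally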
>>> x = [1,2,3,4] >>> y = [3,5,7,9] >>> >>> m,b = np.polyfit(x, y, 1) >>> m 2.0000000000000009 >>> b 0.99999999999999833 I should add that I tend to use poly1d here rather than write out \"m*x+b\" and the higher-order equivalents, so my version of your code would look something like this: import numpy as np import matplotlib.pyplot as plt x = [1,2,3,4] y = [3,5,7,10] # 10, not 9, so the fit isn't perfect coef = np.polyfit(x,y,1) poly1d_fn = np.poly1d(coef) # poly1d_fn is now a function which takes in x and returns an estimate for y plt.plot(x,y, 'yo', x, poly1d_fn(x), '--k') #'--k'=black dashed line, 'yo' = yellow circle marker plt.xlim(0, 5) plt.ylim(0, 12)"} +{"question_id": 5710867, "score": 127, "creation_date": 1303179204, "tags": ["python", "unzip"], "instruction": "Downloading and unzipping a .zip file without writing to disk\n\nI have managed to get my first python script to work which downloads a list of .ZIP files from a URL and then proceeds to extract the ZIP files and writes them to disk. I am now at a loss to achieve the next step. My primary goal is to download and extract the zip file and pass the contents (CSV data) via a TCP stream. I would prefer not to actually write any of the zip or extracted files to disk if I could get away with it. Here is my current script which works but unfortunately has to write the files to disk. import urllib, urllister import zipfile import urllib2 import os import time import pickle # check for extraction directories existence if not os.path.isdir('downloaded'): os.makedirs('downloaded') if not os.path.isdir('extracted'): os.makedirs('extracted') # open logfile for downloaded data and save to local variable if os.path.isfile('downloaded.pickle'): downloadedLog = pickle.load(open('downloaded.pickle')) else: downloadedLog = {'key':'value'} # remove entries older than 5 days (to maintain speed) # path of zip files zipFileURL = \"http://www.thewebserver.com/that/contains/a/directory/of/zip/files\" # retrieve list of URLs from the webservers usock = urllib.urlopen(zipFileURL) parser = urllister.URLLister() parser.feed(usock.read()) usock.close() parser.close() # only parse urls for url in parser.urls: if \"PUBLIC_P5MIN\" in url: # download the file downloadURL = zipFileURL + url outputFilename = \"downloaded/\" + url # check if file already exists on disk if url in downloadedLog or os.path.isfile(outputFilename): print \"Skipping \" + downloadURL continue print \"Downloading \",downloadURL response = urllib2.urlopen(downloadURL) zippedData = response.read() # save data to disk print \"Saving to \",outputFilename output = open(outputFilename,'wb') output.write(zippedData) output.close() # extract the data zfobj = zipfile.ZipFile(outputFilename) for name in zfobj.namelist(): uncompressed = zfobj.read(name) # save uncompressed data to disk outputFilename = \"extracted/\" + name print \"Saving extracted file to \",outputFilename output = open(outputFilename,'wb') output.write(uncompressed) output.close() # send data via tcp stream # file successfully downloaded and extracted store into local log and filesystem log downloadedLog[url] = time.time(); pickle.dump(downloadedLog, open('downloaded.pickle', \"wb\" ))", "output": "My suggestion would be to use a StringIO object. They emulate files, but reside in memory. 
So you could do something like this: # get_zip_data() gets a zip archive containing 'foo.txt', reading 'hey, foo' import zipfile from StringIO import StringIO zipdata = StringIO() zipdata.write(get_zip_data()) myzipfile = zipfile.ZipFile(zipdata) foofile = myzipfile.open('foo.txt') print foofile.read() # output: \"hey, foo\" Or more simply (apologies to Vishal): myzipfile = zipfile.ZipFile(StringIO(get_zip_data())) for name in myzipfile.namelist(): [ ... ] In Python 3 use BytesIO instead of StringIO: import zipfile from io import BytesIO filebytes = BytesIO(get_zip_data()) myzipfile = zipfile.ZipFile(filebytes) for name in myzipfile.namelist(): [ ... ]"} +{"question_id": 15763394, "score": 127, "creation_date": 1364903549, "tags": ["python", "unit-testing", "testing", "mocking", "nose"], "instruction": "Mocking two functions with patch for a unit test\n\nI have a function I want to unit test contains calls two other functions. I am unsure how can I mock both functions at the same time properly using patch. I have provided an example of what I mean below. When I run nosetests, the tests pass but I feel that there must be a cleaner way to do this and I do not really Understand the piece regarding f.close()... The directory structure looks like this: program/ program/ data.py tests/ data_test.py data.py: import cPickle def write_out(file_path, data): f = open(file_path, 'wb') cPickle.dump(data, f) f.close() data_test.py: from mock import MagicMock, patch def test_write_out(): path = '~/collection' mock_open = MagicMock() mock_pickle = MagicMock() f_mock = MagicMock() with patch('__builtin__.open', mock_open): f = mock_open.return_value f.method.return_value = path with patch('cPickle.dump', mock_pickle): write_out(path, 'data') mock_open.assert_called_once_with('~/collection', 'wb') f.close.assert_any_call() mock_pickle.assert_called_once_with('data', f) Results: $ nosetests . ---------------------------------------------------------------------- Ran 1 test in 0.008s OK", "output": "You can simplify your test by using the patch decorator and nesting them like so (they are MagicMock objects by default): from unittest.mock import patch @patch('cPickle.dump') @patch('__builtin__.open') def test_write_out(mock_open, mock_pickle): path = '~/collection' f = mock_open.return_value f.method.return_value = path write_out(path, 'data') mock_open.assert_called_once_with('~/collection', 'wb') mock_pickle.assert_called_once_with('data', f) f.close.assert_any_call() Note that, as shown in the example above, the decorators are applied bottom-up in the order of the parameters. See the quick guide from the docs for official documentation of this. Calls to a MagicMock instance return a new MagicMock instance, so you can check that the returned value was called just like any other mocked object. In this case f is a MagicMock named 'open()' (try printing f)."} +{"question_id": 54307225, "score": 127, "creation_date": 1548156287, "tags": ["python", "machine-learning", "deep-learning", "pytorch", "concatenation"], "instruction": "`stack()` vs `cat()` in PyTorch\n\nOpenAI's REINFORCE and actor-critic example for reinforcement learning has the following code: REINFORCE: policy_loss = torch.cat(policy_loss).sum() actor-critic: loss = torch.stack(policy_losses).sum() + torch.stack(value_losses).sum() One is using torch.cat, the other uses torch.stack, for similar use cases. As far as my understanding goes, the doc doesn't give any clear distinction between them. 
I would be happy to know the differences between the functions.", "output": "stack Concatenates sequence of tensors along a new dimension. cat Concatenates the given sequence of seq tensors in the given dimension. So if A and B are of shape (3, 4): torch.cat([A, B], dim=0) will be of shape (6, 4) torch.stack([A, B], dim=0) will be of shape (2, 3, 4)"} +{"question_id": 10989005, "score": 127, "creation_date": 1339459313, "tags": ["python", "file"], "instruction": "Do I understand os.walk right?\n\nThe loop for root, dir, file in os.walk(startdir) works through these steps? for root in os.walk(startdir) for dir in root for files in dir get root of start dir : C:\\dir1\\dir2\\startdir get folders in C:\\dir1\\dir2\\startdir and return list of folders \"dirlist\" get files in the first dirlist item and return the list of files \"filelist\" as the first item of a list of filelists. move to the second item in dirlist and return the list of files in this folder \"filelist2\" as the second item of a list of filelists. etc. move to the next root in the folder tree and start from 2. etc. Right? Or does it just get all roots first, then all dirs second, and all files third?", "output": "os.walk returns a generator, that creates a tuple of values (current_path, directories in current_path, files in current_path). Every time the generator is called it will follow each directory recursively until no further sub-directories are available from the initial directory that walk was called upon. As such, os.walk('C:\\dir1\\dir2\\startdir').next()[0] # returns 'C:\\dir1\\dir2\\startdir' os.walk('C:\\dir1\\dir2\\startdir').next()[1] # returns all the dirs in 'C:\\dir1\\dir2\\startdir' os.walk('C:\\dir1\\dir2\\startdir').next()[2] # returns all the files in 'C:\\dir1\\dir2\\startdir' So import os.path .... for path, directories, files in os.walk('C:\\dir1\\dir2\\startdir'): if file in files: print('found %s' % os.path.join(path, file)) or this def search_file(directory = None, file = None): assert os.path.isdir(directory) for cur_path, directories, files in os.walk(directory): if file in files: return os.path.join(directory, cur_path, file) return None or if you want to look for file you can do this: import os def search_file(directory = None, file = None): assert os.path.isdir(directory) current_path, directories, files = os.walk(directory).next() if file in files: return os.path.join(directory, file) elif directories == '': return None else: for new_directory in directories: result = search_file(directory = os.path.join(directory, new_directory), file = file) if result: return result return None"} +{"question_id": 187621, "score": 127, "creation_date": 1223564102, "tags": ["python", "linux", "unix", "command-line", "autocomplete"], "instruction": "How to make a python, command-line program autocomplete arbitrary things NOT interpreter\n\nI am aware of how to setup autocompletion of python objects in the python interpreter (on unix). Google shows many hits for explanations on how to do this. Unfortunately, there are so many references to that it is difficult to find what I need to do, which is slightly different. I need to know how to enable, tab/auto completion of arbitrary items in a command-line program written in python. My specific use case is a command-line python program that needs to send emails. I want to be able to autocomplete email addresses (I have the addresses on disk) when the user types part of it (and optionally presses the TAB key). 
I do not need it to work on windows or mac, just linux.", "output": "Use Python's readline bindings. For example, import readline def completer(text, state): options = [i for i in commands if i.startswith(text)] if state < len(options): return options[state] else: return None readline.parse_and_bind(\"tab: complete\") readline.set_completer(completer) The official module docs aren't much more detailed, see the readline docs for more info."} +{"question_id": 196960, "score": 127, "creation_date": 1223884518, "tags": ["python", "arguments", "introspection"], "instruction": "Can you list the keyword arguments a function receives?\n\nI have a dict, which I need to pass key/values as keyword arguments.. For example.. d_args = {'kw1': 'value1', 'kw2': 'value2'} example(**d_args) This works fine, but if there are values in the d_args dict that are not accepted by the example function, it obviously dies.. Say, if the example function is defined as def example(kw2): This is a problem since I don't control either the generation of the d_args, or the example function.. They both come from external modules, and example only accepts some of the keyword-arguments from the dict.. Ideally I would just do parsed_kwargs = feedparser.parse(the_url) valid_kwargs = get_valid_kwargs(parsed_kwargs, valid_for = PyRSS2Gen.RSS2) PyRSS2Gen.RSS2(**valid_kwargs) I will probably just filter the dict, from a list of valid keyword-arguments, but I was wondering: Is there a way to programatically list the keyword arguments the a specific function takes?", "output": "A little nicer than inspecting the code object directly and working out the variables is to use the inspect module. >>> import inspect >>> def func(a,b,c=42, *args, **kwargs): pass >>> inspect.getargspec(func) (['a', 'b', 'c'], 'args', 'kwargs', (42,)) If you want to know if its callable with a particular set of args, you need the args without a default already specified. These can be got by: def get_required_args(func): args, varargs, varkw, defaults = inspect.getargspec(func) if defaults: args = args[:-len(defaults)] return args # *args and **kwargs are not required, so ignore them. Then a function to tell what you are missing from your particular dict is: def missing_args(func, argdict): return set(get_required_args(func)).difference(argdict) Similarly, to check for invalid args, use: def invalid_args(func, argdict): args, varargs, varkw, defaults = inspect.getargspec(func) if varkw: return set() # All accepted return set(argdict) - set(args) And so a full test if it is callable is : def is_callable_with_args(func, argdict): return not missing_args(func, argdict) and not invalid_args(func, argdict) (This is good only as far as python's arg parsing. Any runtime checks for invalid values in kwargs obviously can't be detected.)"} +{"question_id": 8475290, "score": 127, "creation_date": 1323697615, "tags": ["python", "subprocess", "stdin", "nuke"], "instruction": "How do I write to a Python subprocess' stdin?\n\nI'm trying to write a Python script that starts a subprocess, and writes to the subprocess stdin. I'd also like to be able to determine an action to be taken if the subprocess crashes. The process I'm trying to start is a program called nuke which has its own built-in version of Python which I'd like to be able to submit commands to, and then tell it to quit after the commands execute. 
So far I've worked out that if I start Python on the command prompt like and then start nuke as a subprocess then I can type in commands to nuke, but I'd like to be able to put this all in a script so that the master Python program can start nuke and then write to its standard input (and thus into its built-in version of Python) and tell it to do snazzy things, so I wrote a script that starts nuke like this: subprocess.call([\"C:/Program Files/Nuke6.3v5/Nuke6.3\", \"-t\", \"E:/NukeTest/test.nk\"]) Then nothing happens because nuke is waiting for user input. How would I now write to standard input? I'm doing this because I'm running a plugin with nuke that causes it to crash intermittently when rendering multiple frames. So I'd like this script to be able to start nuke, tell it to do something and then if it crashes, try again. So if there is a way to catch a crash and still be OK then that'd be great.", "output": "It might be better to use communicate: from subprocess import Popen, PIPE, STDOUT p = Popen(['myapp'], stdout=PIPE, stdin=PIPE, stderr=PIPE, text=True) stdout_data = p.communicate(input='data_to_write')[0] \"Better\", because of this warning: Use communicate() rather than .stdin.write, .stdout.read or .stderr.read to avoid deadlocks due to any of the other OS pipe buffers filling up and blocking the child process."} +{"question_id": 43274476, "score": 127, "creation_date": 1491557149, "tags": ["python", "subprocess"], "instruction": "Is there a way to check if a subprocess is still running?\n\nI'm launching a number of subprocesses with subprocess.Popen in Python. I'd like to check whether one such process has completed. I've found two ways of checking the status of a subprocess, but both seem to force the process to complete. One is using process.communicate() and printing the returncode, as explained here: checking status of process with subprocess.Popen in Python. Another is simply calling process.wait() and checking that it returns 0. Is there a way to check if a process is still running without waiting for it to complete if it is?", "output": "Ouestion: ... a way to check if a process is still running ... You can do it for instance: p = subprocess.Popen(... \"\"\" A None value indicates that the process hasn't terminated yet. \"\"\" poll = p.poll() if poll is None: # p.subprocess is alive Python \u00bb 3.6.1 Documentation popen-objects Tested with Python:3.4.2"} +{"question_id": 9595135, "score": 126, "creation_date": 1331088531, "tags": ["python", "math", "python-2.x", "sqrt"], "instruction": "Why does Python give the \"wrong\" answer for square root? What is integer division in Python 2?\n\nx = 16 sqrt = x**(.5) #returns 4 sqrt = x**(1/2) #returns 1 I know I can import math and use sqrt, but I'm looking for an answer to the above. What is integer division in Python 2? This behavior is fixed in Python 3.", "output": "In Python 2, sqrt=x**(1/2) does integer division. 1/2 == 0. So x(1/2) equals x(0), which is 1. It's not wrong, it's the right answer to a different question. If you want to calculate the square root without an import of the math module, you'll need to use x**(1.0/2) or x**(1/2.). One of the integers needs to be a floating number. 
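As a quick illustration (a minimal sketch; the comments describe Python 2 behaviour):

x = 16
print(x ** 0.5)        # 4.0 -- a float exponent always gives the square root
print(x ** (1.0 / 2))  # 4.0 -- making one operand a float forces float division
print(x ** (1 / 2))    # 1 under Python 2: 1/2 is integer division, so this is x ** 0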
Note: this is not the case in Python 3, where 1/2 would be 0.5 and 1//2 would instead be integer division."} +{"question_id": 55873174, "score": 126, "creation_date": 1556303375, "tags": ["python", "fastapi"], "instruction": "How do I return an image in FastAPI?\n\nUsing the python module FastAPI, I can't figure out how to return an image. In flask I would do something like this: @app.route(\"/vector_image\", methods=[\"POST\"]) def image_endpoint(): # img = ... # Create the image here return Response(img, mimetype=\"image/png\") what's the corresponding call in this module?", "output": "If you already have the bytes of the image in memory Return a fastapi.responses.Response with your custom content and media_type. You'll also need to muck with the endpoint decorator to get FastAPI to put the correct media type in the OpenAPI specification. @app.get( \"/image\", # Set what the media type will be in the autogenerated OpenAPI specification. # fastapi.tiangolo.com/advanced/additional-responses/#additional-media-types-for-the-main-response responses = { 200: { \"content\": {\"image/png\": {}} } }, # Prevent FastAPI from adding \"application/json\" as an additional # response media type in the autogenerated OpenAPI specification. # https://github.com/tiangolo/fastapi/issues/3258 response_class=Response ) def get_image() image_bytes: bytes = generate_cat_picture() # media_type here sets the media type of the actual response sent to the client. return Response(content=image_bytes, media_type=\"image/png\") See the Response documentation. If your image exists only on the filesystem Return a fastapi.responses.FileResponse. See the FileResponse documentation. Be careful with StreamingResponse Other answers suggest StreamingResponse. StreamingResponse is harder to use correctly, so I don't recommend it unless you're sure you can't use Response or FileResponse. In particular, code like this is pointless. It will not \"stream\" the image in any useful way. @app.get(\"/image\") def get_image() image_bytes: bytes = generate_cat_picture() # \u274c Don't do this. image_stream = io.BytesIO(image_bytes) return StreamingResponse(content=image_stream, media_type=\"image/png\") First of all, StreamingResponse(content=my_iterable) streams by iterating over the chunks provided by my_iterable. But when that iterable is a BytesIO, the chunks will be \\n-terminated lines, which won't make sense for a binary image. And even if the chunk divisions made sense, chunking is pointless here because we had the whole image_bytes bytes object available from the start. We may as well have just passed the whole thing into a Response from the beginning. We don't gain anything by holding data back from FastAPI. Second, StreamingResponse corresponds to HTTP chunked transfer encoding. (This might depend on your ASGI server, but it's the case for Uvicorn, at least.) And this isn't a good use case for chunked transfer encoding. Chunked transfer encoding makes sense when you don't know the size of your output ahead of time, and you don't want to wait to collect it all to find out before you start sending it to the client. That can apply to stuff like serving the results of slow database queries, but it doesn't generally apply to serving images. Unnecessary chunked transfer encoding can be harmful. For example, it means clients can't show progress bars when they're downloading the file. 
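Before the further-reading links below, here is a minimal sketch of the FileResponse route mentioned earlier (the path cat.png and the endpoint name are placeholders, not from the question):

from fastapi import FastAPI
from fastapi.responses import FileResponse

app = FastAPI()

@app.get("/image_from_disk")
def get_image_from_disk():
    # Serve a file that already exists on the filesystem; no manual
    # byte handling or streaming is needed.
    return FileResponse("cat.png", media_type="image/png")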
See: Content-Length header versus chunked encoding Is it a good idea to use Transfer-Encoding: chunked on static files?"} +{"question_id": 38511444, "score": 126, "creation_date": 1469124576, "tags": ["python", "download", "google-drive-api", "urllib2", "pydrive"], "instruction": "Python: download files from google drive using url\n\nI am trying to download files from google drive and all I have is the drive's URL. I have read about google API that talks about some drive_service and MedioIO, which also requires some credentials( mainly JSON file/OAuth). But I am unable to get any idea about how it is working. Also, tried urllib2.urlretrieve, but my case is to get files from the drive. Tried wget too but no use. Tried PyDrive library. It has good upload functions to drive but no download options. Any help will be appreciated. Thanks.", "output": "If by \"drive's url\" you mean the shareable link of a file on Google Drive, then the following might help: import sys import requests def download_file_from_google_drive(file_id, destination): URL = \"https://docs.google.com/uc?export=download&confirm=1\" session = requests.Session() response = session.get(URL, params={\"id\": file_id}, stream=True) token = get_confirm_token(response) if token: params = {\"id\": file_id, \"confirm\": token} response = session.get(URL, params=params, stream=True) save_response_content(response, destination) def get_confirm_token(response): for key, value in response.cookies.items(): if key.startswith(\"download_warning\"): return value return None def save_response_content(response, destination): CHUNK_SIZE = 32768 with open(destination, \"wb\") as f: for chunk in response.iter_content(CHUNK_SIZE): if chunk: # filter out keep-alive new chunks f.write(chunk) def main(): if len(sys.argv) >= 3: file_id = sys.argv[1] destination = sys.argv[2] else: file_id = \"TAKE_ID_FROM_SHAREABLE_LINK\" destination = \"DESTINATION_FILE_ON_YOUR_DISK\" print(f\"dowload {file_id} to {destination}\") download_file_from_google_drive(file_id, destination) if __name__ == \"__main__\": main() The snipped does not use pydrive, nor the Google Drive SDK, though. It uses the requests module (which is, somehow, an alternative to urllib2). When downloading large files from Google Drive, a single GET request is not sufficient. A second one is needed - see wget/curl large file from google drive."} +{"question_id": 19053707, "score": 126, "creation_date": 1380293286, "tags": ["python", "python-2.7", "camelcasing", "snakecasing"], "instruction": "Converting Snake Case to Lower Camel Case (lowerCamelCase)\n\nWhat would be a good way to convert from snake case (my_string) to lower camel case (myString) in Python 2.7? The obvious solution is to split by underscore, capitalize each word except the first one and join back together. However, I'm curious as to other, more idiomatic solutions or a way to use RegExp to achieve this (with some case modifier?)", "output": "def to_camel_case(snake_str): return \"\".join(x.capitalize() for x in snake_str.lower().split(\"_\")) def to_lower_camel_case(snake_str): # We capitalize the first letter of each component except the first one # with the 'capitalize' method and join them together. 
camel_string = to_camel_case(snake_str) return snake_str[0].lower() + camel_string[1:] Example: In [11]: to_lower_camel_case('snake_case') Out[11]: 'snakeCase' In [12]: to_lower_camel_case('SNAKE_Case') Out[12]: 'snakeCase' In [13]: to_lower_camel_case('SNAKE_CASE') Out[13]: 'snakeCase'"} +{"question_id": 21752399, "score": 126, "creation_date": 1392289580, "tags": ["python", "pandas"], "instruction": "Pandas dataframe total row\n\nI have a dataframe, something like: foo bar qux 0 a 1 3.14 1 b 3 2.72 2 c 2 1.62 3 d 9 1.41 4 e 3 0.58 and I would like to add a 'total' row to the end of dataframe: foo bar qux 0 a 1 3.14 1 b 3 2.72 2 c 2 1.62 3 d 9 1.41 4 e 3 0.58 5 total 18 9.47 I've tried to use the sum command but I end up with a Series, which although I can convert back to a Dataframe, doesn't maintain the data types: tot_row = pd.DataFrame(df.sum()).T tot_row['foo'] = 'tot' tot_row.dtypes: foo object bar object qux object I would like to maintain the data types from the original data frame as I need to apply other operations to the total row, something like: baz = 2*tot_row['qux'] + 3*tot_row['bar']", "output": "Update June 2022 pd.append is now deprecated. You could use pd.concat instead but it's probably easier to use df.loc['Total'] = df.sum(numeric_only=True), as Kevin Zhu commented. Or, better still, don't modify the data frame in place and keep your data separate from your summary statistics! Append a totals row with df.append(df.sum(numeric_only=True), ignore_index=True) The conversion is necessary only if you have a column of strings or objects. It's a bit of a fragile solution so I'd recommend sticking to operations on the dataframe, though. eg. baz = 2*df['qux'].sum() + 3*df['bar'].sum()"} +{"question_id": 2413522, "score": 126, "creation_date": 1268178816, "tags": ["python", "numpy", "statsmodels", "standard-deviation", "weighted"], "instruction": "Weighted standard deviation in NumPy\n\nnumpy.average() has a weights option, but numpy.std() does not. Does anyone have suggestions for a workaround?", "output": "How about the following short \"manual calculation\"? def weighted_avg_and_std(values, weights): \"\"\" Return the weighted average and standard deviation. They weights are in effect first normalized so that they sum to 1 (and so they must not all be 0). values, weights -- NumPy ndarrays with the same shape. \"\"\" average = numpy.average(values, weights=weights) # Fast and numerically precise: variance = numpy.average((values-average)**2, weights=weights) return (average, math.sqrt(variance))"} +{"question_id": 22263807, "score": 126, "creation_date": 1394242151, "tags": ["python", "matplotlib", "legend"], "instruction": "How is order of items in matplotlib legend determined?\n\nI am having to reorder items in a legend, when I don't think I should have to. I try: from pylab import * clf() ax=gca() ht=ax.add_patch(Rectangle((1,1),1,1,color='r',label='Top',alpha=.1)) h1=ax.bar(1,2,label='Middle') hb=ax.add_patch(Rectangle((1,1),1,1,color='k',label='Bottom',alpha=.11)) legend() show() and end up with Bottom above Middle. How can I get the right order? Is it not determined by creation order? Update: The following can be used to force the order. I think this may be the simplest way to do it, and that seems awkward. The question is what determines the original order? 
hh=[ht,h1,hb] legend([ht,h1.patches[0],hb],[H.get_label() for H in hh])", "output": "The order is deterministic, but part of the private guts so can be changed at any time, see the code here which goes to here and eventually here. The children are the artists that have been added, hence the handle list is sorted by order they were added (this is a change in behavior with mpl34 or mpl35). If you want to explicitly control the order of the elements in your legend then assemble a list of handlers and labels like you did in the your edit."} +{"question_id": 16125229, "score": 126, "creation_date": 1366492236, "tags": ["python", "python-2.7"], "instruction": "Last Key in Python Dictionary\n\nI am having difficulty figuring out what the syntax would be for the last key in a Python dictionary. I know that for a Python list, one may say this to denote the last: list[-1] I also know that one can get a list of the keys of a dictionary as follows: dict.keys() However, when I attempt to use the logical following code, it doesn't work: dict.keys(-1) It says that keys can't take any arguments and 1 is given. If keys can't take arguments, then how can I denote that I want the last key in the list? I am operating under the assumption that Python dictionaries are ordered in the order in which items are added to the dictionary with most recent item last. For this reason, I would like to access the last key in the dictionary. I am now told that the dictionary keys are not in order based on when they were added. How then would I be able to choose the most recently added key?", "output": "If insertion order matters, take a look at collections.OrderedDict: An OrderedDict is a dict that remembers the order that keys were first inserted. If a new entry overwrites an existing entry, the original insertion position is left unchanged. Deleting an entry and reinserting it will move it to the end. In [1]: from collections import OrderedDict In [2]: od = OrderedDict(zip('bar','foo')) In [3]: od Out[3]: OrderedDict([('b', 'f'), ('a', 'o'), ('r', 'o')]) In [4]: od.keys()[-1] Out[4]: 'r' In [5]: od.popitem() # also removes the last item Out[5]: ('r', 'o') Update: An OrderedDict is no longer necessary as dictionary keys are officially ordered in insertion order as of Python 3.7 (unofficially in 3.6). For these recent Python versions, you can instead just use list(my_dict)[-1] or list(my_dict.keys())[-1]."} +{"question_id": 41061457, "score": 126, "creation_date": 1481289655, "tags": ["python", "machine-learning", "neural-network", "deep-learning", "keras"], "instruction": "keras: how to save the training history attribute of the history object\n\nIn Keras, we can return the output of model.fit to a history as follows: history = model.fit(X_train, y_train, batch_size=batch_size, nb_epoch=nb_epoch, validation_data=(X_test, y_test)) Now, how to save the history attribute of the history object to a file for further uses (e.g. draw plots of acc or loss against epochs)?", "output": "What I use is the following: with open('/trainHistoryDict', 'wb') as file_pi: pickle.dump(history.history, file_pi) In this way I save the history as a dictionary in case I want to plot the loss or accuracy later on. Later, when you want to load the history again, you can use: with open('/trainHistoryDict', \"rb\") as file_pi: history = pickle.load(file_pi) Why choose pickle over json? The comment under this answer accurately states: [Storing the history as json] does not work anymore in tensorflow keras. 
I had issues with: TypeError: Object of type 'float32' is not JSON serializable. There are ways to tell json how to encode numpy objects, which you can learn about from this other question, so there's nothing wrong with using json in this case, it's just more complicated than simply dumping to a pickle file."} +{"question_id": 68673221, "score": 126, "creation_date": 1628195874, "tags": ["python", "docker", "pip"], "instruction": "Why do I still get a warning about \"Running pip as the 'root' user\" inside a Docker container?\n\nI made a simple image of my python Django app in Docker, using this Dockerfile: FROM python:3.8-slim-buster WORKDIR /app COPY requirements.txt requirements.txt RUN pip install -r requirements.txt COPY . . CMD [\"python\", \"manage.py\", \"runserver\", \"0.0.0.0:8000\"] And building it using: sudo docker build -t my_app:1 . But after building the container (on Ubuntu 20.04) I get this warning: WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead Why does it throw this warning if I am installing Python requirements inside my image? Can I actually break my system this way?", "output": "The way your container is built doesn't add a user, so everything is done as root. You could create a user and install to that users's home directory by doing something like this; FROM python:3.8.3-alpine RUN pip install --upgrade pip RUN adduser -D myuser USER myuser WORKDIR /home/myuser COPY --chown=myuser:myuser requirements.txt requirements.txt RUN pip install --user -r requirements.txt ENV PATH=\"/home/myuser/.local/bin:${PATH}\" COPY --chown=myuser:myuser . . CMD [\"python\", \"manage.py\", \"runserver\", \"0.0.0.0:8000\"]"} +{"question_id": 26080872, "score": 126, "creation_date": 1411868874, "tags": ["python", "session", "flask"], "instruction": "secret key not set in flask session, using the Flask-Session extension\n\nRight now I am using a flask 3rd party library Flask-Session and I am having no luck getting a session working. When I connect to my site, I get the following error: RuntimeError: the session is unavailable because no secret key was set. Set the secret_key on the application to something unique and secret. Below is my server code. from flask import Flask, session from flask.ext.session import Session SESSION_TYPE = 'memcache' app = Flask(__name__) sess = Session() nextId = 0 def verifySessionId(): global nextId if not 'userId' in session: session['userId'] = nextId nextId += 1 sessionId = session['userId'] print (\"set userid[\" + str(session['userId']) + \"]\") else: print (\"using already set userid[\" + str(session['userId']) + \"]\") sessionId = session.get('userId', None) return sessionId @app.route(\"/\") def hello(): userId = verifySessionId() print(\"User id[\" + str(userId) + \"]\") return str(userId) if __name__ == \"__main__\": app.secret_key = 'super secret key' sess.init_app(app) app.debug = True app.run() As you can see, I do set the app secret key. What am I doing wrong? Are there other session options? 
Other info: Running Python 2.7 on Linux Mint Full paste: Traceback (most recent call last): File \"/home/sean/code/misc/hangman/venv/lib/python2.7/site-packages/flask/app.py\", line 1836, in __call__ return self.wsgi_app(environ, start_response) File \"/home/sean/code/misc/hangman/venv/lib/python2.7/site-packages/flask/app.py\", line 1820, in wsgi_app response = self.make_response(self.handle_exception(e)) File \"/home/sean/code/misc/hangman/venv/lib/python2.7/site-packages/flask/app.py\", line 1403, in handle_exception reraise(exc_type, exc_value, tb) File \"/home/sean/code/misc/hangman/venv/lib/python2.7/site-packages/flask/app.py\", line 1817, in wsgi_app response = self.full_dispatch_request() File \"/home/sean/code/misc/hangman/venv/lib/python2.7/site-packages/flask/app.py\", line 1477, in full_dispatch_request rv = self.handle_user_exception(e) File \"/home/sean/code/misc/hangman/venv/lib/python2.7/site-packages/flask/app.py\", line 1381, in handle_user_exception reraise(exc_type, exc_value, tb) File \"/home/sean/code/misc/hangman/venv/lib/python2.7/site-packages/flask/app.py\", line 1475, in full_dispatch_request rv = self.dispatch_request() File \"/home/sean/code/misc/hangman/venv/lib/python2.7/site-packages/flask/app.py\", line 1461, in dispatch_request return self.view_functions[rule.endpoint](**req.view_args) File \"/home/sean/code/misc/session/sessiontest.py\", line 27, in hello userId = verifySessionId() File \"/home/sean/code/misc/session/sessiontest.py\", line 16, in verifySessionId session['userId'] = nextId File \"/home/sean/code/misc/hangman/venv/lib/python2.7/site-packages/werkzeug/local.py\", line 341, in __setitem__ self._get_current_object()[key] = value File \"/home/sean/code/misc/hangman/venv/lib/python2.7/site-packages/flask/sessions.py\", line 126, in _fail raise RuntimeError('the session is unavailable because no secret ' RuntimeError: the session is unavailable because no secret key was set. Set the secret_key on the application to something unique and secret.", "output": "In your case the exception is raised by the NullSessionInterface session implementation, which is the default session type when you use Flask-Session. That's because you don't ever actually give the SESSION_TYPE configuration to Flask; it is not enough to set it as a global in your module. The Flask-Session quickstart example code does set a global, but then uses the current module as a configuration object by calling app.config.from_object(__name__). This default doesn't make much sense with Flask 0.10 or newer; NullSession may have made sense with Flask 0.8 or 0.9, but in current version the flask.session.NullSession class is used as an error signal. In your case it gives you the wrong error message now. Set the SESSION_TYPE configuration option to something else. Pick one of redis, memcached, filesystem or mongodb, and make sure to set it in app.config (directly or via the various Config.from_* methods). For a quick test, setting it to filesystem is easiest; there is enough default configuration there to have that work without additional dependencies: if __name__ == \"__main__\": # Quick test configuration. Please use proper Flask configuration options # in production settings, and use a separate file or environment variables # to manage the secret key! app.secret_key = 'super secret key' app.config['SESSION_TYPE'] = 'filesystem' sess.init_app(app) app.debug = True app.run() If you see this error and you are not using Flask-Session, then something has gone wrong with setting the secret. 
If you are setting app.config['SECRET_KEY'] or app.secret_key in a if __name__ == \"__main__\": guard like above and you get this error, then you are probably running your Flask app via a WSGI server that imports your Flask project as a module, and the __name__ == \"__main__\" block is never run. It is always better to manage configuration for Flask apps in a separate file, anyway."} +{"question_id": 13588920, "score": 126, "creation_date": 1354034544, "tags": ["python", "matplotlib", "legend"], "instruction": "Stop matplotlib repeating labels in legend\n\nHere is a very simplified example: xvalues = [2,3,4,6] for x in xvalues: plt.axvline(x,color='b',label='xvalues') plt.legend() The legend will now show 'xvalues' as a blue line 4 times in the legend. Is there a more elegant way of fixing this than the following? for i,x in enumerate(xvalues): if not i: plt.axvline(x,color='b',label='xvalues') else: plt.axvline(x,color='b')", "output": "plt.legend takes as parameters A list of axis handles which are Artist objects A list of labels which are strings These parameters are both optional defaulting to plt.gca().get_legend_handles_labels(). You can remove duplicate labels by putting them in a dictionary before calling legend. This is because dicts can't have duplicate keys. For example: For Python versions < 3.7 from collections import OrderedDict import matplotlib.pyplot as plt handles, labels = plt.gca().get_legend_handles_labels() by_label = OrderedDict(zip(labels, handles)) plt.legend(by_label.values(), by_label.keys()) For Python versions > 3.7 As of Python 3.7, dictionaries retain input order by default. Thus, there is no need for OrderedDict form the collections module. import matplotlib.pyplot as plt handles, labels = plt.gca().get_legend_handles_labels() by_label = dict(zip(labels, handles)) plt.legend(by_label.values(), by_label.keys()) Docs for plt.legend"} +{"question_id": 1811691, "score": 126, "creation_date": 1259386610, "tags": ["python", "executable"], "instruction": "Running an outside program (executable) in Python?\n\nI just started working on Python and have been trying to run an outside executable from Python. I have an executable for a program written in Fortran. Let\u2019s say the name for the executable is flow.exe, and my executable is located in C:\\Documents and Settings\\flow_model. I tried both os.system and popen commands, but so far, I couldn't make it work. The following code seems like it opens the command window, but it wouldn't execute the model. # Import system modules import sys, string, os, arcgisscripting os.system(\"C:/Documents and Settings/flow_model/flow.exe\") How can I fix this?", "output": "Those whitespaces can really be a bother. Try os.chdir('C:/Documents\\ and\\ Settings/') followed by relative paths for os.system, subprocess methods, or whatever... If best-effort attempts to bypass the whitespaces-in-path hurdle keep failing, then my next best suggestion is to avoid having blanks in your crucial paths. Couldn't you make a blanks-less directory, copy the crucial .exe file there, and try that? 
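Alternatively, a minimal sketch using subprocess with an argument list, which avoids shell parsing of the spaces altogether (the path is the one from the question):

import subprocess

# Passing the command as a list bypasses the shell, so spaces in the
# path do not need any escaping.
subprocess.call([r"C:\Documents and Settings\flow_model\flow.exe"])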
Are those havoc-wrecking space absolutely essential to your well-being...?"} +{"question_id": 6890170, "score": 126, "creation_date": 1312124313, "tags": ["python", "list"], "instruction": "How to find the last occurrence of an item in a Python list\n\nSay I have this list: li = [\"a\", \"b\", \"a\", \"c\", \"x\", \"d\", \"a\", \"6\"] As far as help showed me, there is not a builtin function that returns the last occurrence of a string (like the reverse of index). So basically, how can I find the last occurrence of \"a\" in the given list?", "output": "If you are actually using just single letters like shown in your example, then str.rindex would work handily. This raises a ValueError if there is no such item, the same error class as list.index would raise. Demo: >>> li = [\"a\", \"b\", \"a\", \"c\", \"x\", \"d\", \"a\", \"6\"] >>> ''.join(li).rindex('a') 6 For the more general case you could use list.index on the reversed list: >>> len(li) - 1 - li[::-1].index('a') 6 The slicing here creates a copy of the entire list. That's fine for short lists, but for the case where li is very long, it may be more efficient to use a reverse iteration and avoid the copy: def list_rindex(li, x): for i in reversed(range(len(li))): if li[i] == x: return i raise ValueError(\"{} is not in list\".format(x)) One-liner version: next(i for i in reversed(range(len(li))) if li[i] == 'a')"} +{"question_id": 4558879, "score": 126, "creation_date": 1293666297, "tags": ["python", "django", "logging", "python-logging"], "instruction": "Python/Django: log to console under runserver, log to file under Apache\n\nHow can I send trace messages to the console (like print) when I'm running my Django app under manage.py runserver, but have those messages sent to a log file when I'm running the app under Apache? I reviewed Django logging and although I was impressed with its flexibility and configurability for advanced uses, I'm still stumped with how to handle my simple use-case.", "output": "Text printed to stderr will show up in httpd's error log when running under mod_wsgi. You can either use print directly, or use logging instead. python 3: print(\"Goodbye cruel world!\", file=sys.stderr) python 2: print >>sys.stderr, 'Goodbye, cruel world!'"} +{"question_id": 48881196, "score": 126, "creation_date": 1519116959, "tags": ["python", "python-3.x", "string", "f-string"], "instruction": "How can I split up a long f-string in Python?\n\nI am getting a line too long PEP 8 E501 issue. f'Leave Request created successfully. Approvers sent the request for approval: {leave_approver_list}' I tried using a multi-line string, but that brings in a \\n, which breaks my test: f'''Leave Request created successfully. Approvers sent the request for approval: {leave_approver_list}''' How can I keep it single line and pass PEP 8 linting?", "output": "Use parentheses and string literal concatenation: msg = ( f'Leave Request created successfully. ' f'Approvers sent the request for approval: {leave_approver_list}' ) Note, the first literal doesn't need an f, but I include it for consistency/readability."} +{"question_id": 32204631, "score": 126, "creation_date": 1440507417, "tags": ["python", "pandas", "datetime", "indexing", "type-conversion"], "instruction": "How to convert string to datetime format in pandas?\n\nI have a column I_DATE of type string (object) in a dataframe called train as show below. 
I_DATE 28-03-2012 2:15:00 PM 28-03-2012 2:17:28 PM 28-03-2012 2:50:50 PM How to convert I_DATE from string to datetime format & specify the format of input string. Also, how to filter rows based on a range of dates in pandas?", "output": "Use to_datetime. There is no need to specify the format in this case since the parser is able to figure it out. In [51]: pd.to_datetime(df['I_DATE']) Out[51]: 0 2012-03-28 14:15:00 1 2012-03-28 14:17:28 2 2012-03-28 14:50:50 Name: I_DATE, dtype: datetime64[ns] To access the date/day/time component use the dt accessor: In [54]: df['I_DATE'].dt.date Out[54]: 0 2012-03-28 1 2012-03-28 2 2012-03-28 dtype: object In [56]: df['I_DATE'].dt.time Out[56]: 0 14:15:00 1 14:17:28 2 14:50:50 dtype: object You can use strings to filter as an example: In [59]: df = pd.DataFrame({'date':pd.date_range(start = dt.datetime(2015,1,1), end = dt.datetime.now())}) df[(df['date'] > '2015-02-04') & (df['date'] < '2015-02-10')] Out[59]: date 35 2015-02-05 36 2015-02-06 37 2015-02-07 38 2015-02-08 39 2015-02-09"} +{"question_id": 4930524, "score": 126, "creation_date": 1297149434, "tags": ["python", "linux", "graphics", "matplotlib"], "instruction": "How can I set the matplotlib 'backend'?\n\nI am new user of matplotlib, my platform is Ubuntu 10.04 Python 2.6.5 This is my code import matplotlib matplotlib.use('Agg') import matplotlib.pyplot as plt plt.plot([1,2,3]) The error is: /usr/local/lib/python2.6/dist-packages/matplotlib/backends/__init__.py:41: UserWarning: Your currently selected backend, 'agg' does not support show(). Please select a GUI backend in your matplotlibrc file ('/usr/local/lib/python2.6/dist-packages/matplotlib/mpl-data/matplotlibrc') or with matplotlib.use() (backend, matplotlib.matplotlib_fname())) I installed the Anti-Grain Geometry library apt-get install libagg but it is doesn't work. I tried to use other argument of backend like 'GTK' and 'TkAgg'. I installed python-gtk2-dev package, but still the error is below. Can anyone tell me an executable backend argument and its dependency library? Here is the error: >>> matplotlib.use('GTK') >>> import matplotlib.pyplot as plt Traceback (most recent call last): File \"\", line 1, in File \"/usr/local/lib/python2.6/dist-packages/matplotlib/pyplot.py\", line 95, in new_figure_manager, draw_if_interactive, show = pylab_setup() File \"/usr/local/lib/python2.6/dist-packages/matplotlib/backends/__init__.py\", line 25, in pylab_setup globals(),locals(),[backend_name]) File \"/usr/local/lib/python2.6/dist-packages/matplotlib/backends/backend_gtk.py\", line 28, in from matplotlib.backends.backend_gdk import RendererGDK, FigureCanvasGDK File \"/usr/local/lib/python2.6/dist-packages/matplotlib/backends/backend_gdk.py\", line 29, in from matplotlib.backends._backend_gdk import pixbuf_get_pixels_array ImportError: No module named _backend_gdk", "output": "Your currently selected backend, 'agg' does not support show(). AGG backend is for writing to file, not for rendering in a window. See the backend FAQ at the matplotlib web site. ImportError: No module named _backend_gdk For the second error, maybe your matplotlib distribution is not compiled with GTK support, or you miss the PyGTK package. Try to install it. Do you call the show() method inside a terminal or application that has access to a graphical environment? 
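If writing the figure to a file is acceptable, a minimal sketch that works headlessly with the Agg backend (the output filename is an assumption):

import matplotlib
matplotlib.use('Agg')            # file-writing backend, no GUI required
import matplotlib.pyplot as plt

plt.plot([1, 2, 3])
plt.savefig('plot.png')          # with Agg, save the figure instead of calling show()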
Try other GUI backends, in this order: TkAgg wxAgg Qt5Agg Qt4Agg"} +{"question_id": 15107714, "score": 126, "creation_date": 1361955008, "tags": ["python", "subprocess", "ipc"], "instruction": "wait process until all subprocess finish?\n\nI have a main process which creates two or more sub processes, I want main process to wait until all sub processes finish their operations and exits? # main_script.py p1 = subprocess.Popen(['python script1.py']) p2 = subprocess.Popen(['python script2.py']) ... #wait main process until both p1, p2 finish ...", "output": "A Popen object has a .wait() method exactly defined for this: to wait for the completion of a given subprocess (and, besides, for retuning its exit status). If you use this method, you'll prevent that the process zombies are lying around for too long. (Alternatively, you can use subprocess.call() or subprocess.check_call() for calling and waiting. If you don't need IO with the process, that might be enough. But probably this is not an option, because your if the two subprocesses seem to be supposed to run in parallel, which they won't with (call()/check_call().) If you have several subprocesses to wait for, you can do exit_codes = [p.wait() for p in p1, p2] (or maybe exit_codes = [p.wait() for p in (p1, p2)] for syntactical reasons) which returns as soon as all subprocesses have finished. You then have a list of return codes which you maybe can evaluate."} +{"question_id": 50155464, "score": 126, "creation_date": 1525351645, "tags": ["python", "pytest", "python-packaging"], "instruction": "Using pytest with a src layer\n\npytest recommends including an additional directory to separate the source code within a project: my_package \u251c\u2500\u2500 src # <-- no __init__.py on this layer \u2502 \u2514\u2500\u2500 my_package \u2502 \u251c\u2500\u2500 __init__.py \u2502 \u2514\u2500\u2500 util_module \u2502 \u251c\u2500\u2500 __init__.py \u2502 \u2514\u2500\u2500 utils.py \u2514\u2500\u2500 tests \u251c\u2500\u2500 __init__.py \u2514\u2500\u2500 test_util_module \u251c\u2500\u2500 __init__.py \u2514\u2500\u2500 test_utils.py Sadly, they say nothing[1] about how imports in the test code should work in such a case, which work for my IDE just fine in this naive example[2], but causes the following error with pytest: ~/my_package$ pytest ====================== test session starts ====================== platform linux -- Python 3.6.4, pytest-3.5.1, py-1.5.3, pluggy-0.6.0 rootdir: /home/user/workspace/my_package, inifile: collected 0 items / 1 errors ============================ ERRORS ============================= ___ ERROR collecting tests/test_util_module/test_utils.py ___ ImportError while importing test module '/home/user/workspace/my_package/tests/test_util_module/test_utils.py'. Hint: make sure your test modules/packages have valid Python names. Traceback: tests/test_util_module/test_utils.py:1: in from test.test_module.some_file import starify E ModuleNotFoundError: No module named 'my_package.util_module' !!!! Interrupted: 1 errors during collection !!!!! I can fix the issue by changing the import of the test to from src.my_package.util_module.utils import starify but then my IDE complaints about the src part being redundant, so I'd like to keep it out. [1]: Not the case any more. As of version 3.7.3, pytest recommends the editable install also featured in @hoefling's answer at the top of its good practices. 
[2]: Setup is virtualenv env -p python3.6; source env/bin/activate; pip install pytest", "output": "Recommended approach for pytest>=7: use the pythonpath setting Recently, pytest has added a new core plugin that supports sys.path modifications via the pythonpath configuration value. The solution is thus much simpler now and doesn't require any workarounds anymore: pyproject.toml example: [tool.pytest.ini_options] pythonpath = [ \"src\" ] pytest.ini example: [pytest] pythonpath = src The path entries are calculated relative to the rootdir, thus the src entry adds path/to/project/src directory to sys.path in this case. Multiple path entries are also allowed: for a layout repo/ \u251c\u2500\u2500 src/ | \u2514\u2500\u2500 lib.py \u251c\u2500\u2500 src2/ | \u2514\u2500\u2500 lib2.py \u2514\u2500\u2500 tests \u2514\u2500\u2500 test_lib.py the configuration [tool.pytest.ini_options] pythonpath = [ \"src\", \"src2\", ] or [pytest] pythonpath = src src2 will add both lib and lib2 modules to sys.path, so import lib import lib2 will both work. Original answer Adjusting the PYTHONPATH (as suggested in the comments) is one possibility to solve the import issue. Another is adding an empty conftest.py file in the src directory: $ touch src/conftest.py and pytest will add src to sys.path. This is a simple way to trick pytest into adding codebase to sys.path. However, the src layout is usually selected when you intend to build a distribution, e.g. providing a setup.py with (in this case) explicitly specifying the root package dir: from setuptools import find_packages, setup setup( ... package_dir={'': 'src'}, packages=find_packages(where='src'), ... ) and installing the package in the development mode (via python setup.py develop or pip install --editable .) while you're still developing it. This way, your package my_package is correctly integrated in the Python's site packages structure and there's no need to fiddle with PYTHONPATH."} +{"question_id": 4628290, "score": 126, "creation_date": 1294420804, "tags": ["python", "list", "zip", "slice", "idioms"], "instruction": "Pairs from single list\n\nOften enough, I've found the need to process a list by pairs. I was wondering which would be the pythonic and efficient way to do it, and found this on Google: pairs = zip(t[::2], t[1::2]) I thought that was pythonic enough, but after a recent discussion involving idioms versus efficiency, I decided to do some tests: import time from itertools import islice, izip def pairs_1(t): return zip(t[::2], t[1::2]) def pairs_2(t): return izip(t[::2], t[1::2]) def pairs_3(t): return izip(islice(t,None,None,2), islice(t,1,None,2)) A = range(10000) B = xrange(len(A)) def pairs_4(t): # ignore value of t! t = B return izip(islice(t,None,None,2), islice(t,1,None,2)) for f in pairs_1, pairs_2, pairs_3, pairs_4: # time the pairing s = time.time() for i in range(1000): p = f(A) t1 = time.time() - s # time using the pairs s = time.time() for i in range(1000): p = f(A) for a, b in p: pass t2 = time.time() - s print t1, t2, t2-t1 These were the results on my computer: 1.48668909073 2.63187503815 1.14518594742 0.105381965637 1.35109519958 1.24571323395 0.00257992744446 1.46182489395 1.45924496651 0.00251388549805 1.70076990128 1.69825601578 If I'm interpreting them correctly, that should mean that the implementation of lists, list indexing, and list slicing in Python is very efficient. It's a result both comforting and unexpected. Is there another, \"better\" way of traversing a list in pairs? 
Note that if the list has an odd number of elements then the last one will not be in any of the pairs. Which would be the right way to ensure that all elements are included? I added these two suggestions from the answers to the tests: def pairwise(t): it = iter(t) return izip(it, it) def chunkwise(t, size=2): it = iter(t) return izip(*[it]*size) These are the results: 0.00159502029419 1.25745987892 1.25586485863 0.00222492218018 1.23795199394 1.23572707176 Results so far Most pythonic and very efficient: pairs = izip(t[::2], t[1::2]) Most efficient and very pythonic: pairs = izip(*[iter(t)]*2) It took me a moment to grok that the first answer uses two iterators while the second uses a single one. To deal with sequences with an odd number of elements, the suggestion has been to augment the original sequence adding one element (None) that gets paired with the previous last element, something that can be achieved with itertools.izip_longest(). Finally Note that, in Python 3.x, zip() behaves as itertools.izip(), and itertools.izip() is gone.", "output": "My favorite way to do it: def pairwise(t): it = iter(t) return zip(it,it) # for \"pairs\" of any length def chunkwise(t, size=2): it = iter(t) return zip(*[it]*size) When you want to pair all elements you obviously might need a fillvalue: from itertools import izip_longest def blockwise(t, size=2, fillvalue=None): it = iter(t) return izip_longest(*[it]*size, fillvalue=fillvalue) With Python 3, itertools.izip is now simply zip .. to work with an older Python, use from itertools import izip as zip"} +{"question_id": 50714316, "score": 126, "creation_date": 1528269289, "tags": ["python", "logging", "python-logging"], "instruction": "How to use logging.getLogger(__name__) in multiple modules\n\nFrom the logging howto for Python 2.7 (my emphasis): A good convention to use when naming loggers is to use a module-level logger, in each module which uses logging, named as follows: logger = logging.getLogger(__name__) This means that logger names track the package/module hierarchy, and it\u2019s intuitively obvious where events are logged just from the logger name. Sounds like good advice. Now, the logging cookbook provides an example for multiple modules, which uses hard-coded logger names instead of the __name__ constant. In the \"main module\" from the example we find logger = logging.getLogger('spam_application') and in the \"auxiliary module\" we find module_logger = logging.getLogger('spam_application.auxiliary') I copied this example verbatim into a package folder with the following structure: cookbook-example |- __init__.py |- main_module.py |- auxiliary_module.py This runs without issue, producing the expected logging output from both the main module and the auxiliary module, but here's the thing: If I now replace the hard-coded logger names by the __name__ constant, as recommended by the logging howto, the cookbook example breaks down: I only get logging messages from the main module, but nothing from the auxiliary module. I must be missing something obvious. Any ideas what I am doing wrong? Note: There are a lot of very similar questions and related answers, e.g.: 1, 2, 3, 4, 5, 6, and many more. However, none of those appear to address this specific question. --Edit-- Here's a minimal example based on the cookbook example, with the explicit name strings replaced by __name__. 
main_module.py import logging import auxiliary_module # create and configure main logger logger = logging.getLogger(__name__) logger.setLevel(logging.DEBUG) # create console handler with a higher log level handler = logging.StreamHandler() handler.setLevel(logging.DEBUG) # create formatter and add it to the handler formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s') handler.setFormatter(formatter) # add the handler to the logger logger.addHandler(handler) logger.info('message from main module') auxiliary_module.some_function() auxiliary_module.py import logging # create logger module_logger = logging.getLogger(__name__) def some_function(): module_logger.info('message from auxiliary module')", "output": "As pointed out by @shmee in this answer, the logger hierarchy must be defined explicitly in the logger name, using dot-notation. That is, if the logger name in main_module.py is e.g. 'a', then the logger name in auxiliary_module.py must be 'a.b' (not just 'b'), in order for it to inherit the configuration of logger 'a'. This is also mentioned in the getLogger() documentation. However, this should be taken care of automatically when using __name__, as noted in the logging how-to: This means that logger names track the package/module hierarchy, and it\u2019s intuitively obvious where events are logged just from the logger name. The thing is, for this to work, you need to use __name__ in the correct way, and I did not do that. The problem in my example is in the organization of the files in the cookbook-example package folder: Both the main module and the auxiliary module are at the same level (i.e. in the same folder). So, as explained here, the __name__ for the main module will then be '__main__' (as it is the top-level script), and the __name__ for the auxiliary module will be 'auxiliary_module' (i.e. the filename), NOT '__main__.auxiliary_module'. As a result, the logger in the auxiliary module will be a child of the root logger, not a child of the '__main__' logger, and it will thus inherit the root logger configuration (which still has the default logging level WARNING) instead of the configuration specified in the main module. So, to make the example work, we have several options: Replace getLogger(__name__) in the main module by getLogger(). This will apply the config to the root logger and therefore also to the auxiliary module logger, as suggested by @shmee. Replace getLogger(__name__) in the auxiliary module by getLogger('__main__.' + __name__). The result will be equivalent to the original cookbook-example (except that the main logger is now called '__main__' instead of 'spam_application')."} +{"question_id": 9875964, "score": 125, "creation_date": 1332779334, "tags": ["python", "math"], "instruction": "How can I convert radians to degrees with Python?\n\nIn the math module, I could only find cos/sin/tan/acos/asin/atan. These take the argument, respectively return the answer, in radians. How can I use degrees instead? Here's my code: import math x = math.cos(1) y = x * 180 / math.pi print(y) 30.9570417874 My calculator, on deg, gives me: cos(1) 0.9998476...", "output": "Python includes two functions in the math package; radians converts degrees to radians, and degrees converts radians to degrees. To match the output of your calculator you need: >>> math.cos(math.radians(1)) 0.9998476951563913 Note that all of the trig functions convert between an angle and the ratio of two sides of a triangle. 
cos, sin, and tan take an angle in radians as input and return the ratio; acos, asin, and atan take a ratio as input and return an angle in radians. You only convert the angles, never the ratios."} +{"question_id": 15121093, "score": 125, "creation_date": 1361994763, "tags": ["python", "sql", "django"], "instruction": "Django: Adding \"NULLS LAST\" to query\n\nI would like to sort a model by using Postgresql's \"NULLS LAST\" option. How could it be done? I tried something like MyModel.objects.all().extra(order_by=('-price', 'NULLS LAST')) But I get \"Cannot resolve keyword 'NULLS LAST' into field\"", "output": "from django.db.models import F MyModel.objects.all().order_by(F('price').desc(nulls_last=True)) This functionality has been added to Django 1.11. https://docs.djangoproject.com/en/dev/releases/1.11/ Added the nulls_first and nulls_last parameters to Expression.asc() and desc() to control the ordering of null values. Reference for Django 3.1: https://docs.djangoproject.com/en/3.1/ref/models/expressions/#using-f-to-sort-null-values"} +{"question_id": 20474549, "score": 125, "creation_date": 1386603634, "tags": ["python", "polygon", "shapely"], "instruction": "Extract points/coordinates from a polygon in Shapely\n\nHow do you get/extract the points that define a shapely polygon? Thanks! Example of a shapely polygon from shapely.geometry import Polygon # Create polygon from lists of points x = [list of x vals] y = [list of y vals] polygon = Polygon(x,y)", "output": "The trick is to use a combination of the Polygon class methods: from shapely.geometry import Polygon # Create polygon from lists of points x = [0.0, 0.0, 1.0, 1.0, 0.0] y = [0.0, 1.0, 1.0, 0.0, 0.0] poly = Polygon(zip(x,y)) # Extract the point values that define the perimeter of the polygon xx, yy = poly.exterior.coords.xy # Note above return values are of type `array.array` assert x == xx.tolist() assert y == yy.tolist() If you would like them as coordinate pairs assert tuple(poly.exterior.coords) == tuple(zip(x,y)) or as a numpy array assert np.array_equal(np.array(poly.exterior.coords), np.asarray(tuple(zip(x,y))))"} +{"question_id": 12353288, "score": 125, "creation_date": 1347285642, "tags": ["python", "json", "simplejson"], "instruction": "Getting values from JSON using Python\n\nWhile I am trying to retrieve values from JSON string, it gives me an error: data = json.loads('{\"lat\":444, \"lon\":555}') return data[\"lat\"] But, if I iterate over the data, it gives me the elements (lat and lon), but not the values: data = json.loads('{\"lat\":444, \"lon\":555}') ret = '' for j in data: ret = ret + ' ' + j return ret Which returns: lat lon What do I need to do to get the values of lat and lon? (444 and 555)", "output": "If you want to iterate over both keys and values of the dictionary, do this: for key, value in data.items(): print(key, value)"} +{"question_id": 10739843, "score": 125, "creation_date": 1337870867, "tags": ["python", "pep8"], "instruction": "How should I format a long url in a python comment and still be PEP8 compliant\n\nIn a block comment, I want to reference a URL that is over 80 characters long. What is the preferred convention for displaying this URL? I know bit.ly is an option, but the URL itself is descriptive. 
Shortening it and then having a nested comment describing the shortened URL seems like a crappy solution.", "output": "Don't break the url: Some other good reasons to ignore a particular guideline: When applying the guideline would make the code less readable, even for someone who is used to reading code that follows this PEP. ... Source: # A Foolish Consistency is the Hobgoblin of Little Minds [1] # [1]: http://www.python.org/dev/peps/pep-0008/#a-foolish-consistency-is-the-hobgoblin-of-little-minds You can use the # noqa at the end of the line to stop PEP8/pycodestyle/Flake8 from running that check. Should also avoid warnings in your IDE. # [1]: http://www.python.org/dev/peps/pep-0008/#a-foolish-consistency-is-the-hobgoblin-of-little-minds # noqa"} +{"question_id": 4209641, "score": 125, "creation_date": 1290031629, "tags": ["python", "package", "python-import"], "instruction": "Absolute vs. explicit relative import of Python module\n\nI'm wondering about the preferred way to import packages in a Python application. I have a package structure like this: project.app1.models project.app1.views project.app2.models project.app1.views imports project.app1.models and project.app2.models. There are two ways to do this that come to mind. With absolute imports: import A.A import A.B.B or with explicit relative imports, as introduced in Python 2.5 with PEP 328: # explicit relative from .. import A from . import B What is the most pythonic way to do this?", "output": "Absolute imports. From PEP 8: Relative imports for intra-package imports are highly discouraged. Always use the absolute package path for all imports. Even now that PEP 328 [7] is fully implemented in Python 2.5, its style of explicit relative imports is actively discouraged; absolute imports are more portable and usually more readable. Explicit relative imports are a nice language feature (I guess), but they're not nearly as explicit as absolute imports. The more readable form is: import A.A import A.B.B especially if you import several different namespaces. If you look at some well written projects/tutorials that include imports from within packages, they usually follow this style. The few extra keystrokes you take to be more explicit will save others (and perhaps you) plenty of time in the future when they're trying to figure out your namespace (especially if you migrate to 3.x, in which some of the package names have changed)."} +{"question_id": 147741, "score": 125, "creation_date": 1222670867, "tags": ["python", "unicode", "encoding", "ascii"], "instruction": "Character reading from file in Python\n\nIn a text file, there is a string \"I don't like this\". However, when I read it into a string, it becomes \"I don\\xe2\\x80\\x98t like this\". I understand that \\u2018 is the unicode representation of \"'\". I use f1 = open (file1, \"r\") text = f1.read() command to do the reading. Now, is it possible to read the string in such a way that when it is read into the string, it is \"I don't like this\", instead of \"I don\\xe2\\x80\\x98t like this like this\"? 
Second edit: I have seen some people use mapping to solve this problem, but really, is there no built-in conversion that does this kind of ANSI to unicode ( and vice versa) conversion?", "output": "Ref: http://docs.python.org/howto/unicode Reading Unicode from a file is therefore simple: import codecs with codecs.open('unicode.rst', encoding='utf-8') as f: for line in f: print repr(line) It's also possible to open files in update mode, allowing both reading and writing: with codecs.open('test', encoding='utf-8', mode='w+') as f: f.write(u'\\u4500 blah blah blah\\n') f.seek(0) print repr(f.readline()[:1]) EDIT: I'm assuming that your intended goal is just to be able to read the file properly into a string in Python. If you're trying to convert to an ASCII string from Unicode, then there's really no direct way to do so, since the Unicode characters won't necessarily exist in ASCII. If you're trying to convert to an ASCII string, try one of the following: Replace the specific unicode chars with ASCII equivalents, if you are only looking to handle a few special cases such as this particular example Use the unicodedata module's normalize() and the string.encode() method to convert as best you can to the next closest ASCII equivalent (Ref https://web.archive.org/web/20090228203858/http://techxplorer.com/2006/07/18/converting-unicode-to-ascii-using-python): >>> teststr u'I don\\xe2\\x80\\x98t like this' >>> unicodedata.normalize('NFKD', teststr).encode('ascii', 'ignore') 'I donat like this'"} +{"question_id": 10937350, "score": 125, "creation_date": 1339092400, "tags": ["python", "filesystems", "identification"], "instruction": "How to check type of files without extensions?\n\nI have a folder full of files and they don't have an extension. How can I check file types? I want to check the file type and change the filename accordingly. Let's assume a function filetype(x) returns a file type like png. I want to do this: files = os.listdir(\".\") for f in files: os.rename(f, f+filetype(f)) How do I do this?", "output": "There are Python libraries that can recognize files based on their content (usually a header / magic number) and that don't rely on the file name or extension. If you're addressing many different file types, you can use python-magic. That's just a Python binding for the well-established magic library. This has a good reputation and (small endorsement) in the limited use I've made of it, it has been solid. The native imghdr library can be useful, but is deprecated since Python 3.11 and will be removed in Python 3.13. 
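As an illustration of the python-magic route, a minimal sketch (the extension mapping is a small, non-exhaustive assumption for the example):

import os
import magic  # pip install python-magic

# Map a few detected MIME types to file extensions (illustrative only).
ext_for = {"image/png": ".png", "image/jpeg": ".jpg", "application/pdf": ".pdf"}

for name in os.listdir("."):
    if not os.path.isfile(name):
        continue
    mime = magic.from_file(name, mime=True)   # e.g. "image/png"
    ext = ext_for.get(mime)
    if ext and not name.endswith(ext):
        os.rename(name, name + ext)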
If you need dependency-free (pure Python) file type checking, see filetype."} +{"question_id": 32801806, "score": 125, "creation_date": 1443299915, "tags": ["python", "pandas", "concatenation"], "instruction": "pandas concat columns ignore_index doesn't work\n\nI am trying to column-bind dataframes (like R's cbind() does) and having issue with pandas concat, as ignore_index=True doesn't seem to work: df1 = pd.DataFrame({'A': ['A0', 'A1', 'A2', 'A3'], 'B': ['B0', 'B1', 'B2', 'B3'], 'D': ['D0', 'D1', 'D2', 'D3']}, index=[0, 2, 3, 4]) df2 = pd.DataFrame({'A1': ['A4', 'A5', 'A6', 'A7'], 'C': ['C4', 'C5', 'C6', 'C7'], 'D2': ['D4', 'D5', 'D6', 'D7']}, index=[5, 6, 7, 3]) df1 # A B D # 0 A0 B0 D0 # 2 A1 B1 D1 # 3 A2 B2 D2 # 4 A3 B3 D3 df2 # A1 C D2 # 5 A4 C4 D4 # 6 A5 C5 D5 # 7 A6 C6 D6 # 3 A7 C7 D7 dfs = [df1, df2] df = pd.concat(dfs, axis=1, ignore_index=True) print df and the result is 0 1 2 3 4 5 0 A0 B0 D0 NaN NaN NaN 2 A1 B1 D1 NaN NaN NaN 3 A2 B2 D2 A7 C7 D7 4 A3 B3 D3 NaN NaN NaN 5 NaN NaN NaN A4 C4 D4 6 NaN NaN NaN A5 C5 D5 7 NaN NaN NaN A6 C6 D6 Even if I reset index using df1.reset_index() df2.reset_index() and then try pd.concat([df1, df2], axis=1) it still produces the same result! The expected result is a 6x4 dataframe where the contents of columns A,B,D, A1,C,D2 are horizontally concatenated.", "output": "If I understood you correctly, this is what you would like to do. import pandas as pd df1 = pd.DataFrame({'A': ['A0', 'A1', 'A2', 'A3'], 'B': ['B0', 'B1', 'B2', 'B3'], 'D': ['D0', 'D1', 'D2', 'D3']}, index=[0, 2, 3, 4]) df2 = pd.DataFrame({'A1': ['A4', 'A5', 'A6', 'A7'], 'C': ['C4', 'C5', 'C6', 'C7'], 'D2': ['D4', 'D5', 'D6', 'D7']}, index=[4, 5, 6 , 7]) df1.reset_index(drop=True, inplace=True) df2.reset_index(drop=True, inplace=True) df = pd.concat([df1, df2], axis=1) Which gives: A B D A1 C D2 0 A0 B0 D0 A4 C4 D4 1 A1 B1 D1 A5 C5 D5 2 A2 B2 D2 A6 C6 D6 3 A3 B3 D3 A7 C7 D7 Actually, I would have expected that df = pd.concat(dfs, axis=1, ignore_index=True) gives the same result. This is the excellent explanation from jreback: ignore_index=True \u2018ignores\u2019, meaning doesn\u2019t align on the joining axis. it simply pastes them together in the order that they are passed, then reassigns a range for the actual index (e.g. range(len(index))) so the difference between joining on non-overlapping indexes (assume axis=1 in the example), is that with ignore_index=False (the default), you get the concat of the indexes, and with ignore_index=True you get a range."} +{"question_id": 1273211, "score": 125, "creation_date": 1250182177, "tags": ["python", "debugging", "exception", "environment-variables", "assert"], "instruction": "Disable assertions in Python\n\nHow do I disable assertions in Python? That is, if an assertion fails, I don't want it to throw an AssertionError, but to keep going. How do I do that?", "output": "#How do I disable assertions in Python? There are multiple approaches that affect a single process, the environment, or a single line of code. I demonstrate each. For the whole process Using the -O flag (capital O) disables all assert statements in a process. 
For example: $ python -Oc \"assert False\" $ python -c \"assert False\" Traceback (most recent call last): File \"\", line 1, in AssertionError Note that by disable I mean it also does not execute the expression that follows it: $ python -Oc \"assert 1/0\" $ python -c \"assert 1/0\" Traceback (most recent call last): File \"\", line 1, in ZeroDivisionError: integer division or modulo by zero For the environment You can use an environment variable to set this flag as well. This will affect every process that uses or inherits the environment. E.g., in Windows, setting and then clearing the environment variable: C:\\>python -c \"assert False\" Traceback (most recent call last): File \"\", line 1, in AssertionError C:\\>SET PYTHONOPTIMIZE=TRUE C:\\>python -c \"assert False\" C:\\>SET PYTHONOPTIMIZE= C:\\>python -c \"assert False\" Traceback (most recent call last): File \"\", line 1, in AssertionError Same in Unix (using set and unset for respective functionality) Single point in code You continue your question: if an assertion fails, I don't want it to throw an AssertionError, but to keep going. You can either ensure control flow does not reach the assertion, for example: if False: assert False, \"we know this fails, but we don't get here\" or if you want the assert expression to be exercised then you can catch the assertion error: try: assert False, \"this code runs, fails, and the exception is caught\" except AssertionError as e: print(repr(e)) which prints: AssertionError('this code runs, fails, and the exception is caught') and you'll keep going from the point you handled the AssertionError. References From the assert documentation: An assert statement like this: assert expression #, optional_message Is equivalent to if __debug__: if not expression: raise AssertionError #(optional_message) And, the built-in variable __debug__ is True under normal circumstances, False when optimization is requested (command line option -O). and further Assignments to __debug__ are illegal. The value for the built-in variable is determined when the interpreter starts. From the usage docs: -O Turn on basic optimizations. This changes the filename extension for compiled (bytecode) files from .pyc to .pyo. See also PYTHONOPTIMIZE. and PYTHONOPTIMIZE If this is set to a non-empty string it is equivalent to specifying the -O option. If set to an integer, it is equivalent to specifying -O multiple times."} +{"question_id": 19559247, "score": 125, "creation_date": 1382598581, "tags": ["python", "pip"], "instruction": "requirements.txt depending on python version\n\nI'm trying to port a python2 package to python3 (not my own) using six so that it's compatible with both. However one of the packages listed in requirements.txt is now included in the python3 stdlib and the pypi version doesn't work in python3 so I want to conditionally exclude it. Doing this in setup.py is easy, I can just do something like: if sys.version_info[0] == 2: requirements += py2_requirements else: requirements += py3_requirements But I would like requirements.txt to reflect the correct list too. I can't find anything on this in the pip documentation. so does anyone know how to do it, or if it is even possible?", "output": "You can use the environment markers to achieve this in requirements.txt since pip 6.0: SomeProject==5.4; python_version < '2.7' SomeProject; sys_platform == 'win32' It is supported by setuptools too by declaring extra requirements in setup.py: setup( ... 
install_requires=[ 'six', 'humanize', ], extras_require={ ':python_version == \"2.7\"': [ 'ipaddress', ], }, ) See also requirement specifiers. And Strings for the string versions of corresponding Python commands."} +{"question_id": 17571438, "score": 125, "creation_date": 1373461602, "tags": ["python", "pytest"], "instruction": "How to control test case execution order in pytest?\n\nI have two files in a directory. In one of the files there is a long-running test case that generates some output. In the other file there is a test case that reads that output. How can I ensure the proper execution order of the two test cases? Is there any alternative other than puting the test cases in the same file in the proper order?", "output": "In general you can configure the behavior of basically any part of pytest using its well-specified hooks. In your case, you want the \"pytest_collection_modifyitems\" hook, which lets you re-order collected tests in place. That said, it does seem like ordering your tests should be easier -- this is Python after all! So I wrote a plugin for ordering tests: \"pytest-ordering\". Check out the docs or install it from pypi. Right now I recommend using @pytest.mark.first and @pytest.mark.second, or one of the @pytest.mark.order# markers, but I have some ideas about more useful APIs. Suggestions welcome :) Edit: pytest-ordering seems abandoned at the moment, you can also check out pytest-order (a fork of the original project by the author). Edit2: In pytest-order, only one marker (order) is supported, and the mentioned examples would read @pytest.mark.order(\"first\"), @pytest.mark.order(\"second\"), or @pytest.mark.order(#) (with # being any number)."} +{"question_id": 1112618, "score": 125, "creation_date": 1247273872, "tags": ["python"], "instruction": "Import python package from local directory into interpreter\n\nI'm developing/testing a package in my local directory. I want to import it in the interpreter (v2.5), but sys.path does not include the current directory. Right now I type in sys.path.insert(0,'.'). Is there a better way? Also, from . import mypackage fails with this error: ValueError: Attempted relative import in non-package", "output": "You can use relative imports only from in a module that was in turn imported as part of a package -- your script or interactive interpreter wasn't, so of course from . import (which means \"import from the same package I got imported from\") doesn't work. import mypackage will be fine once you ensure the parent directory of mypackage is in sys.path (how you managed to get your current directory away from sys.path I don't know -- do you have something strange in site.py, or...?) To get your current directory back into sys.path there is in fact no better way than putting it there."} +{"question_id": 24680981, "score": 124, "creation_date": 1405007412, "tags": ["python", "matplotlib", "legend"], "instruction": "Show only certain items in legend\n\nI currently am plotting a stacked bar graph of a large amount of taxonomic data, and only wish to show significant species in the legend (out of ~500 I wish to show ~25). Is there a simple way to do this? Below is the code I have: labels=['0','20','40','60','80','100','120'] ax1=subj1df.plot(kind='barh', stacked=True,legend=True,cmap='Paired', grid=False) legend(ncol=2,loc=2, bbox_to_anchor=(1.05, 1), borderaxespad=0.) 
label1=['Baseline','8h','24h','48h','96h','120h'] ax1.set_yticklabels(label1, fontdict=None, minor=False) plt.title('Subject 1 Phyla',fontweight='bold') plt.savefig('Subject1Phyla.eps', format='eps', dpi=1000) ax1.set_xticklabels(labels) Edit: tried adding this to show only one legend entry, however only returns an empty legend: h, l = ax1.get_legend_handles_labels() legend(l[4],h[4],ncol=2,loc=2, bbox_to_anchor=(1.05, 1), borderaxespad=0.)", "output": "This works: plt.plot([0, 4], [3,4]) plt.plot([0, 4], [2,3],label='_nolegend_') # element missing from legend plt.plot([0, 4], [1,2]) plt.legend(['first', 'third'])"} +{"question_id": 11556958, "score": 124, "creation_date": 1342687056, "tags": ["python", "html", "forms", "input", "flask"], "instruction": "Sending data from HTML form to a Python script in Flask\n\nI have the code below in my Python script: def cmd_wui(argv, path_to_tx): \"\"\"Run a web UI.\"\"\" from flask import Flask, flash, jsonify, render_template, request import webbrowser app = Flask(__name__) @app.route('/tx/index/') def index(): \"\"\"Load start page where you select your project folder or load history projects from local DB.\"\"\" from txclib import get_version txc_version = get_version() prj = project.Project(path_to_tx) # Let's create a resource list from our config file res_list = [] prev_proj = '' for idx, res in enumerate(prj.get_resource_list()): hostname = prj.get_resource_host(res) username, password = prj.getset_host_credentials(hostname) return render_template('init.html', txc_version=txc_version, username=username) Also, I have an HTML form in init.html:
How can I pass the user input from \"projectFilepath\" when a user clicks \"spotButton\" on a variable in my python script? I'm new in Python and Flask, so forgive me if I make any mistakes.", "output": "The form tag needs some attributes set: action: The URL that the form data is sent to on submit. Generate it with url_for. It can be omitted if the same URL handles showing the form and processing the data. method=\"post\": Submits the data as form data with the POST method. If not given, or explicitly set to get, the data is submitted in the query string (request.args) with the GET method instead. enctype=\"multipart/form-data\": When the form contains file inputs, it must have this encoding set, otherwise the files will not be uploaded and Flask won't see them. The input tag needs a name parameter. Add a view to handle the submitted data, which is in request.form under the same key as the input's name. Any file inputs will be in request.files. @app.route('/handle_data', methods=['POST']) def handle_data(): projectpath = request.form['projectFilepath'] # your code # return a response Set the form's action to that view's URL using url_for:
"} +{"question_id": 69642889, "score": 124, "creation_date": 1634719454, "tags": ["python", "switch-statement", "match", "python-3.10"], "instruction": "How to use multiple cases in Match (switch in other languages) cases in Python 3.10\n\nI am trying to use multiple cases in a function similar to the one shown below so that I can be able to execute multiple cases using match cases in python 3.10 def sayHi(name): match name: case ['Egide', 'Eric']: return f\"Hi Mr {name}\" case 'Egidia': return f\"Hi Ms {name}\" print(sayHi('Egide')) This is just returning None instead of the message, even if I remove square brackets.", "output": "According to What\u2019s New In Python 3.10, PEP 636, and the docs, you use a | between patterns: case 'Egide' | 'Eric':"} +{"question_id": 36028759, "score": 124, "creation_date": 1458111686, "tags": ["python", "database", "sqlite", "pandas", "dataframe"], "instruction": "How to open and convert sqlite database to pandas dataframe\n\nI have downloaded some datas as a sqlite database (data.db) and I want to open this database in python and then convert it into pandas dataframe. This is so far I have done import sqlite3 import pandas dat = sqlite3.connect('data.db') #connected to database with out error pandas.DataFrame.from_records(dat, index=None, exclude=None, columns=None, coerce_float=False, nrows=None) But its throwing this error Traceback (most recent call last): File \"\", line 1, in File \"/usr/local/lib/python2.7/dist-packages/pandas/core/frame.py\", line 980, in from_records coerce_float=coerce_float) File \"/usr/local/lib/python2.7/dist-packages/pandas/core/frame.py\", line 5353, in _to_arrays if not len(data): TypeError: object of type 'sqlite3.Connection' has no len() How to convert sqlite database to pandas dataframe", "output": "Despite sqlite being part of the Python Standard Library and is a nice and easy interface to SQLite databases, the Pandas tutorial states: Note In order to use read_sql_table(), you must have the SQLAlchemy optional dependency installed. But Pandas still supports sqlite3 access if you want to avoid installing SQLAlchemy: import sqlite3 import pandas as pd # Create your connection. cnx = sqlite3.connect('file.db') df = pd.read_sql_query(\"SELECT * FROM table_name\", cnx) As stated here, but you need to know the name of the used table in advance."} +{"question_id": 11205386, "score": 124, "creation_date": 1340706583, "tags": ["python", "beautifulsoup"], "instruction": "Get an attribute value based on the name attribute with BeautifulSoup\n\nI want to print an attribute value based on its name, take for example I want to do something like this soup = BeautifulSoup(f) # f is some HTML containing the above meta tag for meta_tag in soup(\"meta\"): if meta_tag[\"name\"] == \"City\": print(meta_tag[\"content\"]) The above code give a KeyError: 'name', I believe this is because name is used by BeatifulSoup so it can't be used as a keyword argument.", "output": "It's pretty simple, use the following: >>> from bs4 import BeautifulSoup >>> soup = BeautifulSoup('') >>> soup.find(\"meta\", {\"name\":\"City\"}) >>> soup.find(\"meta\", {\"name\":\"City\"})['content'] 'Austin'"} +{"question_id": 2805231, "score": 124, "creation_date": 1273515282, "tags": ["python", "dns"], "instruction": "How can I do DNS lookups in Python, including referring to /etc/hosts?\n\ndnspython will do my DNS lookups very nicely, but it entirely ignores the contents of /etc/hosts. Is there a python library call which will do the right thing? 
ie check first in etc/hosts, and only fall back to DNS lookups otherwise?", "output": "I'm not really sure if you want to do DNS lookups yourself or if you just want a host's ip. In case you want the latter, /!\\ socket.gethostbyname is deprecated, prefer socket.getaddrinfo from man gethostbyname: The gethostbyname*(), gethostbyaddr*(), [...] functions are obsolete. Applications should use getaddrinfo(3), getnameinfo(3), import socket print(socket.gethostbyname('localhost')) # result from hosts file print(socket.gethostbyname('google.com')) # your os sends out a dns query"} +{"question_id": 6285327, "score": 124, "creation_date": 1307567451, "tags": ["python", "django", "django-templates"], "instruction": "How to do math in a Django template?\n\nI want to do this: 100 - {{ object.article.rating_score }} So for example, the output would be 20 if {{ object.article.rating_score }} equaled 80. How do I do this at the template level? I don't have access to the Python code.", "output": "You can use the add filter: {{ object.article.rating_score|add:\"-100\" }}"} +{"question_id": 24577349, "score": 124, "creation_date": 1404487404, "tags": ["python", "flask", "download", "flask-sqlalchemy"], "instruction": "Flask Download a File\n\nI'm trying to create a web app with Flask that lets a user upload a file and serve them to another user. Right now, I can upload the file to the upload_folder correctly. But I can't seem to find a way to let the user download it back. I'm storing the name of the filename into a database. I have a view serving the database objects. I can delete them too. @app.route('/dashboard', methods=['GET', 'POST']) def dashboard(): problemes = Probleme.query.all() if 'user' not in session: return redirect(url_for('login')) if request.method == 'POST': delete = Probleme.query.filter_by(id=request.form['del_button']).first() db.session.delete(delete) db.session.commit() return redirect(url_for('dashboard')) return render_template('dashboard.html', problemes=problemes) In my HTML I have: Facture and a download view : @app.route('/uploads/', methods=['GET', 'POST']) def download(filename): return send_from_directory(directory=app.config['UPLOAD_FOLDER'], filename=filename) But it's returning : Not Found The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again. I just want to link the filename to the object and let the user download it (For every object in the same view)", "output": "You need to make sure that the value you pass to the directory argument is an absolute path, corrected for the current location of your application. The best way to do this is to configure UPLOAD_FOLDER as a relative path (no leading slash), then make it absolute by prepending current_app.root_path: @app.route('/uploads/', methods=['GET', 'POST']) def download(filename): uploads = os.path.join(current_app.root_path, app.config['UPLOAD_FOLDER']) return send_from_directory(uploads, filename) It is important to reiterate that UPLOAD_FOLDER must be relative for this to work, e.g. not start with a /. A relative path could work but relies too much on the current working directory being set to the place where your Flask code lives. This may not always be the case."} +{"question_id": 11164144, "score": 124, "creation_date": 1340399255, "tags": ["python", "try-except"], "instruction": "Weird Try-Except-Else-Finally behavior with Return statements\n\nThis is some code that is behaving peculiarly. 
This is a simplified version of the behavior that I've written. This will still demonstrate the weird behavior and I had some specific questions on why this is occurring. I'm using Python 2.6.6 on Windows 7. def demo1(): try: raise RuntimeError(\"To Force Issue\") except: return 1 else: return 2 finally: return 3 def demo2(): try: try: raise RuntimeError(\"To Force Issue\") except: return 1 else: return 2 finally: return 3 except: print(4) else: print(5) finally: print(6) Results: >>> print demo1() 3 >>> print demo2() 6 3 Why is demo one returning 3 instead of 1? Why is demo two printing 6 instead of printing 6 w/ 4 or 5?", "output": "Because finally statements are guaranteed to be executed (well, presuming no power outage or anything outside of Python's control). This means that before the function can return, it must run the finally block, which returns a different value. The Python docs state: When a return, break or continue statement is executed in the try suite of a try\u2026finally statement, the finally clause is also executed \u2018on the way out.\u2019 The return value of a function is determined by the last return statement executed. Since the finally clause always executes, a return statement executed in the finally clause will always be the last one executed: This means that when you try to return, the finally block is called, returning it's value, rather than the one that you would have had."} +{"question_id": 33174276, "score": 124, "creation_date": 1445009514, "tags": ["python", "variables", "filter", "variable-assignment"], "instruction": "Why does foo = filter(...) return a , not a list?\n\nWorking in Python IDLE 3.5.0 shell. From my understanding of the builtin \"filter\" function it returns either a list, tuple, or string, depending on what you pass into it. So, why does the first assignment below work, but not the second (the '>>>'s are just the interactive Python prompts) >>> def greetings(): return \"hello\" >>> hesaid = greetings() >>> print(hesaid) hello >>> >>> shesaid = filter(greetings(), [\"hello\", \"goodbye\"]) >>> print(shesaid) ", "output": "Have a look at the python documentation for filter(function, iterable) (from here): Construct an iterator from those elements of iterable for which function returns true. So in order to get a list back you have to use list class: shesaid = list(filter(greetings(), [\"hello\", \"goodbye\"])) But this probably isn't what you wanted, because it tries to call the result of greetings(), which is \"hello\", on the values of your input list, and this won't work. Here also the iterator type comes into play, because the results aren't generated until you use them (for example by calling list() on it). So at first you won't get an error, but when you try to do something with shesaid it will stop working: >>> print(list(shesaid)) Traceback (most recent call last): File \"\", line 1, in TypeError: 'str' object is not callable If you want to check which elements in your list are equal to \"hello\" you have to use something like this: shesaid = list(filter(lambda x: x == \"hello\", [\"hello\", \"goodbye\"])) (I put your function into a lambda, see Randy C's answer for a \"normal\" function)"} +{"question_id": 1077015, "score": 123, "creation_date": 1246574440, "tags": ["functional-programming", "python", "list-comprehension"], "instruction": "How can I get a flat result from a list comprehension instead of a nested list?\n\nI have a list A, and a function f which takes an item of A and returns a list. 
I can use a list comprehension to convert everything in A like [f(a) for a in A], but this returns a list of lists. Suppose my input is [a1,a2,a3], resulting in [[b11,b12],[b21,b22],[b31,b32]]. How can I get the flattened list [b11,b12,b21,b22,b31,b32] instead? In other words, in Python, how can I get what is traditionally called flatmap in functional programming languages, or SelectMany in .NET? (In the actual code, A is a list of directories, and f is os.listdir. I want to build a flat list of subdirectories.) See also: How do I make a flat list out of a list of lists? for the more general problem of flattening a list of lists after it's been created.", "output": "You can have nested iterations in a single list comprehension: [filename for path in dirs for filename in os.listdir(path)] which is equivalent (at least functionally) to: filenames = [] for path in dirs: for filename in os.listdir(path): filenames.append(filename)"} +{"question_id": 17578115, "score": 123, "creation_date": 1373480566, "tags": ["python", "pandas", "numpy", "aggregate"], "instruction": "Pass percentiles to pandas agg function\n\nI want to pass the numpy percentile() function through pandas' agg() function as I do below with various other numpy statistics functions. Right now I have a dataframe that looks like this: AGGREGATE MY_COLUMN A 10 A 12 B 5 B 9 A 84 B 22 And my code looks like this: grouped = dataframe.groupby('AGGREGATE') column = grouped['MY_COLUMN'] column.agg([np.sum, np.mean, np.std, np.median, np.var, np.min, np.max]) The above code works, but I want to do something like column.agg([np.sum, np.mean, np.percentile(50), np.percentile(95)]) I.e., specify various percentiles to return from agg(). How should this be done?", "output": "Perhaps not super efficient, but one way would be to create a function yourself: def percentile(n): def percentile_(x): return x.quantile(n) percentile_.__name__ = 'percentile_{:02.0f}'.format(n*100) return percentile_ Then include this in your agg: In [11]: column.agg([np.sum, np.mean, np.std, np.median, np.var, np.min, np.max, percentile(50), percentile(95)]) Out[11]: sum mean std median var amin amax percentile_50 percentile_95 AGGREGATE A 106 35.333333 42.158431 12 1777.333333 10 84 12 76.8 B 36 12.000000 8.888194 9 79.000000 5 22 12 76.8 Note sure this is how it should be done though..."} +{"question_id": 13685386, "score": 123, "creation_date": 1354545212, "tags": ["python", "matplotlib", "axis", "aspect-ratio", "matplotlib-3d"], "instruction": "How to set the 'equal' aspect ratio for all axes (x, y, z)\n\nWhen I set up an equal aspect ratio for a 3d graph, the z-axis does not change to 'equal'. So this: fig = pylab.figure() mesFig = fig.gca(projection='3d', adjustable='box') mesFig.axis('equal') mesFig.plot(xC, yC, zC, 'r.') mesFig.plot(xO, yO, zO, 'b.') pyplot.show() Gives me the following: Where obviously the unit length of z-axis is not equal to x- and y- units. How can I make the unit length of all three axes equal? All the solutions I found did not work.", "output": "I believe matplotlib does not yet set correctly equal axis in 3D... But I found a trick some times ago (I don't remember where) that I've adapted using it. The concept is to create a fake cubic bounding box around your data. 
You can test it with the following code: from mpl_toolkits.mplot3d import Axes3D from matplotlib import cm import matplotlib.pyplot as plt import numpy as np fig = plt.figure() ax = fig.add_subplot(projection='3d') ax.set_aspect('equal') X = np.random.rand(100)*10+5 Y = np.random.rand(100)*5+2.5 Z = np.random.rand(100)*50+25 scat = ax.scatter(X, Y, Z) # Create cubic bounding box to simulate equal aspect ratio max_range = np.array([X.max()-X.min(), Y.max()-Y.min(), Z.max()-Z.min()]).max() Xb = 0.5*max_range*np.mgrid[-1:2:2,-1:2:2,-1:2:2][0].flatten() + 0.5*(X.max()+X.min()) Yb = 0.5*max_range*np.mgrid[-1:2:2,-1:2:2,-1:2:2][1].flatten() + 0.5*(Y.max()+Y.min()) Zb = 0.5*max_range*np.mgrid[-1:2:2,-1:2:2,-1:2:2][2].flatten() + 0.5*(Z.max()+Z.min()) # Comment or uncomment following both lines to test the fake bounding box: for xb, yb, zb in zip(Xb, Yb, Zb): ax.plot([xb], [yb], [zb], 'w') plt.grid() plt.show() z data are about an order of magnitude larger than x and y, but even with equal axis option, matplotlib autoscale z axis: But if you add the bounding box, you obtain a correct scaling:"} +{"question_id": 55762673, "score": 123, "creation_date": 1555680681, "tags": ["python", "pydantic"], "instruction": "How to parse list of models with Pydantic\n\nI use Pydantic to model the requests and responses to an API. I defined a User class: from pydantic import BaseModel class User(BaseModel): name: str age: int My API returns a list of users which I retrieve with requests and convert into a dict: users = [{\"name\": \"user1\", \"age\": 15}, {\"name\": \"user2\", \"age\": 28}] How can I convert this dict to a list of User instances? My solution for now is user_list = [] for user in users: user_list.append(User(**user))", "output": "Pydantic V1: This is now possible using parse_obj_as. from pydantic import parse_obj_as users = [ {\"name\": \"user1\", \"age\": 15}, {\"name\": \"user2\", \"age\": 28} ] m = parse_obj_as(List[User], users) Pydantic V2: Use Type Adapter. from pydantic import TypeAdapter users = [ {\"name\": \"user1\", \"age\": 15}, {\"name\": \"user2\", \"age\": 28} ] ta = TypeAdapter(List[User]) m = ta.validate_python(users)"} +{"question_id": 1011938, "score": 123, "creation_date": 1245320618, "tags": ["python", "loops", "iteration"], "instruction": "Loop that also accesses previous and next values\n\nHow can I iterate over a list of objects, accessing the previous, current, and next items? Like this C/C++ code, in Python? foo = somevalue; previous = next = 0; for (i=1; i 0: previous_item = objects[index - 1] if index < (l - 1): next_item = objects[index + 1] Here's the docs on the enumerate function."} +{"question_id": 534855, "score": 123, "creation_date": 1234310242, "tags": ["python", "list", "tuples", "vector"], "instruction": "Subtracting 2 lists in Python\n\nRight now I have vector3 values represented as lists. is there a way to subtract 2 of these like vector3 values, like [2,2,2] - [1,1,1] = [1,1,1] Should I use tuples? If none of them defines these operands on these types, can I define it instead? If not, should I create a new vector3 class?", "output": "If this is something you end up doing frequently, and with different operations, you should probably create a class to handle cases like this, or better use some library like Numpy. 
Otherwise, look for list comprehensions used with the zip builtin function: [a_i - b_i for a_i, b_i in zip(a, b)]"} +{"question_id": 23720875, "score": 123, "creation_date": 1400408455, "tags": ["python", "opencv", "computer-vision", "object-detection", "draw"], "instruction": "OpenCV: draw a rectangle around a region\n\nHow can I use OpenCV in Python to draw rectangles around some regions within an image for object detection purposes?", "output": "Use cv2: import cv2 cv2.rectangle(img, (x1, y1), (x2, y2), color=(255,0,0), thickness=2) x1,y1 ------ | | | | | | --------x2,y2 [edit] to append the follow-up questions below: cv2.imwrite(\"my.png\",img) cv2.imshow(\"lalala\", img) k = cv2.waitKey(0) # 0==wait forever"} +{"question_id": 5384570, "score": 123, "creation_date": 1300746956, "tags": ["python", "iterator", "generator", "iterable"], "instruction": "How can I count the number of items in an arbitrary iterable (such as a generator)?\n\nSuppose I have an arbitrary iterable - for example, a generator that iterates over lines of a file and yields the ones matching a regex. How can I count the number of items in that iterable, supposing that I don't care about the elements themselves?", "output": "Calls to itertools.imap() in Python 2 or map() in Python 3 can be replaced by equivalent generator expressions: sum(1 for dummy in it) This also uses a lazy generator, so it avoids materializing a full list of all iterator elements in memory."} +{"question_id": 26133538, "score": 123, "creation_date": 1412133101, "tags": ["python", "pandas"], "instruction": "round a single column in pandas\n\nIs there a way to round a single column in pandas without affecting the rest of the dataframe? >>> print(df) item value1 value2 0 a 1.12 1.3 1 a 1.50 2.5 2 a 0.10 0.0 3 b 3.30 -1.0 4 b 4.80 -1.0 I have tried the following: >>> df.value1.apply(np.round) 0 1 1 2 2 0 3 3 4 5 5 5 What is the correct way to make data look like this: item value1 value2 0 a 1 1.3 1 a 2 2.5 2 a 0 0.0 3 b 3 -1.0 4 b 5 -1.0 5 c 5 5.0", "output": "You are very close. You applied the round to the series of values given by df.value1. The return type is thus a Series. You need to assign that series back to the dataframe (or another dataframe with the same Index). Also, there is a pandas.Series.round method which is basically a short hand for pandas.Series.apply(np.round). >>> df.value1 = df.value1.round() >>> print df item value1 value2 0 a 1 1.3 1 a 2 2.5 2 a 0 0.0 3 b 3 -1.0 4 b 5 -1.0"} +{"question_id": 32888124, "score": 123, "creation_date": 1443704091, "tags": ["python", "datetime", "pandas", "datetimeoffset"], "instruction": "pandas out of bounds nanosecond timestamp after offset rollforward plus adding a month offset\n\nI am confused how pandas blew out of bounds for datetime objects with these lines: import pandas as pd BOMoffset = pd.tseries.offsets.MonthBegin() # here some code sets the all_treatments dataframe and the newrowix, micolix, mocolix counters all_treatments.iloc[newrowix,micolix] = BOMoffset.rollforward(all_treatments.iloc[i,micolix] + pd.tseries.offsets.DateOffset(months = x)) all_treatments.iloc[newrowix,mocolix] = BOMoffset.rollforward(all_treatments.iloc[newrowix,micolix]+ pd.tseries.offsets.DateOffset(months = 1)) Here all_treatments.iloc[i,micolix] is a datetime set by pd.to_datetime(all_treatments['INDATUMA'], errors='coerce',format='%Y%m%d'), and INDATUMA is date information in the format 20070125. 
This logic seems to work on mock data (no errors, dates make sense), so at the moment I cannot reproduce while it fails in my entire data with the following error: pandas.tslib.OutOfBoundsDatetime: Out of bounds nanosecond timestamp: 2262-05-01 00:00:00", "output": "Since pandas represents timestamps in nanosecond resolution, the timespan that can be represented using a 64-bit integer is limited to approximately 584 years In [54]: pd.Timestamp.min Out[54]: Timestamp('1677-09-22 00:12:43.145225') In [55]: pd.Timestamp.max Out[55]: Timestamp('2262-04-11 23:47:16.854775807') And your value is out of this range 2262-05-01 00:00:00 and hence the outofbounds error Straight out of: https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#timestamp-limitations Workaround: This will force the dates which are outside the bounds to NaT pd.to_datetime(date_col_to_force, errors = 'coerce')"} +{"question_id": 14617136, "score": 123, "creation_date": 1359596793, "tags": ["python", "pip", "setuptools"], "instruction": "Why is pip installing an old version of my package?\n\nI've just uploaded a new version of my package to PyPi (1.2.1.0-r4): I can download the egg file and install it with easy_install, and the version checks out correctly. But when I try to install using pip, it installs version 1.1.0.0 instead. Even if I explicitly specify the version to pip with pip install -Iv tome==1.2.1.0-r4, I get this message: Requested tome==1.2.1.0-r4, but installing version 1.1.0.0, but I don't understand why. I double checked with parse_version and confirmed that the version string on 1.2.1 is greater than that on 1.1.0 as shown: >>> from pkg_resources import parse_version as pv >>> pv('1.1.0.0') < pv('1.2.1.0-r4') True >>> So any idea why it's choosing to install 1.1.0 instead?", "output": "This is an excellent question. It took me forever to figure out. This is the solution that works for me: Apparently, if pip can find a local version of the package, pip will prefer the local versions to remote ones. I even disconnected my computer from the internet and tried it again -- when pip still installed the package successfully, and didn't even complain, the source was obviously local. The really confusing part, in my case, was that pip found the newer versions on pypi, reported them, and then went ahead and re-installed the older version anyway ... arggh. Also, it didn't tell me what it was doing, and why. So how did I solve this problem? You can get pip to give verbose output using the -v flag ... but one isn't enough. I RTFM-ed the help, which said you can do -v multiple times, up to 3x, for more verbose output. So I did: pip install -vvv Then I looked through the output. One line caught my eye: Source in /tmp/pip-build-root/ has version 0.0.11, which satisfies requirement I deleted that directory, after which pip installed the newest version from pypi."} +{"question_id": 44640479, "score": 123, "creation_date": 1497908785, "tags": ["python", "mypy", "python-typing"], "instruction": "Type annotation for classmethod returning instance\n\nHow should I annotate a @classmethod that returns an instance of cls? 
Here's a bad example: class Foo(object): def __init__(self, bar: str): self.bar = bar @classmethod def with_stuff_appended(cls, bar: str) -> ???: return cls(bar + \"stuff\") This returns a Foo but more accurately returns whichever subclass of Foo this is called on, so annotating with -> \"Foo\" wouldn't be good enough.", "output": "The trick is to explicitly add an annotation to the cls parameter, in combination with TypeVar, for generics, and Type, to represent a class rather than the instance itself, like so: from typing import TypeVar, Type # Create a generic variable that can be 'Parent', or any subclass. T = TypeVar('T', bound='Parent') class Parent: def __init__(self, bar: str) -> None: self.bar = bar @classmethod def with_stuff_appended(cls: Type[T], bar: str) -> T: # We annotate 'cls' with a typevar so that we can # type our return type more precisely return cls(bar + \"stuff\") class Child(Parent): # If you're going to redefine __init__, make sure it # has a signature that's compatible with the Parent's __init__, # since mypy currently doesn't check for that. def child_only(self) -> int: return 3 # Mypy correctly infers that p is of type 'Parent', # and c is of type 'Child'. p = Parent.with_stuff_appended(\"10\") c = Child.with_stuff_appended(\"20\") # We can verify this ourself by using the special 'reveal_type' # function. Be sure to delete these lines before running your # code -- this function is something only mypy understands # (it's meant to help with debugging your types). reveal_type(p) # Revealed type is 'test.Parent*' reveal_type(c) # Revealed type is 'test.Child*' # So, these all typecheck print(p.bar) print(c.bar) print(c.child_only()) Normally, you can leave cls (and self) unannotated, but if you need to refer to the specific subclass, you can add an explicit annotation. Note that this feature is still experimental and may be buggy in some cases. You may also need to use the latest version of mypy cloned from Github, rather then what's available on pypi -- I don't remember if that version supports this feature for classmethods."} +{"question_id": 58961497, "score": 123, "creation_date": 1574276174, "tags": ["python", "python-poetry"], "instruction": "How to update Poetry's lock file without upgrading dependencies?\n\nAfter adding a [tool.poetry.extras] section to pyproject.toml, Poetry displays the following warning, for example on install: Warning: The lock file is not up to date with the latest changes in pyproject.toml. You may be getting outdated dependencies. Run update to update them. That's fine, but if I run poetry update it upgrades my dependencies, which is not what I want at this time. If I run poetry lock instead, it still upgrades dependencies. Sorry for not providing a reproducible example, it's quite tricky to generate a poetry.lock file with outdated dependencies. My existing one is too large for posting here. Update: Opened sdispater/poetry#1614 for this issue", "output": "With poetry v2.0.0 (release date: Jan 5, 2025): poetry lock alone should be enough. As you can see in the v2 documentation: By default, packages that have already been added to the lock file before will not be updated. With poetry v1, the lock action defaults to updating dependencies when recreating the lockfile. There is a specific option for the lock command that prevents this: poetry lock --no-update This makes it possible to remove a dependency from pyproject.toml and update the lock file without upgrading dependencies. 
Note that this flag is only available since 1.1.2."} +{"question_id": 29882642, "score": 123, "creation_date": 1430077557, "tags": ["python", "flask"], "instruction": "How to run a flask application?\n\nI want to know the correct way to start a flask application. The docs show two different commands: $ flask -a sample run and $ python3.4 sample.py produce the same result and run the application correctly. What is the difference between the two and which should be used to run a Flask application?", "output": "The flask command is a CLI for interacting with Flask apps. The docs describe how to use CLI commands and add custom commands. The flask run command is the preferred way to start the development server. Never use this command to deploy publicly, use a production WSGI server such as Gunicorn, uWSGI, Waitress, or mod_wsgi. As of Flask 2.2, use the --app option to point the command at your app. It can point to an import name or file name. It will automatically detect an app instance or an app factory called create_app. Use the --debug option to run in debug mode with the debugger and reloader. $ flask --app sample --debug run Prior to Flask 2.2, the FLASK_APP and FLASK_ENV=development environment variables were used instead. FLASK_APP and FLASK_DEBUG=1 can still be used in place of the CLI options above. $ export FLASK_APP=sample $ export FLASK_ENV=development $ flask run On Windows CMD, use set instead of export. > set FLASK_APP=sample For PowerShell, use $env:. > $env:FLASK_APP = \"sample\" The python sample.py command runs a Python file and sets __name__ == \"__main__\". If the main block calls app.run(), it will run the development server. If you use an app factory, you could also instantiate an app instance at this point. if __name__ == \"__main__\": app = create_app() app.run(debug=True) Both these commands ultimately start the Werkzeug development server, which as the name implies starts a simple HTTP server that should only be used during development. You should prefer using the flask run command over the app.run()."} +{"question_id": 2191890, "score": 123, "creation_date": 1265201047, "tags": ["python", "syntax"], "instruction": "Conditional operator in Python?\n\ndo you know if Python supports some keyword or expression like in C++ to return values based on if condition, all in the same line (The C++ if expressed with the question mark ?) // C++ value = ( a > 10 ? b : c )", "output": "value = b if a > 10 else c For Python 2.4 and lower you would have to do something like the following, although the semantics isn't identical as the short circuiting effect is lost: value = [c, b][a > 10] There's also another hack using 'and ... or' but it's best to not use it as it has an undesirable behaviour in some situations that can lead to a hard to find bug. I won't even write the hack here as I think it's best not to use it, but you can read about it on Wikipedia if you want."} +{"question_id": 44672524, "score": 123, "creation_date": 1498038335, "tags": ["python", "io"], "instruction": "How to create in-memory file object\n\nI want to make a in-memory file to use in pygame mixer. I mean something like http://www.pygame.org/docs/ref/music.html#pygame.mixer.music.load which says load() method supports file object. 
import requests from pygame import mixer r = requests.get(\"http://example.com/some_small_file.mp3\") in_memory_file = file(r.content) # something like this mixer.music.init() mixer.music.load(in_memory_file) mixer.music.play()", "output": "You are probably looking for BytesIO or StringIO classes from Python io package, both available in python 2 and python 3. They provide a file-like interface you can use in your code the exact same way you interact with a real file. StringIO is used to store textual data: import io f = io.StringIO(\"some initial text data\") BytesIO must be used for binary data: import io f = io.BytesIO(b\"\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x01\\x01\\x01\\x01\\x01\\x01\") To store MP3 file data, you will probably need the BytesIO class. To initialize it from a GET request to a server, proceed like this: import requests from pygame import mixer import io r = requests.get(\"http://example.com/somesmallmp3file.mp3\") inmemoryfile = io.BytesIO(r.content) mixer.music.init() mixer.music.load(inmemoryfile) mixer.music.play() # This will free the memmory from any data inmemoryfile.close() Additional note: as both classes inherit from IOBase, they can be used as context manager with the with statement, so you don't need to manually call the close() method anymore: import requests from pygame import mixer import io r = requests.get(\"http://example.com/somesmallmp3file.mp3\") with io.BytesIO(r.content) as inmemoryfile: mixer.music.init() mixer.music.load(inmemoryfile) mixer.music.play()"} +{"question_id": 20838162, "score": 123, "creation_date": 1388401236, "tags": ["python", "multithreading", "python-3.x", "python-multithreading", "concurrent.futures"], "instruction": "How does ThreadPoolExecutor().map differ from ThreadPoolExecutor().submit?\n\nI was just very confused by some code that I wrote. I was surprised to discover that: with concurrent.futures.ThreadPoolExecutor(max_workers=4) as executor: results = list(executor.map(f, iterable)) and with concurrent.futures.ThreadPoolExecutor(max_workers=4) as executor: results = list(map(lambda x: executor.submit(f, x), iterable)) produce different results. The first one produces a list of whatever type f returns, the second produces a list of concurrent.futures.Future objects that then need to be evaluated with their result() method in order to get the value that f returned. My main concern is that this means that executor.map can't take advantage of concurrent.futures.as_completed, which seems like an extremely convenient way to evaluate the results of some long-running calls to a database that I'm making as they become available. I'm not at all clear on how concurrent.futures.ThreadPoolExecutor objects work -- naively, I would prefer the (somewhat more verbose): with concurrent.futures.ThreadPoolExecutor(max_workers=4) as executor: result_futures = list(map(lambda x: executor.submit(f, x), iterable)) results = [f.result() for f in futures.as_completed(result_futures)] over the more concise executor.map in order to take advantage of a possible gain in performance. Am I wrong to do so?", "output": "The problem is that you transform the result of ThreadPoolExecutor.map to a list. If you don't do this and instead iterate over the resulting generator directly, the results are still yielded in the original order but the loop continues before all results are ready. 
You can test this with this example: import time import concurrent.futures e = concurrent.futures.ThreadPoolExecutor(4) s = range(10) for i in e.map(time.sleep, s): print(i) The reason for the order being kept may be because it's sometimes important that you get results in the same order you give them to map. And results are probably not wrapped in future objects because in some situations it may take just too long to do another map over the list to get all results if you need them. And after all in most cases it's very likely that the next value is ready before the loop processed the first value. This is demonstrated in this example: import concurrent.futures executor = concurrent.futures.ThreadPoolExecutor() # Or ProcessPoolExecutor data = some_huge_list() results = executor.map(crunch_number, data) finals = [] for value in results: finals.append(do_some_stuff(value)) In this example it may be likely that do_some_stuff takes longer than crunch_number and if this is really the case it's really not a big loss of performance while you still keep the easy usage of map. Also since the worker threads(/processes) start processing at the beginning of the list and work their way to the end to the list you submitted the results should be finished in the order they're already yielded by the iterator. Which means in most cases executor.map is just fine, but in some cases, for example if it doesn't matter in which order you process the values and the function you passed to map takes very different times to run, the future.as_completed may be faster."} +{"question_id": 15541404, "score": 123, "creation_date": 1363849627, "tags": ["python", "string", "python-internals", "internals", "string-interning"], "instruction": "Python string interning\n\nWhile this question doesn't have any real use in practice, I am curious as to how Python does string interning. I have noticed the following. >>> \"string\" is \"string\" True This is as I expected. You can also do this. >>> \"strin\"+\"g\" is \"string\" True And that's pretty clever! But you can't do this. >>> s1 = \"strin\" >>> s2 = \"string\" >>> s1+\"g\" is s2 False Why wouldn't Python evaluate s1+\"g\", and realize it is the same as s2 and point it to the same address? What is actually going on in that last block to have it return False?", "output": "This is implementation-specific, but your interpreter is probably interning compile-time constants but not the results of run-time expressions. In what follows CPython 3.9.0+ is used. In the second example, the expression \"strin\"+\"g\" is evaluated at compile time, and is replaced with \"string\". This makes the first two examples behave the same. 
If we examine the bytecodes, we'll see that they are exactly the same: # s1 = \"string\" 1 0 LOAD_CONST 0 ('string') 2 STORE_NAME 0 (s1) # s2 = \"strin\" + \"g\" 2 4 LOAD_CONST 0 ('string') 6 STORE_NAME 1 (s2) This bytecode was obtained with (which prints a few more lines after the above): import dis source = 's1 = \"string\"\\ns2 = \"strin\" + \"g\"' code = compile(source, '', 'exec') print(dis.dis(code)) The third example involves a run-time concatenation, the result of which is not automatically interned: # s3a = \"strin\" 3 8 LOAD_CONST 1 ('strin') 10 STORE_NAME 2 (s3a) # s3 = s3a + \"g\" 4 12 LOAD_NAME 2 (s3a) 14 LOAD_CONST 2 ('g') 16 BINARY_ADD 18 STORE_NAME 3 (s3) 20 LOAD_CONST 3 (None) 22 RETURN_VALUE This bytecode was obtained with (which prints a few more lines before the above, and those lines are exactly as in the first block of bytecodes given above): import dis source = ( 's1 = \"string\"\\n' 's2 = \"strin\" + \"g\"\\n' 's3a = \"strin\"\\n' 's3 = s3a + \"g\"') code = compile(source, '', 'exec') print(dis.dis(code)) If you were to manually sys.intern() the result of the third expression, you'd get the same object as before: >>> import sys >>> s3a = \"strin\" >>> s3 = s3a + \"g\" >>> s3 is \"string\" False >>> sys.intern(s3) is \"string\" True Also, Python 3.9 prints a warning for the last two statements above: SyntaxWarning: \"is\" with a literal. Did you mean \"==\"?"} +{"question_id": 3562403, "score": 122, "creation_date": 1282702446, "tags": ["python", "ssh", "paramiko"], "instruction": "How can you get the SSH return code using Paramiko?\n\nclient = paramiko.SSHClient() stdin, stdout, stderr = client.exec_command(command) Is there any way to get the command return code? It's hard to parse all stdout/stderr and know whether the command finished successfully or not.", "output": "SSHClient is a simple wrapper class around the more lower-level functionality in Paramiko. The API documentation lists a recv_exit_status() method on the Channel class. A very simple demonstration script: import paramiko import getpass pw = getpass.getpass() client = paramiko.SSHClient() client.set_missing_host_key_policy(paramiko.WarningPolicy()) client.connect('127.0.0.1', password=pw) while True: cmd = raw_input(\"Command to run: \") if cmd == \"\": break chan = client.get_transport().open_session() print \"running '%s'\" % cmd chan.exec_command(cmd) print \"exit status: %s\" % chan.recv_exit_status() client.close() Example of its execution: $ python sshtest.py Password: Command to run: true running 'true' exit status: 0 Command to run: false running 'false' exit status: 1 Command to run: $"} +{"question_id": 12638408, "score": 122, "creation_date": 1348828705, "tags": ["python", "hex", "padding", "built-in"], "instruction": "Decorating Hex function to pad zeros\n\nI wrote this simple function: def padded_hex(i, l): given_int = i given_len = l hex_result = hex(given_int)[2:] # remove '0x' from beginning of str num_hex_chars = len(hex_result) extra_zeros = '0' * (given_len - num_hex_chars) # may not get used.. return ('0x' + hex_result if num_hex_chars == given_len else '?' 
* given_len if num_hex_chars > given_len else '0x' + extra_zeros + hex_result if num_hex_chars < given_len else None) Examples: padded_hex(42,4) # result '0x002a' hex(15) # result '0xf' padded_hex(15,1) # result '0xf' Whilst this is clear enough for me and fits my use case (a simple test tool for a simple printer) I can't help thinking there's a lot of room for improvement and this could be squashed down to something very concise. What other approaches are there to this problem?", "output": "Starting with Python 3.6, you can: >>> value = 42 >>> padding = 6 >>> f\"{value:#0{padding}x}\" '0x002a' Note the padding includes the 0x. If you don't want that you can do >>> f\"0x{value:0{padding}x}\" '0x00002a' for older python versions use the .format() string method: >>> \"{0:#0{1}x}\".format(42,6) '0x002a' Explanation: { # Format identifier 0: # first parameter # # use \"0x\" prefix 0 # fill with zeroes {1} # to a length of n characters (including 0x), defined by the second parameter x # hexadecimal number, using lowercase letters for a-f } # End of format identifier If you want the letter hex digits uppercase but the prefix with a lowercase 'x', you'll need a slight workaround: >>> '0x{0:0{1}X}'.format(42,4) '0x002A'"} +{"question_id": 36498127, "score": 122, "creation_date": 1460113795, "tags": ["python", "tensorflow", "machine-learning", "keras", "deep-learning"], "instruction": "How to apply gradient clipping in TensorFlow?\n\nConsidering the example code. I would like to know How to apply gradient clipping on this network on the RNN where there is a possibility of exploding gradients. tf.clip_by_value(t, clip_value_min, clip_value_max, name=None) This is an example that could be used but where do I introduce this ? In the def of RNN lstm_cell = rnn_cell.BasicLSTMCell(n_hidden, forget_bias=1.0) # Split data because rnn cell needs a list of inputs for the RNN inner loop _X = tf.split(0, n_steps, _X) # n_steps tf.clip_by_value(_X, -1, 1, name=None) But this doesn't make sense as the tensor _X is the input and not the grad what is to be clipped? Do I have to define my own Optimizer for this or is there a simpler option?", "output": "Gradient clipping needs to happen after computing the gradients, but before applying them to update the model's parameters. In your example, both of those things are handled by the AdamOptimizer.minimize() method. In order to clip your gradients you'll need to explicitly compute, clip, and apply them as described in this section in TensorFlow's API documentation. Specifically you'll need to substitute the call to the minimize() method with something like the following: optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate) gvs = optimizer.compute_gradients(cost) capped_gvs = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gvs] train_op = optimizer.apply_gradients(capped_gvs)"} +{"question_id": 57727372, "score": 122, "creation_date": 1567170492, "tags": ["python", "pytorch", "tensor"], "instruction": "How do I get the value of a tensor in PyTorch?\n\nPrinting a tensor x gives: >>> x = torch.tensor([3]) >>> print(x) tensor([3]) Indexing x.data gives: >>> x.data[0] tensor(3) How do I get just a regular non-tensor value 3?", "output": "You can use x.item() to get a Python number from a Tensor that has one element."} +{"question_id": 35634238, "score": 122, "creation_date": 1456421426, "tags": ["python", "pandas"], "instruction": "How to save a pandas DataFrame table as a png\n\nI constructed a pandas dataframe of results. 
This data frame acts as a table. There are MultiIndexed columns and each row represents a name, ie index=['name1','name2',...] when creating the DataFrame. I would like to display this table and save it as a png (or any graphic format really). At the moment, the closest I can get is converting it to html, but I would like a png. It looks like similar questions have been asked such as How to save the Pandas dataframe/series data as a figure? However, the marked solution converts the dataframe into a line plot (not a table) and the other solution relies on PySide which I would like to stay away simply because I cannot pip install it on linux. I would like this code to be easily portable. I really was expecting table creation to png to be easy with python. All help is appreciated.", "output": "Pandas allows you to plot tables using matplotlib (details here). Usually this plots the table directly onto a plot (with axes and everything) which is not what you want. However, these can be removed first: import matplotlib.pyplot as plt import pandas as pd from pandas.table.plotting import table # EDIT: see deprecation warnings below ax = plt.subplot(111, frame_on=False) # no visible frame ax.xaxis.set_visible(False) # hide the x axis ax.yaxis.set_visible(False) # hide the y axis table(ax, df) # where df is your data frame plt.savefig('mytable.png') The output might not be the prettiest but you can find additional arguments for the table() function here. EDIT: Here is a (admittedly quite hacky) way of simulating multi-indexes when plotting using the method above. If you have a multi-index data frame called df that looks like: first second bar one 1.991802 two 0.403415 baz one -1.024986 two -0.522366 foo one 0.350297 two -0.444106 qux one -0.472536 two 0.999393 dtype: float64 First reset the indexes so they become normal columns df = df.reset_index() df first second 0 0 bar one 1.991802 1 bar two 0.403415 2 baz one -1.024986 3 baz two -0.522366 4 foo one 0.350297 5 foo two -0.444106 6 qux one -0.472536 7 qux two 0.999393 Remove all duplicates from the higher order multi-index columns by setting them to an empty string (in my example I only have duplicate indexes in \"first\"): df.ix[df.duplicated('first') , 'first'] = '' # see deprecation warnings below df first second 0 0 bar one 1.991802 1 two 0.403415 2 baz one -1.024986 3 two -0.522366 4 foo one 0.350297 5 two -0.444106 6 qux one -0.472536 7 two 0.999393 Change the column names over your \"indexes\" to the empty string new_cols = df.columns.values new_cols[:2] = '','' # since my index columns are the two left-most on the table df.columns = new_cols Now call the table function but set all the row labels in the table to the empty string (this makes sure the actual indexes of your plot are not displayed): table(ax, df, rowLabels=['']*df.shape[0], loc='center') et voila: Your not-so-pretty but totally functional multi-indexed table. EDIT: DEPRECATION WARNINGS As pointed out in the comments, the import statement for table: from pandas.tools.plotting import table is now deprecated in newer versions of pandas in favour of: from pandas.plotting import table EDIT: DEPRECATION WARNINGS 2 The ix indexer has now been fully deprecated so we should use the loc indexer instead. 
Replace: df.ix[df.duplicated('first') , 'first'] = '' with df.loc[df.duplicated('first') , 'first'] = ''"} +{"question_id": 12941416, "score": 122, "creation_date": 1350500388, "tags": ["python", "sql", "sqlalchemy"], "instruction": "How to count rows with SELECT COUNT(*) with SQLAlchemy?\n\nI'd like to know if it's possible to generate a SELECT COUNT(*) FROM TABLE statement in SQLAlchemy without explicitly asking for it with execute(). If I use: session.query(table).count() then it generates something like: SELECT count(*) AS count_1 FROM (SELECT table.col1 as col1, table.col2 as col2, ... from table) which is significantly slower in MySQL with InnoDB. I am looking for a solution that doesn't require the table to have a known primary key, as suggested in Get the number of rows in table using SQLAlchemy.", "output": "I managed to render the following SELECT with SQLAlchemy on both layers. SELECT count(*) AS count_1 FROM \"table\" Usage from the SQL Expression layer from sqlalchemy import select, func, Integer, Table, Column, MetaData metadata = MetaData() table = Table(\"table\", metadata, Column('primary_key', Integer), Column('other_column', Integer) # just to illustrate ) print select([func.count()]).select_from(table) Usage from the ORM layer You just subclass Query (you have probably anyway) and provide a specialized count() method, like this one. from sqlalchemy.sql.expression import func class BaseQuery(Query): def count_star(self): count_query = (self.statement.with_only_columns([func.count()]) .order_by(None)) return self.session.execute(count_query).scalar() Please note that order_by(None) resets the ordering of the query, which is irrelevant to the counting. Using this method you can have a count(*) on any ORM Query, that will honor all the filter andjoin conditions already specified."} +{"question_id": 23164058, "score": 122, "creation_date": 1397864669, "tags": ["python", "python-3.x", "encoding", "base64"], "instruction": "How to encode text to base64 in python\n\nI am trying to encode a text string to base64. i tried doing this : name = \"your name\" print('encoding %s in base64 yields = %s\\n'%(name,name.encode('base64','strict'))) But this gives me the following error: LookupError: 'base64' is not a text encoding; use codecs.encode() to handle arbitrary codecs How do I go about doing this ? ( using Python 3.4)", "output": "Remember to import base64 and that the b64encode function takes bytes as an argument. import base64 b = base64.b64encode(bytes('your_string', 'utf-8')) # bytes base64_str = b.decode('utf-8') # convert bytes to string Explanation: The bytes function creates a bytes object from the string \"your_string\" using UTF-8 encoding. In Python, bytes represents a sequence of bits and UTF-8 specifies the character encoding to use. The base64.b64encode function encodes bytes object into Base64 format. It takes a bytes-like object as input and returns a Base64 encoded bytes object. The b.decode function decodes the bytes object (here b) using UTF-8 encoding and returns the resulting string. 
It converts the bytes back to their original string representation."} +{"question_id": 9807634, "score": 122, "creation_date": 1332343568, "tags": ["python", "recursion", "dictionary", "traversal"], "instruction": "Find all occurrences of a key in nested dictionaries and lists\n\nI have a dictionary like this: { \"id\": \"abcde\", \"key1\": \"blah\", \"key2\": \"blah blah\", \"nestedlist\": [ { \"id\": \"qwerty\", \"nestednestedlist\": [ { \"id\": \"xyz\", \"keyA\": \"blah blah blah\" }, { \"id\": \"fghi\", \"keyZ\": \"blah blah blah\" } ], \"anothernestednestedlist\": [ { \"id\": \"asdf\", \"keyQ\": \"blah blah\" }, { \"id\": \"yuiop\", \"keyW\": \"blah\" } ] } ] } Basically a dictionary with nested lists, dictionaries, and strings, of arbitrary depth. What is the best way of traversing this to extract the values of every \"id\" key? I want to achieve the equivalent of an XPath query like \"//id\". The value of \"id\" is always a string. So from my example, the output I need is basically: [\"abcde\", \"qwerty\", \"xyz\", \"fghi\", \"asdf\", \"yuiop\"] Order is not important.", "output": "I found this Q/A very interesting, since it provides several different solutions for the same problem. I took all these functions and tested them with a complex dictionary object. I had to take two functions out of the test, because they had to many fail results and they did not support returning lists or dicts as values, which i find essential, since a function should be prepared for almost any data to come. So i pumped the other functions in 100.000 iterations through the timeit module and output came to following result: 0.11 usec/pass on gen_dict_extract(k,o) - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 6.03 usec/pass on find_all_items(k,o) - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 0.15 usec/pass on findkeys(k,o) - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 1.79 usec/pass on get_recursively(k,o) - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 0.14 usec/pass on find(k,o) - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 0.36 usec/pass on dict_extract(k,o) - - - - - - - - - - - - - - - - - - - - - - - - - - - - - All functions had the same needle to search for ('logging') and the same dictionary object, which is constructed like this: o = { 'temparature': '50', 'logging': { 'handlers': { 'console': { 'formatter': 'simple', 'class': 'logging.StreamHandler', 'stream': 'ext://sys.stdout', 'level': 'DEBUG' } }, 'loggers': { 'simpleExample': { 'handlers': ['console'], 'propagate': 'no', 'level': 'INFO' }, 'root': { 'handlers': ['console'], 'level': 'DEBUG' } }, 'version': '1', 'formatters': { 'simple': { 'datefmt': \"'%Y-%m-%d %H:%M:%S'\", 'format': '%(asctime)s - %(name)s - %(levelname)s - %(message)s' } } }, 'treatment': {'second': 5, 'last': 4, 'first': 4}, 'treatment_plan': [[4, 5, 4], [4, 5, 4], [5, 5, 5]] } All functions delivered the same result, but the time differences are dramatic! 
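For anyone who wants to reproduce a comparison like this: the exact benchmark harness isn't shown here, so the following is only a rough sketch using the timeit module, assuming the candidate functions and the dictionary o above are already defined. import timeit n = 100000 total = timeit.timeit(lambda: list(gen_dict_extract('logging', o)), number=n) # consume the generator with list() so the search actually runs print('%.2f usec/pass' % (total / n * 1e6)) # convert the total seconds into microseconds per pass Repeating the timeit call for each candidate function gives per-pass figures comparable to the ones listed above.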
The function gen_dict_extract(k,o) is my function adapted from the functions here, actually it is pretty much like the find function from Alfe, with the main difference, that i am checking if the given object has iteritems function, in case strings are passed during recursion: # python 2 def gen_dict_extract(key, var): if hasattr(var,'iteritems'): # hasattr(var,'items') for python 3 for k, v in var.iteritems(): # var.items() for python 3 if k == key: yield v if isinstance(v, dict): for result in gen_dict_extract(key, v): yield result elif isinstance(v, list): for d in v: for result in gen_dict_extract(key, d): yield result So this variant is the fastest and safest of the functions here. And find_all_items is incredibly slow and far off the second slowest get_recursivley while the rest, except dict_extract, is close to each other. The functions fun and keyHole only work if you are looking for strings. Interesting learning aspect here :)"} +{"question_id": 44864156, "score": 122, "creation_date": 1498933751, "tags": ["python", "tuples", "operator-precedence"], "instruction": "Why in Python does \"0, 0 == (0, 0)\" equal \"(0, False)\"?\n\nIn Python (I checked only with Python 3.6 but I believe it should hold for many of the previous versions as well): (0, 0) == 0, 0 # results in a two element tuple: (False, 0) 0, 0 == (0, 0) # results in a two element tuple: (0, False) (0, 0) == (0, 0) # results in a boolean True But: a = 0, 0 b = (0, 0) a == b # results in a boolean True Why does the result differ between the two approaches? Does the equality operator handle tuples differently?", "output": "The first two expressions both parse as tuples: (0, 0) == 0 (which is False), followed by 0 0, followed by 0 == (0, 0) (which is still False that way around). The expressions are split that way because of the relative precedence of the comma separator compared to the equality operator: Python sees a tuple containing two expressions, one of which happens to be an equality test, instead of an equality test between two tuples. But in your second set of statements, a = 0, 0 cannot be a tuple. A tuple is a collection of values, and unlike an equality test, assignment has no value in Python. An assignment is not an expression, but a statement; it does not have a value that can be included into a tuple or any other surrounding expression. If you tried something like (a = 0), 0 in order to force interpretation as a tuple, you would get a syntax error. That leaves the assignment of a tuple to a variable \u2013 which could be made more explicit by writing it a = (0, 0) \u2013 as the only valid interpretation of a = 0, 0. So even without the parentheses on the assignment to a, both it and b get assigned the value (0,0), so a == b is therefore True."} +{"question_id": 69711606, "score": 122, "creation_date": 1635179133, "tags": ["python", "pip", "setuptools", "python-poetry", "pyproject.toml"], "instruction": "How to install a package using pip in editable mode with pyproject.toml?\n\nWhen a project is specified only via pyproject.toml (i.e. no setup.{py,cfg} files), how can it be installed in editable mode via pip (i.e. python -m pip install -e .)? 
I tried both setuptools and poetry for the build system, but neither worked: [build-system] requires = [\"setuptools\", \"wheel\"] build-backend = \"setuptools.build_meta\" [build-system] requires = [\"poetry-core>=1.0.0\"] build-backend = \"poetry.core.masonry.api\" I get the same error for both build systems: ERROR: Project file:///tmp/demo has a 'pyproject.toml' and its build backend is missing the 'build_editable' hook. Since it does not have a 'setup.py' nor a 'setup.cfg', it cannot be installed in editable mode. Consider using a build backend that supports PEP 660. I'm using this inside a conda environment, the following is my version of setuptools and pip: $ conda list | grep setuptools setuptools 58.3.0 pypi_0 pypi $ python -m pip --version pip 21.3.1", "output": "PEP 660 \u2013 Editable installs for pyproject.toml based builds defines how to build projects that only use pyproject.toml. Build tools must implement PEP 660 for editable installs to work. You need a front-end (such as pip \u2265 21.3) and a backend. The statuses of some popular backends are: Setuptools implements PEP 660 as of version 64. Flit implements PEP 660 as of version 3.4. Poetry implements PEP 660 as of version 1.0.8. Note: To be able to do an editable installation to your user site (pip install -e --user), you need a system installed setuptools v62.0.0 or newer."} +{"question_id": 1534504, "score": 122, "creation_date": 1254953173, "tags": ["python", "string", "variables"], "instruction": "Convert Variable Name to String?\n\nI would like to convert a python variable name into the string equivalent as shown. Any ideas how? var = {} print ??? # Would like to see 'var' something_else = 3 print ??? # Would print 'something_else'", "output": "TL;DR: Not possible. See 'conclusion' at the end. There is an usage scenario where you might need this. I'm not implying there are not better ways or achieving the same functionality. This would be useful in order to 'dump' an arbitrary list of dictionaries in case of error, in debug modes and other similar situations. What would be needed, is the reverse of the eval() function: get_indentifier_name_missing_function() which would take an identifier name ('variable','dictionary',etc) as an argument, and return a string containing the identifier\u2019s name. Consider the following current state of affairs: random_function(argument_data) If one is passing an identifier name ('function','variable','dictionary',etc) argument_data to a random_function() (another identifier name), one actually passes an identifier (e.g.: ) to another identifier (e.g.: ): () From my understanding, only the memory address is passed to the function: () Therefore, one would need to pass a string as an argument to random_function() in order for that function to have the argument's identifier name: random_function('argument_data') Inside the random_function() def random_function(first_argument): , one would use the already supplied string 'argument_data' to: serve as an 'identifier name' (to display, log, string split/concat, whatever) feed the eval() function in order to get a reference to the actual identifier, and therefore, a reference to the real data: print(\"Currently working on\", first_argument) some_internal_var = eval(first_argument) print(\"here comes the data: \" + str(some_internal_var)) Unfortunately, this doesn't work in all cases. It only works if the random_function() can resolve the 'argument_data' string to an actual identifier. I.e. 
If argument_data identifier name is available in the random_function()'s namespace. This isn't always the case: # main1.py import some_module1 argument_data = 'my data' some_module1.random_function('argument_data') # some_module1.py def random_function(first_argument): print(\"Currently working on\", first_argument) some_internal_var = eval(first_argument) print(\"here comes the data: \" + str(some_internal_var)) ###### Expected results would be: Currently working on: argument_data here comes the data: my data Because argument_data identifier name is not available in the random_function()'s namespace, this would yield instead: Currently working on argument_data Traceback (most recent call last): File \"~/main1.py\", line 6, in some_module1.random_function('argument_data') File \"~/some_module1.py\", line 4, in random_function some_internal_var = eval(first_argument) File \"\", line 1, in NameError: name 'argument_data' is not defined Now, consider the hypotetical usage of a get_indentifier_name_missing_function() which would behave as described above. Here's a dummy Python 3.0 code: . # main2.py import some_module2 some_dictionary_1 = { 'definition_1':'text_1', 'definition_2':'text_2', 'etc':'etc.' } some_other_dictionary_2 = { 'key_3':'value_3', 'key_4':'value_4', 'etc':'etc.' } # # more such stuff # some_other_dictionary_n = { 'random_n':'random_n', 'etc':'etc.' } for each_one_of_my_dictionaries in ( some_dictionary_1, some_other_dictionary_2, ..., some_other_dictionary_n ): some_module2.some_function(each_one_of_my_dictionaries) # some_module2.py def some_function(a_dictionary_object): for _key, _value in a_dictionary_object.items(): print( get_indentifier_name_missing_function(a_dictionary_object) + \" \" + str(_key) + \" = \" + str(_value) ) ###### Expected results would be: some_dictionary_1 definition_1 = text_1 some_dictionary_1 definition_2 = text_2 some_dictionary_1 etc = etc. some_other_dictionary_2 key_3 = value_3 some_other_dictionary_2 key_4 = value_4 some_other_dictionary_2 etc = etc. ...... ...... ...... some_other_dictionary_n random_n = random_n some_other_dictionary_n etc = etc. Unfortunately, get_indentifier_name_missing_function() would not see the 'original' identifier names (some_dictionary_,some_other_dictionary_2,some_other_dictionary_n). It would only see the a_dictionary_object identifier name. Therefore the real result would rather be: a_dictionary_object definition_1 = text_1 a_dictionary_object definition_2 = text_2 a_dictionary_object etc = etc. a_dictionary_object key_3 = value_3 a_dictionary_object key_4 = value_4 a_dictionary_object etc = etc. ...... ...... ...... a_dictionary_object random_n = random_n a_dictionary_object etc = etc. So, the reverse of the eval() function won't be that useful in this case. Currently, one would need to do this: # main2.py same as above, except: for each_one_of_my_dictionaries_names in ( 'some_dictionary_1', 'some_other_dictionary_2', '...', 'some_other_dictionary_n' ): some_module2.some_function( { each_one_of_my_dictionaries_names : eval(each_one_of_my_dictionaries_names) } ) # some_module2.py def some_function(a_dictionary_name_object_container): for _dictionary_name, _dictionary_object in a_dictionary_name_object_container.items(): for _key, _value in _dictionary_object.items(): print( str(_dictionary_name) + \" \" + str(_key) + \" = \" + str(_value) ) ###### In conclusion: Python passes only memory addresses as arguments to functions. 
Strings representing the name of an identifier, can only be referenced back to the actual identifier by the eval() function if the name identifier is available in the current namespace. A hypothetical reverse of the eval() function, would not be useful in cases where the identifier name is not 'seen' directly by the calling code. E.g. inside any called function. Currently one needs to pass to a function: the string representing the identifier name the actual identifier (memory address) This can be achieved by passing both the 'string' and eval('string') to the called function at the same time. I think this is the most 'general' way of solving this egg-chicken problem across arbitrary functions, modules, namespaces, without using corner-case solutions. The only downside is the use of the eval() function which may easily lead to unsecured code. Care must be taken to not feed the eval() function with just about anything, especially unfiltered external-input data."} +{"question_id": 47333227, "score": 122, "creation_date": 1510845464, "tags": ["python", "pandas", "csv"], "instruction": "Pandas: ValueError: cannot convert float NaN to integer\n\nI get ValueError: cannot convert float NaN to integer for following: df = pandas.read_csv('zoom11.csv') df[['x']] = df[['x']].astype(int) The \"x\" is a column in the csv file, I cannot spot any float NaN in the file, and I don't understand the error or why I am getting it. When I read the column as String, then it has values like -1,0,1,...2000, all look very nice int numbers to me. When I read the column as float, then this can be loaded. Then it shows values as -1.0,0.0 etc, still there are no any NaN-s I tried with error_bad_lines = False and dtype parameter in read_csv to no avail. It just cancels loading with same exception. The file is not small (10+ M rows), so cannot inspect it manually, when I extract a small header part, then there is no error, but it happens with full file. So it is something in the file, but cannot detect what. Logically the csv should not have missing values, but even if there is some garbage then I would be ok to skip the rows. Or at least identify them, but I do not see way to scan through file and report conversion errors. Update: Using the hints in comments/answers I got my data clean with this: # x contained NaN df = df[~df['x'].isnull()] # Y contained some other garbage, so null check was not enough df = df[df['y'].str.isnumeric()] # final conversion now worked df[['x']] = df[['x']].astype(int) df[['y']] = df[['y']].astype(int)", "output": "For identifying NaN values use boolean indexing: print(df[df['x'].isnull()]) Then for removing all non-numeric values use to_numeric with parameter errors='coerce' - to replace non-numeric values to NaNs: df['x'] = pd.to_numeric(df['x'], errors='coerce') And for remove all rows with NaNs in column x use dropna: df = df.dropna(subset=['x']) Last convert values to ints: df['x'] = df['x'].astype(int)"} +{"question_id": 5347065, "score": 121, "creation_date": 1300411251, "tags": ["python", "arrays", "numpy"], "instruction": "Interleaving two NumPy arrays efficiently\n\nAssume the following arrays are given: a = array([1, 3, 5]) b = array([2, 4, 6]) How would one interleave them efficiently so that one gets a third array like the following? c = array([1, 2, 3, 4, 5, 6]) It can be assumed that length(a) == length(b).", "output": "I like Josh's answer. I just wanted to add a more mundane, usual, and slightly more verbose solution. I don't know which is more efficient. 
I expect they will have similar performance. import numpy as np a = np.array([1,3,5]) b = np.array([2,4,6]) c = np.empty((a.size + b.size,), dtype=a.dtype) c[0::2] = a c[1::2] = b"} +{"question_id": 8218608, "score": 121, "creation_date": 1321910027, "tags": ["python", "numpy", "matplotlib", "scipy", "savefig"], "instruction": "savefig without frames, axes, only content\n\nIn numpy/scipy I have an image stored in an array. I can display it, I want to save it using savefig without any borders, axes, labels, titles,... Just pure image, nothing else. I want to avoid packages like PyPNG or scipy.misc.imsave, they are sometimes problematic (they do not always install well), only basic savefig() for me", "output": "EDIT Changed aspect='normal to aspect='auto' since that changed in more recent versions of matplotlib (thanks to @Luke19). Assuming : import matplotlib.pyplot as plt To make a figure without the frame : fig = plt.figure(frameon=False) fig.set_size_inches(w,h) To make the content fill the whole figure ax = plt.Axes(fig, [0., 0., 1., 1.]) ax.set_axis_off() fig.add_axes(ax) Then draw your image on it : ax.imshow(your_image, aspect='auto') fig.savefig(fname, dpi) The aspect parameter changes the pixel size to make sure they fill the figure size specified in fig.set_size_inches(\u2026). To get a feel of how to play with this sort of things, read through matplotlib's documentation, particularly on the subject of Axes, Axis and Artist."} +{"question_id": 54716377, "score": 121, "creation_date": 1550261384, "tags": ["python", "machine-learning", "deep-learning", "pytorch", "gradient-descent"], "instruction": "How to do gradient clipping in pytorch?\n\nWhat is the correct way to perform gradient clipping in pytorch? I have an exploding gradients problem.", "output": "clip_grad_norm (which is actually deprecated in favor of clip_grad_norm_ following the more consistent syntax of a trailing _ when in-place modification is performed) clips the norm of the overall gradient by concatenating all parameters passed to the function, as can be seen from the documentation: The norm is computed over all gradients together, as if they were concatenated into a single vector. Gradients are modified in-place. From your example it looks like that you want clip_grad_value_ instead which has a similar syntax and also modifies the gradients in-place: clip_grad_value_(model.parameters(), clip_value) Another option is to register a backward hook. This takes the current gradient as an input and may return a tensor which will be used in-place of the previous gradient, i.e. modifying it. This hook is called each time after a gradient has been computed, i.e. there's no need for manually clipping once the hook has been registered: for p in model.parameters(): p.register_hook(lambda grad: torch.clamp(grad, -clip_value, clip_value))"} +{"question_id": 61875869, "score": 121, "creation_date": 1589824360, "tags": ["python", "ubuntu", "ubuntu-20.04"], "instruction": "Ubuntu 20.04 upgrade, Python missing libffi.so.6\n\nI recently upgraded my OS to Ubuntu 20.04 LTS. Now when I try to import a library like Numpy in Python, I get the following error: ImportError: libffi.so.6: cannot open shared object file: No such file or directory I tried installing the libffi package, but apt can't locate it: sudo apt-get install libffi Reading package lists... Done Building dependency tree Reading state information... Done E: Unable to locate package libffi", "output": "It seems like I fixed it. 
I could be wrong, but here is what I think happened: Ubuntu 20.04 upgraded libffi6 to libffi7 Python is still looking for libffi6 What I did to fix it : Locate libffi.so.7 in your system $ find /usr/lib -name \"libffi.so*\" Create a simlink named libffi.so.6 that points to libffi.so.7: sudo ln -s /usr/path/to/libffi.so.7 /usr/lib/path/to/libffi.so.6 UPDATE: As noted by many users, this fix could have unintended consequences. The better way to do it is to reinstall python as @amichaud explained. This should be used as a last resort IF you're not using pyenv/virtualenv/etc in which case removing python will cause a lot of dependencies to be removed as well."} +{"question_id": 12800862, "score": 121, "creation_date": 1349788396, "tags": ["python", "django", "gunicorn"], "instruction": "How to make Django serve static files with Gunicorn?\n\nI want to run my django project under gunicorn on localhost. I installed and integrated gunicorn. When I run: python manage.py run_gunicorn It works but there are no any static files (css and js) I disabled debug and template_debug in settings.py (made them false), but it is still same. Am I missing something? I call statics like: {{ STATIC_URL }}css/etc....", "output": "When in development mode and when you are using some other server for local development add this to your urls.py from django.contrib.staticfiles.urls import staticfiles_urlpatterns # ... the rest of your URLconf goes here ... urlpatterns += staticfiles_urlpatterns() Note that staticfiles_urlpatterns() will only work when DEBUG = True is set in your settings.py. More info here When in production you never, ever put gunicorn in front. Instead you use a server like nginx which dispatches requests to a pool of gunicorn workers and also serves the static files. See here"} +{"question_id": 60580113, "score": 121, "creation_date": 1583600910, "tags": ["python", "python-poetry"], "instruction": "Change python version to 3.x\n\nAccording to poetry's docs, the proper way to setup a new project is with poetry new poetry-demo, however this creates a project based on the now deprecated python2.7 by creating the following toml file: [tool.poetry] name = \"poetry-demo\" version = \"0.1.0\" description = \"\" authors = [\"Harsha Goli \"] [tool.poetry.dependencies] python = \"^2.7\" [tool.poetry.dev-dependencies] pytest = \"^4.6\" [build-system] requires = [\"poetry>=0.12\"] build-backend = \"poetry.masonry.api\" How can I update this to 3.7? Simply changing python = \"^2.7\" to python = \"^3.7\" results in the following error when poetry install is run: [SolverProblemError] The current project's Python requirement (2.7.17) is not compatible with some of the required packages Python requirement: - zipp requires Python >=3.6 Because no versions of pytest match >=4.6,<4.6.9 || >4.6.9,<5.0 and pytest (4.6.9) depends on importlib-metadata (>=0.12), pytest (>=4.6,<5.0) requires importlib-metadata (>=0.12). And because no versions of importlib-metadata match >=0.12,<1.5.0 || >1.5.0 and importlib-metadata (1.5.0) depends on zipp (>=0.5), pytest (>=4.6,<5.0) requires zipp (>=0.5). Because zipp (3.1.0) requires Python >=3.6 and no versions of zipp match >=0.5,<3.1.0 || >3.1.0, zipp is forbidden. Thus, pytest is forbidden. So, because poetry-demo depends on pytest (^4.6), version solving failed.", "output": "Interestingly, poetry is silently failing due to a missing package the tool itself relies on and continues to install a broken venv. Here's how you fix it. 
sudo apt install python3-venv poetry env remove python3 poetry install I had to remove pytest, and then reinstall with poetry add pytest. EDIT: I ran into this issue again when upgrading a project from python3.7 to python3.8 - for this instead of installing python3-venv, you'd want to install python3.8-venv instead"} +{"question_id": 42118651, "score": 121, "creation_date": 1486571599, "tags": ["python", "configuration", "code-formatting", "visual-studio-code"], "instruction": "How to set Python language specific tab spacing in Visual Studio Code?\n\nUsing VSCode 1.9.0 with the (donjayamanne) Python 0.5.8 extension, is it possible to provide Python specific editor options? Or more generally speaking, is it possible to provide language specific tab spacing and replacement rules? For example, Python should be tab=4 spaces (replaced as spaces), and Ruby should be tab=2 spaces (replaced). Other languages tend to have their own opinions. However, I only see the general \"editor.tabSize\": 4, \"editor.insertSpaces\": true, options. I thought perhaps there was a \"python.editor\": { } block or perhaps a \"python.editor.tabSize\" option, but I can't find reference to such, nor have I successfully guessed a working name.", "output": "I had the same problem today. This is how I fixed it. Add this lines in setting.json in VSCode: \"[python]\": { \"editor.insertSpaces\": true, \"editor.tabSize\": 4 } It works like a charm."} +{"question_id": 53014306, "score": 121, "creation_date": 1540578016, "tags": ["python", "macos", "matplotlib"], "instruction": "Error #15: Initializing libiomp5.dylib, but found libiomp5.dylib already initialized\n\nGetting the error message when using matplotlib: Error #15: Initializing libiomp5.dylib, but found libiomp5.dylib already initialized OMP: Hint: This means that multiple copies of the OpenMP runtime have been linked into the program. That is dangerous, since it can degrade performance or cause incorrect results. The best thing to do is to ensure that only a single OpenMP runtime is linked into the process, e.g. by avoiding static linking of the OpenMP runtime in any library. As an unsafe, unsupported, undocumented workaround you can set the environment variable KMP_DUPLICATE_LIB_OK=TRUE to allow the program to continue to execute, but that may cause crashes or silently produce incorrect results. For more information, please see http://www.intel.com/software/products/support/.", "output": "Do the following to solve the issue: import os os.environ['KMP_DUPLICATE_LIB_OK']='True' Answer found at: https://github.com/dmlc/xgboost/issues/1715 Be aware of potential side-effects: but that may cause crashes or silently produce incorrect results."} +{"question_id": 45201514, "score": 121, "creation_date": 1500500463, "tags": ["python", "matplotlib", "seaborn", "legend", "legend-properties"], "instruction": "How to edit a seaborn legend title and labels for figure-level functions\n\nI've created this plot using Seaborn and a pandas dataframe (data): My code: import seaborn as sns g = sns.lmplot('credibility', 'percentWatched', data=data, hue='millennial', markers=[\"+\", \".\"]) You may notice the plot's legend title is simply the variable name ('millennial') and the legend items are its values (0, 1). How can I edit the legend's title and labels? 
Ideally, the legend title would be 'Generation' and the labels would be \"Millennial\" and \"Older Generations\".", "output": "If legend_out is set to True then legend is available through the g._legend property and it is a part of a figure. Seaborn legend is standard matplotlib legend object. Therefore you may change legend texts. Tested in python 3.8.11, matplotlib 3.4.3, seaborn 0.11.2 import seaborn as sns # load the tips dataset tips = sns.load_dataset(\"tips\") # plot g = sns.lmplot(x=\"total_bill\", y=\"tip\", hue=\"smoker\", data=tips, markers=[\"o\", \"x\"], facet_kws={'legend_out': True}) # title new_title = 'My title' g._legend.set_title(new_title) # replace labels new_labels = ['label 1', 'label 2'] for t, l in zip(g._legend.texts, new_labels): t.set_text(l) Another situation if legend_out is set to False. You have to define which axes has a legend (in below example this is axis number 0): g = sns.lmplot(x=\"total_bill\", y=\"tip\", hue=\"smoker\", data=tips, markers=[\"o\", \"x\"], facet_kws={'legend_out': False}) # check axes and find which is have legend leg = g.axes.flat[0].get_legend() new_title = 'My title' leg.set_title(new_title) new_labels = ['label 1', 'label 2'] for t, l in zip(leg.texts, new_labels): t.set_text(l) Moreover you may combine both situations and use this code: g = sns.lmplot(x=\"total_bill\", y=\"tip\", hue=\"smoker\", data=tips, markers=[\"o\", \"x\"], facet_kws={'legend_out': True}) # check axes and find which is have legend for ax in g.axes.flat: leg = g.axes.flat[0].get_legend() if not leg is None: break # or legend may be on a figure if leg is None: leg = g._legend # change legend texts new_title = 'My title' leg.set_title(new_title) new_labels = ['label 1', 'label 2'] for t, l in zip(leg.texts, new_labels): t.set_text(l) This code works for any seaborn plot which is based on Grid class."} +{"question_id": 34471102, "score": 121, "creation_date": 1451130877, "tags": ["python", "django", "nameerror"], "instruction": "Python NameError: name 'include' is not defined\n\nI'm currently developing a website with the framework Django (I'm very beginner), but I have a problem with Python: since I have created my templates, I can't run server anymore for this reason (the stack trace points to a line in file urls.py): ... path('apppath/', include('myapp.urls')), NameError: name 'include' is not defined Where can I import include from?", "output": "Guessing on the basis of whatever little information provided in the question, I think you might have forgotten to add the following import in your urls.py file. from django.conf.urls import include"} +{"question_id": 18992086, "score": 121, "creation_date": 1380056899, "tags": ["python", "pandas", "histogram"], "instruction": "save a pandas.Series histogram plot to file\n\nIn ipython Notebook, first create a pandas Series object, then by calling the instance method .hist(), the browser displays the figure. I am wondering how to save this figure to a file (I mean not by right click and save as, but the commands needed in the script).", "output": "Use the Figure.savefig() method, like so: ax = s.hist() # s is an instance of Series fig = ax.get_figure() fig.savefig('/path/to/figure.pdf') It doesn't have to end in pdf, there are many options. Check out the documentation. 
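For example (the file name and keyword arguments here are only illustrative), fig.savefig('figure.png', dpi=200, bbox_inches='tight') writes a PNG at a higher resolution and trims the surrounding whitespace.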
Alternatively, you can use the pyplot interface and just call the savefig as a function to save the most recently created figure: import matplotlib.pyplot as plt s.hist() plt.savefig('path/to/figure.pdf') # saves the current figure Plots from multiple columns Added from a comment toto_tico made on 2018-05-11 If you are getting this error AttributeError: 'numpy.ndarray' object has no attribute 'get_figure', then it is likely that you are plotting multiple columns. In this case, ax will be an array of all the axes. ax = s.hist(columns=['colA', 'colB']) # try one of the following fig = ax[0].get_figure() fig = ax[0][0].get_figure() fig.savefig('figure.pdf')"} +{"question_id": 42228895, "score": 121, "creation_date": 1487083072, "tags": ["python", "pytest", "parametrized-testing", "pytest-fixtures", "parametrize"], "instruction": "How to parametrize a Pytest fixture?\n\nConsider the following Pytest: import pytest class TimeLine(object): instances = [0, 1, 2] @pytest.fixture def timeline(): return TimeLine() def test_timeline(timeline): for instance in timeline.instances: assert instance % 2 == 0 if __name__ == \"__main__\": pytest.main([__file__]) The test test_timeline uses a Pytest fixture, timeline, which itself has the attribute instances. This attribute is iterated over in the test, so that the test only passes if the assertion holds for every instance in timeline.instances. What I actually would like to do, however, is to generate 3 tests, 2 of which should pass and 1 of which would fail. I've tried @pytest.mark.parametrize(\"instance\", timeline.instances) def test_timeline(timeline): assert instance % 2 == 0 but this leads to AttributeError: 'function' object has no attribute 'instances' As I understand it, in Pytest fixtures the function 'becomes' its return value, but this seems to not have happened yet at the time the test is parametrized. How can I set up the test in the desired fashion?", "output": "From Using fixtures in pytest.mark.parametrize pytest issue, it would appear that it is currently not possible to use fixtures in pytest.mark.parametrize."} +{"question_id": 6785226, "score": 121, "creation_date": 1311303891, "tags": ["python", "concurrency", "iterator", "future", "map-function"], "instruction": "Pass multiple parameters to concurrent.futures.Executor.map?\n\nThe concurrent.futures.Executor.map takes a variable number of iterables from which the function given is called. How should I call it if I have a generator that produces tuples that are normally unpacked in place? The following doesn't work because each of the generated tuples is given as a different argument to map: args = ((a, b) for (a, b) in c) for result in executor.map(f, *args): pass Without the generator, the desired arguments to map might look like this: executor.map( f, (i[0] for i in args), (i[1] for i in args), ..., (i[N] for i in args), )", "output": "You need to remove the * on the map call: args = ((a, b) for b in c) for result in executor.map(f, args): pass This will call f, len(args) times, where f should accept one parameter. 
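In other words, each generated tuple arrives as a single argument, so f would be written along these lines (a hypothetical f, shown only to illustrate the calling convention): def f(pair): a, b = pair # unpack the tuple inside the function return a + b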
If you want f to accept two parameters you can use a lambda call like: args = ((a, b) for b in c) for result in executor.map(lambda p: f(*p), args): # (*p) does the unpacking part pass"} +{"question_id": 31582750, "score": 121, "creation_date": 1437642104, "tags": ["python", "unit-testing", "python-3.x", "dependency-injection", "mocking"], "instruction": "Python mock Patch os.environ and return value\n\nUnit testing conn() using mock: app.py import mysql.connector import os, urlparse def conn(): if \"DATABASE_URL\" in os.environ: url = urlparse(os.environ[\"DATABASE_URL\"]) g.db = mysql.connector.connect( user=url.username, password=url.password, host=url.hostname, database=url.path[1:], ) else: return \"Error\" test.py def test_conn(self): with patch(app.mysql.connector) as mock_mysql: with patch(app.os.environ) as mock_environ: con() mock_mysql.connect.assert_callled_with(\"credentials\") Error: Assertion mock_mysql.connect.assert_called_with is not called. which I believe it is because 'Database_url' is not in my patched os.environ and because of that test call is not made to mysql_mock.connect. Questions: What changes do I need to make this test code work? Do I also have to patch urlparse?", "output": "You can try unittest.mock.patch.dict solution. Just call conn with a dummy argument: import mysql.connector import os, urlparse from unittest import mock @mock.patch.dict(os.environ, {\"DATABASE_URL\": \"mytemp\"}, clear=True) # why need clear=True explained here https://stackoverflow.com/a/67477901/248616 # If clear is true then the dictionary will be cleared before the new values are set. def conn(mock_A): print os.environ[\"mytemp\"] if \"DATABASE_URL\" in os.environ: url = urlparse(os.environ[\"DATABASE_URL\"]) g.db = mysql.connector.connect( user=url.username, password=url.password, host=url.hostname, database=url.path[1:], ) else: return \"Error\" Or if you don't want to modify your original function try this solution: import os from unittest import mock def func(): print os.environ[\"mytemp\"] def test_func(): k = mock.patch.dict(os.environ, {\"mytemp\": \"mytemp\"}) k.start() func() k.stop() test_func()"} +{"question_id": 24808043, "score": 121, "creation_date": 1405611864, "tags": ["python", "python-2.7", "scipy", "pybrain"], "instruction": "ImportError: No module named scipy\n\nI am using Python 2.7 and trying to get PyBrain to work. But I get this error even though scipy is installed - Traceback (most recent call last): File \"\", line 1, in File \"/usr/local/lib/python2.7/site-packages/PyBrain-0.3.1- py2.7.egg/pybrain/__init__.py\", line 1, in from pybrain.structure.__init__ import * File \"/usr/local/lib/python2.7/site-packages/PyBrain-0.3.1-py2.7.egg/pybrain/structure/__init__.py\", line 1, in from pybrain.structure.connections.__init__ import * File \"/usr/local/lib/python2.7/site-packages/PyBrain-0.3.1-py2.7.egg/pybrain/structure/connections/__init__.py\", line 1, in from pybrain.structure.connections.full import FullConnection File \"/usr/local/lib/python2.7/site-packages/PyBrain-0.3.1-py2.7.egg/pybrain/structure/connections/full.py\", line 3, in from scipy import reshape, dot, outer ImportError: No module named scipy I have installed scipy using this command - sudo apt-get install python-scipy I get - Reading package lists... Done Building dependency tree Reading state information... Done python-scipy is already the newest version. 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 
What should I do?", "output": "Try to install it as a python package using pip. You said you already tried: sudo apt-get install python-scipy Now run: pip install scipy I ran both and it worked on my Debian-based box."} +{"question_id": 36539623, "score": 121, "creation_date": 1460347158, "tags": ["python", "anaconda", "jupyter", "conda", "env"], "instruction": "How do I find the name of the conda environment in which my code is running?\n\nI'm looking for a good way to figure out the name of the conda environment I'm in from within running code or an interactive python instance. The use-case is that I am running Jupyter notebooks with both Python 2 and Python 3 kernels from a miniconda install. The default environment is Py3. There is a separate environment for Py2. Inside the a notebook file, I want it to attempt to conda install foo. I'm using subcommand to do this for now, since I can't find a programmatic conda equivalent of pip.main(['install','foo']). The problem is that the command needs to know the name of the Py2 environment to install foo there if the notebook is running using the Py2 kernel. Without that info it installs in the default Py3 env. I'd like for the code to figure out which environment it is in and the right name for it on its own. The best solution I've got so far is: import sys def get_env(): sp = sys.path[1].split(\"/\") if \"envs\" in sp: return sp[sp.index(\"envs\") + 1] else: return \"\" Is there a more direct/appropriate way to accomplish this?", "output": "You want $CONDA_DEFAULT_ENV or $CONDA_PREFIX: $ source activate my_env (my_env) $ echo $CONDA_DEFAULT_ENV my_env (my_env) $ echo $CONDA_PREFIX /Users/nhdaly/miniconda3/envs/my_env $ source deactivate $ echo $CONDA_DEFAULT_ENV # (not-defined) $ echo $CONDA_PREFIX # (not-defined) In python: import os print(os.environ['CONDA_DEFAULT_ENV']) for the absolute entire path which is usually more useful: Python 3.9.0 | packaged by conda-forge | (default, Oct 14 2020, 22:56:29) [Clang 10.0.1 ] on darwin import os; print(os.environ[\"CONDA_PREFIX\"]) /Users/miranda9/.conda/envs/synthesis The environment variables are not well documented. You can find CONDA_DEFAULT_ENV mentioned here: https://www.continuum.io/blog/developer/advanced-features-conda-part-1 The only info on CONDA_PREFIX I could find is this Issue: https://github.com/conda/conda/issues/2764"} +{"question_id": 7222382, "score": 121, "creation_date": 1314550574, "tags": ["python", "gis", "distance", "latitude-longitude"], "instruction": "Get lat/long given current point, distance and bearing\n\nGiven an existing point in lat/long, distance in (in KM) and bearing (in degrees converted to radians), I would like to calculate the new lat/long. This site crops up over and over again, but I just can't get the formula to work for me. The formulas as taken the above link are: lat2 = asin(sin(lat1)*cos(d/R) + cos(lat1)*sin(d/R)*cos(\u03b8)) lon2 = lon1 + atan2(sin(\u03b8)*sin(d/R)*cos(lat1), cos(d/R)\u2212sin(lat1)*sin(lat2)) The above formula is for MSExcel where- asin = arc sin() d = distance (in any unit) R = Radius of the earth (in the same unit as above) and hence d/r = is the angular distance (in radians) atan2(a,b) = arc tan(b/a) \u03b8 is the bearing (in radians, clockwise from north); Here's the code I've got in Python. import math R = 6378.1 #Radius of the Earth brng = 1.57 #Bearing is 90 degrees converted to radians. d = 15 #Distance in km #lat2 52.20444 - the lat result I'm hoping for #lon2 0.36056 - the long result I'm hoping for. 
lat1 = 52.20472 * (math.pi * 180) #Current lat point converted to radians lon1 = 0.14056 * (math.pi * 180) #Current long point converted to radians lat2 = math.asin( math.sin(lat1)*math.cos(d/R) + math.cos(lat1)*math.sin(d/R)*math.cos(brng)) lon2 = lon1 + math.atan2(math.sin(brng)*math.sin(d/R)*math.cos(lat1), math.cos(d/R)-math.sin(lat1)*math.sin(lat2)) print(lat2) print(lon2) I get lat2 = 0.472492248844 lon2 = 79.4821662373", "output": "Needed to convert answers from radians back to degrees. Working code below: from math import asin, atan2, cos, degrees, radians, sin def get_point_at_distance(lat1, lon1, d, bearing, R=6371): \"\"\" lat: initial latitude, in degrees lon: initial longitude, in degrees d: target distance from initial bearing: (true) heading in degrees R: optional radius of sphere, defaults to mean radius of earth Returns new lat/lon coordinate {d}km from initial, in degrees \"\"\" lat1 = radians(lat1) lon1 = radians(lon1) a = radians(bearing) lat2 = asin(sin(lat1) * cos(d/R) + cos(lat1) * sin(d/R) * cos(a)) lon2 = lon1 + atan2( sin(a) * sin(d/R) * cos(lat1), cos(d/R) - sin(lat1) * sin(lat2) ) return (degrees(lat2), degrees(lon2),) lat = 52.20472 lon = 0.14056 distance = 15 bearing = 90 lat2, lon2 = get_point_at_distance(lat, lon, distance, bearing) # lat2 52.20444 - the lat result I'm hoping for # lon2 0.36056 - the long result I'm hoping for. print(lat2, lon2) # prints \"52.20451523755824 0.36067845713550956\""} +{"question_id": 13865596, "score": 121, "creation_date": 1355421247, "tags": ["python", "statistics", "scipy"], "instruction": "Quantile-Quantile Plot using SciPy\n\nHow would you create a qq-plot using Python? Assuming that you have a large set of measurements and are using some plotting function that takes XY-values as input. The function should plot the quantiles of the measurements against the corresponding quantiles of some distribution (normal, uniform...). The resulting plot lets us then evaluate in our measurement follows the assumed distribution or not. http://en.wikipedia.org/wiki/Quantile-quantile_plot Both R and Matlab provide ready made functions for this, but I am wondering what the cleanest method for implementing in in Python would be.", "output": "Update: As folks have pointed out this answer is not correct. A probplot is different from a quantile-quantile plot. Please see those comments and other answers before you make an error in interpreting or conveying your distributions' relationship. I think that scipy.stats.probplot will do what you want. See the documentation for more detail. import numpy as np import pylab import scipy.stats as stats measurements = np.random.normal(loc = 20, scale = 5, size=100) stats.probplot(measurements, dist=\"norm\", plot=pylab) pylab.show() Result"} +{"question_id": 26059111, "score": 121, "creation_date": 1411732966, "tags": ["python", "windows", "egg", "python-wheel"], "instruction": "Build a wheel/egg and all dependencies for a python project\n\nIn order to stage python project within our corporation I need to make an installable distribution. This should include: An egg or whl for my project An egg or whl for every dependency of the project (optionally) produce a requirements.txt file listing all the installable components for this release Is there an easy plug in, (e.g. an alternative to bdist_wheel) that will not only compile one wheel but also that project's components? Obviously I can script this, but I was hoping that there might be a short-cut that builds the package + dependencies in fewer steps. 
This needs to work on Python 2.7 on Windows + Linux.", "output": "With poetry you can define your dependencies and metadata about your project in a file in the root of your project, called pyproject.toml: [tool.poetry] name = \"my-project\" version = \"0.1.0\" description = \"some longer description\" authors = [\"Some Author \"] [tool.poetry.dependencies] python = \"*\" [tool.poetry.dev-dependencies] pytest = \"^3.4\" To build your project as a wheel, execute poetry build $ poetry build Building my-project (0.1.0) - Building sdist - Built my-project-0.1.0.tar.gz - Building wheel - Built my-project-0.1.0-py3-none-any.whl a dist/ folder is created with a wheel for your project."} +{"question_id": 24507078, "score": 121, "creation_date": 1404207694, "tags": ["python", "selenium", "python-3.x", "selenium-webdriver", "browser"], "instruction": "How to deal with certificates using Selenium?\n\nI am using Selenium to launch a browser. How can I deal with the webpages (URLs) that will ask the browser to accept a certificate or not? In Firefox, I may have a website like that asks me to accept its certificate like this: On the Internet Explorer browser, I may get something like this: On Google Chrome: I repeat my question: How can I automate the acceptance of a website's certificate when I launch a browser (Internet Explorer, Firefox and Google Chrome) with Selenium (Python programming language)?", "output": "For the Firefox, you need to set accept_untrusted_certs FirefoxProfile() option to True: from selenium import webdriver profile = webdriver.FirefoxProfile() profile.accept_untrusted_certs = True driver = webdriver.Firefox(firefox_profile=profile) driver.get('https://cacert.org/') driver.close() For Chrome, you need to add --ignore-certificate-errors ChromeOptions() argument: from selenium import webdriver options = webdriver.ChromeOptions() options.add_argument('ignore-certificate-errors') driver = webdriver.Chrome(chrome_options=options) driver.get('https://cacert.org/') driver.close() For the Internet Explorer, you need to set acceptSslCerts desired capability: from selenium import webdriver capabilities = webdriver.DesiredCapabilities().INTERNETEXPLORER capabilities['acceptSslCerts'] = True driver = webdriver.Ie(capabilities=capabilities) driver.get('https://cacert.org/') driver.close() Actually, according to the Desired Capabilities documentation, setting acceptSslCerts capability to True should work for all browsers since it is a generic read/write capability: acceptSslCerts boolean Whether the session should accept all SSL certs by default. Working demo for Firefox: >>> from selenium import webdriver Setting acceptSslCerts to False: >>> capabilities = webdriver.DesiredCapabilities().FIREFOX >>> capabilities['acceptSslCerts'] = False >>> driver = webdriver.Firefox(capabilities=capabilities) >>> driver.get('https://cacert.org/') >>> print(driver.title) Untrusted Connection >>> driver.close() Setting acceptSslCerts to True: >>> capabilities = webdriver.DesiredCapabilities().FIREFOX >>> capabilities['acceptSslCerts'] = True >>> driver = webdriver.Firefox(capabilities=capabilities) >>> driver.get('https://cacert.org/') >>> print(driver.title) Welcome to CAcert.org >>> driver.close()"} +{"question_id": 63723514, "score": 121, "creation_date": 1599135456, "tags": ["python", "matplotlib"], "instruction": "UserWarning: FixedFormatter should only be used together with FixedLocator\n\nI have used for a long time small subroutines to format axes of charts I'm plotting. 
A couple of examples: def format_y_label_thousands(): # format y-axis tick labels formats ax = plt.gca() label_format = '{:,.0f}' ax.set_yticklabels([label_format.format(x) for x in ax.get_yticks().tolist()]) def format_y_label_percent(): # format y-axis tick labels formats ax = plt.gca() label_format = '{:.1%}' ax.set_yticklabels([label_format.format(x) for x in ax.get_yticks().tolist()]) However, after an update to matplotlib yesterday, I get the following warning when calling any of these two functions: UserWarning: FixedFormatter should only be used together with FixedLocator ax.set_yticklabels([label_format.format(x) for x in ax.get_yticks().tolist()]) What is the reason for such a warning? I couldn't figure it out looking into matplotlib's documentation.", "output": "WORKAROUND: The way to avoid the warning is to use FixedLocator (that is part of matplotlib.ticker). Below I show a code to plot three charts. I format their axes in different ways. Note that the \"set_ticks\" silence the warning, but it changes the actual ticks locations/labels (it took me some time to figure out that FixedLocator uses the same info but keeps the ticks locations intact). You can play with the x/y's to see how each solution might affect the output. import matplotlib as mpl import matplotlib.pyplot as plt import numpy as np import matplotlib.ticker as mticker mpl.rcParams['font.size'] = 6.5 x = np.array(range(1000, 5000, 500)) y = 37*x fig, [ax1, ax2, ax3] = plt.subplots(1,3) ax1.plot(x,y, linewidth=5, color='green') ax2.plot(x,y, linewidth=5, color='red') ax3.plot(x,y, linewidth=5, color='blue') label_format = '{:,.0f}' # nothing done to ax1 as it is a \"control chart.\" # fixing yticks with \"set_yticks\" ticks_loc = ax2.get_yticks().tolist() ax2.set_yticks(ax1.get_yticks().tolist()) ax2.set_yticklabels([label_format.format(x) for x in ticks_loc]) # fixing yticks with matplotlib.ticker \"FixedLocator\" ticks_loc = ax3.get_yticks().tolist() ax3.yaxis.set_major_locator(mticker.FixedLocator(ticks_loc)) ax3.set_yticklabels([label_format.format(x) for x in ticks_loc]) # fixing xticks with FixedLocator but also using MaxNLocator to avoid cramped x-labels ax3.xaxis.set_major_locator(mticker.MaxNLocator(3)) ticks_loc = ax3.get_xticks().tolist() ax3.xaxis.set_major_locator(mticker.FixedLocator(ticks_loc)) ax3.set_xticklabels([label_format.format(x) for x in ticks_loc]) fig.tight_layout() plt.show() OUTPUT CHARTS: Obviously, having a couple of idle lines of code like the one above (I'm basically getting the yticks or xticks and setting them again) only adds noise to my program. I would prefer that the warning was removed. However, look into some of the \"bug reports\" (from links on the comments above/below; the issue is not actually a bug: it is an update that is generating some issues), and the contributors that manage matplotlib have their reasons to keep the warning. OLDER VERSION OF MATPLOTLIB: If you use your Console to control critical outputs of your code (as I do), the warning messages might be problematic. Therefore, a way to delay having to deal with the issue is to downgrade matplotlib to version 3.2.2. I use Anaconda to manage my Python packages, and here is the command used to downgrade matplotlib: conda install matplotlib=3.2.2 Not all listed versions might be available. 
For instance, couldn't install matplotlib 3.3.0 although it is listed on matplotlib's releases page: https://github.com/matplotlib/matplotlib/releases"} +{"question_id": 16731115, "score": 121, "creation_date": 1369385777, "tags": ["python", "segmentation-fault"], "instruction": "How to debug a Python segmentation fault?\n\nHow can I debug a Python segmentation fault? We are trying to run our python code on SuSE 12.3. We get reproducible segmentation faults. The python code has been working on other platforms without segmentation faults, for years. We only code Python, no C extension .... What is the best way to debug this? I know a bit ansi c, but that was ten years ago .... Python 2.7.5 Update The segmentation fault happens on interpreter shutdown. I can run the script several times: python -m pdb myscript.py arg1 arg1 continue run continue run But the segmentation faults happen, if I leave the pdb with ctrl-d. Update 2 I now try to debug it with gdb: gdb > file python > run myscript.py arg1 arg2 Program received signal SIGSEGV, Segmentation fault. [Switching to Thread 0x7fffefbe2700 (LWP 15483)] 0x00007ffff7aef93c in PyEval_EvalFrameEx () from /usr/lib64/libpython2.7.so.1.0 (gdb) bt #0 0x00007ffff7aef93c in PyEval_EvalFrameEx () from /usr/lib64/libpython2.7.so.1.0 #1 0x00007ffff7af5303 in PyEval_EvalCodeEx () from /usr/lib64/libpython2.7.so.1.0 #2 0x00007ffff7adc858 in ?? () from /usr/lib64/libpython2.7.so.1.0 #3 0x00007ffff7ad840d in PyObject_Call () from /usr/lib64/libpython2.7.so.1.0 #4 0x00007ffff7af1082 in PyEval_EvalFrameEx () from /usr/lib64/libpython2.7.so.1.0 #5 0x00007ffff7af233d in PyEval_EvalFrameEx () from /usr/lib64/libpython2.7.so.1.0 #6 0x00007ffff7af233d in PyEval_EvalFrameEx () from /usr/lib64/libpython2.7.so.1.0 #7 0x00007ffff7af5303 in PyEval_EvalCodeEx () from /usr/lib64/libpython2.7.so.1.0 #8 0x00007ffff7adc5b6 in ?? () from /usr/lib64/libpython2.7.so.1.0 #9 0x00007ffff7ad840d in PyObject_Call () from /usr/lib64/libpython2.7.so.1.0 #10 0x00007ffff7ad9171 in ?? () from /usr/lib64/libpython2.7.so.1.0 #11 0x00007ffff7ad840d in PyObject_Call () from /usr/lib64/libpython2.7.so.1.0 #12 0x00007ffff7aeeb62 in PyEval_CallObjectWithKeywords () from /usr/lib64/libpython2.7.so.1.0 #13 0x00007ffff7acc757 in ?? () from /usr/lib64/libpython2.7.so.1.0 #14 0x00007ffff7828e0f in start_thread () from /lib64/libpthread.so.0 #15 0x00007ffff755c7dd in clone () from /lib64/libc.so.6 Update 3 I installed gdbinit from http://hg.python.org/cpython/file/default/Misc/gdbinit and the debugging symbols from http://download.opensuse.org/debug/distribution/12.3/repo/oss/suse/x86_64/ (gdb) pystack No symbol \"_PyUnicode_AsString\" in current context. What now? Update 4 We installed the a new RPM (python-2.7.5-3.1.x86_64). We get less segfaults, but they still happen. Here is the link to repository: http://download.opensuse.org/repositories/devel:/languages:/python:/Factory/openSUSE_12.3/x86_64/ Update 5 Solved my initial problem: It was http://bugs.python.org/issue1856 (shutdown (exit) can hang or segfault with daemon threads running) Related: Detect Interpreter shut down in daemon thread", "output": "I got to this question because of the Segmentation fault, but not on exit, just in general, and I found that nothing else helped as effectively as faulthandler. It's part of Python 3.3, and you can install in 2.7 using pip. 
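If you would rather enable it from inside the program than via the environment, the standard library call is faulthandler.enable(); a minimal sketch: import faulthandler faulthandler.enable() # dump the Python traceback when the process receives a fatal signal such as SIGSEGV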
To enable the fault handler, just set the PYTHONFAULTHANDLER env var PYTHONFAULTHANDLER=1 python myscript.py"} +{"question_id": 18322262, "score": 121, "creation_date": 1376943055, "tags": ["python", "django", "django-views", "django-settings", "django-middleware"], "instruction": "How to set up custom middleware in Django?\n\nI'm trying to create a middleware to optionally pass a kwarg to every view that meets a condition. The problem is that I cannot find an example of how to set up the middleware. I have seen classes that override the method I want to, process_view: Class CheckConditionMiddleware(object): def process_view(self, request): return None But where do I put this class? Do I create a middleware app and put this class inside of it and then reference it in settings.middleware?", "output": "First: The path structure If you don't have it you need to create the middleware folder within your app following the structure: yourproject/yourapp/middleware The folder middleware should be placed in the same folder as settings.py, urls, templates... Important: Don't forget to create the init.py empty file inside the middleware folder so your app recognizes this folder Second: Create the middleware Now we should create a file for our custom middleware, in this example let's suppose we want a middleware that filter the users based on their IP, we create a file called filter_ip_middleware.py inside the middleware folder with this code: class FilterIPMiddleware(object): # Check if client IP is allowed def process_request(self, request): allowed_ips = ['192.168.1.1', '123.123.123.123', etc...] # Authorized ip's ip = request.META.get('REMOTE_ADDR') # Get client IP if ip not in allowed_ips: raise Http403 # If user is not allowed raise Error # If IP is allowed we don't do anything return None Third: Add the middleware in our 'settings.py' We need to look for: MIDDLEWARE_CLASSES (django < 1.10) MIDDLEWARE (django >= 1.10) Inside the settings.py we need to add our middleware (Add it in the last position). It should look like: MIDDLEWARE = ( # Before Django 1.10 the setting name was 'MIDDLEWARE_CLASSES' 'django.middleware.common.CommonMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', # Above are django standard middlewares # Now we add here our custom middleware 'yourapp.middleware.filter_ip_middleware.FilterIPMiddleware' ) Done! Now every request from every client will call your custom middleware and process your custom code!"} +{"question_id": 4746579, "score": 121, "creation_date": 1295523197, "tags": ["python", "vim", "syntax", "vi", "vim-syntax-highlighting"], "instruction": "Syntax highlighting in vim for python\n\nHow do I do I set the syntax highlighting in Vim 7 for python? I would like to set my own colorschemes, and syntax highlighting for a type of code file.", "output": "The command to enable syntax highlighting in vim is :syntax on, if you want it to be active everytime you launch vim, just add a line containing syntax on in your .vimrc file. If you're already editing a Python file and syntax highlighting wasn't on, after enabling syntax highlighting (as above) then: :set filetype=python"} +{"question_id": 7168508, "score": 121, "creation_date": 1314140178, "tags": ["python", "multithreading"], "instruction": "background function in Python\n\nI've got a Python script that sometimes displays images to the user. 
The images can, at times, be quite large, and they are reused often. Displaying them is not critical, but displaying the message associated with them is. I've got a function that downloads the image needed and saves it locally. Right now it's run inline with the code that displays a message to the user, but that can sometimes take over 10 seconds for non-local images. Is there a way I could call this function when it's needed, but run it in the background while the code continues to execute? I would just use a default image until the correct one becomes available.", "output": "Do something like this: def function_that_downloads(my_args): # do some long download here then inline, do something like this: import threading def my_inline_function(some_args): # do some stuff download_thread = threading.Thread(target=function_that_downloads, name=\"Downloader\", args=some_args) download_thread.start() # continue doing stuff You may want to check if the thread has finished before going on to other things by calling download_thread.isAlive()"} +{"question_id": 18805203, "score": 121, "creation_date": 1379184303, "tags": ["python", "python-idle"], "instruction": "How to turn on line numbers in IDLE?\n\nIn the main shell of IDLE, errors always return a line number but the development environment doesn't even have line numbers. Is there anyway to turn on line numbers?", "output": "Version 3.8 or newer: To show line numbers in the current window, go to Options and click Show Line Numbers. To show them automatically, go to Options > Configure IDLE > General and check the Show line numbers in new windows box. Version 3.7 or older: Unfortunately there is not an option to display line numbers in IDLE although there is an enhancement request open for this. However, there are a couple of ways to work around this: Under the edit menu there is a go to line option (there is a default shortcut of Alt+G for this). There is a display at the bottom right which tells you your current line number / position on the line:"} +{"question_id": 8469680, "score": 121, "creation_date": 1323660974, "tags": ["python", "mocking"], "instruction": "Using mock patch to mock an instance method\n\nI'm trying to mock something while testing a Django app using the imaginatively named Mock testing library. I can't seem to quite get it to work, I'm trying to do this: models.py from somelib import FooClass class Promotion(models.Model): foo = models.ForeignKey(FooClass) def bar(self): print \"Do something I don't want!\" test.py class ViewsDoSomething(TestCase): view = 'my_app.views.do_something' def test_enter_promotion(self): @patch.object(my_app.models.FooClass, 'bar') def fake_bar(self, mock_my_method): print \"Do something I want!\" return True self.client.get(reverse(view)) What am I doing wrong?", "output": "To add onto Kit's answer, specifying a 3rd argument to patch.object() allows the mocked object/method to be specified. Otherwise, a default MagicMock object is used. def fake_bar(self): print \"Do something I want!\" return True @patch.object(my_app.models.FooClass, 'bar', fake_bar) def test_enter_promotion(self): self.client.get(reverse(view)) # Do something I want! Note that, if you specify the mocking object as a third argument, then the default MagicMock() is no longer passed into the patched object -- e.g. 
no longer: def test_enter_promotion(self, mock_method): but instead: def test_enter_promotion(self): https://docs.python.org/3/library/unittest.mock.html#patch-object"} +{"question_id": 7196376, "score": 121, "creation_date": 1314301990, "tags": ["python", "python-3.x", "metaclass", "abc", "abstract-methods"], "instruction": "I used `__metaclass__` to set up `abc.ABCMeta` as the metaclass, but unimplemented `@abstractmethod`s still fail to raise an exception. Why?\n\nI have read python docs about abstract base classes: From here: abc.abstractmethod(function) A decorator indicating abstract methods. Using this decorator requires that the class\u2019s metaclass is ABCMeta or is derived from it. A class that has a metaclass derived from ABCMeta cannot be instantiated unless all of its abstract methods and properties are overridden. And here You can apply the @abstractmethod decorator to methods such as draw() that must be implemented; Python will then raise an exception for classes that don\u2019t define the method. Note that the exception is only raised when you actually try to create an instance of a subclass lacking the method. I've used this code to test that out: import abc class AbstractClass(object): __metaclass__ = abc.ABCMeta @abc.abstractmethod def abstractMethod(self): return class ConcreteClass(AbstractClass): def __init__(self): self.me = \"me\" c = ConcreteClass() c.abstractMethod() The code goes fine, so I don't get it. If I type c.abstractMethod I get: > What I'm missing here? ConcreteClass must implement the abstract methods, but I get no exception.", "output": "Are you using python3 to run that code? If yes, you should know that declaring metaclass in python3 have changes you should do it like this instead: import abc class AbstractClass(metaclass=abc.ABCMeta): @abc.abstractmethod def abstractMethod(self): return The full code and the explanation behind the answer is: import abc class AbstractClass(metaclass=abc.ABCMeta): @abc.abstractmethod def abstractMethod(self): return class ConcreteClass(AbstractClass): def __init__(self): self.me = \"me\" # Will get a TypeError without the following two lines: # def abstractMethod(self): # return 0 c = ConcreteClass() c.abstractMethod() If abstractMethod is not defined for ConcreteClass, the following exception will be raised when running the above code: TypeError: Can't instantiate abstract class ConcreteClass with abstract methods abstractMethod"} +{"question_id": 19095796, "score": 121, "creation_date": 1380548521, "tags": ["python", "escaping", "backslash"], "instruction": "How can I print a single backslash?\n\nWhen I write print('\\') or print(\"\\\") or print(\"'\\'\"), Python doesn't print the backslash \\ symbol. Instead it errors for the first two and prints '' for the third. What should I do to print a backslash? This question is about producing a string that has a single backslash in it. This is particularly tricky because it cannot be done with raw strings. For the related question about why such a string is represented with two backslashes, see Why do backslashes appear twice?. For including literal backslashes in other strings, see using backslash in python (not to escape).", "output": "You need to escape your backslash by preceding it with, yes, another backslash: print(\"\\\\\") The \\ character is called an escape character, which interprets the character following it differently. For example, n by itself is simply a letter, but when you precede it with a backslash, it becomes \\n, which is the newline character. 
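A quick check in the interpreter makes this concrete (a sketch): len(\"\\n\") is 1 because the backslash-n pair collapses into a single newline character, len(\"\\\\\") is also 1, a single backslash, and len(r\"\\n\") is 2 because raw strings leave the backslash alone.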
As you can probably guess, \\ also needs to be escaped so it doesn't function like an escape character. You have to... escape the escape, essentially. See the documentation for string literals."} +{"question_id": 34486642, "score": 121, "creation_date": 1451265671, "tags": ["python", "matplotlib", "jupyter", "jupyter-notebook"], "instruction": "What is the currently correct way to dynamically update plots in Jupyter/iPython?\n\nIn the answers to how to dynamically update a plot in a loop in ipython notebook (within one cell), an example is given of how to dynamically update a plot inside a Jupyter notebook within a Python loop. However, this works by destroying and re-creating the plot on every iteration, and a comment in one of the threads notes that this situation can be improved by using the new-ish %matplotlib nbagg magic, which provides an interactive figure embedded in the notebook, rather than a static image. However, this wonderful new nbagg feature seems to be completely undocumented as far as I can tell, and I'm unable to find an example of how to use it to dynamically update a plot. Thus my question is, how does one efficiently update an existing plot in a Jupyter/Python notebook, using the nbagg backend? Since dynamically updating plots in matplotlib is a tricky issue in general, a simple working example would be an enormous help. A pointer to any documentation on the topic would also be extremely helpful. To be clear what I'm asking for: what I want to do is to run some simulation code for a few iterations, then draw a plot of its current state, then run it for a few more iterations, then update the plot to reflect the current state, and so on. So the idea is to draw a plot and then, without any interaction from the user, update the data in the plot without destroying and re-creating the whole thing. Here is some slightly modified code from the answer to the linked question above, which achieves this by re-drawing the whole figure every time. I want to achieve the same result, but more efficiently using nbagg. %matplotlib inline import time import pylab as pl from IPython import display for i in range(10): pl.clf() pl.plot(pl.randn(100)) display.display(pl.gcf()) display.clear_output(wait=True) time.sleep(1.0)", "output": "Here is an example that updates a plot in a loop. It updates the data in the figure and does not redraw the whole figure every time. It does block execution, though if you're interested in running a finite set of simulations and saving the results somewhere, it may not be a problem for you. The %matplotlib widget magic requires the ipympl Matplotlib Jupyter Extension package. 
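If you manage packages with conda rather than pip, conda install -c conda-forge ipympl should give you the same extension (this assumes the conda-forge channel; the pip command that follows is the route this answer actually uses).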
You can install a working environment with pip install jupyter ipympl %matplotlib widget import numpy as np import matplotlib.pyplot as plt import time def pltsin(ax, colors=['b']): x = np.linspace(0,1,100) if ax.lines: for line in ax.lines: line.set_xdata(x) y = np.random.random(size=(100,1)) line.set_ydata(y) else: for color in colors: y = np.random.random(size=(100,1)) ax.plot(x, y, color) fig.canvas.draw() fig,ax = plt.subplots(1,1) ax.set_xlabel('X') ax.set_ylabel('Y') ax.set_xlim(0,1) ax.set_ylim(0,1) plt.show() # run this cell to dynamically update plot for f in range(5): pltsin(ax, ['b', 'r']) time.sleep(1) I put this up on nbviewer here, and here's a direct link to the gist"} +{"question_id": 49883177, "score": 121, "creation_date": 1523982555, "tags": ["python", "python-3.x", "numpy", "caching", "lru"], "instruction": "How does Lru_cache (from functools) Work?\n\nEspecially when using recursive code there are massive improvements with lru_cache. I do understand that a cache is a space that stores data that has to be served fast and saves the computer from recomputing. How does the Python lru_cache from functools work internally? I'm Looking for a specific answer, does it use dictionaries like the rest of Python? Does it only store the return value? I know that Python is heavily built on top of dictionaries, however, I couldn't find a specific answer to this question.", "output": "The functools source code is available here: https://github.com/python/cpython/blob/master/Lib/functools.py lru_cache uses the _lru_cache_wrapper decorator (python decorator with arguments pattern) which has a cache dictionary in context in which it saves the return value of the function called (every decorated function will have its own cache dict). The dictionary key is generated with the _make_key function from the arguments. Added some bold comments below: # ACCORDING TO PASSED maxsize ARGUMENT _lru_cache_wrapper # DEFINES AND RETURNS ONE OF wrapper DECORATORS def _lru_cache_wrapper(user_function, maxsize, typed, _CacheInfo): # Constants shared by all lru cache instances: sentinel = object() # unique object used to signal cache misses cache = {} # RESULTS SAVES HERE cache_get = cache.get # bound method to lookup a key or return None # ... maxsize is None: def wrapper(*args, **kwds): # Simple caching without ordering or size limit nonlocal hits, misses key = make_key(args, kwds, typed) # BUILD A KEY FROM ARGUMENTS result = cache_get(key, sentinel) # TRYING TO GET PREVIOUS CALLS RESULT if result is not sentinel: # ALREADY CALLED WITH PASSED ARGS hits += 1 return result # RETURN SAVED RESULT # WITHOUT ACTUALLY CALLING FUNCTION misses += 1 result = user_function(*args, **kwds) # FUNCTION CALL - if cache[key] empty cache[key] = result # SAVE RESULT return result # ... return wrapper"} +{"question_id": 27745500, "score": 120, "creation_date": 1420217220, "tags": ["python", "list", "file"], "instruction": "How to save a list to a file and read it as a list type?\n\nSay I have the list score = [1,2,3,4,5] and it gets changed while my program is running. How could I save it to a file so that next time the program is run I can access the changed list as a list type? 
I have tried: score=[1,2,3,4,5] with open(\"file.txt\", 'w') as f: for s in score: f.write(str(s) + '\\n') with open(\"file.txt\", 'r') as f: score = [line.rstrip('\\n') for line in f] print(score) But this results in the elements in the list being strings not integers.", "output": "I decided I didn't want to use a pickle because I wanted to be able to open the text file and change its contents easily during testing. Therefore, I did this: score = [1,2,3,4,5] with open(\"file.txt\", \"w\") as f: for s in score: f.write(str(s) +\"\\n\") score = [] with open(\"file.txt\", \"r\") as f: for line in f: score.append(int(line.strip())) So the items in the file are read as integers, despite being stored to the file as strings."} +{"question_id": 18715688, "score": 120, "creation_date": 1378806624, "tags": ["python", "string", "algorithm", "time-complexity", "dynamic-programming"], "instruction": "Find common substring between two strings\n\nI'd like to compare 2 strings and keep the matched, splitting off where the comparison fails. So if I have 2 strings: string1 = \"apples\" string2 = \"appleses\" answer = \"apples\" Another example, as the string could have more than one word: string1 = \"apple pie available\" string2 = \"apple pies\" answer = \"apple pie\" I'm sure there is a simple Python way of doing this but I can't work it out, any help and explanation appreciated.", "output": "Its called Longest Common Substring problem. Here I present a simple, easy to understand but inefficient solution. It will take a long time to produce correct output for large strings, as the complexity of this algorithm is O(N^2). def longestSubstringFinder(string1, string2): answer = \"\" len1, len2 = len(string1), len(string2) for i in range(len1): match = \"\" for j in range(len2): if (i + j < len1 and string1[i + j] == string2[j]): match += string2[j] else: if (len(match) > len(answer)): answer = match match = \"\" return answer print(longestSubstringFinder(\"apple pie available\", \"apple pies\")) print(longestSubstringFinder(\"apples\", \"appleses\")) print(longestSubstringFinder(\"bapples\", \"cappleses\")) Output apple pie apples apples"} +{"question_id": 53902507, "score": 120, "creation_date": 1545557289, "tags": ["python", "selenium", "google-chrome", "selenium-webdriver", "selenium-chromedriver"], "instruction": "unknown error: session deleted because of page crash from unknown error: cannot determine loading status from tab crashed with ChromeDriver Selenium\n\nI'm using InstaPy which use Python and Selenium. I start the script per Cron and from time to time it crashes. So it'r really irregular, sometimes it runs well through. I'v posted on GitHub Repo as well already but didn't get an answer there, so i'm asking here now if someone has an idea why. It's a digital ocean ubuntu server and i'm using it on headless mode. The driver version are visible on the log. here are error messages: ERROR [2018-12-10 09:53:54] [user] Error occurred while deleting cookies from web browser! 
b'Message: invalid session id\\n (Driver info: chromedriver=2.44.609551 (5d576e9a44fe4c5b6a07e568f1ebc753f1214634),platform=Linux 4.15.0-42-generic x86_64)\\n' Traceback (most recent call last): File \"/root/InstaPy/instapy/util.py\", line 1410, in smart_run yield File \"./my_config.py\", line 43, in session.follow_user_followers(['xxxx','xxxx','xxxx','xxxx'], amount=100, randomize=True, interact=True) File \"/root/InstaPy/instapy/instapy.py\", line 2907, in follow_user_followers self.logfolder) File \"/root/InstaPy/instapy/unfollow_util.py\", line 883, in get_given_user_followers channel, jumps, logger, logfolder) File \"/root/InstaPy/instapy/unfollow_util.py\", line 722, in get_users_through_dialog person_list = dialog_username_extractor(buttons) File \"/root/InstaPy/instapy/unfollow_util.py\", line 747, in dialog_username_extractor person_list.append(person.find_element_by_xpath(\"../../../*\") File \"/usr/local/lib/python3.6/dist-packages/selenium/webdriver/remote/webelement.py\", line 351, in find_element_by_xpath return self.find_element(by=By.XPATH, value=xpath) File \"/usr/local/lib/python3.6/dist-packages/selenium/webdriver/remote/webelement.py\", line 659, in find_element {\"using\": by, \"value\": value})['value'] File \"/usr/local/lib/python3.6/dist-packages/selenium/webdriver/remote/webelement.py\", line 633, in _execute return self._parent.execute(command, params) File \"/usr/local/lib/python3.6/dist-packages/selenium/webdriver/remote/webdriver.py\", line 321, in execute self.error_handler.check_response(response) File \"/usr/local/lib/python3.6/dist-packages/selenium/webdriver/remote/errorhandler.py\", line 242, in check_response raise exception_class(message, screen, stacktrace) selenium.common.exceptions.WebDriverException: Message: unknown error: session deleted because of page crash from unknown error: cannot determine loading status from tab crashed (Session info: headless chrome=70.0.3538.110) (Driver info: chromedriver=2.44.609551 (5d576e9a44fe4c5b6a07e568f1ebc753f1214634),platform=Linux 4.15.0-42-generic x86_64) During handling of the above exception, another exception occurred: Traceback (most recent call last): File \"/root/InstaPy/instapy/instapy.py\", line 3845, in end self.browser.delete_all_cookies() File \"/usr/local/lib/python3.6/dist-packages/selenium/webdriver/remote/webdriver.py\", line 878, in delete_all_cookies self.execute(Command.DELETE_ALL_COOKIES) File \"/usr/local/lib/python3.6/dist-packages/selenium/webdriver/remote/webdriver.py\", line 321, in execute self.error_handler.check_response(response) File \"/usr/local/lib/python3.6/dist-packages/selenium/webdriver/remote/errorhandler.py\", line 242, in check_response raise exception_class(message, screen, stacktrace) selenium.common.exceptions.WebDriverException: Message: chrome not reachable (Session info: headless chrome=71.0.3578.80) (Driver info: chromedriver=2.44.609551 (5d576e9a44fe4c5b6a07e568f1ebc753f1214634),platform=Linux 4.15.0-42-generic x86_64) Any idea what the reason could be and how to solve it? Thanks for the inputs. And the guys from http://treestones.ch/ helped me out.", "output": "Though you see the error as: Error occurred while deleting cookies from web browser! 
b'Message: invalid session id\\n (Driver info: chromedriver=2.44.609551 (5d576e9a44fe4c5b6a07e568f1ebc753f1214634),platform=Linux 4.15.0-42-generic x86_64)\\n' The main exception is: selenium.common.exceptions.WebDriverException: Message: unknown error: session deleted because of page crash from unknown error: cannot determine loading status from tab crashed Your code trials would have given us some clues what going wrong. Solution There are diverse solution to this issue. However as per UnknownError: session deleted because of page crash from tab crashed this issue can be solved by either of the following solutions: Add the following chrome_options: chrome_options.add_argument('--no-sandbox') Chrome seem to crash in Docker containers on certain pages due to too small /dev/shm. So you may have to fix the small /dev/shm size. An example: sudo mount -t tmpfs -o rw,nosuid,nodev,noexec,relatime,size=512M tmpfs /dev/shm It also works if you use -v /dev/shm:/dev/shm option to share host /dev/shm Another way to make it work would be to add the chrome_options as --disable-dev-shm-usage. This will force Chrome to use the /tmp directory instead. This may slow down the execution though since disk will be used instead of memory. chrome_options.add_argument('--disable-dev-shm-usage') from tab crashed from tab crashed was WIP(Work In Progress) with the Chromium Team for quite some time now which relates to Linux attempting to always use /dev/shm for non-executable memory. Here are the references : Linux: Chrome/Chromium SIGBUS/Aw, Snap! on small /dev/shm Chrome crashes/fails to load when /dev/shm is too small, and location can't be overridden As per Comment61#Issue 736452 the fix seems to be have landed with Chrome v65.0.3299.6 Reference You can find a couple of relevant discussions in: org.openqa.selenium.SessionNotCreatedException: session not created exception from tab crashed error when executing from Jenkins CI server"} +{"question_id": 42097053, "score": 120, "creation_date": 1486491380, "tags": ["python", "matplotlib", "truetype", "miniconda"], "instruction": "Matplotlib cannot find basic fonts\n\nI am using matplotlib version 2.0.0 on Python 3 in a miniconda virtual environment. I am working on a unix scientific computing cluster where I don't have root privileges. I am generally executing python code through an ipython notebook. If I do a basic command such as: import matplotlib.pyplot as plt plt.scatter([1,5], [1,5]) I get an error message: path_to_miniconda/miniconda3/envs/conda34/lib/python3.4/site- packages/matplotlib/font_manager.py:1297: UserWarning: findfont: Font family ['sans-serif'] not found. Falling back to DejaVu Sans (prop.get_family(), self.defaultFamily[fontext])) I would like to be able to use a Times New Roman font but even after deleting the Font cache file (fontList.py3k.cache) which I find from here: import matplotlib as mpl fm = mpl.font_manager fm.get_cachedir() The commands: mpl.rcParams['font.family'] = ['serif'] mpl.rcParams['font.serif'] = ['Times New Roman'] have no effect, I get the same error as above. The true type fonts directory: path_to_miniconda/miniconda3/envs/conda34/lib/python3.4/site-packages/matplotlib/mpl-data/fonts/ttf/ only has 40 fonts in it of the type: DejaVuSerif,DejaVuSans,STIX,cmb, cmt, cmy Any idea what could be going on and how I can add additional fonts? Thanks!", "output": "To get it to work, I had to combine the two current top answers. 
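In short, the combination is: install the Microsoft core TrueType fonts, then delete matplotlib's font cache so the font list is rebuilt on the next import; if your cache is not under ~/.cache/matplotlib, matplotlib.font_manager.get_cachedir() will tell you where it actually lives (a sketch of the idea, paths may differ on your system).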
Here's what worked for me: sudo apt install msttcorefonts -qq rm ~/.cache/matplotlib -rf # remove cache"} +{"question_id": 15667189, "score": 120, "creation_date": 1364410695, "tags": ["python"], "instruction": "What is `id()` function used for in Python?\n\nI read the Python 2 docs and noticed the id() function: Return the \u201cidentity\u201d of an object. This is an integer (or long integer) which is guaranteed to be unique and constant for this object during its lifetime. Two objects with non-overlapping lifetimes may have the same id() value. CPython implementation detail: This is the address of the object in memory. So, I experimented by using id() with a list: >>> list = [1,2,3] >>> id(list[0]) 31186196 >>> id(list[1]) 31907092 // increased by 896 >>> id(list[2]) 31907080 // decreased by 12 What is the integer returned from the function? Is it synonymous to memory addresses in C? If so, why doesn't the integer correspond to the size of the data type? When is id() used in practice?", "output": "Your post asks several questions: What is the number returned from the function? It is \"an integer (or long integer) which is guaranteed to be unique and constant for this object during its lifetime.\" (Python Standard Library - Built-in Functions) A unique number. Nothing more, and nothing less. Think of it as a social-security number or employee id number for Python objects. Is it the same with memory addresses in C? Conceptually, yes, in that they are both guaranteed to be unique in their universe during their lifetime. And in one particular implementation of Python, it actually is the memory address of the corresponding C object. If yes, why doesn't the number increase instantly by the size of the data type (I assume that it would be int)? Because a list is not an array, and a list element is a reference, not an object. When do we really use id( ) function? Aside from debugging, whenever you only care about the identity of objects and not their value equality (__eq__ or other built-in value equality), and when the reference itself is not desirable or possible to use, e.g. using arbitrary object identity as a dict key, or using integers instead of references for optimization purposes. You should not use it to test if two references are the same by comparing their ids, as the is operator has always been the recommended way of doing that."} +{"question_id": 21360361, "score": 120, "creation_date": 1390716638, "tags": ["python", "pandas", "matplotlib", "jupyter-notebook", "ipython"], "instruction": "How to dynamically update a plot in a loop in IPython notebook (within one cell)\n\nEnvironment: Python 2.7, Matplotlib 1.3, IPython notebook 1.1, Linux, and Chrome. The code is in one single input cell, using --pylab=inline. I want to use IPython notebook and Pandas to consume a stream and dynamically update a plot every five seconds. When I just use a print statement to print the data in text format, it works perfectly fine: the output cell just keeps printing data and adding new rows. But when I try to plot the data (and then update it in a loop), the plot never shows up in the output cell. But if I remove the loop, and just plot it once, it works fine. Then I did some simple test: i = pd.date_range('2013-1-1',periods=100,freq='s') while True: plot(pd.Series(data=np.random.randn(100), index=i)) #pd.Series(data=np.random.randn(100), index=i).plot() also tried this one time.sleep(5) The output will not show anything until I manually interrupt the process (Ctrl + M + I). 
And after I interrupt it, the plot shows correctly as multiple overlapped lines. But what I really want is a plot that shows up and gets updated every five seconds (or whenever the plot() function gets called, just like what print statement outputs I mentioned above, which works well). Only showing the final chart after the cell is completely done is not what I want. I even tried to explicitly add the draw() function after each plot(), etc. None of them works. How can I dynamically update a plot by a for/while loop within one cell in IPython notebook?", "output": "Use the IPython.display module: %matplotlib inline import time import pylab as pl from IPython import display for i in range(10): pl.plot(pl.randn(100)) display.clear_output(wait=True) display.display(pl.gcf()) time.sleep(1.0)"} +{"question_id": 3633140, "score": 120, "creation_date": 1283489865, "tags": ["python", "for-loop", "list-comprehension"], "instruction": "Nested For Loops Using List Comprehension\n\nIf I had two strings, 'abc' and 'def', I could get all combinations of them using two for loops: for j in s1: for k in s2: print(j, k) However, I would like to be able to do this using list comprehension. I've tried many ways, but have never managed to get it. Does anyone know how to do this?", "output": "lst = [j + k for j in s1 for k in s2] or lst = [(j, k) for j in s1 for k in s2] if you want tuples. Like in the question, for j... is the outer loop, for k... is the inner loop. Essentially, you can have as many independent 'for x in y' clauses as you want in a list comprehension just by sticking one after the other. To make it more readable, use multiple lines: lst = [ j + k # result for j in s1 # for loop for k in s2 # for loop # condition ]"} +{"question_id": 24942358, "score": 120, "creation_date": 1406230065, "tags": ["python", "keyword"], "instruction": "Is 'file' a keyword in python?\n\nIs file a keyword in python? I've seen some code using the keyword file just fine, while others have suggested not to use it and my editor is color coding it as a keyword.", "output": "No, file is not a keyword: >>> import keyword >>> keyword.iskeyword('file') False The name is not present in Python 3. In Python 2, file is a built-in: >>> import __builtin__, sys >>> hasattr(__builtin__, 'file') True >>> sys.version_info[:2] (2, 7) It can be seen as an alias for open(), but it was removed in Python 3, where the new io framework replaced it. Technically, it is the type of object returned by the Python 2 open() function."} +{"question_id": 3580027, "score": 120, "creation_date": 1282861393, "tags": ["python", "matplotlib"], "instruction": "How do you determine which backend is being used by matplotlib?\n\nEither interactively, such as from within an Ipython session, or from within a script, how can you determine which backend is being used by matplotlib?", "output": "Use the get_backend() function to obtain a string denoting which backend is in use: >>> import matplotlib >>> matplotlib.get_backend() 'TkAgg'"} +{"question_id": 43196636, "score": 120, "creation_date": 1491267414, "tags": ["python", "machine-learning", "keras", "neural-network", "hierarchical"], "instruction": "How to concatenate two layers in keras?\n\nI have an example of a neural network with two layers. The first layer takes two arguments and has one output. The second should take one argument as result of the first layer and one additional argument. 
It should looks like this: x1 x2 x3 \\ / / y1 / \\ / y2 So, I'd created a model with two layers and tried to merge them but it returns an error: The first layer in a Sequential model must get an \"input_shape\" or \"batch_input_shape\" argument. on the line result.add(merged). Model: first = Sequential() first.add(Dense(1, input_shape=(2,), activation='sigmoid')) second = Sequential() second.add(Dense(1, input_shape=(1,), activation='sigmoid')) result = Sequential() merged = Concatenate([first, second]) ada_grad = Adagrad(lr=0.1, epsilon=1e-08, decay=0.0) result.add(merged) result.compile(optimizer=ada_grad, loss=_loss_tensor, metrics=['accuracy'])", "output": "You're getting the error because result defined as Sequential() is just a container for the model and you have not defined an input for it. Given what you're trying to build set result to take the third input x3. first = Sequential() first.add(Dense(1, input_shape=(2,), activation='sigmoid')) second = Sequential() second.add(Dense(1, input_shape=(1,), activation='sigmoid')) third = Sequential() # of course you must provide the input to result which will be your x3 third.add(Dense(1, input_shape=(1,), activation='sigmoid')) # lets say you add a few more layers to first and second. # concatenate them merged = Concatenate([first, second]) # then concatenate the two outputs result = Concatenate([merged, third]) ada_grad = Adagrad(lr=0.1, epsilon=1e-08, decay=0.0) result.compile(optimizer=ada_grad, loss='binary_crossentropy', metrics=['accuracy']) However, my preferred way of building a model that has this type of input structure would be to use the functional api. Here is an implementation of your requirements to get you started: from keras.models import Model from keras.layers import Concatenate, Dense, LSTM, Input, concatenate from keras.optimizers import Adagrad first_input = Input(shape=(2, )) first_dense = Dense(1, )(first_input) second_input = Input(shape=(2, )) second_dense = Dense(1, )(second_input) merge_one = concatenate([first_dense, second_dense]) third_input = Input(shape=(1, )) merge_two = concatenate([merge_one, third_input]) model = Model(inputs=[first_input, second_input, third_input], outputs=merge_two) ada_grad = Adagrad(lr=0.1, epsilon=1e-08, decay=0.0) model.compile(optimizer=ada_grad, loss='binary_crossentropy', metrics=['accuracy']) To answer the question in the comments: How are result and merged connected? Assuming you mean how are they concatenated. Concatenation works like this: a b c a b c g h i a b c g h i d e f j k l d e f j k l i.e rows are just joined. Now, x1 is input to first, x2 is input into second and x3 input into third."} +{"question_id": 38108832, "score": 120, "creation_date": 1467229666, "tags": ["python", "scikit-learn", "decision-tree"], "instruction": "Passing categorical data to Sklearn Decision Tree\n\nThere are several posts about how to encode categorical data to Sklearn Decision trees, but from Sklearn documentation, we got these Some advantages of decision trees are: (...) Able to handle both numerical and categorical data. Other techniques are usually specialized in analyzing datasets that have only one type of variable. See the algorithms for more information. 
But running the following script import pandas as pd from sklearn.tree import DecisionTreeClassifier data = pd.DataFrame() data['A'] = ['a','a','b','a'] data['B'] = ['b','b','a','b'] data['C'] = [0, 0, 1, 0] data['Class'] = ['n','n','y','n'] tree = DecisionTreeClassifier() tree.fit(data[['A','B','C']], data['Class']) outputs the following error: Traceback (most recent call last): File \"\", line 1, in File \"/usr/local/lib/python2.7/site-packages/sklearn/tree/tree.py\", line 154, in fit X = check_array(X, dtype=DTYPE, accept_sparse=\"csc\") File \"/usr/local/lib/python2.7/site-packages/sklearn/utils/validation.py\", line 377, in check_array array = np.array(array, dtype=dtype, order=order, copy=copy) ValueError: could not convert string to float: b I know that in R it is possible to pass categorical data, with Sklearn, is it possible?", "output": "I would prefer to use tools provided by Scikit-Learn for this purpose. The main reason for doing so is that they can be easily integrated in a Pipeline. Scikit-Learn itself provides very good classes to handle categorical data. Instead of writing your custom function, you should use LabelEncoder which is specially designed for this purpose. Refer to the following code from the documentation: from sklearn import preprocessing le = preprocessing.LabelEncoder() le.fit([\"paris\", \"paris\", \"tokyo\", \"amsterdam\"]) le.transform([\"tokyo\", \"tokyo\", \"paris\"]) This automatically encodes them into numbers for your machine learning algorithms. Now this also supports going back to strings from integers. You can do this by simply calling inverse_transform as follows: list(le.inverse_transform([2, 2, 1])) This would return ['tokyo', 'tokyo', 'paris']. Also note that for many other classifiers, apart from decision trees, such as logistic regression or SVM, you would like to encode your categorical variables using One-Hot encoding. Scikit-learn supports this as well through the OneHotEncoder class."} +{"question_id": 10525185, "score": 120, "creation_date": 1336603473, "tags": ["python", "multithreading", "locks"], "instruction": "Python threading. How do I lock a thread?\n\nI'm trying to understand the basics of threading and concurrency. I want a simple case where two threads repeatedly try to access one shared resource. The code: import threading class Thread(threading.Thread): def __init__(self, t, *args): threading.Thread.__init__(self, target=t, args=args) self.start() count = 0 lock = threading.Lock() def increment(): global count lock.acquire() try: count += 1 finally: lock.release() def bye(): while True: increment() def hello_there(): while True: increment() def main(): hello = Thread(hello_there) goodbye = Thread(bye) while True: print count if __name__ == '__main__': main() So, I have two threads, both trying to increment the counter. I thought that if thread 'A' called increment(), the lock would be established, preventing 'B' from accessing until 'A' has released. Running the makes it clear that this is not the case. You get all of the random data race-ish increments. How exactly is the lock object used? Additionally, I've tried putting the locks inside of the thread functions, but still no luck.", "output": "You can see that your locks are pretty much working as you are using them, if you slow down the process and make them block a bit more. You had the right idea, where you surround critical pieces of code with the lock. Here is a small adjustment to your example to show you how each waits on the other to release the lock. 
import threading import time import inspect class Thread(threading.Thread): def __init__(self, t, *args): threading.Thread.__init__(self, target=t, args=args) self.start() count = 0 lock = threading.Lock() def incre(): global count caller = inspect.getouterframes(inspect.currentframe())[1][3] print \"Inside %s()\" % caller print \"Acquiring lock\" with lock: print \"Lock Acquired\" count += 1 time.sleep(2) def bye(): while count < 5: incre() def hello_there(): while count < 5: incre() def main(): hello = Thread(hello_there) goodbye = Thread(bye) if __name__ == '__main__': main() Sample output: ... Inside hello_there() Acquiring lock Lock Acquired Inside bye() Acquiring lock Lock Acquired ..."} +{"question_id": 40378427, "score": 120, "creation_date": 1478084871, "tags": ["python", "numpy"], "instruction": "numpy: formal definition of \"array_like\" objects?\n\nIn numpy, the constructors of many objects accept an \"array_like\" as first argument. Is there a definition of a such object, either as an abstract meta class, or documentation of the methods is should contain??", "output": "It turns out almost anything is technically an array-like. \"Array-like\" is more of a statement of how the input will be interpreted than a restriction on what the input can be; if a parameter is documented as array-like, NumPy will try to interpret it as an array. There is no formal definition of array-like beyond the nearly tautological one -- an array-like is any Python object that np.array can convert to an ndarray. To go beyond this, you'd need to study the source code. NPY_NO_EXPORT PyObject * PyArray_FromAny(PyObject *op, PyArray_Descr *newtype, int min_depth, int max_depth, int flags, PyObject *context) { /* * This is the main code to make a NumPy array from a Python * Object. It is called from many different places. */ PyArrayObject *arr = NULL, *ret; PyArray_Descr *dtype = NULL; int ndim = 0; npy_intp dims[NPY_MAXDIMS]; /* Get either the array or its parameters if it isn't an array */ if (PyArray_GetArrayParamsFromObject(op, newtype, 0, &dtype, &ndim, dims, &arr, context) < 0) { Py_XDECREF(newtype); return NULL; } ... Particularly interesting is PyArray_GetArrayParamsFromObject, whose comments enumerate the types of objects np.array expects: NPY_NO_EXPORT int PyArray_GetArrayParamsFromObject(PyObject *op, PyArray_Descr *requested_dtype, npy_bool writeable, PyArray_Descr **out_dtype, int *out_ndim, npy_intp *out_dims, PyArrayObject **out_arr, PyObject *context) { PyObject *tmp; /* If op is an array */ /* If op is a NumPy scalar */ /* If op is a Python scalar */ /* If op supports the PEP 3118 buffer interface */ /* If op supports the __array_struct__ or __array_interface__ interface */ /* * If op supplies the __array__ function. * The documentation says this should produce a copy, so * we skip this method if writeable is true, because the intent * of writeable is to modify the operand. * XXX: If the implementation is wrong, and/or if actual * usage requires this behave differently, * this should be changed! */ /* Try to treat op as a list of lists */ /* Anything can be viewed as an object, unless it needs to be writeable */ } So by studying the source code we can conclude an array-like is a NumPy array, or a NumPy scalar, or a Python scalar, or any object which supports the PEP 3118 buffer interface, or any object that supports the __array_struct__ or __array_interface__ interface, or any object that supplies the __array__ function, or any object that can be treated as a list of lists, or anything! 
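A few quick checks show how broad that list is (a sketch): np.array(7).shape is (), np.array([[1, 2], [3, 4]]).shape is (2, 2), and np.array(bytearray(b'abc')) comes back as array([97, 98, 99], dtype=uint8) because bytearray exposes the buffer interface.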
If it doesn't fall under one of the other cases, it'll be treated as a 0-dimensional array of object dtype."} +{"question_id": 1365081, "score": 119, "creation_date": 1251846239, "tags": ["python", "powershell", "virtualenv"], "instruction": "virtualenv in PowerShell?\n\nThere seems to be a problem when virtualenv is used in PowerShell. When I try to activate my environment in PowerShell like... env/scripts/activate .. nothing happens. (the shell prompt should have changed as well as the PATH env. variable .) I guess the problem is that PowerShell spawns a new cmd. process just for running the activate.bat thus rendering the changes activate.bat does to the shell dead after it completes. Do you have any workarounds for the issue? (I'm sticking with cmd.exe for now)", "output": "My original answer is now outdated. Now, just use activate.ps1 (rather than activate.bat) to activate from a Powershell environment. Original answer: Here's a post that contains a Powershell script that allows you to run batch files that persistently modify their environment variables. The script propagates any environment variable changes back to the calling PowerShell environment."} +{"question_id": 4140437, "score": 119, "creation_date": 1289352288, "tags": ["python", "validation", "tkinter", "textbox"], "instruction": "Interactively validating Entry widget content in tkinter\n\nWhat is the recommended technique for interactively validating content in a tkinter Entry widget? I've read the posts about using validate=True and validatecommand=command, and it appears that these features are limited by the fact that they get cleared if the validatecommand command updates the Entry widget's value. Given this behavior, should we bind on the KeyPress, Cut, and Paste events and monitor/update our Entry widget's value through these events? (And other related events that I might have missed?) Or should we forget interactive validation altogether and only validate on FocusOut events?", "output": "The correct answer is, use the validatecommand attribute of the widget. Unfortunately this feature is severely under-documented in the Tkinter world, though it is quite sufficiently documented in the Tk world. Even though it's not documented well, it has everything you need to do validation without resorting to bindings or tracing variables, or modifying the widget from within the validation procedure. The trick is to know that you can have Tkinter pass in special values to your validate command. These values give you all the information you need to know to decide on whether the data is valid or not: the value prior to the edit, the value after the edit if the edit is valid, and several other bits of information. To use these, though, you need to do a little voodoo to get this information passed to your validate command. Note: it's important that the validation command returns either True or False. Anything else will cause the validation to be turned off for the widget. Here's an example that only allows lowercase. It also prints the values of all of the special values for illustrative purposes. They aren't all necessary; you rarely need more than one or two. 
import tkinter as tk # python 3.x # import Tkinter as tk # python 2.x class Example(tk.Frame): def __init__(self, parent): tk.Frame.__init__(self, parent) # valid percent substitutions (from the Tk entry man page) # note: you only have to register the ones you need; this # example registers them all for illustrative purposes # # %d = Type of action (1=insert, 0=delete, -1 for others) # %i = index of char string to be inserted/deleted, or -1 # %P = value of the entry if the edit is allowed # %s = value of entry prior to editing # %S = the text string being inserted or deleted, if any # %v = the type of validation that is currently set # %V = the type of validation that triggered the callback # (key, focusin, focusout, forced) # %W = the tk name of the widget vcmd = (self.register(self.onValidate), '%d', '%i', '%P', '%s', '%S', '%v', '%V', '%W') self.entry = tk.Entry(self, validate=\"key\", validatecommand=vcmd) self.text = tk.Text(self, height=10, width=40) self.entry.pack(side=\"top\", fill=\"x\") self.text.pack(side=\"bottom\", fill=\"both\", expand=True) def onValidate(self, d, i, P, s, S, v, V, W): self.text.delete(\"1.0\", \"end\") self.text.insert(\"end\",\"OnValidate:\\n\") self.text.insert(\"end\",\"d='%s'\\n\" % d) self.text.insert(\"end\",\"i='%s'\\n\" % i) self.text.insert(\"end\",\"P='%s'\\n\" % P) self.text.insert(\"end\",\"s='%s'\\n\" % s) self.text.insert(\"end\",\"S='%s'\\n\" % S) self.text.insert(\"end\",\"v='%s'\\n\" % v) self.text.insert(\"end\",\"V='%s'\\n\" % V) self.text.insert(\"end\",\"W='%s'\\n\" % W) # Disallow anything but lowercase letters if S == S.lower(): return True else: self.bell() return False if __name__ == \"__main__\": root = tk.Tk() Example(root).pack(fill=\"both\", expand=True) root.mainloop() For more information about what happens under the hood when you call the register method, see Why is calling register() required for tkinter input validation? For the canonical documentation see the Validation section of the Tcl/Tk Entry man page"} +{"question_id": 2690521, "score": 119, "creation_date": 1271937518, "tags": ["python", "django", "django-views"], "instruction": "Django check for any exists for a query\n\nIn django how to check whether any entry exists for a query sc=scorm.objects.filter(Header__id=qp.id) This was how it was done in php if(mysql_num_rows($resultn)) { // True condition } else { // False condition }", "output": "You can use exists(): if scorm.objects.filter(Header__id=qp.id).exists(): .... Returns True if the QuerySet contains any results, and False if not. This tries to perform the query in the simplest and fastest way possible, but it does execute nearly the same query as a normal QuerySet query. Older versions: (<1.2) Use count(): sc=scorm.objects.filter(Header__id=qp.id) if sc.count() > 0: ... The advantage over e.g. len() is, that the QuerySet is not yet evaluated: count() performs a SELECT COUNT(*) behind the scenes, so you should always use count() rather than loading all of the record into Python objects and calling len() on the result. Having this in mind, When QuerySets are evaluated can be worth reading. If you use get(), e.g. 
scorm.objects.get(pk=someid), and the object does not exists, an ObjectDoesNotExist exception is raised: from django.core.exceptions import ObjectDoesNotExist try: sc = scorm.objects.get(pk=someid) except ObjectDoesNotExist: print ..."} +{"question_id": 3854692, "score": 119, "creation_date": 1286190803, "tags": ["python", "passwords"], "instruction": "Generate password in Python\n\nI'd like to generate some alphanumeric passwords in Python. Some possible ways are: import string from random import sample, choice chars = string.ascii_letters + string.digits length = 8 ''.join(sample(chars, length)) # first way ''.join(choice(chars) for i in range(length)) # second way But I don't the first way because only unique chars are selected and you can't generate passwords where length > len(chars) and I don't like the second way because we have an unused i variable. Are there any other good options?", "output": "On Python 3.6+ you should use the secrets module to generate cryptographically safe passwords. Adapted from the documentation: import secrets import string alphabet = string.ascii_letters + string.digits password = ''.join(secrets.choice(alphabet) for i in range(20)) # for a 20-character password For more information on recipes and best practices, see this section on recipes in the Python documentation. You can also consider adding string.punctuation."} +{"question_id": 29817447, "score": 119, "creation_date": 1429776769, "tags": ["python", "cmd", "pip"], "instruction": "How to run Pip commands from CMD\n\nAs I understand, Python 2.7.9 comes with Pip installed, however when I try to execute a Pip command from CMD (Windows) I get the following error: 'pip' is not recognized as an internal or external command, operable program or batch file. When I type python I do get the following, which suggests it has been installed correctly: Python 2.7.9 (default, Dec 10 2014, 12:24:55) [MSC v.1500 32 bit (Intel)] on win32 Type \"help\", \"copyright\", \"credits\" or \"license\" for more information. I did however need to add some environmental variables to get the python part working on CMD: Add to the environment variable PATH: \"C:\\Python27\\\" Define the system variable PYTHONPATH: \"C:\\Python27\\\" I cannot find a Pip folder within the Python directory, however there is a folder called \"ensurepip\" in C:\\Python27\\Lib\\. Does anybody know how can I get Pip commands to start working in CMD?", "output": "Little side note for anyone new to Python who didn't figure it out by theirself: this should be automatic when installing Python, but just in case, note that to run Python using the python command in Windows' CMD you must first add it to the PATH environment variable, as explained here. If you have the Python launcher installed, in such case instead of typing python in the console you can type py. Whether or not one or both commands are available depends on the choices you made during installation. To execute Pip, first of all make sure you have it installed, so type in your CMD: > python >>> import pip >>> The above should proceed with no error. Otherwise, if this fails, you can look here to see how to install it. Now that you are sure you've got Pip, you can run it from CMD with Python using the -m (module) parameter, like this: > python -m pip Where is any Pip command you want to run, and are its relative arguments, separated by spaces. 
For example, to install a package: > python -m pip install "} +{"question_id": 36183486, "score": 119, "creation_date": 1458750107, "tags": ["python", "python-3.x", "anaconda", "spyder"], "instruction": "ImportError: No module named 'google'\n\nI installed Python 3.5. I ran the pip install google command and verified the modules. Google was present. I installed Anaconda 3.5 and tried to run z sample code. But I'm getting the import error. Please find the screen shot attached. What am I missing? Do I have to link my Spyder to Python installation directory in some way? Why is Spyder unable to google module? My Python installation directory: C:\\Users\\XXX\\AppData\\Local\\Programs\\Python\\Python35 My scenario is a bit different and I could not find a solution from similar posts here.", "output": "I figured out the solution: I had to delete my Anaconda and Python installations Re-install Anaconda only Open the Anaconda prompt and point it to Anaconda/Scripts Run pip install google Run the sample code now from Spyder. No more errors."} +{"question_id": 5201346, "score": 119, "creation_date": 1299294054, "tags": ["python", "django", "templates"], "instruction": "How do I go straight to template, in Django's urls.py?\n\nInstead of going to views.py, I want it to go to to a template, robots.txt.", "output": "Django 2+ Note: is valid still as of Django 4+ Use the class based generic views but register with the django 2.0+ pattern. from django.urls import path from django.views.generic import TemplateView urlpatterns = [ path('foo/', TemplateView.as_view(template_name='foo.html')) ] https://docs.djangoproject.com/en/4.1/topics/class-based-views/#usage-in-your-urlconf Django 1.5+ Use the class based generic views. from django.views.generic import TemplateView urlpatterns = patterns('', (r'^foo/$', TemplateView.as_view(template_name='foo.html')), ) #Django <= 1.4 Docs: https://docs.djangoproject.com/en/1.4/ref/generic-views/#django-views-generic-simple-direct-to-template urlpatterns = patterns('django.views.generic.simple', (r'^foo/$', 'direct_to_template', {'template': 'foo_index.html'}), (r'^foo/(?P\\d+)/$', 'direct_to_template', {'template': 'foo_detail.html'}), )"} +{"question_id": 20626994, "score": 119, "creation_date": 1387259869, "tags": ["python", "scipy", "normal-distribution"], "instruction": "How to calculate the inverse of the normal cumulative distribution function in python?\n\nHow do I calculate the inverse of the cumulative distribution function (CDF) of the normal distribution in Python? Which library should I use? Possibly scipy?", "output": "NORMSINV (mentioned in a comment) is the inverse of the CDF of the standard normal distribution. Using scipy, you can compute this with the ppf method of the scipy.stats.norm object. The acronym ppf stands for percent point function, which is another name for the quantile function. In [20]: from scipy.stats import norm In [21]: norm.ppf(0.95) Out[21]: 1.6448536269514722 Check that it is the inverse of the CDF: In [34]: norm.cdf(norm.ppf(0.95)) Out[34]: 0.94999999999999996 By default, norm.ppf uses mean=0 and stddev=1, which is the \"standard\" normal distribution. You can use a different mean and standard deviation by specifying the loc and scale arguments, respectively. In [35]: norm.ppf(0.95, loc=10, scale=2) Out[35]: 13.289707253902945 If you look at the source code for scipy.stats.norm, you'll find that the ppf method ultimately calls scipy.special.ndtri. 
So to compute the inverse of the CDF of the standard normal distribution, you could use that function directly: In [43]: from scipy.special import ndtri In [44]: ndtri(0.95) Out[44]: 1.6448536269514722 ndtri is much faster than norm.ppf: In [46]: %timeit norm.ppf(0.95) 240 \u00b5s \u00b1 1.75 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 1,000 loops each) In [47]: %timeit ndtri(0.95) 1.47 \u00b5s \u00b1 1.3 ns per loop (mean \u00b1 std. dev. of 7 runs, 1,000,000 loops each)"} +{"question_id": 15771472, "score": 119, "creation_date": 1364926965, "tags": ["python", "pandas", "time-series", "rolling-computation"], "instruction": "Pandas: rolling mean by time interval\n\nI've got a bunch of polling data; I want to compute a Pandas rolling mean to get an estimate for each day based on a three-day window. According to this question, the rolling_* functions compute the window based on a specified number of values, and not a specific datetime range. How do I implement this functionality? Sample input data: polls_subset.tail(20) Out[185]: favorable unfavorable other enddate 2012-10-25 0.48 0.49 0.03 2012-10-25 0.51 0.48 0.02 2012-10-27 0.51 0.47 0.02 2012-10-26 0.56 0.40 0.04 2012-10-28 0.48 0.49 0.04 2012-10-28 0.46 0.46 0.09 2012-10-28 0.48 0.49 0.03 2012-10-28 0.49 0.48 0.03 2012-10-30 0.53 0.45 0.02 2012-11-01 0.49 0.49 0.03 2012-11-01 0.47 0.47 0.05 2012-11-01 0.51 0.45 0.04 2012-11-03 0.49 0.45 0.06 2012-11-04 0.53 0.39 0.00 2012-11-04 0.47 0.44 0.08 2012-11-04 0.49 0.48 0.03 2012-11-04 0.52 0.46 0.01 2012-11-04 0.50 0.47 0.03 2012-11-05 0.51 0.46 0.02 2012-11-07 0.51 0.41 0.00 Output would have only one row for each date.", "output": "In the meantime, a time-window capability was added. See this link. In [1]: df = DataFrame({'B': range(5)}) In [2]: df.index = [Timestamp('20130101 09:00:00'), ...: Timestamp('20130101 09:00:02'), ...: Timestamp('20130101 09:00:03'), ...: Timestamp('20130101 09:00:05'), ...: Timestamp('20130101 09:00:06')] In [3]: df Out[3]: B 2013-01-01 09:00:00 0 2013-01-01 09:00:02 1 2013-01-01 09:00:03 2 2013-01-01 09:00:05 3 2013-01-01 09:00:06 4 In [4]: df.rolling(2, min_periods=1).sum() Out[4]: B 2013-01-01 09:00:00 0.0 2013-01-01 09:00:02 1.0 2013-01-01 09:00:03 3.0 2013-01-01 09:00:05 5.0 2013-01-01 09:00:06 7.0 In [5]: df.rolling('2s', min_periods=1).sum() Out[5]: B 2013-01-01 09:00:00 0.0 2013-01-01 09:00:02 1.0 2013-01-01 09:00:03 3.0 2013-01-01 09:00:05 3.0 2013-01-01 09:00:06 7.0"} +{"question_id": 43921240, "score": 119, "creation_date": 1494520838, "tags": ["python", "dictionary", "namedtuple"], "instruction": "Pythonic way to convert a dictionary into namedtuple or another hashable dict-like?\n\nI have a dictionary like: d = {'a': 1, 'b': 2, 'c': 3, 'd': 4} which I would like to convert to a namedtuple. My current approach is with the following code namedTupleConstructor = namedtuple('myNamedTuple', ' '.join(sorted(d.keys()))) nt= namedTupleConstructor(**d) which produces myNamedTuple(a=1, b=2, c=3, d=4) This works fine for me (I think), but am I missing a built-in such as... nt = namedtuple.from_dict() ? UPDATE: as discussed in the comments, my reason for wanting to convert my dictionary to a namedtuple is so that it becomes hashable, but still generally useable like a dict. UPDATE2: 4 years after I've posted this question, TLK posts a new answer recommending using the dataclass decorator that I think is really great. 
I think that's now what I would use going forward.", "output": "To create the subclass, you may just pass the keys of a dict directly: from collections import namedtuple MyTuple = namedtuple('MyTuple', d) Now to create tuple instances from this dict, or any other dict with matching keys: my_tuple = MyTuple(**d) Beware: namedtuples compare on values only (ordered). They are designed to be a drop-in replacement for regular tuples, with named attribute access as an added feature. The field names will not be considered when making equality comparisons. It may not be what you wanted nor expected from the namedtuple type! This differs from dict equality comparisons, which do take into account the keys and also compare order agnostic. For readers who don't really need a type which is a subclass of tuple, there probably isn't much point to use a namedtuple in the first place. If you just want to use attribute access syntax on fields, it would be simpler and easier to create namespace objects instead: >>> from types import SimpleNamespace >>> SimpleNamespace(**d) namespace(a=1, b=2, c=3, d=4) my reason for wanting to convert my dictionary to a namedtuple is so that it becomes hashable, but still generally useable like a dict For a hashable \"attrdict\" like recipe, check out a frozen box: >>> from box import Box >>> b = Box(d, frozen_box=True) >>> hash(b) 7686694140185755210 >>> b.a 1 >>> b[\"a\"] 1 >>> b[\"a\"] = 2 BoxError: Box is frozen There may also be a frozen mapping type coming in a later version of Python, watch this draft PEP for acceptance or rejection: PEP 603 -- Adding a frozenmap type to collections"} +{"question_id": 32923952, "score": 119, "creation_date": 1443884266, "tags": ["python", "package", "python-wheel", "python-packaging"], "instruction": "How do I list the files inside a python wheel?\n\nI'm poking around the various options to setup.py for including non-python files, and they're somewhat less than intuitive. I'd like to be able to check the package generated by bdist_wheel to see what's actually in it--not so much to make sure that it will work (that's what tests are for) but to see the effects of the options I've set. How do I list the files contained in a .whl?", "output": "You can take the wheel file change the extension to .zip and then extract the contents like any other zip file. from PEP 427 A wheel is a ZIP-format archive with a specially formatted file name and the .whl extension. Example the Django python package has a wheel file. Try Django-1.8.4-py2.py3-none-any.whl as an example. Their package contains non-python files if you wanted to see where they end up being stored in the archive. Code The following code works correctly using python2 and python3. It will list the files in any wheel package. I use the pep8 wheel package as an example, whose wheel can be downloaded with pip download --no-deps pep8==1.7.0. 
import pprint from zipfile import ZipFile path = 'pep8-1.7.0-py2.py3-none-any.whl' names = ZipFile(path).namelist() pprint.pprint(names) Output ['pep8.py', 'pep8-1.7.0.dist-info/DESCRIPTION.rst', 'pep8-1.7.0.dist-info/entry_points.txt', 'pep8-1.7.0.dist-info/metadata.json', 'pep8-1.7.0.dist-info/namespace_packages.txt', 'pep8-1.7.0.dist-info/top_level.txt', 'pep8-1.7.0.dist-info/WHEEL', 'pep8-1.7.0.dist-info/METADATA', 'pep8-1.7.0.dist-info/RECORD']"} +{"question_id": 693070, "score": 119, "creation_date": 1238258562, "tags": ["python", "refactoring", "dead-code"], "instruction": "How can you find unused functions in Python code?\n\nSo you've got some legacy code lying around in a fairly hefty project. How can you find and delete dead functions? I've seen these two references: Find unused code and Tool to find unused functions in php project, but they seem specific to C# and PHP, respectively. Is there a Python tool that'll help you find functions that aren't referenced anywhere else in the source code (notwithstanding reflection/etc.)?", "output": "In Python you can find unused code by using dynamic or static code analyzers. Two examples for dynamic analyzers are coverage and figleaf. They have the drawback that you have to run all possible branches of your code in order to find unused parts, but they also have the advantage that you get very reliable results. Alternatively, you can use static code analyzers that just look at your code, but don't actually run it. They run much faster, but due to Python's dynamic nature the results may contain false positives. Two tools in this category are pyflakes and vulture. Pyflakes finds unused imports and unused local variables. Vulture finds all kinds of unused and unreachable code. (Full disclosure: I'm the maintainer of Vulture.) The tools are available in the Python Package Index https://pypi.org/."} +{"question_id": 18054500, "score": 119, "creation_date": 1375694196, "tags": ["python", "youtube-dl"], "instruction": "How to use youtube-dl from a python program?\n\nI would like to access the result of the following shell command, youtube-dl -g \"www.youtube.com/...\" to print its output direct url to a file, from within a python program. This is what I have tried: import youtube-dl fromurl=\"www.youtube.com/...\" geturl=youtube-dl.magiclyextracturlfromurl(fromurl) Is that possible? I tried to understand the mechanism in the source but got lost: youtube_dl/__init__.py, youtube_dl/youtube_DL.py, info_extractors ...", "output": "It's not difficult and actually documented: import youtube_dl ydl = youtube_dl.YoutubeDL({'outtmpl': '%(id)s.%(ext)s'}) with ydl: result = ydl.extract_info( 'http://www.youtube.com/watch?v=BaW_jenozKc', download=False # We just want to extract the info ) if 'entries' in result: # Can be a playlist or a list of videos video = result['entries'][0] else: # Just a video video = result print(video) video_url = video['url'] print(video_url)"} +{"question_id": 4740473, "score": 119, "creation_date": 1295469744, "tags": ["python", "rpm"], "instruction": "setup.py examples?\n\nAfter studying this page: http://docs.python.org/distutils/builtdist.html I am hoping to find some setup.py files to study so as to make my own (with the goal of making a fedora rpm file). Could the s.o. community point me towards some good examples?", "output": "Complete walkthrough of writing setup.py scripts here. (with some examples) If you'd like a real-world example, I could point you towards the setup.py scripts of a couple major projects. 
Django's is here, pyglet's is here. You can just browse the source of other projects for a file named setup.py for more examples. These aren't simple examples; the tutorial link I gave has those. These are more complex, but also more practical."} +{"question_id": 28754603, "score": 119, "creation_date": 1424992230, "tags": ["python", "pandas", "dataframe"], "instruction": "Indexing Pandas data frames: integer rows, named columns\n\nSay df is a pandas dataframe. df.loc[] only accepts names df.iloc[] only accepts integers (actual placements) df.ix[] accepts both names and integers: When referencing rows, df.ix[row_idx, ] only wants to be given names. e.g. df = pd.DataFrame({'a' : ['one', 'two', 'three','four', 'five', 'six'], '1' : np.arange(6)}) df = df.ix[2:6] print(df) 1 a 2 2 three 3 3 four 4 4 five 5 5 six df.ix[0, 'a'] throws an error, it doesn't give return 'two'. When referencing columns, iloc is prefers integers, not names. e.g. df.ix[2, 1] returns 'three', not 2. (Although df.idx[2, '1'] does return 2). Oddly, I'd like the exact opposite functionality. Usually my column names are very meaningful, so in my code I reference them directly. But due to a lot of observation cleaning, the row names in my pandas data frames don't usually correspond to range(len(df)). I realize I can use: df.iloc[0].loc['a'] # returns three But it seems ugly! Does anyone know of a better way to do this, so that the code would look like this? df.foo[0, 'a'] # returns three In fact, is it possible to add on my own new method to pandas.core.frame.DataFrames, so e.g. df.idx(rows, cols) is in fact df.iloc[rows].loc[cols]?", "output": "A very late answer but it amazed me that pandas still doesn't have such a function after all these years. If it irks you a lot, you can monkey-patch a custom indexer into the DataFrame: class XLocIndexer: def __init__(self, frame): self.frame = frame def __getitem__(self, key): row, col = key return self.frame.iloc[row][col] pd.core.indexing.IndexingMixin.xloc = property(lambda frame: XLocIndexer(frame)) # Usage df.xloc[0, 'a'] # one"} +{"question_id": 54962869, "score": 119, "creation_date": 1551559936, "tags": ["python", "function"], "instruction": "Function parameter with colon\n\nI just came across this function: def splitComma(line: str): splits = Utils.COMMA_DELIMITER.split(line) return \"{}, {}\".format(splits[1], splits[2]) I am aware that you can separate parameters by , or can set a value within a parameter like a=39 but I have not seen a colon like line:str. I have checked the function definition online but could not find anything like this. What does this colon mean?", "output": "It's a function annotation; function arguments and the return value can be tagged with arbitrary Python expressions. Python itself ignores the annotation (other than saving it), but third-party tools can make use of them. In this case, it is intended as type hint: programs like mypy can analyze your code statically (that is, without running it, but only looking at the source code itself) to ensure that only str values are passed as arguments to splitComma. A fuller annotation to also specify the return type of the function: def splitComma(line: str) -> str: ... (Note that originally, function annotations weren't assumed to have any specific semantics. 
This is still true, but the overwhelming assumption these days is that the annotations provide type hints.)"} +{"question_id": 10772302, "score": 119, "creation_date": 1338102999, "tags": ["python"], "instruction": "Python - abs vs fabs\n\nI noticed that in python there are two similar looking methods for finding the absolute value of a number: First abs(-5) Second import math math.fabs(-5) How do these methods differ?", "output": "math.fabs() converts its argument to float if it can (if it can't, it throws an exception). It then takes the absolute value, and returns the result as a float. In addition to floats, abs() also works with integers and complex numbers. Its return type depends on the type of its argument. In [7]: type(abs(-2)) Out[7]: int In [8]: type(abs(-2.0)) Out[8]: float In [9]: type(abs(3+4j)) Out[9]: float In [10]: type(math.fabs(-2)) Out[10]: float In [11]: type(math.fabs(-2.0)) Out[11]: float In [12]: type(math.fabs(3+4j)) --------------------------------------------------------------------------- TypeError Traceback (most recent call last) /home/npe/ in () ----> 1 type(math.fabs(3+4j)) TypeError: can't convert complex to float"} +{"question_id": 15171695, "score": 119, "creation_date": 1362207035, "tags": ["python", "caching", "code-analysis", "literals", "python-internals"], "instruction": "What's with the integer cache maintained by the interpreter?\n\nAfter dive into Python's source code, I find out that it maintains an array of PyInt_Objects ranging from int(-5) to int(256) (@src/Objects/intobject.c) A little experiment proves it: >>> a = 1 >>> b = 1 >>> a is b True >>> a = 257 >>> b = 257 >>> a is b False But if I run those code together in a py file (or join them with semi-colons), the result is different: >>> a = 257; b = 257; a is b True I'm curious why they are still the same object, so I digg deeper into the syntax tree and compiler, I came up with a calling hierarchy listed below: PyRun_FileExFlags() mod = PyParser_ASTFromFile() node *n = PyParser_ParseFileFlagsEx() //source to cst parsetoke() ps = PyParser_New() for (;;) PyTokenizer_Get() PyParser_AddToken(ps, ...) mod = PyAST_FromNode(n, ...) //cst to ast run_mod(mod, ...) co = PyAST_Compile(mod, ...) //ast to CFG PyFuture_FromAST() PySymtable_Build() co = compiler_mod() PyEval_EvalCode(co, ...) PyEval_EvalCodeEx() Then I added some debug code in PyInt_FromLong and before/after PyAST_FromNode, and executed a test.py: a = 257 b = 257 print \"id(a) = %d, id(b) = %d\" % (id(a), id(b)) the output looks like: DEBUG: before PyAST_FromNode name = a ival = 257, id = 176046536 name = b ival = 257, id = 176046752 name = a name = b DEBUG: after PyAST_FromNode run_mod PyAST_Compile ok id(a) = 176046536, id(b) = 176046536 Eval ok It means that during the cst to ast transform, two different PyInt_Objects are created (actually it's performed in the ast_for_atom() function), but they are later merged. I find it hard to comprehend the source in PyAST_Compile and PyEval_EvalCode, so I'm here to ask for help, I'll be appreciative if some one gives a hint?", "output": "Python caches integers in the range [-5, 256], so integers in that range are usually but not always identical. What you see for 257 is the Python compiler optimizing identical literals when compiled in the same code object. 
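(Editor's aside, not part of the original answer: one way to observe the small-integer cache on its own, without compile-time constant deduplication getting in the way, is to build the integers at runtime. The cache bounds are a CPython implementation detail, so treat the expected output as an assumption.)
```python
# CPython-specific sketch: construct ints at runtime so literal deduplication cannot apply.
for i in (-6, -5, 256, 257):
    a, b = int(str(i)), int(str(i))
    print(i, a is b)  # expected on CPython: False, True, True, False
```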
When typing in the Python shell each line is a completely different statement, parsed and compiled separately, thus: >>> a = 257 >>> b = 257 >>> a is b False But if you put the same code into a file: $ echo 'a = 257 > b = 257 > print a is b' > testing.py $ python testing.py True This happens whenever the compiler has a chance to analyze the literals together, for example when defining a function in the interactive interpreter: >>> def test(): ... a = 257 ... b = 257 ... print a is b ... >>> dis.dis(test) 2 0 LOAD_CONST 1 (257) 3 STORE_FAST 0 (a) 3 6 LOAD_CONST 1 (257) 9 STORE_FAST 1 (b) 4 12 LOAD_FAST 0 (a) 15 LOAD_FAST 1 (b) 18 COMPARE_OP 8 (is) 21 PRINT_ITEM 22 PRINT_NEWLINE 23 LOAD_CONST 0 (None) 26 RETURN_VALUE >>> test() True >>> test.func_code.co_consts (None, 257) Note how the compiled code contains a single constant for the 257. In conclusion, the Python bytecode compiler is not able to perform massive optimizations (like statically typed languages), but it does more than you think. One of these things is to analyze usage of literals and avoid duplicating them. Note that this does not have to do with the cache, because it works also for floats, which do not have a cache: >>> a = 5.0 >>> b = 5.0 >>> a is b False >>> a = 5.0; b = 5.0 >>> a is b True For more complex literals, like tuples, it \"doesn't work\": >>> a = (1,2) >>> b = (1,2) >>> a is b False >>> a = (1,2); b = (1,2) >>> a is b False But the literals inside the tuple are shared: >>> a = (257, 258) >>> b = (257, 258) >>> a[0] is b[0] False >>> a[1] is b[1] False >>> a = (257, 258); b = (257, 258) >>> a[0] is b[0] True >>> a[1] is b[1] True (Note that constant folding and the peephole optimizer can change behaviour even between bugfix versions, so which examples return True or False is basically arbitrary and will change in the future). Regarding why you see that two PyInt_Object are created, I'd guess that this is done to avoid literal comparison. for example, the number 257 can be expressed by multiple literals: >>> 257 257 >>> 0x101 257 >>> 0b100000001 257 >>> 0o401 257 The parser has two choices: Convert the literals to some common base before creating the integer, and see if the literals are equivalent. then create a single integer object. Create the integer objects and see if they are equal. If yes, keep only a single value and assign it to all the literals, otherwise, you already have the integers to assign. Probably the Python parser uses the second approach, which avoids rewriting the conversion code and also it's easier to extend (for example it works with floats as well). Reading the Python/ast.c file, the function that parses all numbers is parsenumber, which calls PyOS_strtoul to obtain the integer value (for intgers) and eventually calls PyLong_FromString: x = (long) PyOS_strtoul((char *)s, (char **)&end, 0); if (x < 0 && errno == 0) { return PyLong_FromString((char *)s, (char **)0, 0); } As you can see here the parser does not check whether it already found an integer with the given value and so this explains why you see that two int objects are created, and this also means that my guess was correct: the parser first creates the constants and only afterward optimizes the bytecode to use the same object for equal constants. The code that does this check must be somewhere in Python/compile.c or Python/peephole.c, since these are the files that transform the AST into bytecode. In particular, the compiler_add_o function seems the one that does it. 
There is this comment in compiler_lambda: /* Make None the first constant, so the lambda can't have a docstring. */ if (compiler_add_o(c, c->u->u_consts, Py_None) < 0) return 0; So it seems like compiler_add_o is used to insert constants for functions/lambdas etc. The compiler_add_o function stores the constants into a dict object, and from this immediately follows that equal constants will fall in the same slot, resulting in a single constant in the final bytecode."} +{"question_id": 37233140, "score": 119, "creation_date": 1463270887, "tags": ["python", "module"], "instruction": "Module not found - \"No module named\"\n\nHere's my Python folder structure -project ----src ------model --------order.py ------hello-world.py Under src, I have a folder named model, which has a Python file called order.py, whose contents follow: class SellOrder(object): def __init__(self,genericName,brandName): self.genericName = genericName self.brandName = brandName Next, my hello-world.py is inside the src folder, one level above order.py: import model.order.SellOrder order = SellOrder(\"Test\",\"Test\") print order.brandName Whenever I run python hello-world.py it results in the error Traceback (most recent call last): File \"hello-world.py\", line 1, in import model.order.SellOrder ImportError: No module named model.order.SellOrder Is there anything I missed?", "output": "All modules in Python have to have a certain directory structure. You can find details here. Create an empty file called __init__.py under the model directory, such that your directory structure would look something like that: . \u2514\u2500\u2500 project \u2514\u2500\u2500 src \u251c\u2500\u2500 hello-world.py \u2514\u2500\u2500 model \u251c\u2500\u2500 __init__.py \u2514\u2500\u2500 order.py Also in your hello-world.py file change the import statement to the following: from model.order import SellOrder That should fix it P.S.: If you are placing your model directory in some other location (not in the same directory branch), you will have to modify the python path using sys.path."} +{"question_id": 1133857, "score": 118, "creation_date": 1247690022, "tags": ["python", "time", "sleep"], "instruction": "How accurate is python's time.sleep()?\n\nI can give it floating point numbers, such as time.sleep(0.5) but how accurate is it? If i give it time.sleep(0.05) will it really sleep about 50 ms?", "output": "The accuracy of the time.sleep function depends on your underlying OS's sleep accuracy. For non-real-time OSs like a stock Windows, the smallest interval you can sleep for is about 10-13ms. I have seen accurate sleeps within several milliseconds of that time when above the minimum 10-13ms. Update: Like mentioned in the docs cited below, it's common to do the sleep in a loop that will make sure to go back to sleep if it wakes you up early. I should also mention that if you are running Ubuntu you can try out a pseudo real-time kernel (with the RT_PREEMPT patch set) by installing the rt kernel package (at least in Ubuntu 10.04 LTS). Non-real-time Linux kernels have minimum sleep intervals much closer to 1ms than 10ms, but it varies in a non-deterministic manner."} +{"question_id": 50475635, "score": 118, "creation_date": 1527019319, "tags": ["python", "json", "jsonlines"], "instruction": "Loading JSONL file as JSON objects\n\nI want to load a JSONL file as JSON objects in python. 
Is there an easy way to do so?", "output": "The splitlines method will address that problem for you; in general, the code below will work: import json result = [json.loads(jline) for jline in jsonl_content.splitlines()] If it's a response object, the result would be: result = [json.loads(jline) for jline in response.read().splitlines()] NOTE: splitlines() treats line breaks differently than JSON's escape requirements. Specifically, splitlines() splits on Unicode Line Separator (\\u2028), while JSON allows this character to remain unescaped in strings. When using json.dump with ensure_ascii=False, Python will emit \\u2028 unescaped in strings."} +{"question_id": 8898294, "score": 118, "creation_date": 1326818272, "tags": ["python", "utf-8", "utf-16", "byte-order-mark"], "instruction": "Convert UTF-8 with BOM to UTF-8 with no BOM in Python\n\nTwo questions here. I have a set of files which are usually UTF-8 with BOM. I'd like to convert them (ideally in place) to UTF-8 with no BOM. It seems like codecs.StreamRecoder(stream, encode, decode, Reader, Writer, errors) would handle this. But I don't really see any good examples on usage. Would this be the best way to handle this? source files: Tue Jan 17$ file brh-m-157.json brh-m-157.json: UTF-8 Unicode (with BOM) text Also, it would be ideal if we could handle different input encodings without explicitly knowing them (I've seen ASCII and UTF-16). It seems like this should all be feasible. Is there a solution that can take any known Python encoding and output as UTF-8 without BOM? edit 1 proposed sol'n from below (thanks!) fp = open('brh-m-157.json','rw') s = fp.read() u = s.decode('utf-8-sig') s = u.encode('utf-8') print fp.encoding fp.write(s) This gives me the following error: IOError: [Errno 9] Bad file descriptor Newsflash I'm being told in comments that the mistake is that I open the file with mode 'rw' instead of 'r+'/'r+b', so I should eventually re-edit my question and remove the solved part.", "output": "This answer is for Python 2. Simply use the \"utf-8-sig\" codec: fp = open(\"file.txt\") s = fp.read() u = s.decode(\"utf-8-sig\") That gives you a unicode string without the BOM. You can then use s = u.encode(\"utf-8\") to get a normal UTF-8 encoded string back in s. If your files are big, then you should avoid reading them all into memory. The BOM is simply three bytes at the beginning of the file, so you can use this code to strip them out of the file: import os, sys, codecs BUFSIZE = 4096 BOMLEN = len(codecs.BOM_UTF8) path = sys.argv[1] with open(path, \"r+b\") as fp: chunk = fp.read(BUFSIZE) if chunk.startswith(codecs.BOM_UTF8): i = 0 chunk = chunk[BOMLEN:] while chunk: fp.seek(i) fp.write(chunk) i += len(chunk) fp.seek(BOMLEN, os.SEEK_CUR) chunk = fp.read(BUFSIZE) fp.seek(-BOMLEN, os.SEEK_CUR) fp.truncate() It opens the file, reads a chunk, and writes it out to the file 3 bytes earlier than where it read it. The file is rewritten in-place. An easier solution is to write the shorter file to a new file, as in newtover's answer. That would be simpler, but would use twice the disk space for a short period. As for guessing the encoding, you can just loop through the encodings from most to least specific: def decode(s): for encoding in \"utf-8-sig\", \"utf-16\": try: return s.decode(encoding) except UnicodeDecodeError: continue return s.decode(\"latin-1\") # will always work A UTF-16 encoded file won't decode as UTF-8, so we try with UTF-8 first. If that fails, then we try with UTF-16.
Finally, we use Latin-1 \u2014 this will always work since all 256 bytes are legal values in Latin-1. You may want to return None instead in this case since it's really a fallback and your code might want to handle this more carefully (if it can)."} +{"question_id": 12683834, "score": 118, "creation_date": 1349145478, "tags": ["python", "copy", "distutils"], "instruction": "How to copy directory recursively in python and overwrite all?\n\nI'm trying to copy /home/myUser/dir1/ and all its contents (and their contents, etc.) to /home/myuser/dir2/ in python. Furthermore, I want the copy to overwrite everything in dir2/. It looks like distutils.dir_util.copy_tree might be the right tool for the job, but not sure if there's anything easier/more obvious to use for such a simple task. If it is the right tool, how do I use it? According to the docs there are 8 parameters that it takes. Do I have to pass all 8 are just src, dst and update, and if so, how (I'm brand new to Python). If there's something out there that's better, can someone give me an example and point me in the right direction?", "output": "Notice: distutils has been deprecated and will be removed in Python 3.12. Consider looking for other answers at this question if you are looking for a post-3.12 solution. Original answer: You can use distutils.dir_util.copy_tree. It works just fine and you don't have to pass every argument, only src and dst are mandatory. However in your case you can't use a similar tool likeshutil.copytree because it behaves differently: as the destination directory must not exist this function can't be used for overwriting its contents. If you want to use the cp tool as suggested in the question comments beware that using the subprocess module is currently the recommended way for spawning new processes as you can see in the documentation of the os.system function."} +{"question_id": 15789059, "score": 118, "creation_date": 1364996657, "tags": ["python", "json", "python-2.7", "iterator", "key"], "instruction": "python JSON only get keys in first level\n\nI have a very long and complicated json object but I only want to get the items/keys in the first level! Example: { \"1\": \"a\", \"3\": \"b\", \"8\": { \"12\": \"c\", \"25\": \"d\" } } I want to get 1,3,8 as result! I found this code: for key, value in data.iteritems(): print key, value But it prints all keys (also 12 and 25)", "output": "Just do a simple .keys() >>> dct = { ... \"1\": \"a\", ... \"3\": \"b\", ... \"8\": { ... \"12\": \"c\", ... \"25\": \"d\" ... } ... } >>> >>> dct.keys() ['1', '8', '3'] >>> for key in dct.keys(): print key ... 1 8 3 >>> If you need a sorted list: keylist = dct.keys() # this is of type `dict_key`, NOT a `list` keylist.sort() And if you want them as simple list, do this: list(dct_instance.keys())"} +{"question_id": 3503719, "score": 118, "creation_date": 1282056571, "tags": ["python", "bash"], "instruction": "Emulating Bash 'source' in Python\n\nI have a script that looks something like this: export foo=/tmp/foo export bar=/tmp/bar Every time I build I run 'source init_env' (where init_env is the above script) to set up some variables. 
To accomplish the same in Python I had this code running, reg = re.compile('export (?P<name>\\w+)(\\=(?P<value>.+))*') for line in open(file): m = reg.match(line) if m: name = m.group('name') value = '' if m.group('value'): value = m.group('value') os.putenv(name, value) But then someone decided it would be nice to add a line like the following to the init_env file: export PATH=\"/foo/bar:/bar/foo:$PATH\" Obviously my Python script fell apart. I could modify the Python script to handle this line, but then it'll just break later on when someone comes up with a new feature to use in the init_env file. The question is if there is an easy way to run a Bash command and let it modify my os.environ?", "output": "The problem with your approach is that you are trying to interpret bash scripts. First you just try to interpret the export statement. But when people think they can use bash syntax they will. They will use variable expansion, conditionals, process substitutions. In the end you will have a full-blown bash script interpreter with a gazillion bugs. Don't do that. Let Bash interpret the file for you and then collect the results. Here is a minimal example of how to do so: #! /usr/bin/env python import os import pprint import shlex import subprocess command = shlex.split(\"bash -c 'source init_env && env'\") proc = subprocess.Popen(command, stdout = subprocess.PIPE) for line in proc.stdout: (key, _, value) = line.partition(\"=\") os.environ[key] = value proc.communicate() pprint.pprint(dict(os.environ)) Make sure that you handle errors. See here for how: \"subprocess.Popen\" - checking for success and errors Also read the documentation on subprocess. This will only capture variables set with the export statement, as env only prints exported variables. You can add set -a to treat all variables as exported. command = shlex.split(\"bash -c 'set -a && source init_env && env'\") ^^^^^^ Note that this code will not handle multi-line variables. It will also not handle bash function definitions. Perhaps better than calling bash source from inside Python is to first let bash source the file and then run the Python script: #!/bin/bash source init_env /path/to/python_script.py Here bash will source init_env with all the power and glory and quirks of bash. The Python script will inherit the updated environment. Note that again only exported variables will be inherited. You can force all variable assignments to be exported with set -a: #!/bin/bash set -a source init_env /path/to/python_script.py Another approach would be to tell the users that they can strictly only do key=value without any bash power, and then use Python's configparser. This will have the advantage of simple init_env syntax and a rigorously tested config parser.
but the disadvantage that the init_env will no longer be as expressive as bash config files can be."} +{"question_id": 34377319, "score": 118, "creation_date": 1450577236, "tags": ["python", "python-3.x", "async-await", "future", "python-asyncio"], "instruction": "Combine awaitables like Promise.all\n\nIn asynchronous JavaScript, it is easy to run tasks in parallel and wait for all of them to complete using Promise.all: async function bar(i) { console.log('started', i); await delay(1000); console.log('finished', i); } async function foo() { await Promise.all([bar(1), bar(2)]); } // This works too: async function my_all(promises) { for (let p of promises) await p; } async function foo() { await my_all([bar(1), bar(2), bar(3)]); } I tried to rewrite the latter in python: import asyncio async def bar(i): print('started', i) await asyncio.sleep(1) print('finished', i) async def aio_all(seq): for f in seq: await f async def main(): await aio_all([bar(i) for i in range(10)]) loop = asyncio.get_event_loop() loop.run_until_complete(main()) loop.close() But it executes my tasks sequentially. What is the simplest way to await multiple awaitables? Why doesn't my approach work?", "output": "The equivalent would be using asyncio.gather: import asyncio async def bar(i): print('started', i) await asyncio.sleep(1) print('finished', i) async def main(): await asyncio.gather(*[bar(i) for i in range(10)]) loop = asyncio.get_event_loop() loop.run_until_complete(main()) loop.close() Why doesn't my approach work? Because when you await each item in seq, you block that coroutine. So in essence, you have synchronous code masquerading as async. If you really wanted to, you could implement your own version of asyncio.gather using loop.create_task or asyncio.ensure_future. EDIT The original answer used the lower-level asyncio.wait."} +{"question_id": 17778372, "score": 118, "creation_date": 1374452942, "tags": ["python", "function", "recursion", "return"], "instruction": "Why does my recursive function return None?\n\nI have this function that calls itself: def get_input(): my_var = input('Enter \"a\" or \"b\": ') if my_var != \"a\" and my_var != \"b\": print('You didn\\'t type \"a\" or \"b\". Try again.') get_input() else: return my_var print('got input:', get_input()) Now, if I input just \"a\" or \"b\", everything works fine: Type \"a\" or \"b\": a got input: a But, if I type something else and then \"a\" or \"b\", I get this: Type \"a\" or \"b\": purple You didn't type \"a\" or \"b\". Try again. Type \"a\" or \"b\": a got input: None I don't know why get_input() is returning None since it should only return my_var. Where is this None coming from and how do I fix my function?", "output": "It is returning None because when you recursively call it: if my_var != \"a\" and my_var != \"b\": print('You didn\\'t type \"a\" or \"b\". Try again.') get_input() ...you don't return the value. So while the recursion does happen, the return value gets discarded, and then you fall off the end of the function. Falling off the end of the function means that python implicitly returns None, just like this: >>> def f(x): ... pass >>> print(f(20)) None So, instead of just calling get_input() in your if statement, you need to return what the recursive call returns: if my_var != \"a\" and my_var != \"b\": print('You didn\\'t type \"a\" or \"b\". 
Try again.') return get_input()"} +{"question_id": 22511792, "score": 118, "creation_date": 1395245475, "tags": ["python", "import"], "instruction": "Python \"from [dot]package import ...\" syntax\n\nLooking through a Django tutorial I saw the following syntax: from .models import Recipe, Ingredient, Instruction Can someone explain how the .models works / what it does exactly? Usually I have: from myapp.models import How does it work without the myapp part in front of .models?", "output": "The . is a shortcut that tells it to search in the current package before the rest of the PYTHONPATH. So, if a same-named module Recipe exists somewhere else in your PYTHONPATH, it won't be loaded."} +{"question_id": 3603581, "score": 118, "creation_date": 1283196770, "tags": ["python", "pickle"], "instruction": "What does it mean for an object to be picklable (or pickle-able)?\n\nPython docs mention this word a lot and I want to know what it means.", "output": "It simply means it can be serialized by the pickle module. For a basic explanation of this, see What can be pickled and unpickled?. Pickling Class Instances provides more details, and shows how classes can customize the process."} +{"question_id": 10448200, "score": 118, "creation_date": 1336131844, "tags": ["python", "command-line-arguments", "argparse"], "instruction": "How to parse multiple nested sub-commands using python argparse?\n\nI am implementing a command line program which has interface like this: cmd [GLOBAL_OPTIONS] {command [COMMAND_OPTS]} [{command [COMMAND_OPTS]} ...] I have gone through the argparse documentation. I can implement GLOBAL_OPTIONS as optional argument using add_argument in argparse. And the {command [COMMAND_OPTS]} using Sub-commands. From the documentation it seems I can have only one sub-command. But as you can see I have to implement one or more sub-commands. What is the best way to parse such command line arguments useing argparse?", "output": "@mgilson has a nice answer to this question. But problem with splitting sys.argv myself is that I lose the nice help message Argparse generates for the user. So I ended up doing this: import argparse ## This function takes the 'extra' attribute from global namespace and ## re-parses it to create separate namespaces for all other chained commands. def parse_extra (parser, namespace): namespaces = [] extra = namespace.extra while extra: n = parser.parse_args(extra) extra = n.extra namespaces.append(n) return namespaces argparser=argparse.ArgumentParser() subparsers = argparser.add_subparsers(help='sub-command help', dest='subparser_name') parser_a = subparsers.add_parser('command_a', help = \"command_a help\") ## Setup options for parser_a ## Add nargs=\"*\" for zero or more other commands argparser.add_argument('extra', nargs = \"*\", help = 'Other commands') ## Do similar stuff for other sub-parsers Now after first parse all chained commands are stored in extra. I reparse it while it is not empty to get all the chained commands and create separate namespaces for them. And i get nicer usage string that argparse generates."} +{"question_id": 7850908, "score": 118, "creation_date": 1319207329, "tags": ["python"], "instruction": "What exactly should be set in PYTHONPATH?\n\nI'm going through and writing a setup doc for other developers at work for a python project and I've been reading up on the PYTHONPATH environment variable. 
I'm looking at my current development system and think I have a few things set wrong that is causing my IDE (IntelliJ) to behave incorrectly when looking up the python libraries. I've looked at the documentation here and here and I'm still unsure of what should actually be in the PYTHONPATH environment variable. I have PYTHONHOME pointed to C:\\Python27. My current PYTHONPATH is set to PYTHONHOME. Should I also add the directories from sys.path? UPDATE: Based on the below information, PYTHONPATH does not need to be set unless there are non-standard libraries that you want python to be able to find by default. For instance, when I install wxPython from the installer it will add its libraries to PYTHONPATH. I do set PYTHONHOME to the root of the python installation so that I can add it to my system PATH environment variable so that I can run python from any where.", "output": "You don't have to set either of them. PYTHONPATH can be set to point to additional directories with private libraries in them. PYTHONHOME sets the location of default libraries. Documentation: PYTHONHOME Change the location of the standard Python libraries. By default, the libraries are searched in prefix/lib/pythonversion and exec_prefix/lib/pythonversion, where prefix and exec_prefix are installation-dependent directories, both defaulting to /usr/local. When PYTHONHOME is set to a single directory, its value replaces both prefix and exec_prefix. To specify different values for these, set PYTHONHOME to prefix:exec_prefix. PYTHONPATH Augment the default search path for module files. The format is the same as the shell\u2019s PATH: one or more directory pathnames separated by os.pathsep (e.g. colons on Unix or semicolons on Windows). Non-existent directories are silently ignored. In addition to normal directories, individual PYTHONPATH entries may refer to zipfiles containing pure Python modules (in either source or compiled form). Extension modules cannot be imported from zipfiles. The default search path is installation dependent, but generally begins with prefix/lib/pythonversion (see PYTHONHOME above). It is always appended to PYTHONPATH. An additional directory will be inserted in the search path in front of PYTHONPATH as described above under Interface options. The search path can be manipulated from within a Python program as the variable sys.path."} +{"question_id": 18262306, "score": 117, "creation_date": 1376602628, "tags": ["python", "algorithm", "sorting", "quicksort"], "instruction": "Quicksort with Python\n\nI am totally new to python and I am trying to implement quicksort in it. Could someone please help me complete my code? I do not know how to concatenate the three arrays and print them. def sort(array=[12,4,5,6,7,3,1,15]): less = [] equal = [] greater = [] if len(array) > 1: pivot = array[0] for x in array: if x < pivot: less.append(x) if x == pivot: equal.append(x) if x > pivot: greater.append(x) sort(less) sort(pivot) sort(greater)", "output": "def sort(array): \"\"\"Sort the array by using quicksort.\"\"\" less = [] equal = [] greater = [] if len(array) > 1: pivot = array[0] for x in array: if x < pivot: less.append(x) elif x == pivot: equal.append(x) elif x > pivot: greater.append(x) # Don't forget to return something! return sort(less)+equal+sort(greater) # Just use the + operator to join lists # Note that you want equal ^^^^^ not pivot else: # You need to handle the part at the end of the recursion - when you only have one element in your array, just return the array. 
return array"} +{"question_id": 19580102, "score": 117, "creation_date": 1382667945, "tags": ["python", "arrays", "opencv", "numpy", "image-processing"], "instruction": "inverting image in Python with OpenCV\n\nI want to load a color image, convert it to grayscale, and then invert the data in the file. What I need: to iterate over the array in OpenCV and change every single value with this formula (it might be wrong but it seems reasonable for me): img[x,y] = abs(img[x,y] - 255) but I don't understand why doesn't it works: def inverte(imagem, name): imagem = abs(imagem - 255) cv2.imwrite(name, imagem) def inverte2(imagem, name): for x in np.nditer(imagem, op_flags=['readwrite']): x = abs(x - 255) cv2.imwrite(name, imagem) if __name__ == '__main__': nome = str(sys.argv[1]) image = cv2.imread(nome) gs_imagem = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) inverte(gs_imagem, \"invertida.png\") inverte2(gs_imagem, \"invertida2.png\") I don't want to do an explicit loop (I am trying to be more pythonic). I can see that in one image that got a white background it turned black, but only this it doesn't looks like the other colors are having much (if any) change.", "output": "You almost did it. You were tricked by the fact that abs(imagem-255) will give a wrong result since your dtype is an unsigned integer. You have to do (255-imagem) in order to keep the integers unsigned: def inverte(imagem, name): imagem = (255-imagem) cv2.imwrite(name, imagem) As recommended in the answer by Eric Olmon, you can also invert the image using the bitwise_not function of OpenCV: imagem = cv2.bitwise_not(imagem)"} +{"question_id": 38862293, "score": 117, "creation_date": 1470787536, "tags": ["python", "pandas", "dataframe"], "instruction": "How to Add Incremental Numbers to a New Column Using Pandas\n\nI have this simplified dataframe: ID Fruit F1 Apple F2 Orange F3 Banana I want to add in the begining of the dataframe a new column df['New_ID'] which has the number 880 that increments by one in each row. The output should be simply like: New_ID ID Fruit 880 F1 Apple 881 F2 Orange 882 F3 Banana I tried the following: df['New_ID'] = [\"880\"] # but I want to do this without assigning it the list of numbers literally Any idea how to solve this? Thanks!", "output": "Here: df = df.reset_index() df = df.rename(columns={\"index\":\"New_ID\"}) df['New_ID'] = df.index + 880"} +{"question_id": 4983258, "score": 117, "creation_date": 1297586749, "tags": ["python", "list"], "instruction": "Check list monotonicity\n\nHow do I efficiently check list monotonicity? i.e. that it is either a non-decreasing or non-increasing set of ordered values? Examples: [0, 1, 2, 3, 3, 4] # This is a monotonically increasing list [4.3, 4.2, 4.2, -2] # This is a monotonically decreasing list [2, 3, 1] # This is neither", "output": "Are repeated values (e.g. [1, 1, 2]) monotonic? 
If yes: def non_decreasing(L): return all(x<=y for x, y in zip(L, L[1:])) def non_increasing(L): return all(x>=y for x, y in zip(L, L[1:])) def monotonic(L): return non_decreasing(L) or non_increasing(L) If no: def strictly_increasing(L): return all(x<y for x, y in zip(L, L[1:])) def strictly_decreasing(L): return all(x>y for x, y in zip(L, L[1:])) def strictly_monotonic(L): return strictly_increasing(L) or strictly_decreasing(L)"} +{"question_id": 71516140, "score": 117, "creation_date": 1647535766, "tags": ["python", "asynchronous", "concurrency", "python-asyncio", "fastapi"], "instruction": "FastAPI runs API calls in serial instead of parallel fashion\n\nI have the following FastAPI application: from fastapi import FastAPI, Request import time app = FastAPI() @app.get(\"/ping\") async def ping(request: Request): print(\"Hello\") time.sleep(5) print(\"bye\") return {\"ping\": \"pong!\"} Calling the above endpoint on localhost\u2014e.g., http://localhost:8501/ping\u2014from different tabs of the same browser window, it returns the following: Hello bye Hello bye instead of: Hello Hello bye bye I have read about using httpx, but, still, I cannot get true parallelization. What's the problem?", "output": "As per FastAPI's docs: When you declare an endpoint with normal def instead of async def, it is run in an external threadpool that is then awaited, instead of being called directly (as it would block the server). and: If you are using a third party library that communicates with something (a database, an API, the file system, etc.) and doesn't have support for using await, (this is currently the case for most database libraries), then declare your endpoints as normally, with just def. You can mix def and async def in your endpoints as much as you need and define each one using the best option for you. FastAPI will do the right thing with them. Thus, a def (synchronous) endpoint in FastAPI will still run in the event loop, but instead of calling it directly, which would block the server, FastAPI will run it in a separate thread from an external threadpool and then await it (more details on the external threadpool are given later on); hence, FastAPI will still work asynchronously. In other words, the server will process requests to such endpoints concurrently (at the cost, though, of spawning a new thread or reusing an existing one from the threadpool, for every incoming request to such endpoints). Whereas, async def endpoints run directly in the event loop\u2014which runs in a single thread, typically the main thread of a process/worker, and is created when calling uvicorn.run(), or the equivalent method of some other ASGI server\u2014that is, the server will also process requests to such endpoints concurrently/asynchronously, as long as there is an await call to non-blocking operations inside such async def endpoints; usually, these are I/O-bound operations, such as waiting for (1) data from the client to be sent through the network, (2) contents of a file in the disk to be read, and (3) a database operation to finish.
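(Editor's aside, not part of the original answer: a hypothetical minimal app to observe the behaviour described above; the endpoint names are made up. A def endpoint should report a worker thread from the external threadpool, while an async def endpoint should report the event loop's thread.)
```python
import threading
from fastapi import FastAPI

app = FastAPI()

@app.get("/sync")
def sync_endpoint():
    # def endpoints are executed in a thread taken from the external threadpool
    return {"thread": threading.current_thread().name}

@app.get("/async")
async def async_endpoint():
    # async def endpoints run directly on the event loop's thread
    return {"thread": threading.current_thread().name}
```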
However, if an endpoint defined with async def does not await for some coroutine inside (i.e., a coroutine object is the result of calling an async def function), in order to give up time for other tasks in the event loop to run (e.g., requests to the same or other endpoints, background tasks, etc.), each request to such an endpoint will have to be completely finished (i.e., exit the endpoint), before returning control back to the event loop and allowing other tasks in the event loop to run (see this answer, if you would like to monitor all pending tasks in an event loop). In other words, in such cases, the server would be \"blocked\", and hence, any requests would be processed sequentially. Having said that, you should still consider defining an endpoint with async def, if it doesn't execute any blocking operations inside that has to wait for them to respond (e.g., time.sleep()), but is instead used to return simple JSON data, a simple HTMLResponse or even a FileResponse (in which case the file data will be read asynchronously and in chunks regardless, using await anyio.open_file(), as shown in FileResponse), even if there is not an await call inside the endpoint in such cases, as FastAPI would likely perform better, when running such a simple endpoint directly in the event loop instead of a separate thread. If, however, you had to return some complex and large JSON data, either encoding them on your own within the endpoint, as shown in the linked answer earlier, or using Starlette's JSONResponse or FastAPI's ORJSONResponse/UJSONResponse (see this related answer as well), which, all these classes, would encode the data in a synchronous way, using json.dumps() and orjson.dumps()/ujson.dumps() respectively, in that case, you might consider having the endpoint defined with normal def (related answers could be found here and here). Alternatively, you could keep using an async def endpoint, but have such blocking operations inside (e.g., orjson.dumps() or df.to_json()) run in a separate thread/process, as described in the solutions provided later on (it would be a good practice to perform benchmark tests, similar to this, and compare the results to find the best-performing approach in your case). Note that the same concept not only applies to endpoints, but also to functions that are used as StreamingResponse generators (see StreamingResponse class implementation) or Background Tasks (see BackgroundTask class implementation and this answer), as well as Dependencies. That means FastAPI, behind the scenes, will also run such functions defined with normal def in a separate thread from the same external threadpool; whereas, if such functions were defined with async def, they would run directly in the event loop. In order to run an endpoint or a function described above in a separate thread and await it, FastAPI uses Starlette's asynchronous run_in_threadpool() function, which, under the hood, calls anyio.to_thread.run_sync(). The default number of worker threads of that external threadpool is 40 and can be adjusted as required (have a look at this answer for more details). Hence, after reading this answer to the end, you should be able to know when to define a FastAPI endpoint/StreamingResponse generator/BackgroundTask/Dependency with def or async def, as well as whether or not you should increase the number of threads of the external threadpool. Python's async def function and await The keyword await (which only works within an async def function) passes function control back to the event loop. 
In other words, it suspends the execution of the surrounding coroutine, and tells the event loop to let some other task run, until that awaited task is completed. Note that just because you may define a custom function with async def and then await it inside your async def endpoint, it doesn't mean that your code will work asynchronously, if that custom function contains, for example, calls to time.sleep(), CPU-bound tasks, non-async IO libraries, or any other blocking call that is incompatible with asynchronous Python code. In FastAPI, for example, when using the async methods of UploadFile, such as await file.read() and await file.close(), FastAPI/Starlette, behind the scenes, actually calls the corresponding synchronous File methods in a separate thread from the external threadpool described earlier (using run_in_threadpool()) and awaits it; otherwise, such methods/operations would block the event loop\u2014you could find out more by looking at the implementation of the UploadFile class. Note that async does not mean parallel, but concurrently. As mentioned earlier, asynchronous code with async and await is many times summarized as using coroutines. Coroutines are cooperative, meaning that at any given time, a program with coroutines is running only one of its coroutines, and this running coroutine suspends its execution only when it explicitly requests to be suspended. As described here: Specifically, whenever execution of a currently-running coroutine reaches an await expression, the coroutine may be suspended, and another previously-suspended coroutine may resume execution if what it was suspended on has since returned a value. Suspension can also happen when an async for block requests the next value from an asynchronous iterator or when an async with block is entered or exited, as these operations use await under the hood. If, however, a blocking I/O-bound or CPU-bound task was directly executed inside an async def function/endpoint, it would then block the event loop, and hence, the main thread would be blocked as well. Hence, a blocking operation such as time.sleep() in an async def endpoint would block the entire server (as in the example provided in your question). Thus, if your endpoint is not going to make any async calls, you could declare it with normal def instead, in which case, FastAPI would run it in a separate thread from the external threadpool and await it, as explained earlier (more solutions are given in the following sections). Example: @app.get(\"/ping\") def ping(request: Request): #print(request.client) print(\"Hello\") time.sleep(5) print(\"bye\") return \"pong\" Otherwise, if the functions that you had to execute inside the endpoint are async functions that you had to await, you should define your endpoint with async def. To demonstrate this, the example below uses asyncio.sleep(), which provides a non-blocking sleep operation. Calling it will suspend the execution of the surrounding coroutine (until the sleep operation is completed), thus allowing other tasks in the event loop to run. 
import asyncio @app.get(\"/ping\") async def ping(request: Request): #print(request.client) print(\"Hello\") await asyncio.sleep(5) print(\"bye\") return \"pong\" Both the endpoints above will print out the specified messages to the screen in the same order as mentioned in your question\u2014if two requests arrived at (around) the same time\u2014that is: Hello Hello bye bye Important Note When using a web browser to call the same endpoint for the second (third, and so on) time, please remember to do that from a tab that is isolated from the browser's main session; otherwise, succeeding requests (i.e., coming after the first one) might be blocked by the browser (i.e., on client side), as the browser might be waiting for a response to the previous request from the server, before sending the next request. This is a common behaviour for the Chrome web browser at least, due to waiting to see the result of a request and check if the result can be cached, before requesting the same resource again (Also, note that every browser has a specific limit for parallel connections to a given hostname). You could confirm that by using print(request.client) inside the endpoint, where you would see that the hostname and port number are the same for all incoming requests\u2014in case the requests were initiated from tabs opened in the same browser window/session; otherwise, the port number would normally be different for every request\u2014and hence, those requests would be processed sequentially by the server, because of the browser sending them sequentially in the first place. To overcome this, you could either: Reload the same tab (as is running), or Open a new tab in an (isolated) Incognito Window, or Use a different web browser/client to send the request, or Use the httpx library to make asynchronous HTTP requests, along with the awaitable asyncio.gather(), which allows executing multiple asynchronous operations concurrently and then returns a list of results in the same order the awaitables (tasks) were passed to that function (have a look at this answer for more details). Example: import httpx import asyncio URLS = ['http://127.0.0.1:8000/ping'] * 2 async def send(url, client): return await client.get(url, timeout=10) async def main(): async with httpx.AsyncClient() as client: tasks = [send(url, client) for url in URLS] responses = await asyncio.gather(*tasks) print(*[r.json() for r in responses], sep='\\n') asyncio.run(main()) In case you had to call different endpoints that may take different time to process a request, and you would like to print the response out on client side as soon as it is returned from the server\u2014instead of waiting for asyncio.gather() to gather the results of all tasks and print them out in the same order the tasks were passed to the send() function\u2014you could replace the send() function of the example above with the one shown below: async def send(url, client): res = await client.get(url, timeout=10) print(res.json()) return res Python's GIL and Blocking Operations inside Threads Simply put, the Global Interpreter Lock (GIL) is a mutex (lock), ensuring that only one thread (per process) can hold the control of the Python interpreter (and run Python bytecode) at any point in time. One might wonder that if a blocking operation inside a thread, such as time.sleep() within a def endpoint, blocks the calling thread, how is the GIL released, so that other threads get a chance to execute? 
The answer is because time.sleep() is not really a CPU-bound operation, but it \"suspends execution of the calling thread for the given number of seconds\"; hence, the thread is switched out of the CPU for x seconds, allowing other threads to switch in for execution. In other words, it does block the calling thread, but the calling process is still alive, so that other threads can still run within the process (obviously, in a single-threaded application, everything would be blocked). The state of the thread is stored, so that it can be restored and resume execution at a later point. That process of the CPU jumping from one thread of execution to another is called context switching. Even if a CPU-bound operation (or an I/O-bound one that wouldn't voluntarily release the GIL) was executed inside a thread, and the GIL hadn't been released after 5ms (or some other configurable interval), Python would (automatically) tell the currently running thread to release the GIL. To find the default thread switch interval, use: import sys print(sys.getswitchinterval()) # 0.005 However, this automatic GIL release is best-effort, not guaranteed\u2014see this answer. Async/await and Blocking I/O-bound or CPU-bound Operations You should always aim at using async code, as it uses a single thread to execute tasks (i.e., there is only one thread that can take a lock on the interpreter), and thus, it can be more efficient than threading in I/O-bound scenarios, as it avoids the additional overhead of context switching. If, however, you are required to define a FastAPI endpoint (or a StreamingResponse generator, a background task, etc.) with async def (as you might need to await for coroutines inside), but also have to run some synchronous I/O-bound or CPU-bound task that would block the event loop (essentially, the entire server) and wouldn't let other requests to go through, for example: @app.post(\"/ping\") async def ping(file: UploadFile = File(...)): print(\"Hello\") try: contents = await file.read() res = cpu_bound_task(contents) # this would block the event loop finally: await file.close() print(\"bye\") return \"pong\" then: You should check whether you could change your endpoint's definition to normal def instead of async def. One way, if the only method in your endpoint that had to be awaited was the one reading the file contents would be to declare the file contents parameter as bytes, i.e., contents: bytes = File(). Using that definition, FastAPI would read the file for you and you would receive the contents as bytes. Hence, there would be no need to use an async def endpoint with await file.read() inside. Please note that this approach (i.e., using contents: bytes = File()) should work fine for small files; however, for larger files, and always depending on your server's resources, this might cause issues, as the enitre file contents would be stored to memory (see the documentation on File Parameters). Hence, if your system does not have enough RAM available to accommodate the accumulated data, your application may end up crashing\u2014if, for instance, you have 8GB of RAM (the available RAM will always be less than the amount installed on your device, as other apps/services will be using it as well), you can't load a 50GB file. Alternatively, you could use file: UploadFile = File(...) definition in your endpoint, but this time call the synchronous .read() method of the SpooledTemporaryFile directly, which can be accessed through the .file attribute of the UploadFile object. 
In this way, you will be able to declare your endpoint with a normal def instead, and hence, each request will run in a separate thread from the external threadpool and then be awaited (as explained earlier). An example is given below. For more details on how to upload a File, as well as how FastAPI/Starlette uses the SpooledTemporaryFile behind the scenes when uploading a File, please have a look at this answer and this answer. @app.post(\"/ping\") def ping(file: UploadFile = File(...)): print(\"Hello\") try: contents = file.file.read() res = cpu_bound_task(contents) finally: file.file.close() print(\"bye\") return \"pong\" Another way, when you would like to have the endpoint defined with a normal def, as you might need to run blocking operations inside and would like to have it run in a separate thread instead of calling it directly in the event loop, but at the same time you would have to await for coroutines inside, is to await such coroutines within an async dependency instead, as demonstrated in this answer, which will then return the result to the def endpoint. Use FastAPI's (Starlette's) run_in_threadpool() function from the concurrency module\u2014as @tiangolo suggested\u2014which, as noted earlier, will run the function in a separate thread from an external threadpool to ensure that the main thread (where coroutines are run) does not get blocked. The run_in_threadpool() is an awaitable function, where its first parameter is a normal function, and the following parameters are passed to that function directly. It supports both positional and keyword arguments. from fastapi.concurrency import run_in_threadpool res = await run_in_threadpool(cpu_bound_task, contents) Alternatively, use asyncio's loop.run_in_executor()\u2014after obtaining the running event loop using asyncio.get_running_loop()\u2014to run the task, which, in this case, you can await for it to complete and return the result(s), before moving on to the next line of code. Passing None to the executor argument, the default executor will be used, which is a ThreadPoolExecutor: import asyncio loop = asyncio.get_running_loop() res = await loop.run_in_executor(None, cpu_bound_task, contents) or, if you would like to pass keyword arguments instead, you could use a lambda expression (e.g., lambda: cpu_bound_task(some_arg=contents)), or, preferably, functools.partial(), which is specifically recommended in the documentation for loop.run_in_executor(): import asyncio from functools import partial loop = asyncio.get_running_loop() res = await loop.run_in_executor(None, partial(cpu_bound_task, some_arg=contents)) In Python 3.9+, you could also use asyncio.to_thread() to asynchronously run a synchronous function in a separate thread\u2014which, essentially, uses await loop.run_in_executor(None, func_call) under the hood, as can be seen in the implementation of asyncio.to_thread(). The to_thread() function takes a blocking function to execute, as well as any arguments (*args and/or **kwargs) to the function, and then returns a coroutine that can be awaited. Example: import asyncio res = await asyncio.to_thread(cpu_bound_task, contents) Note that as explained in this answer, passing None to the executor argument does not create a new ThreadPoolExecutor every time you call await loop.run_in_executor(None, ...), but instead re-uses the default executor with the default number of worker threads (i.e., min(32, os.cpu_count() + 4)). Thus, depending on the requirements of your application, that number might not be enough. 
In that case, you should rather use a custom ThreadPoolExecutor. For instance: import asyncio import concurrent.futures loop = asyncio.get_running_loop() with concurrent.futures.ThreadPoolExecutor() as pool: res = await loop.run_in_executor(pool, cpu_bound_task, contents) I would strongly recommend having a look at the linked answer above to learn about the difference between using run_in_threadpool() and run_in_executor(), as well as how to create a re-usable custom ThreadPoolExecutor at the application startup, and adjust the number of maximum worker threads as needed. ThreadPoolExecutor will successfully prevent the event loop from being blocked (and should be preferred for calling blocking I/O-bound tasks), but won't give you the performance improvement you would expect from running code in parallel; especially when one needs to perform CPU-bound tasks, such as audio or image processing and machine learning (see here). It is thus preferable to run CPU-bound tasks in a separate process\u2014using ProcessPoolExecutor, as shown below\u2014which, again, you can integrate with asyncio, in order to await it to finish its work and return the result(s). As described here, it is important to protect the entry point of the program to avoid recursive spawning of subprocesses, etc. Basically, your code must be under if __name__ == '__main__'. import concurrent.futures loop = asyncio.get_running_loop() with concurrent.futures.ProcessPoolExecutor() as pool: res = await loop.run_in_executor(pool, cpu_bound_task, contents) Again, I'd suggest having a look at the linked answer earlier on how to create a re-usable ProcessPoolExecutor at application startup\u2014you should find this answer helpful as well. More solutions, as shown in this answer, include using asyncio.create_task() (if your task is actually async def, but you wouldn't like to await for it to complete) or background tasks, as well as spawning a new thread (using threading) or process (using multiprocessing) in the background instead of using concurrent.futures. Moreover, if you had to perform some heavy background computation task that wouldn't necessarily have to be run by the same process (for example, you don't need to share memory, variables, etc.), you could also benefit from using other bigger tools like Celery. Using apscheduler, as demonstrated in this answer, might be another option as well\u2014always choose what suits you best. Use more server workers to take advantage of multi-core CPUs, in order to run multiple processes in parallel and be able to serve more requests. For example, uvicorn main:app --workers 4. When using 1 worker, only one process is run. When using multiple workers, this will spawn multiple processes (all single threaded). Each process has a separate GIL, as well as its own event loop, which runs in the main thread of each process and executes all tasks in its thread. That means there is only one thread that can take a lock on the interpreter of each process; unless, of course, you employ additional threads, either outside or inside the event loop, e.g., when using run_in_threadpool, a custom ThreadPoolExecutor or defining endpoints/StreamingResponse generators/background tasks/dependencies with normal def instead of async def, as well as when calling UploadFile's methods (see the first two paragraphs of this answer for more details). Note that each worker \"has its own things, variables and memory\". This means that global variables/objects, etc., won't be shared across the processes/workers. 
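As a minimal, hypothetical sketch of that behaviour (the /counter endpoint and the hits variable below are illustrative names only, not part of the original answer), a module-level counter incremented on each request would hold a different value in every worker process: from fastapi import FastAPI app = FastAPI() hits = 0 # each worker process keeps its own copy of this variable @app.get('/counter') def counter(): global hits hits += 1 # increments only the copy owned by the worker serving this request return {'hits': hits} Running this with, e.g., uvicorn main:app --workers 4 and sending several requests, the returned count may appear to jump back and forth, as each request might be served by a different worker holding its own hits value. 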
In this case, you should consider using a database storage, or Key-Value stores (Caches), as described here and here. Additionally, note that \"if you are consuming a large amount of memory in your code, each process will consume an equivalent amount of memory\"."} +{"question_id": 12613797, "score": 117, "creation_date": 1348717722, "tags": ["python", "ftp", "upload"], "instruction": "Python Script Uploading files via FTP\n\nI would like to make a script to upload a file to FTP. How would the login system work? I'm looking for something like this: ftp.login=(mylogin) ftp.pass=(mypass) And any other sign in credentials.", "output": "Use ftplib, you can write it like this: import ftplib session = ftplib.FTP('server.address.com','USERNAME','PASSWORD') file = open('kitten.jpg','rb') # file to send session.storbinary('STOR kitten.jpg', file) # send the file file.close() # close file and FTP session.quit() Use ftplib.FTP_TLS instead if your FTP host requires TLS. Python 2 To retrieve it, you can use urllib.urlretrieve: import urllib urllib.urlretrieve('ftp://server/path/to/file', 'file') Python 3 import urllib.request urllib.request.urlretrieve('ftp://server/path/to/file', 'file') EDIT: To find out the current directory, use FTP.pwd(): FTP.pwd(): Return the pathname of the current directory on the server. To change the directory, use FTP.cwd(pathname): FTP.cwd(pathname): Set the current directory on the server."} +{"question_id": 490195, "score": 117, "creation_date": 1233192948, "tags": ["python", "pdf"], "instruction": "split a multi-page pdf file into multiple pdf files with python?\n\nI would like to take a multi-page pdf file and create separate pdf files per page. I have downloaded reportlab and have browsed the documentation, but it seems aimed at pdf generation. I haven't yet seen anything about processing PDF files themselves. Is there an easy way to do this in python?", "output": "Updated solution for the latest release of PyPDF (3.0.0) and to split a range of pages. from PyPDF2 import PdfReader, PdfWriter file_name = r'c:\\temp\\junk.pdf' pages = (121, 130) reader = PdfReader(file_name) writer = PdfWriter() page_range = range(pages[0], pages[1] + 1) for page_num, page in enumerate(reader.pages, 1): if page_num in page_range: writer.add_page(page) with open(f'{file_name}_page_{pages[0]}-{pages[1]}.pdf', 'wb') as out: writer.write(out)"} +{"question_id": 3929433, "score": 117, "creation_date": 1287020076, "tags": ["python", "ruby"], "instruction": "Does Python have an \"or equals\" function like ||= in Ruby?\n\nIf not, what is the best way to do this? Right now I'm doing (for a django project): if not 'thing_for_purpose' in request.session: request.session['thing_for_purpose'] = 5 but it's pretty awkward. In Ruby it would be: request.session['thing_for_purpose'] ||= 5 which is much nicer.", "output": "Jon-Eric's answer is good for dicts, but the title seeks a general equivalent to ruby's ||= operator. A common way to do something like ||= in Python is x = x or new_value"} +{"question_id": 8905501, "score": 117, "creation_date": 1326863591, "tags": ["python", "numpy", "machine-learning"], "instruction": "Extract upper or lower triangular part of a numpy matrix\n\nI have a matrix A and I want 2 matrices U and L such that U contains the upper triangular elements of A (all elements above and not including diagonal) and similarly for L (all elements below and not including diagonal). Is there a numpy method to do this? 
e.g. A = array([[ 4., 9., -3.], [ 2., 4., -2.], [-2., -3., 7.]]) U = array([[ 0., 9., -3.], [ 0., 0., -2.], [ 0., 0., 0.]]) L = array([[ 0., 0., 0.], [ 2., 0., 0.], [-2., -3., 0.]])", "output": "Try numpy.triu (triangle-upper) and numpy.tril (triangle-lower). Code example: np.triu([[1,2,3],[4,5,6],[7,8,9],[10,11,12]], -1) array([[ 1, 2, 3], [ 4, 5, 6], [ 0, 8, 9], [ 0, 0, 12]])"} +{"question_id": 21596281, "score": 117, "creation_date": 1391670521, "tags": ["python", "image", "opencv", "image-processing", "computer-vision"], "instruction": "How does one convert a grayscale image to RGB in OpenCV (Python)?\n\nI'm learning image processing using OpenCV for a realtime application. I did some thresholding on an image and want to label the contours in green, but they aren't showing up in green because my image is in black and white. Early in the program I used gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) to convert from RGB to grayscale, but to go back I'm confused, and the function backtorgb = cv2.cvtColor(gray,cv2.CV_GRAY2RGB) is giving: AttributeError: 'module' object has no attribute 'CV_GRAY2RGB'. The code below does not appear to be drawing contours in green. Is this because it's a grayscale image? If so, can I convert the grayscale image back to RGB to visualize the contours in green? import numpy as np import cv2 import time cap = cv2.VideoCapture(0) while(cap.isOpened()): ret, frame = cap.read() gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) ret, gb = cv2.threshold(gray,128,255,cv2.THRESH_BINARY) gb = cv2.bitwise_not(gb) contour,hier = cv2.findContours(gb,cv2.RETR_CCOMP,cv2.CHAIN_APPROX_SIMPLE) for cnt in contour: cv2.drawContours(gb,[cnt],0,255,-1) gray = cv2.bitwise_not(gb) cv2.drawContours(gray,contour,-1,(0,255,0),3) cv2.imshow('test', gray) if cv2.waitKey(1) & 0xFF == ord('q'): break cap.release() cv2.destroyAllWindows()", "output": "I am promoting my comment to an answer: The easy way is: You could draw in the original 'frame' itself instead of using gray image. The hard way (method you were trying to implement): backtorgb = cv2.cvtColor(gray,cv2.COLOR_GRAY2RGB) is the correct syntax."} +{"question_id": 22798934, "score": 117, "creation_date": 1396395464, "tags": ["python", "pandas", "stata", "reshape"], "instruction": "Pandas long to wide reshape, by two variables\n\nI have data in long format and am trying to reshape to wide, but there doesn't seem to be a straightforward way to do this using melt/stack/unstack: Salesman Height product price Knut 6 bat 5 Knut 6 ball 1 Knut 6 wand 3 Steve 5 pen 2 Becomes: Salesman Height product_1 price_1 product_2 price_2 product_3 price_3 Knut 6 bat 5 ball 1 wand 3 Steve 5 pen 2 NA NA NA NA I think Stata can do something like this with the reshape command.", "output": "Here's another solution more fleshed out, taken from Chris Albon's site. 
Create \"long\" dataframe raw_data = { 'patient': [1, 1, 1, 2, 2], 'obs': [1, 2, 3, 1, 2], 'treatment': [0, 1, 0, 1, 0], 'score': [6252, 24243, 2345, 2342, 23525]} df = pd.DataFrame(raw_data, columns=['patient', 'obs', 'treatment', 'score']) patient obs treatment score 0 1 1 0 6252 1 1 2 1 24243 2 1 3 0 2345 3 2 1 1 2342 4 2 2 0 23525 Make a \"wide\" data df.pivot(index='patient', columns='obs', values='score') obs 1 2 3 patient 1 6252.0 24243.0 2345.0 2 2342.0 23525.0 NaN"} +{"question_id": 25862026, "score": 117, "creation_date": 1410849127, "tags": ["python", "matplotlib", "subplot", "axis-labels", "unset"], "instruction": "Turn off axes in subplots\n\nI have the following code: import matplotlib.pyplot as plt import matplotlib.image as mpimg import matplotlib.cm as cm img = mpimg.imread(\"lena.jpg\") fig, axs = plt.subplots(2, 2) axs[0,0].imshow(img, cmap = cm.Greys_r) axs[0,0].set_title(\"Rank = 512\") rank = 128 new_img = prune_matrix(rank, img) axs[0,1].imshow(new_img, cmap = cm.Greys_r) axs[0,1].set_title(\"Rank = %s\" %rank) rank = 32 new_img = prune_matrix(rank, img) axs[1,0].imshow(new_img, cmap = cm.Greys_r) axs[1,0].set_title(\"Rank = %s\" %rank) rank = 16 new_img = prune_matrix(rank, img) axs[1,1].imshow(new_img, cmap = cm.Greys_r) axs[1,1].set_title(\"Rank = %s\" %rank) plt.show() However, the result is pretty ugly because of the values on the axes: How can I turn off axes values for all subplots simultaneously? How to remove axis, legends, and white padding doesn't work because I don't know how to make it work with subplots.", "output": "You can turn the Axes off by following the advice in Veedrac's comment (linking to here) with one small modification. Rather than using plt.axis('off'), use ax.axis('off') where ax is a matplotlib.axes object. To do this, index each Axes, axs[0, 0].axis('off'), and so on for each subplot. See Native Matplotlib interfaces for the difference between pyplot and Axes. The code below shows the result without the prune_matrix, which is not available. 
import matplotlib.pyplot as plt import matplotlib.image as mpimg import matplotlib.cm as cm import matplotlib.cbook as cbook # used for matplotlib sample image # load readily available sample image with cbook.get_sample_data('grace_hopper.jpg') as image_file: img = plt.imread(image_file) # read a local file # img = mpimg.imread(\"file.jpg\") fig, axs = plt.subplots(nrows=2, ncols=2, figsize=(8, 8), tight_layout=True) axs[0, 0].imshow(img, cmap=cm.Greys_r) axs[0, 0].set_title(\"Rank = 512\") axs[0, 0].axis(\"off\") axs[0, 1].imshow(img, cmap=cm.Greys_r) axs[0, 1].set_title(\"Rank = %s\" % 128) axs[0, 1].axis(\"off\") axs[1, 0].imshow(img, cmap=cm.Greys_r) axs[1, 0].set_title(\"Rank = %s\" % 32) axs[1, 0].axis(\"off\") axs[1, 1].imshow(img, cmap=cm.Greys_r) axs[1, 1].set_title(\"Rank = %s\" % 16) axs[1, 1].axis(\"off\") plt.show() Note: To turn off only the x or y axis you can use set_visible() e.g.: axs[0, 0].xaxis.set_visible(False) # Hide only x axis Iterative approach fig, axs = plt.subplots(nrows=2, ncols=2, figsize=(8, 8), tight_layout=True) # convert the 2d array to 1d, which removes the need to iterate through i and j axs = axs.flat ranks = [512, 128, 32, 16] # iterate through each Axes with the associated rank for ax, rank in zip(axs, ranks): ax.imshow(img, cmap=cm.Greys_r) ax.set_title(f'Rank = {rank}') ax.axis('off') plt.show()"} +{"question_id": 13252333, "score": 117, "creation_date": 1352209460, "tags": ["python", "python-3.x", "list", "types"], "instruction": "Check if all elements of a list are of the same type\n\nHow can I check if the elements of a list are of the same type, without checking individually every element if possible? For example, I would like to have a function to check that every element of this list is an integer (which is clearly false): x = [1, 2.5, 'a'] def checkIntegers(x): # return True if all elements are integers, False otherwise", "output": "Try using all in conjunction with isinstance: all(isinstance(x, int) for x in lst) You can even check for multiple types with isinstance if that is desirable: all(isinstance(x, (int, long)) for x in lst) Note that this will pick up inherited classes as well. e.g.: class MyInt(int): pass print(isinstance(MyInt('3'),int)) #True If you need to restrict yourself to just integers, you could use all(type(x) is int for x in lst). But that is a VERY rare scenario. A fun function you could write with this is one which would return the type of the first element in a sequence if all the other elements are the same type: def homogeneous_type(seq): iseq = iter(seq) first_type = type(next(iseq)) return first_type if all( (type(x) is first_type) for x in iseq ) else False This will work for any arbitrary iterable, but it will consume \"iterators\" in the process. Another fun function in the same vein which returns the set of common bases: import inspect def common_bases(seq): iseq = iter(seq) bases = set(inspect.getmro(type(next(iseq)))) for item in iseq: bases = bases.intersection(inspect.getmro(type(item))) if not bases: break return bases"} +{"question_id": 35066588, "score": 117, "creation_date": 1453997989, "tags": ["python", "datetime"], "instruction": "Is there a simple way to increment a datetime object one month in Python?\n\nSo I am trying to find a way to increment a datetime object by one month. However, it seems this is not so simple, according to this question. I was hoping for something like: import datetime as dt now = dt.datetime.now() later = now + dt.timedelta(months=1) But that doesn't work. 
I was also hoping to be able to go to the same day (or the closest alternative) in the next month if possible. For example, a datetime object set at January 1st would increment to Feb 1st whereas a datetime object set at February 28th would increment to March 31st as opposed to March 28th or something. To be clear, February 28th would (typically) map to March 31st because it is the last day of the month, and thus it should go to the last day of the month for the next month. Otherwise it would be a direct link: the increment should go to the day in the next month with the same numbered day. Is there a simple way to do this in the current release of Python?", "output": "Check out from dateutil.relativedelta import * for adding a specific amount of time to a date; you can continue to use timedelta for the simple stuff, i.e. import datetime from dateutil.relativedelta import * use_date = datetime.datetime.now() use_date = use_date + datetime.timedelta(minutes=+10) use_date = use_date + datetime.timedelta(hours=+1) use_date = use_date + datetime.timedelta(days=+1) use_date = use_date + datetime.timedelta(weeks=+1) or you can start using relativedelta use_date = use_date+relativedelta(months=+1) use_date = use_date+relativedelta(years=+1) for the last day of next month: use_date = use_date+relativedelta(months=+1) use_date = use_date+relativedelta(day=31) Right now this will provide 29/02/2016 for the penultimate day of next month: use_date = use_date+relativedelta(months=+1) use_date = use_date+relativedelta(day=31) use_date = use_date+relativedelta(days=-1) last Friday of the next month: use_date = use_date+relativedelta(months=+1, day=31, weekday=FR(-1)) 2nd Tuesday of next month: new_date = use_date+relativedelta(months=+1, day=1, weekday=TU(2)) As @mrroot5 points out dateutil's rrule functions can be applied, giving you an extra bang for your buck, if you require date occurrences. for example: Calculating the last day of the month for 9 months from the last day of last month. Then, calculate the 2nd Tuesday for each of those months. from dateutil.relativedelta import * from dateutil.rrule import * from datetime import datetime use_date = datetime(2020,11,21) #Calculate the last day of last month use_date = use_date+relativedelta(months=-1) use_date = use_date+relativedelta(day=31) #Generate a list of the last day for 9 months from the calculated date x = list(rrule(freq=MONTHLY, count=9, dtstart=use_date, bymonthday=(-1,))) print(\"Last day\") for ld in x: print(ld) #Generate a list of the 2nd Tuesday in each of the next 9 months from the calculated date print(\"\\n2nd Tuesday\") x = list(rrule(freq=MONTHLY, count=9, dtstart=use_date, byweekday=TU(2))) for tuesday in x: print(tuesday) Last day 2020-10-31 00:00:00 2020-11-30 00:00:00 2020-12-31 00:00:00 2021-01-31 00:00:00 2021-02-28 00:00:00 2021-03-31 00:00:00 2021-04-30 00:00:00 2021-05-31 00:00:00 2021-06-30 00:00:00 2nd Tuesday 2020-11-10 00:00:00 2020-12-08 00:00:00 2021-01-12 00:00:00 2021-02-09 00:00:00 2021-03-09 00:00:00 2021-04-13 00:00:00 2021-05-11 00:00:00 2021-06-08 00:00:00 2021-07-13 00:00:00 rrule could be used to find the next date occurring on a particular day. e.g. 
the next 1st of January occurring on a Monday (Given today is the 4th November 2021) from dateutil.relativedelta import * from dateutil.rrule import * from datetime import * year = rrule(YEARLY,dtstart=datetime.now(),bymonth=1,bymonthday=1,byweekday=MO)[0].year year 2024 or the next 5 x 1st of January's occurring on a Monday years = rrule(YEARLY,dtstart=datetime.now(),bymonth=1,bymonthday=1,byweekday=MO)[0:5] for i in years:print(i.year) ... 2024 2029 2035 2046 2052 The first Month next Year that starts on a Monday: >>> month = rrule(YEARLY,dtstart=datetime.date(2023, 1, 1),bymonthday=1,byweekday=MO)[0] >>> month.strftime('%Y-%m-%d : %B') '2023-05-01 : May' If you need the months that start on a Monday between 2 dates: months = rrule(YEARLY,dtstart=datetime.date(2025, 1, 1),until=datetime.date(2030, 1, 1),bymonthday=1,byweekday=MO) >>> for m in months: ... print(m.strftime('%Y-%m-%d : %B')) ... 2025-09-01 : September 2025-12-01 : December 2026-06-01 : June 2027-02-01 : February 2027-03-01 : March 2027-11-01 : November 2028-05-01 : May 2029-01-01 : January 2029-10-01 : October This is by no means an exhaustive list of what is available. Documentation is available here: https://dateutil.readthedocs.org/en/latest/"} +{"question_id": 12761806, "score": 117, "creation_date": 1349542530, "tags": ["python", "matplotlib", "legend"], "instruction": "How to place two different legends on the same graph\n\nI have a plot where different colors are used for different parameters, and where different line styles are used for different algorithms. The goal is to compare the results of the different algorithms performed with similar parameters. It means in total I use 4 different colors, and 3 different line styles, for a total of 12 plots on the same graph. I actually build the legend based on colors, associating each color with the corresponding parameter. Now I'd like to display a second legend on the same graph, with the meaning of each line style. It is possible to achieve that? How? Here is what my code looks like actually: colors = ['b', 'r', 'g', 'c'] cc = cycle(c) for p in parameters: d1 = algo1(p) d2 = algo2(p) d3 = algo3(p) pyplot.hold(True) c = next(cc) pyplot.plot(d1, '-', color=c, label=\"d1\") pyplot.plot(d1, '--', color=c) pyplot.plot(d2, '.-', color=c) pyplot.legend()", "output": "There's a section in the matplotlib documentation on that exact subject. Here's code for your specific example: import itertools from matplotlib import pyplot colors = ['b', 'r', 'g', 'c'] cc = itertools.cycle(colors) plot_lines = [] for p in parameters: d1 = algo1(p) d2 = algo2(p) d3 = algo3(p) pyplot.hold(True) c = next(cc) l1, = pyplot.plot(d1, '-', color=c) l2, = pyplot.plot(d2, '--', color=c) l3, = pyplot.plot(d3, '.-', color=c) plot_lines.append([l1, l2, l3]) legend1 = pyplot.legend(plot_lines[0], [\"algo1\", \"algo2\", \"algo3\"], loc=1) pyplot.legend([l[0] for l in plot_lines], parameters, loc=4) pyplot.gca().add_artist(legend1) Here's an example of its output:"} +{"question_id": 38985053, "score": 117, "creation_date": 1471384314, "tags": ["python", "pandas", "dataframe", "group-by"], "instruction": "Groupby and sum only one column\n\nSo I have a dataframe, df1, that looks like the following: A B C 1 foo 12 California 2 foo 22 California 3 bar 8 Rhode Island 4 bar 32 Rhode Island 5 baz 15 Ohio 6 baz 26 Ohio I want to group by column A and then sum column B while keeping the value in column C. 
Something like this: A B C 1 foo 34 California 2 bar 40 Rhode Island 3 baz 41 Ohio The issue is, when I say df.groupby('A').sum() column C gets removed, returning B A bar 40 baz 41 foo 34 How can I get around this and keep column C when I group and sum?", "output": "The only way to do this would be to include C in your groupby (the groupby function can accept a list). Give this a try: df.groupby(['A','C'])['B'].sum() One other thing to note, if you need to work with df after the aggregation you can also use the as_index=False option to return a dataframe object. This one gave me problems when I was first working with Pandas. Example: df.groupby(['A','C'], as_index=False)['B'].sum()"} +{"question_id": 8161617, "score": 117, "creation_date": 1321499166, "tags": ["python", "buildout"], "instruction": "How can I specify library versions in setup.py?\n\nIn my setup.py file, I've specified a few libraries needed to run my project: setup( # ... install_requires = [ 'django-pipeline', 'south' ] ) How can I specify required versions of these libraries?", "output": "I'm not sure about buildout, however, for setuptools/distribute, you specify version info with the comparison operators (like ==, >=, or <=). For example: install_requires = ['django-pipeline==1.1.22', 'south>=0.7'] See the Python packaging documentation"} +{"question_id": 52771328, "score": 116, "creation_date": 1539311143, "tags": ["python", "jupyter-notebook", "plotly", "jupyter", "jupyter-lab"], "instruction": "Plotly chart not showing in Jupyter notebook\n\nI have been trying to solve this issue for hours. I followed the steps on the Plotly website and the chart still doesn't show in the notebook. This is my code for the plot: colorway = ['#f3cec9', '#e7a4b6', '#cd7eaf', '#a262a9', '#6f4d96', '#3d3b72', '#182844'] data = [ go.Scatter( x = immigration.columns, y = immigration.loc[state], name=state) for state in immigration.index] layout = go.Layout( title='Immigration', yaxis=dict(title='Immigration %'), xaxis=dict(title='Years'), colorway=colorway, font=dict(family='Courier New, monospace', size=18, color='#7f7f7f') ) fig = go.Figure(data=data, layout=layout) iplot(fig) And this is everything I have imported into my notebook: import pandas as pd import numpy as np import matplotlib.pyplot as plt import plotly.plotly as py import plotly.graph_objs as go from plotly.offline import init_notebook_mode, iplot init_notebook_mode(connected=True)", "output": "You need to change init_notebook_mode call and remove connected=True, if you want to work in offline mode. Such that: # Import the necessaries libraries import plotly.offline as pyo import plotly.graph_objs as go # Set notebook mode to work in offline pyo.init_notebook_mode() # Create traces trace0 = go.Scatter( x=[1, 2, 3, 4], y=[10, 15, 13, 17] ) trace1 = go.Scatter( x=[1, 2, 3, 4], y=[16, 5, 11, 9] ) # Fill out data with our traces data = [trace0, trace1] # Plot it and save as basic-line.html pyo.iplot(data, filename = 'basic-line') Output should be shown in your jupyter notebook:"} +{"question_id": 31181295, "score": 116, "creation_date": 1435830663, "tags": ["python", "pandas", "datetime"], "instruction": "converting a pandas date to week number\n\nI would like to extract a week number from data in a pandas dataframe. 
The date format is datetime64[ns] I have normalized the date to remove the time from it df['Date'] = df['Date'].apply(pd.datetools.normalize_date) so the date now looks like - 2015-06-17 in the data frame column and now I like to convert that to a week number.", "output": "Just access the week attribute of Series.dt.isocalendar(): Example: In [286]: df['Date'].dt.isocalendar().week Out[286]: 0 25 dtype: int64 In [287]: df['Week_Number'] = df['Date'].dt.isocalendar().week df Out[287]: Date Week_Number 0 2015-06-17 25"} +{"question_id": 31087111, "score": 116, "creation_date": 1435396616, "tags": ["python", "list", "shadowing"], "instruction": "Why does \"example = list(...)\" result in \"TypeError: 'list' object is not callable\"?\n\nI tried to use this code from a tutorial at the REPL: example = list('easyhoss') The tutorial says that example should become equal to a list ['e', 'a', 's', 'y', 'h', 'o', 's', 's']. But I got an error instead: >>> example = list('easyhoss') Traceback (most recent call last): File \"\", line 1, in TypeError: 'list' object is not callable Why did this happen?", "output": "Seems like you've shadowed the builtin name list, which points at a class, by the same name pointing at an instance of it. Here is an example: >>> example = list('easyhoss') # here `list` refers to the builtin class >>> list = list('abc') # we create a variable `list` referencing an instance of `list` >>> example = list('easyhoss') # here `list` refers to the instance Traceback (most recent call last): File \"\", line 1, in TypeError: 'list' object is not callable I believe this is fairly obvious. Python stores object names (functions and classes are objects, too) in namespaces (which are implemented as dictionaries), hence you can rewrite pretty much any name in any scope. It won't show up as an error of some sort. As you might know, Python emphasizes that \"special cases aren't special enough to break the rules\". And there are two major rules behind the problem you've faced: Namespaces. Python supports nested namespaces. Theoretically you can endlessly nest them. As I've already mentioned, they are basically dictionaries of names and references to corresponding objects. Any module you create gets its own \"global\" namespace, though in fact it's just a local namespace with respect to that particular module. Scoping. When you reference a name, the Python runtime looks it up in the local namespace (with respect to the reference) and, if such name does not exist, it repeats the attempt in a higher-level namespace. This process continues until there are no higher namespaces left. In that case you get a NameError. Builtin functions and classes reside in a special high-order namespace __builtins__. If you declare a variable named list in your module's global namespace, the interpreter will never search for that name in a higher-level namespace (that is __builtins__). Similarly, suppose you create a variable var inside a function in your module, and another variable var in the module. Then, if you reference var inside the function, you will never get the global var, because there is a var in the local namespace - the interpreter has no need to search it elsewhere. Here is a simple illustration. 
>>> example = list(\"abc\") # Works fine >>> >>> # Creating name \"list\" in the global namespace of the module >>> list = list(\"abc\") >>> >>> example = list(\"abc\") Traceback (most recent call last): File \"\", line 1, in TypeError: 'list' object is not callable >>> # Python looks for \"list\" and finds it in the global namespace, >>> # but it's not the proper \"list\". >>> >>> # Let's remove \"list\" from the global namespace >>> del list >>> # Since there is no \"list\" in the global namespace of the module, >>> # Python goes to a higher-level namespace to find the name. >>> example = list(\"abc\") # It works. So, as you see there is nothing special about Python builtins. And your case is a mere example of universal rules. You'd better use an IDE (e.g. a free version of PyCharm, or Atom with Python plugins) that highlights name shadowing to avoid such errors. You might as well be wondering what is a \"callable\", in which case you can read this post. list, being a class, is callable. Calling a class triggers instance construction and initialisation. An instance might as well be callable, but list instances are not. If you are even more puzzled by the distinction between classes and instances, then you might want to read the documentation (quite conveniently, the same page covers namespaces and scoping). If you want to know more about builtins, please read the answer by Christian Dean. P.S. When you start an interactive Python session, you create a temporary module."} +{"question_id": 5870188, "score": 116, "creation_date": 1304429282, "tags": ["python", "regex", "flask"], "instruction": "Does Flask support regular expressions in its URL routing?\n\nI understand that Flask has the int, float and path converters, but the application we're developing has more complex patterns in its URLs. Is there a way we can use regular expressions, as in Django?", "output": "Even though Armin beat me to the punch with an accepted answer I thought I'd show an abbreviated example of how I implemented a regex matcher in Flask just in case anyone wants a working example of how this could be done. from flask import Flask from werkzeug.routing import BaseConverter app = Flask(__name__) class RegexConverter(BaseConverter): def __init__(self, url_map, *items): super(RegexConverter, self).__init__(url_map) self.regex = items[0] app.url_map.converters['regex'] = RegexConverter @app.route('/-/') def example(uid, slug): return \"uid: %s, slug: %s\" % (uid, slug) if __name__ == '__main__': app.run(debug=True, host='0.0.0.0', port=5000) this URL should return with 200: http://localhost:5000/abc0-foo/ this URL will return with 404: http://localhost:5000/abcd-foo/"} +{"question_id": 23195321, "score": 116, "creation_date": 1398075375, "tags": ["python", "jinja2"], "instruction": "How to convert string to uppercase / lowercase in Jinja2?\n\nI am trying to convert to upper case a string in a Jinja template I am working on. In the template documentation, I read: upper(s) Convert a value to uppercase. 
So I wrote this code: {% if student.department == \"Academy\" %} Academy {% elif upper(student.department) != \"MATHS DEPARTMENT\" %} Maths department {% endif %} But I am getting this error: UndefinedError: 'upper' is undefined So, how do you convert a string to uppercase in Jinja2?", "output": "Filters are used with the |filter syntax: {% elif student.department|upper != \"MATHS DEPARTMENT\" %} Maths department {% endif %} or you can use the str.upper() method: {% elif student.department.upper() != \"MATHS DEPARTMENT\" %} Maths department {% endif %} Jinja syntax is Python-like, not actual Python."} +{"question_id": 16774849, "score": 116, "creation_date": 1369663154, "tags": ["python", "arrays", "numpy", "mean", "mean-square-error"], "instruction": "Mean Squared Error in Numpy?\n\nIs there a method in numpy for calculating the Mean Squared Error between two matrices? I've tried searching but found none. Is it under a different name? If there isn't, how do you overcome this? Do you write it yourself or use a different lib?", "output": "You can use: mse = ((A - B)**2).mean(axis=ax) Or mse = (np.square(A - B)).mean(axis=ax) with ax=0 the average is performed along the row, for each column, returning an array with ax=1 the average is performed along the column, for each row, returning an array with omitting the ax parameter (or setting it to ax=None) the average is performed element-wise along the array, returning a scalar value"} +{"question_id": 6150108, "score": 116, "creation_date": 1306487361, "tags": ["python", "html", "lxml", "pretty-print"], "instruction": "How to Pretty Print HTML to a file, with indentation\n\nI am using lxml.html to generate some HTML. I want to pretty print (with indentation) my final result into an html file. How do I do that? This is what I have tried and got till now import lxml.html as lh from lxml.html import builder as E sliderRoot=lh.Element(\"div\", E.CLASS(\"scroll\"), style=\"overflow-x: hidden; overflow-y: hidden;\") scrollContainer=lh.Element(\"div\", E.CLASS(\"scrollContainer\"), style=\"width: 4340px;\") sliderRoot.append(scrollContainer) print lh.tostring(sliderRoot, pretty_print = True, method=\"html\") As you can see I am using the pretty_print=True attribute. I thought that would give indented code, but it doesn't really help. This is the output :
", "output": "I ended up using BeautifulSoup directly. That is something lxml.html.soupparser uses for parsing HTML. BeautifulSoup has a prettify method that does exactly what it says it does. It prettifies the HTML with proper indents and everything. BeautifulSoup will NOT fix the HTML, so broken code, remains broken. But in this case, since the code is being generated by lxml, the HTML code should be at least semantically correct. In the example given in my question, I will have to do this : from bs4 import BeautifulSoup as bs root = lh.tostring(sliderRoot) #convert the generated HTML to a string soup = bs(root) #make BeautifulSoup prettyHTML = soup.prettify() #prettify the html"} +{"question_id": 2315520, "score": 116, "creation_date": 1266887982, "tags": ["python", "dictionary"], "instruction": "In Python, how do I loop through the dictionary and change the value if it equals something?\n\nIf the value is None, I'd like to change it to \"\" (empty string). I start off like this, but I forget: for k, v in mydict.items(): if v is None: ... right?", "output": "for k, v in mydict.iteritems(): if v is None: mydict[k] = '' In a more general case, e.g. if you were adding or removing keys, it might not be safe to change the structure of the container you're looping on -- so using items to loop on an independent list copy thereof might be prudent -- but assigning a different value at a given existing index does not incur any problem, so, in Python 2.any, it's better to use iteritems. In Python3 however the code gives AttributeError: 'dict' object has no attribute 'iteritems' error. Use items() instead of iteritems() here. Refer to this post."} +{"question_id": 20995196, "score": 116, "creation_date": 1389182784, "tags": ["python", "excel", "pandas", "conditional-statements", "aggregate"], "instruction": "Pandas counting and summing specific conditions\n\nAre there single functions in pandas to perform the equivalents of SUMIF, which sums over a specific condition and COUNTIF, which counts values of specific conditions from Excel? I know that there are many multiple step functions that can be used for For example for sumif I can use (df.map(lambda x: condition) or df.size()) then use .sum(), and for countif, I can use (groupby functions and look for my answer or use a filter and the .count()). Is there simple one step process to do these functions where you enter the condition and the dataframe and you get the sum or counted results?", "output": "You can first make a conditional selection, and sum up the results of the selection using the sum function. >> df = pd.DataFrame({'a': [1, 2, 3]}) >> df[df.a > 1].sum() a 5 dtype: int64 Having more than one condition: >> df[(df.a > 1) & (df.a < 3)].sum() a 2 dtype: int64 If you want to do COUNTIF, just replace sum() with count()"} +{"question_id": 30327417, "score": 116, "creation_date": 1432042901, "tags": ["python", "pandas", "random", "integer", "range"], "instruction": "Pandas: create new column in df with random integers from range\n\nI have a pandas data frame with 50k rows. I'm trying to add a new column that is a randomly generated integer from 1 to 5. If I want 50k random numbers I'd use: df1['randNumCol'] = random.sample(range(50000), len(df1)) but for this I'm not sure how to do it. 
Side note in R, I'd do: sample(1:5, 50000, replace = TRUE) Any suggestions?", "output": "One solution is to use numpy.random.randint: import numpy as np df1['randNumCol'] = np.random.randint(1, 6, df1.shape[0]) Or if the numbers are non-consecutive (albeit slower), you can use this: df1['randNumCol'] = np.random.choice([1, 9, 20], df1.shape[0]) In order to make the results reproducible you can set the seed with numpy.random.seed (e.g. np.random.seed(42))"} +{"question_id": 37246941, "score": 116, "creation_date": 1463374645, "tags": ["python", "matplotlib", "plot", "layer"], "instruction": "Specifying the order of layers\n\nSuppose I run the following script: import matplotlib.pyplot as plt lineWidth = 20 plt.figure() plt.plot([0,0],[-1,1], lw=lineWidth, c='b') plt.plot([-1,1],[-1,1], lw=lineWidth, c='r') plt.plot([-1,1],[1,-1], lw=lineWidth, c='g') plt.show() This produces the following: How can I specify the top-to-bottom order of the layers instead of having Python pick for me?", "output": "I don't know why zorder has that behavior and it's likely that might be a bug or, at the very least, a badly documented feature. It might be because there are already automatic references to zorder when you build a plot (like grid, axis, and so on...) and when you try to specify the zorder for elements you are somehow overlapping them. This is hypothetical in any case. For you to solve your problem just make the differences in zorder exaggerated. For instance instead of 0,1,2, make it 0,5,10: import matplotlib.pyplot as plt lineWidth = 20 plt.figure() plt.plot([0,0],[-1,1], lw=lineWidth, c='b',zorder=10) plt.plot([-1,1],[-1,1], lw=lineWidth, c='r',zorder=5) plt.plot([-1,1],[1,-1], lw=lineWidth, c='g',zorder=0) plt.show() Which results in this: For this plot I specified the opposite order shown in your question."} +{"question_id": 37997668, "score": 116, "creation_date": 1466701185, "tags": ["python", "pandas", "group-by", "rank"], "instruction": "Number rows within group in increasing order in a pandas dataframe\n\nGiven the following dataframe: import pandas as pd import numpy as np df = pd.DataFrame({'A': ['A','A','A','B','B','B'], 'B': ['a','a','b','a','a','a'], }) df A B 0 A a 1 A a 2 A b 3 B a 4 B a 5 B a I'd like to create column 'C', which numbers the rows within each group in columns A and B like this: A B C 0 A a 1 1 A a 2 2 A b 1 3 B a 1 4 B a 2 5 B a 3 I've tried this so far: df['C'] = df.groupby(['A','B'])['B'].transform('rank') ...but it doesn't work!", "output": "Use groupby/cumcount: In [25]: df['C'] = df.groupby(['A','B']).cumcount()+1; df Out[25]: A B C 0 A a 1 1 A a 2 2 A b 1 3 B a 1 4 B a 2 5 B a 3"} +{"question_id": 14302248, "score": 116, "creation_date": 1358069240, "tags": ["python", "odoo"], "instruction": "dictionary update sequence element #0 has length 3; 2 is required\n\nI want to add lines to the object account.bank.statement.line through other object But I get following error: \"dictionary update sequence element #0 has length 3; 2 is required\" Here is my code: def action_account_line_create(self, cr, uid, ids): res = False cash_id = self.pool.get('account.bank.statement.line') for exp in self.browse(cr, uid, ids): company_id = exp.company_id.id #statement_id = exp.statement_id.id lines = [] for l in exp.line_ids: lines.append((0, 0, { 'name': l.name, 'date': l.date, 'amount': l.amount, 'type': l.type, 'statement_id': exp.statement_id.id, 'account_id': l.account_id.id, 'account_analytic_id': l.analytic_account_id.id, 'ref': l.ref, 'note': l.note, 'company_id': 
l.company_id.id })) inv_id = cash_id.create(cr, uid, lines,context=None) res = inv_id return res I changed it to that, but then I ran into this error: File \"C:\\Program Files (x86)\\OpenERP 6.1-20121029-003136\\Server\\server\\.\\openerp\\workflow\\wkf_expr.py\", line 68, in execute File \"C:\\Program Files (x86)\\OpenERP 6.1-20121029-003136\\Server\\server\\.\\openerp\\workflow\\wkf_expr.py\", line 58, in _eval_expr File \"C:\\Program Files (x86)\\OpenERP 6.1-20121029-003136\\Server\\server\\.\\openerp\\tools\\safe_eval.py\", line 241, in safe_eval File \"C:\\Program Files (x86)\\OpenERP 6.1-20121029-003136\\Server\\server\\.\\openerp\\tools\\safe_eval.py\", line 108, in test_expr File \"\", line 0 ^ SyntaxError: unexpected EOF while parsing Code: def action_account_line_create(self, cr, uid, ids, context=None): res = False cash_id = self.pool.get('account.bank.statement.line') for exp in self.browse(cr, uid, ids): company_id = exp.company_id.id lines = [] for l in exp.line_ids: res = cash_id.create ( cr, uid, { 'name': l.name, 'date': l.date, 'amount': l.amount, 'type': l.type, 'statement_id': exp.statement_id.id, 'account_id': l.account_id.id, 'account_analytic_id': l.analytic_account_id.id, 'ref': l.ref, 'note': l.note, 'company_id': l.company_id.id }, context=None) return res", "output": "This error is raised because you are trying to update a dict object using a wrong sequence (list or tuple) structure. cash_id.create(cr, uid, lines, context=None) tries to convert lines into a dict object: (0, 0, { 'name': l.name, 'date': l.date, 'amount': l.amount, 'type': l.type, 'statement_id': exp.statement_id.id, 'account_id': l.account_id.id, 'account_analytic_id': l.analytic_account_id.id, 'ref': l.ref, 'note': l.note, 'company_id': l.company_id.id }) Remove the second zero from this tuple to properly convert it into a dict object. To test it yourself, try this in a Python shell: >>> l=[(0,0,{'h':88})] >>> a={} >>> a.update(l) Traceback (most recent call last): File \"\", line 1, in a.update(l) ValueError: dictionary update sequence element #0 has length 3; 2 is required >>> l=[(0,{'h':88})] >>> a.update(l)"} +{"question_id": 42636018, "score": 116, "creation_date": 1488836217, "tags": ["python", "delete-file"], "instruction": "Python: Difference between os.remove() and os.unlink() and which one to use?\n\nI have a number of files in a folder. I want to delete each file once it has been processed. What's the difference between using os.remove() and os.unlink? Which method is ideal for my scenario?", "output": "Note: When this question was originally asked, it had a python-2.7 tag, which has since been removed. See the comments of this answer for discussion on the changes made in Python 3. They are identical as described in the Python 2.7 documentation: os.remove(path): Remove (delete) the file path. If path is a directory, OSError is raised; see rmdir() below to remove a directory. This is identical to the unlink() function documented below. On Windows, attempting to remove a file that is in use causes an exception to be raised; on Unix, the directory entry is removed but the storage allocated to the file is not made available until the original file is no longer in use. Availability: Unix, Windows. os.unlink(path): Remove (delete) the file path. This is the same function as remove(); the unlink() name is its traditional Unix name. 
Availability: Unix, Windows."} +{"question_id": 46045956, "score": 116, "creation_date": 1504576372, "tags": ["python", "python-3.x", "multiprocessing", "threadpool", "python-multiprocessing"], "instruction": "What's the difference between ThreadPool vs Pool in the multiprocessing module?\n\nWhat's the difference between ThreadPool and Pool in the multiprocessing module? When I try my code out, this is the main difference I see: from multiprocessing import Pool import os, time print(\"hi outside of main()\") def hello(x): print(\"inside hello()\") print(\"Proccess id: \", os.getpid()) time.sleep(3) return x*x if __name__ == \"__main__\": p = Pool(5) pool_output = p.map(hello, range(3)) print(pool_output) I see the following output: hi outside of main() hi outside of main() hi outside of main() hi outside of main() hi outside of main() hi outside of main() inside hello() Proccess id: 13268 inside hello() Proccess id: 11104 inside hello() Proccess id: 13064 [0, 1, 4] With \"ThreadPool\": from multiprocessing.pool import ThreadPool import os, time print(\"hi outside of main()\") def hello(x): print(\"inside hello()\") print(\"Proccess id: \", os.getpid()) time.sleep(3) return x*x if __name__ == \"__main__\": p = ThreadPool(5) pool_output = p.map(hello, range(3)) print(pool_output) I see the following output: hi outside of main() inside hello() inside hello() Proccess id: 15204 Proccess id: 15204 inside hello() Proccess id: 15204 [0, 1, 4] My questions are: why is the \u201coutside __main__()\u201d run each time in the Pool? multiprocessing.pool.ThreadPool doesn't spawn new processes? It just creates new threads? If so, what's the difference between using multiprocessing.pool.ThreadPool as opposed to just the threading module? I don't see any official documentation for ThreadPool anywhere, can someone help me out where I can find it?", "output": "The multiprocessing.pool.ThreadPool behaves the same as the multiprocessing.Pool with the only difference that it uses threads instead of processes to run the workers' logic. The reason you see hi outside of main() being printed multiple times with the multiprocessing.Pool is due to the fact that the pool will spawn 5 independent processes. Each process will initialize its own Python interpreter and load the module resulting in the top level print being executed again. Note that this happens only if the spawn process creation method is used (only method available on Windows). If you use the fork one (Unix), you will see the message printed only once as for the threads. The multiprocessing.pool.ThreadPool is not documented as its implementation has never been completed. It lacks tests and documentation. You can see its implementation in the source code. I believe the next natural question is: when to use a thread based pool and when to use a process based one? The rule of thumb is: IO bound jobs -> multiprocessing.pool.ThreadPool CPU bound jobs -> multiprocessing.Pool Hybrid jobs -> depends on the workload, I usually prefer the multiprocessing.Pool due to the advantage process isolation brings On Python 3 you might want to take a look at the concurrent.futures.Executor pool implementations."} +{"question_id": 59661042, "score": 116, "creation_date": 1578562781, "tags": ["python", "python-3.x", "function", "parameter-passing", "function-parameter"], "instruction": "What do * (single star) and / (slash) do as independent parameters?\n\nIn the following function definition, what do the * and / account for? 
def func(self, param1, param2, /, param3, *, param4, param5): print(param1, param2, param3, param4, param5) NOTE: Not to be confused with the single|double asterisks in *args | **kwargs (solved here)", "output": "The function parameter syntax (/) indicates that some function parameters must be specified positionally and cannot be used as keyword arguments. (This is new in Python 3.8.) The documentation specifies some of the use cases/benefits of positional-only parameters: It allows pure Python functions to fully emulate behaviors of existing C coded functions. For example, the built-in pow() function does not accept keyword arguments: def pow(x, y, z=None, /): \"Emulate the built in pow() function\" r = x ** y return r if z is None else r%z Another use case is to preclude keyword arguments when the parameter name is not helpful. For example, the builtin len() function has the signature len(obj, /). This precludes awkward calls such as: len(obj='hello') # The \"obj\" keyword argument impairs readability A further benefit of marking a parameter as positional-only is that it allows the parameter name to be changed in the future without risk of breaking client code. For example, in the statistics module, the parameter name dist may be changed in the future. This was made possible with the following function specification: def quantiles(dist, /, *, n=4, method='exclusive') ... Whereas * is used to force the caller to use named arguments. The Django documentation contains a section which clearly explains a use case of named arguments. Form fields no longer accept optional arguments as positional arguments To help prevent runtime errors due to incorrect ordering of form field arguments, optional arguments of built-in form fields are no longer accepted as positional arguments. For example: forms.IntegerField(25, 10) raises an exception and should be replaced with: forms.IntegerField(max_value=25, min_value=10) Suppose we have a method called func: def func(self, param1, param2, /, param3, *, param4, param5): print(param1, param2, param3, param4, param5) It must be called with obj.func(10, 20, 30, param4=50, param5=60) OR obj.func(10, 20, param3=30, param4=50, param5=60) i.e., param1 and param2 must be specified positionally. param3 can be passed either as a positional or a keyword argument. param4 and param5 must be passed as keyword arguments. DEMO: >>> class MyClass(object): ... def func(self, param1, param2, /, param3, *, param4, param5): ... return param1, param2, param3, param4, param5 ... >>> obj = MyClass() >>> >>> assert obj.func(10, 20, 30, param4=40, param5=50), obj.func( ... 10, 20, param3=30, param4=40, param5=50 ... )"} +{"question_id": 18554012, "score": 115, "creation_date": 1377994432, "tags": ["python", "dictionary", "iteration", "intersection"], "instruction": "Intersecting two dictionaries\n\nI am working on a search program over an inverted index. The index itself is a dictionary whose keys are terms and whose values are themselves dictionaries of short documents, with ID numbers as keys and their text content as values. To perform an 'AND' search for two terms, I thus need to intersect their postings lists (dictionaries). What is a clear (not necessarily overly clever) way to do this in Python? I started out by trying it the long way with iter: p1 = index[term1] p2 = index[term2] i1 = iter(p1) i2 = iter(p2) while ... 
# not sure of the 'iter != end 'syntax in this case ...", "output": "In general, to construct the intersection of dictionaries in Python, you can first use the & operator to calculate the intersection of sets of the dictionary keys (dictionary keys are set-like objects in Python 3): dict_a = {\"a\": 1, \"b\": 2} dict_b = {\"a\": 2, \"c\": 3} intersection = dict_a.keys() & dict_b.keys() # {'a'} On Python 2 you have to convert the dictionary keys to sets yourself: keys_a = set(dict_a.keys()) keys_b = set(dict_b.keys()) intersection = keys_a & keys_b Then given the intersection of the keys, you can then build the intersection of your values however is desired. You have to make a choice here, since the concept of set intersection doesn't tell you what to do if the associated values differ. (This is presumably why the & intersection operator is not defined directly for dictionaries in Python). In this case it sounds like your values for the same key would be equal, so you can just choose the value from one of the dictionaries: dict_of_dicts_a = {\"a\": {\"x\":1}, \"b\": {\"y\":3}} dict_of_dicts_b = {\"a\": {\"x\":1}, \"c\": {\"z\":4}} shared_keys = dict_of_dicts_a.keys() & dict_of_dicts_b.keys() # values equal so choose values from a: dict_intersection = {k: dict_of_dicts_a[k] for k in shared_keys } # {\"a\":{\"x\":1}} Other reasonable methods of combining values would depend on the types of the values in your dictionaries, and what they represent. For example you might also want the union of values for shared keys of dictionaries of dictionaries. Since the union of dictionaries doesn't depend on the values, it is well defined, and in python you can get it using the | operator: # union of values for each key in the intersection: dict_intersection_2 = { k: dict_of_dicts_a[k] | dict_of_dicts_b[k] for k in shared_keys } Which in this case, with identical dictionary values for key \"a\" in both, would be the same result."} +{"question_id": 58435645, "score": 115, "creation_date": 1571324723, "tags": ["python", "boolean-logic"], "instruction": "Is there a more elegant way to express ((x == a and y == b) or (x == b and y == a))?\n\nI'm trying to evaluate ((x == a and y == b) or (x == b and y == a)) in Python, but it seems a bit verbose. Is there a more elegant way?", "output": "If the elements are hashable, you could use sets: {a, b} == {y, x}"} +{"question_id": 44901806, "score": 115, "creation_date": 1499159625, "tags": ["python", "file"], "instruction": "Python error message io.UnsupportedOperation: not readable\n\nI have this code: line1 = [] line1.append(\"xyz \") line1.append(\"abc\") line1.append(\"mno\") file = open(\"File.txt\",\"w\") for i in range(3): file.write(line1[i]) file.write(\"\\n\") for line in file: print(line) file.close() But when I try it, I get an error message like: File \"...\", line 18, in for line in file: UnsupportedOperation: not readable Why? How do I fix it?", "output": "You are opening the file as \"w\", which stands for writable. Using \"w\" you won't be able to read the file. Use the following instead: file = open(\"File.txt\", \"r\") Additionally, here are the other options: \"r\" Opens a file for reading only. \"r+\" Opens a file for both reading and writing. \"rb\" Opens a file for reading only in binary format. \"rb+\" Opens a file for both reading and writing in binary format. \"w\" Opens a file for writing only. \"a\" Open for writing. The file is created if it does not exist. \"a+\" Open for reading and writing. 
The file is created if it does not exist."} +{"question_id": 4918425, "score": 115, "creation_date": 1297057806, "tags": ["python", "python-3.x"], "instruction": "Subtract a value from every number in a list in Python?\n\nI have a list a = [49, 51, 53, 56] How do I subtract 13 from each integer value in the list?", "output": "With a list comprehension: a = [x - 13 for x in a]"} +{"question_id": 38134086, "score": 115, "creation_date": 1467324775, "tags": ["python", "pycharm", "pylint"], "instruction": "How to run Pylint with PyCharm\n\nI want to configure Pylint as an external tool in my entire project directory for a Python project that I'm working on. I've tried to use the repository as a module with __init__.py and without, and it's not working either way. I'm having difficulty setting up Pylint to run with PyCharm. I know that I should be running it as an external tool, however the settings confuse me. The authoritative source on their documentation is broken, so I can't check that up either.", "output": "You can set up Pylint to work with PyCharm by following the following steps: Install pylint: pip install pylint Locate your pylint installation folder: which pylint # macOS/Linux Output: /usr/local/bin/pylint # This is just a possible output - check yours where pylint # Windows Output: %LocalAppData%\\Programs\\Python\\Python36-32\\Scripts\\pylint.exe # Possible location Open the PyCharm settings window with menu File \u2192 Settings, then navigate to menu Tools \u2192 External Tools in the sidebar. (Or search \"external tools\") Set up an external tool by clicking on the + sign and filling in the fields accordingly. In Program use the path you got when running which pylint. For the other values, you can use the same from the image. Run pylint from menu Tools \u2192 External Tools \u2192 pylint: Look at your output in the PyCharm terminal For more details, refer to Pylinting with PyCharm. If you want to use Pylint to check your whole project or a particular file or directory, you can right click on your project root, file or directory, then activate External Tools \u2192 pylint as shown below."} +{"question_id": 15315573, "score": 115, "creation_date": 1362861031, "tags": ["python", "git", "bash", "github"], "instruction": "How can I call 'git pull' from within Python?\n\nUsing the github webhooks, I would like to be able to pull any changes to a remote development server. At the moment, when in the appropriate directory, git pull gets any changes that need to be made. However, I can't figure out how to call that function from within Python. I have tried the following: import subprocess process = subprocess.Popen(\"git pull\", stdout=subprocess.PIPE) output = process.communicate()[0] But this results in the following error Traceback (most recent call last): File \"\", line 1, in File \"/usr/lib/python2.7/subprocess.py\", line 679, in __init__ errread, errwrite) File \"/usr/lib/python2.7/subprocess.py\", line 1249, in _execute_child raise child_exception OSError: [Errno 2] No such file or directory Is there a way that I can call this bash command from within Python?", "output": "Have you considered using GitPython? It's designed to handle all this nonsense for you. 
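For context before the GitPython snippet that follows: the original Popen call fails because the whole string 'git pull' is treated as the name of a single executable. If you would rather stay with the standard library, a minimal sketch (works on Python 2.7 and 3; repo_dir is a hypothetical path to your working copy):
import subprocess
repo_dir = '/path/to/repo'  # hypothetical path to the clone
subprocess.check_call(['git', 'pull'], cwd=repo_dir)  # command given as a list; raises CalledProcessError if the pull fails
That said, GitPython keeps things tidier: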
import git # pip install gitpython g = git.cmd.Git(git_dir) g.pull() To install the module use pip install GitPython Project can be found here, GitPython"} +{"question_id": 92438, "score": 115, "creation_date": 1221743826, "tags": ["python", "string", "non-printable"], "instruction": "Stripping non printable characters from a string in python\n\nI use to run $s =~ s/[^[:print:]]//g; on Perl to get rid of non printable characters. In Python there's no POSIX regex classes, and I can't write [:print:] having it mean what I want. I know of no way in Python to detect if a character is printable or not. What would you do? EDIT: It has to support Unicode characters as well. The string.printable way will happily strip them out of the output. curses.ascii.isprint will return false for any unicode character.", "output": "Iterating over strings is unfortunately rather slow in Python. Regular expressions are over an order of magnitude faster for this kind of thing. You just have to build the character class yourself. The unicodedata module is quite helpful for this, especially the unicodedata.category() function. See Unicode Character Database for descriptions of the categories. import unicodedata, re, itertools, sys all_chars = (chr(i) for i in range(sys.maxunicode)) categories = {'Cc'} control_chars = ''.join(c for c in all_chars if unicodedata.category(c) in categories) # or equivalently and much more efficiently control_chars = ''.join(map(chr, itertools.chain(range(0x00,0x20), range(0x7f,0xa0)))) control_char_re = re.compile('[%s]' % re.escape(control_chars)) def remove_control_chars(s): return control_char_re.sub('', s) For Python2 import unicodedata, re, sys all_chars = (unichr(i) for i in xrange(sys.maxunicode)) categories = {'Cc'} control_chars = ''.join(c for c in all_chars if unicodedata.category(c) in categories) # or equivalently and much more efficiently control_chars = ''.join(map(unichr, range(0x00,0x20) + range(0x7f,0xa0))) control_char_re = re.compile('[%s]' % re.escape(control_chars)) def remove_control_chars(s): return control_char_re.sub('', s) For some use-cases, additional categories (e.g. all from the control group might be preferable, although this might slow down the processing time and increase memory usage significantly. Number of characters per category: Cc (control): 65 Cf (format): 161 Cs (surrogate): 2048 Co (private-use): 137468 Cn (unassigned): 836601 Edit Adding suggestions from the comments."} +{"question_id": 41110742, "score": 115, "creation_date": 1481581590, "tags": ["python", "django", "django-rest-framework"], "instruction": "Django Rest Framework partial update\n\nI'm trying to implement partial_update with Django Rest Framework but I need some clarification because I'm stuck. Why do we need to specify partial=True? In my understanding, we could easily update Demo object inside of partial_update method. What is the purpose of this? What is inside of serialized variable? What is inside of serialized variable in partial_update method? Is that a Demo object? What function is called behind the scenes? How would one finish the implementation here? 
Viewset class DemoViewSet(viewsets.ModelViewSet): serializer_class = DemoSerializer def partial_update(self, request, pk=None): serialized = DemoSerializer(request.user, data=request.data, partial=True) return Response(status=status.HTTP_202_ACCEPTED) Serializer class DemoSerializer(serializers.ModelSerializer): class Meta: model = Demo fields = '__all__' def update(self, instance, validated_data): print 'this - here' demo = Demo.objects.get(pk=instance.id) Demo.objects.filter(pk=instance.id)\\ .update(**validated_data) return demo", "output": "I went digging into the source code of rest_framework and got the following findings: For question 1. Why do we need to specify partial=True? This question is related to HTTP verbs. PUT: The PUT method replaces all current representations of the target resource with the request payload. PATCH: The PATCH method is used to apply partial modifications to a resource. Generally speaking, partial is used to check whether the fields in the model is needed to do field validation when client submitting data to the view. For example, we have a Book model like this, pls note both of the name and author_name fields are mandatory (not null & not blank). class Book(models.Model): name = models.CharField('name of the book', max_length=100) author_name = models.CharField('the name of the author', max_length=50) # Create a new instance for testing Book.objects.create(name='Python in a nut shell', author_name='Alex Martelli') For some scenarios, we may only need to update part of the fields in the model, e.g., we only need to update name field in the Book. So for this case, client will only submit the name field with new value to the view. The data submit from the client may look like this: {\"pk\": 1, name: \"PYTHON IN A NUT SHELL\"} But you may have notice that our model definition does not allow author_name to be blank. So we have to use partial_update instead of update. So the rest framework will not perform field validation check for the fields which is missing in the request data. For testing purpose, you can create two views for both update and partial_update, and you will get more understanding what I just said. Example: views.py from rest_framework.generics import GenericAPIView from rest_framework.mixins import UpdateModelMixin from rest_framework.viewsets import ModelViewSet from rest_framework import serializers class BookSerializer(serializers.ModelSerializer): class Meta: model = Book class BookUpdateView(GenericAPIView, UpdateModelMixin): ''' Book update API, need to submit both `name` and `author_name` fields At the same time, or django will prevent to do update for field missing ''' queryset = Book.objects.all() serializer_class = BookSerializer def put(self, request, *args, **kwargs): return self.update(request, *args, **kwargs) class BookPartialUpdateView(GenericAPIView, UpdateModelMixin): ''' You just need to provide the field which is to be modified. 
''' queryset = Book.objects.all() serializer_class = BookSerializer def put(self, request, *args, **kwargs): return self.partial_update(request, *args, **kwargs) urls.py urlpatterns = patterns('', url(r'^book/update/(?P\\d+)/$', BookUpdateView.as_view(), name='book_update'), url(r'^book/update-partial/(?P\\d+)/$', BookPartialUpdateView.as_view(), name='book_partial_update'), ) Data to submit {\"pk\": 1, name: \"PYTHON IN A NUT SHELL\"} When you submit the above JSON to /book/update/1/, you will get the following error with HTTP_STATUS_CODE=400: { \"author_name\": [ \"This field is required.\" ] } But when you submit the above JSON to /book/update-partial/1/, you will get HTTP_STATUS_CODE=200 with the following response: { \"id\": 1, \"name\": \"PYTHON IN A NUT SHELL\", \"author_name\": \"Alex Martelli\" } For question 2. What is inside of the serialized variable? serialized is an object wrapping the model instance as a serializable object, and you can use it to generate a plain JSON string with serialized.data. For question 3. How would one finish the implementation here? Having read the answer above, you should now know when to use update and when to use partial_update, so you can finish it yourself. If you still have any questions, feel free to ask. I have only read part of the source code of the rest framework and may not have understood some terms very deeply, so please point it out if anything is wrong..."} +{"question_id": 3132265, "score": 115, "creation_date": 1277727613, "tags": ["python", "python-idle"], "instruction": "How do I access the command history from IDLE?\n\nOn bash or the Windows Command Prompt, we can press the up arrow on the keyboard to get the last command, edit it, and press ENTER again to see the result. But in Python's IDLE 2.6.5 or 3.1.2, it seems that if our statement prints out 25 lines, we need to press the up arrow 25 times to get back to that last command, and press ENTER for it to be copied? Or use the mouse to pinpoint that line and click there, and press ENTER to copy? Is there a faster way?", "output": "I think you are looking for the history-previous action, which is bound to Alt+P by default. You can remap it in \"Options -> Configure IDLE -> Keys\" You can also access this command from the top menu in IDLE: \"Shell -> Previous History\" Incidentally, why don't you try a better (less ugly, for starters) shell like bpython or ipython?"} +{"question_id": 37353960, "score": 115, "creation_date": 1463770474, "tags": ["python", "unit-testing", "pytest"], "instruction": "Why is PyTest not collecting tests (collected 0 items)?\n\nI have been trying to run unit tests using pytest. I wrote a module with one class and some methods inside that class, and I wrote a unit test for this module (with a simple assert statement to check equality of lists) where I first instantiated the class with a list. Then I invoke a method on that object (from the class). Both test.py and the script to be tested are in the same folder. When I run pytest on it, it reports \"collected 0 items\". I am new to pytest, but I am unable to run their examples successfully. What am I missing here? I am running Python 3.5.1 and pytest version 2.8.1 on Windows 7. My test.py code: from sort_algos import Sorts def integer_sort_test(): myobject1 = Sorts([-100, 10, -10]) assert myobject1.merge_sort() == [-101, -100, 10] sort_algos.py is a module containing the Sorts class. merge_sort is a method from Sorts.", "output": "pytest gathers tests according to a naming convention. 
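Concretely, the renaming described below looks something like this (a sketch based on the question's code; the expected result is corrected to the actual sorted order of the input):
# test_sorts.py -- the file name now starts with 'test_'
from sort_algos import Sorts
def test_integer_sort():  # the function name now starts with 'test_'
    myobject1 = Sorts([-100, 10, -10])
    assert myobject1.merge_sort() == [-100, -10, 10]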
By default any file that is to contain tests must be named starting with test_, classes that hold tests must be named starting with Test, and any function in a file that should be treated as a test must also start with test_. If you rename your test file to test_sorts.py and rename the example function you provide above as test_integer_sort, then you will find it is automatically collected and executed. This test collecting behavior can be changed to suit your desires. Changing it will require learning about configuration in pytest."} +{"question_id": 56905592, "score": 115, "creation_date": 1562339432, "tags": ["python", "image", "opencv", "image-processing", "computer-vision"], "instruction": "Automatic contrast and brightness adjustment of a color photo of a sheet of paper with OpenCV\n\nWhen photographing a sheet of paper (e.g. with phone camera), I get the following result (left image) (jpg download here). The desired result (processed manually with an image editing software) is on the right: I would like to process the original image with openCV to get a better brightness/contrast automatically (so that the background is more white). Assumption: the image has an A4 portrait format (we don't need to perspective-warp it in this topic here), and the sheet of paper is white with possibly text/images in black or colors. What I've tried so far: Various adaptive thresholding methods such as Gaussian, OTSU (see OpenCV doc Image Thresholding). It usually works well with OTSU: ret, gray = cv2.threshold(img, 0, 255, cv2.THRESH_OTSU + cv2.THRESH_BINARY) but it only works for grayscale images and not directly for color images. Moreover, the output is binary (white or black), which I don't want: I prefer to keep a color non-binary image as output Histogram equalization applied on Y (after RGB => YUV transform) or applied on V (after RGB => HSV transform), as suggested by this answer (Histogram equalization not working on color image - OpenCV) or this one (OpenCV Python equalizeHist colored image): img3 = cv2.imread(f) img_transf = cv2.cvtColor(img3, cv2.COLOR_BGR2YUV) img_transf[:,:,0] = cv2.equalizeHist(img_transf[:,:,0]) img4 = cv2.cvtColor(img_transf, cv2.COLOR_YUV2BGR) cv2.imwrite('test.jpg', img4) or with HSV: img_transf = cv2.cvtColor(img3, cv2.COLOR_BGR2HSV) img_transf[:,:,2] = cv2.equalizeHist(img_transf[:,:,2]) img4 = cv2.cvtColor(img_transf, cv2.COLOR_HSV2BGR) Unfortunately, the result is quite bad since it creates awful micro contrasts locally (?): I also tried YCbCr instead, and it was similar. I also tried CLAHE (Contrast Limited Adaptive Histogram Equalization) with various tileGridSize from 1 to 1000: img3 = cv2.imread(f) img_transf = cv2.cvtColor(img3, cv2.COLOR_BGR2HSV) clahe = cv2.createCLAHE(tileGridSize=(100,100)) img_transf[:,:,2] = clahe.apply(img_transf[:,:,2]) img4 = cv2.cvtColor(img_transf, cv2.COLOR_HSV2BGR) cv2.imwrite('test.jpg', img4) but the result was equally awful too. Doing this CLAHE method with LAB color space, as suggested in the question How to apply CLAHE on RGB color images: import cv2, numpy as np bgr = cv2.imread('_example.jpg') lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB) lab_planes = cv2.split(lab) clahe = cv2.createCLAHE(clipLimit=2.0,tileGridSize=(100,100)) lab_planes[0] = clahe.apply(lab_planes[0]) lab = cv2.merge(lab_planes) bgr = cv2.cvtColor(lab, cv2.COLOR_LAB2BGR) cv2.imwrite('_example111.jpg', bgr) gave bad result too. 
Output image: Do an adaptive thresholding or histogram equalization separately on each channel (R, G, B) is not an option since it would mess with the color balance, as explained here. \"Contrast strechting\" method from scikit-image's tutorial on Histogram Equalization: the image is rescaled to include all intensities that fall within the 2nd and 98th percentiles is a little bit better, but still far from the desired result (see image on top of this question). TL;DR: how to get an automatic brightness/contrast optimization of a color photo of a sheet of paper with OpenCV/Python? What kind of thresholding/histogram equalization/other technique could be used?", "output": "This method should work well for your application. First you find a threshold value that separates the distribution modes well in the intensity histogram then rescale the intensity using that value. from skimage.filters import threshold_yen from skimage.exposure import rescale_intensity from skimage.io import imread, imsave img = imread('mY7ep.jpg') yen_threshold = threshold_yen(img) bright = rescale_intensity(img, (0, yen_threshold), (0, 255)) imsave('out.jpg', bright) I'm here using Yen's method, can learn more about this method on this page."} +{"question_id": 6256983, "score": 115, "creation_date": 1307388523, "tags": ["python", "deque"], "instruction": "How are deques in Python implemented, and when are they worse than lists?\n\nI've recently gotten into investigating how various data structures are implemented in Python in order to make my code more efficient. In investigating how lists and deques work, I found that I can get benefits when I want to shift and unshift reducing the time from O(n) in lists to O(1) in deques (lists being implemented as fixed-length arrays that have to be copied completely each time something is inserted at the front, etc...). What I can't seem to find are the specifics of how a deque is implemented, and the specifics of its downsides v.s. lists. Can someone enlighten me on these two questions?", "output": "https://github.com/python/cpython/blob/v3.8.1/Modules/_collectionsmodule.c A dequeobject is composed of a doubly-linked list of block nodes. So yes, a deque is a (doubly-)linked list as another answer suggests. Elaborating: What this means is that Python lists are much better for random-access and fixed-length operations, including slicing, while deques are much more useful for pushing and popping things off the ends, with indexing (but not slicing, interestingly) being possible but slower than with lists."} +{"question_id": 53903373, "score": 115, "creation_date": 1545565834, "tags": ["python", "pytorch"], "instruction": "Convert PyTorch tensor to python list\n\nHow do I convert a PyTorch Tensor into a python list? I want to convert a tensor of size [1, 2048, 1, 1] into a list of 2048 elements. My tensor has floating point values. Is there a solution which also works with other data types such as int?", "output": "Use Tensor.tolist() e.g: >>> import torch >>> a = torch.randn(2, 2) >>> a.tolist() [[0.012766935862600803, 0.5415473580360413], [-0.08909505605697632, 0.7729271650314331]] >>> a[0,0].tolist() 0.012766935862600803 To remove all dimensions of size 1, use a.squeeze().tolist(). 
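A minimal sketch with the shape from the question (assuming PyTorch is installed):
import torch
t = torch.randn(1, 2048, 1, 1)
values = t.squeeze().tolist()  # plain Python list of 2048 floats
print(len(values), type(values[0]))  # 2048 <class 'float'>
The same call works for other dtypes, e.g. torch.zeros(1, 4, dtype=torch.int64).squeeze().tolist() gives a list of Python ints.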
Alternatively, if all but one dimension are of size 1 (or you wish to get a list of every element of the tensor) you may use a.flatten().tolist()."} +{"question_id": 58565394, "score": 115, "creation_date": 1572035599, "tags": ["python", "tensorflow", "machine-learning", "keras", "deep-learning"], "instruction": "What is the difference between sparse_categorical_crossentropy and categorical_crossentropy?\n\nWhat is the difference between sparse_categorical_crossentropy and categorical_crossentropy? When should one loss be used as opposed to the other? For example, are these losses suitable for linear regression?", "output": "Simply: categorical_crossentropy (cce) produces a one-hot array containing the probable match for each category, sparse_categorical_crossentropy (scce) produces a category index of the most likely matching category. Consider a classification problem with 5 categories (or classes). In the case of cce, the one-hot target may be [0, 1, 0, 0, 0] and the model may predict [.2, .5, .1, .1, .1] (probably right) In the case of scce, the target index may be [1] and the model may predict: [.5]. Consider now a classification problem with 3 classes. In the case of cce, the one-hot target might be [0, 0, 1] and the model may predict [.5, .1, .4] (probably inaccurate, given that it gives more probability to the first class) In the case of scce, the target index might be [0], and the model may predict [.5] Many categorical models produce scce output because you save space, but lose A LOT of information (for example, in the 2nd example, index 2 was also very close.) I generally prefer cce output for model reliability. There are a number of situations to use scce, including: when your classes are mutually exclusive, i.e. you don't care at all about other close-enough predictions, the number of categories is large to the prediction output becomes overwhelming. 220405: response to \"one-hot encoding\" comments: one-hot encoding is used for a category feature INPUT to select a specific category (e.g. male versus female). This encoding allows the model to train more efficiently: training weight is a product of category, which is 0 for all categories except for the given one. cce and scce are a model OUTPUT. cce is a probability array of each category, totally 1.0. scce shows the MOST LIKELY category, totally 1.0. scce is technically a one-hot array, just like a hammer used as a door stop is still a hammer, but its purpose is different. cce is NOT one-hot."} +{"question_id": 9766940, "score": 115, "creation_date": 1332145370, "tags": ["python", "postgresql", "sqlalchemy"], "instruction": "How to create an SQL View with SQLAlchemy?\n\nIs there a \"Pythonic\" way (I mean, no \"pure SQL\" query) to define an SQL view with SQLAlchemy?", "output": "Update: SQLAlchemy now has a great usage recipe here on this topic, which I recommend. It covers different SQL Alchemy versions up to the latest and has ORM integration (see comments below this answer and other answers). And if you look through the version history, you can also learn why using literal_binds is iffy (in a nutshell: binding parameters should be left to the database), but still arguably any other solution would make most users of the recipe not happy. I leave the below answer mostly for historical reasons. Original answer: Creating a (read-only non-materialized) view is not supported out of the box as far as I know. But adding this functionality in SQLAlchemy 0.7 is straightforward (similar to the example I gave here). 
You just have to write a compiler extension CreateView. With this extension, you can then write (assuming that t is a table object with a column id) createview = CreateView('viewname', t.select().where(t.c.id>5)) engine.execute(createview) v = Table('viewname', metadata, autoload=True) for r in engine.execute(v.select()): print r Here is a working example: from sqlalchemy import Table from sqlalchemy.ext.compiler import compiles from sqlalchemy.sql.expression import Executable, ClauseElement class CreateView(Executable, ClauseElement): def __init__(self, name, select): self.name = name self.select = select @compiles(CreateView) def visit_create_view(element, compiler, **kw): return \"CREATE VIEW %s AS %s\" % ( element.name, compiler.process(element.select, literal_binds=True) ) # test data from sqlalchemy import MetaData, Column, Integer from sqlalchemy.engine import create_engine engine = create_engine('sqlite://') metadata = MetaData(engine) t = Table('t', metadata, Column('id', Integer, primary_key=True), Column('number', Integer)) t.create() engine.execute(t.insert().values(id=1, number=3)) engine.execute(t.insert().values(id=9, number=-3)) # create view createview = CreateView('viewname', t.select().where(t.c.id>5)) engine.execute(createview) # reflect view and print result v = Table('viewname', metadata, autoload=True) for r in engine.execute(v.select()): print r If you want, you can also specialize for a dialect, e.g. @compiles(CreateView, 'sqlite') def visit_create_view(element, compiler, **kw): return \"CREATE VIEW IF NOT EXISTS %s AS %s\" % ( element.name, compiler.process(element.select, literal_binds=True) )"} +{"question_id": 6394511, "score": 115, "creation_date": 1308381610, "tags": ["python", "decorator"], "instruction": "Python functools.wraps equivalent for classes\n\nWhen defining a decorator using a class, how do I automatically transfer over__name__, __module__ and __doc__? Normally, I would use the @wraps decorator from functools. Here's what I did instead for a class (this is not entirely my code): class memoized: \"\"\"Decorator that caches a function's return value each time it is called. If called later with the same arguments, the cached value is returned, and not re-evaluated. \"\"\" def __init__(self, func): super().__init__() self.func = func self.cache = {} def __call__(self, *args): try: return self.cache[args] except KeyError: value = self.func(*args) self.cache[args] = value return value except TypeError: # uncacheable -- for instance, passing a list as an argument. # Better to not cache than to blow up entirely. return self.func(*args) def __repr__(self): return self.func.__repr__() def __get__(self, obj, objtype): return functools.partial(self.__call__, obj) __doc__ = property(lambda self:self.func.__doc__) __module__ = property(lambda self:self.func.__module__) __name__ = property(lambda self:self.func.__name__) Is there a standard decorator to automate the creation of name module and doc? Also, to automate the get method (I assume that's for creating bound methods?) Are there any missing methods?", "output": "Everyone seems to have missed the obvious solution. Using functools.update_wrapper: >>> import functools >>> class memoized(object): \"\"\"Decorator that caches a function's return value each time it is called. If called later with the same arguments, the cached value is returned, and not re-evaluated. \"\"\" def __init__(self, func): self.func = func self.cache = {} functools.update_wrapper(self, func) ## TA-DA! 
## def __call__(self, *args): pass # Not needed for this demo. >>> @memoized def fibonacci(n): \"\"\"fibonacci docstring\"\"\" pass # Not needed for this demo. >>> fibonacci <__main__.memoized object at 0x0156DE30> >>> fibonacci.__name__ 'fibonacci' >>> fibonacci.__doc__ 'fibonacci docstring'"} +{"question_id": 9999829, "score": 115, "creation_date": 1333480414, "tags": ["python", "pypi"], "instruction": "How to specify multiple author(s) / email(s) in setup.py\n\nWe wrote a small wrapper to a twitter app and published this information to http://pypi.python.org. But setup.py just contained a single field for specifying email / name of the author. How do I specify multiple contributors / email list, to the following fields since we would like this package to be listed under our names, much similar to how it shows up in http://rubygems.org. author='foo', author_email='foo.bar@gmail.com',", "output": "As far as I know, setuptools doesn't support using a list of strings in order to specify multiple authors. Your best bet is to list the authors in a single string: author='Foo Bar, Spam Eggs', author_email='foobar@baz.com, spameggs@joe.org', I'm not sure if PyPI validates the author_email field, so you may run into trouble with that one. In any case, I would recommend you limit these to a single author and mention all contributors in the documentation or description. Some sources: This has been registered as a bug, actually, but it seems like support for multiple authors was not implemented. Here is an alternative solution. Here is an idea for how to provide a contact email for a project with multiple authors."} +{"question_id": 7232088, "score": 114, "creation_date": 1314630885, "tags": ["python", "smtplib"], "instruction": "Python: \"subject\" not shown when sending email using smtplib module\n\nI am successfully able to send email using the smtplib module. But when the email is sent, it does not include the subject in the sent email. import smtplib SERVER = FROM = TO = [] SUBJECT = \"Hello!\" message = \"Test\" TEXT = \"This message was sent with Python's smtplib.\" server = smtplib.SMTP(SERVER) server.sendmail(FROM, TO, message) server.quit() How should I write \"server.sendmail\" to include the SUBJECT as well in the email sent. If I use, server.sendmail(FROM, TO, message, SUBJECT), it gives error about \"smtplib.SMTPSenderRefused\"", "output": "Attach it as a header: message = 'Subject: {}\\n\\n{}'.format(SUBJECT, TEXT) and then: server = smtplib.SMTP(SERVER) server.sendmail(FROM, TO, message) server.quit() Also consider using standard Python module email - it will help you a lot while composing emails. Using it would look like this: from email.message import EmailMessage msg = EmailMessage() msg['Subject'] = SUBJECT msg['From'] = FROM msg['To'] = TO msg.set_content(TEXT) server.send_message(msg)"} +{"question_id": 26646191, "score": 114, "creation_date": 1414649451, "tags": ["python", "pandas"], "instruction": "Pandas groupby month and year\n\nI have the following dataframe: Date abc xyz 01-Jun-13 100 200 03-Jun-13 -20 50 15-Aug-13 40 -5 20-Jan-14 25 15 21-Feb-14 60 80 I need to group the data by year and month. I.e., Group by Jan 2013, Feb 2013, Mar 2013, etc... I will be using the newly grouped data to create a plot showing abc vs xyz per year/month. I've tried various combinations of groupby and sum, but I just can't seem to get anything to work. How can I do it?", "output": "You can use either resample or Grouper (which resamples under the hood). 
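As a quick self-contained sketch of the idea before the details below (column names follow the question; the date format string is an assumption about the sample data):
import pandas as pd
df = pd.DataFrame({'Date': ['01-Jun-13', '03-Jun-13', '15-Aug-13', '20-Jan-14', '21-Feb-14'], 'abc': [100, -20, 40, 25, 60], 'xyz': [200, 50, -5, 15, 80]})
df['Date'] = pd.to_datetime(df['Date'], format='%d-%b-%y')  # make real datetimes first
print(df.groupby(df['Date'].dt.to_period('M'))[['abc', 'xyz']].sum())  # one row per year-month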
First make sure that the datetime column is actually of datetimes (hit it with pd.to_datetime). It's easier if it's a DatetimeIndex: In [11]: df1 Out[11]: abc xyz Date 2013-06-01 100 200 2013-06-03 -20 50 2013-08-15 40 -5 2014-01-20 25 15 2014-02-21 60 80 In [12]: g = df1.groupby(pd.Grouper(freq=\"M\")) # DataFrameGroupBy (grouped by Month) In [13]: g.sum() Out[13]: abc xyz Date 2013-06-30 80 250 2013-07-31 NaN NaN 2013-08-31 40 -5 2013-09-30 NaN NaN 2013-10-31 NaN NaN 2013-11-30 NaN NaN 2013-12-31 NaN NaN 2014-01-31 25 15 2014-02-28 60 80 In [14]: df1.resample(\"M\", how='sum') # the same Out[14]: abc xyz Date 2013-06-30 40 125 2013-07-31 NaN NaN 2013-08-31 40 -5 2013-09-30 NaN NaN 2013-10-31 NaN NaN 2013-11-30 NaN NaN 2013-12-31 NaN NaN 2014-01-31 25 15 2014-02-28 60 80 Note: Previously pd.Grouper(freq=\"M\") was written as pd.TimeGrouper(\"M\"). The latter is now deprecated since 0.21. I had thought the following would work, but it doesn't (due to as_index not being respected? I'm not sure.). I'm including this for interest's sake. If it's a column (it has to be a datetime64 column! as I say, hit it with to_datetime), you can use the PeriodIndex: In [21]: df Out[21]: Date abc xyz 0 2013-06-01 100 200 1 2013-06-03 -20 50 2 2013-08-15 40 -5 3 2014-01-20 25 15 4 2014-02-21 60 80 In [22]: pd.DatetimeIndex(df.Date).to_period(\"M\") # old way Out[22]: [2013-06, ..., 2014-02] Length: 5, Freq: M In [23]: per = df.Date.dt.to_period(\"M\") # new way to get the same In [24]: g = df.groupby(per) In [25]: g.sum() # dang not quite what we want (doesn't fill in the gaps) Out[25]: abc xyz 2013-06 80 250 2013-08 40 -5 2014-01 25 15 2014-02 60 80 To get the desired result we have to reindex..."} +{"question_id": 59810276, "score": 114, "creation_date": 1579439173, "tags": ["python", "virtualenv", "pyenv", "python-poetry"], "instruction": "Why is my poetry virtualenv using the system python instead of the pyenv python that I set?\n\nI've recently installed both Pyenv and Poetry and want to create a new Python 3.8 project. I've set both the global and local versions of python to 3.8.1 using the appropriate Pyenv commands (pyenv global 3.8.1 for example). When I run pyenv version in my terminal the output is 3.8.1. as expected. Now, the problem is that when I create a new python project with Poetry (poetry new my-project), the generated pyproject.toml file creates a project with python 2.7: [tool.poetry] name = \"my-project\" version = \"0.1.0\" description = \"\" authors = [\"user \"] [tool.poetry.dependencies] python = \"^2.7\" [tool.poetry.dev-dependencies] pytest = \"^4.6\" [build-system] requires = [\"poetry>=0.12\"] build-backend = \"poetry.masonry.api\" It seems that Poetry defaults back to the system version of Python. How do I change this so that it uses the version installed with Pyenv? Edit I'm using MacOS, which comes bundled with Python 2.7. I think that might be causing some of the issues here. I've reinstalled Python 3.8 again with Pyenv, but when I hit Poetry install I get the following error: The currently activated Python version 2.7.16 is not supported by the project (^3.8). Trying to find and use a compatible version. [NoCompatiblePythonVersionFound] Poetry was unable to find a compatible version. If you have one, you can explicitly use it via the \"env use\" command. Should I create an environment explicitly for the project using Pyenv or should the project be able to access the correct Python version after running pyenv local 3.8.1.? 
When I do the latter, nothing changes and I still get the same errors.", "output": "Alright, I figured the problem. A little embarrassingly, I had not run pyenv shell 3.8.1 before running any of the other commands. Everything works now. Thank you all for your efforts."} +{"question_id": 36998260, "score": 114, "creation_date": 1462260794, "tags": ["python", "numpy", "insert"], "instruction": "Prepend element to numpy array\n\nI have the following numpy array import numpy as np X = np.array([[5.], [4.], [3.], [2.], [1.]]) I want to insert [6.] at the beginning. I've tried: X = X.insert(X, 0) how do I insert into X?", "output": "numpy has an insert function that's accesible via np.insert with documentation. You'll want to use it in this case like so: X = np.insert(X, 0, 6., axis=0) the first argument X specifies the object to be inserted into. The second argument 0 specifies where. The third argument 6. specifies what is to be inserted. The fourth argument axis=0 specifies that the insertion should happen at position 0 for every column. We could've chosen rows but your X is a columns vector, so I figured we'd stay consistent."} +{"question_id": 6481279, "score": 114, "creation_date": 1309046504, "tags": ["python", "django", "django-queryset"], "instruction": "Django SUM Query?\n\nI have a query akin to the following: SELECT SUM(name) FROM table WHERE name IS NULL How does that SUM translate into a QuerySet in Django? i.e. What operation xyz does it translate to, in something like MyModel.objects.xyz()?", "output": "Update: The following incorporates the ISNULL aspect of the original query: from django.db.models import Sum ModelName.objects.filter(field_name__isnull=True).aggregate(Sum('field_name')) # returns {'field_name__sum': 1000} for example You're looking for the Sum aggregation function, which works as follows: ModelName.objects.aggregate(Sum('field_name')) See: https://docs.djangoproject.com/en/dev/ref/models/querysets/#sum"} +{"question_id": 25191620, "score": 114, "creation_date": 1407443352, "tags": ["python", "scipy", "filtering", "signal-processing"], "instruction": "Creating lowpass filter in SciPy - understanding methods and units\n\nI am trying to filter a noisy heart rate signal with python. Because heart rates should never be above about 220 beats per minute, I want to filter out all noise above 220 bpm. I converted 220/minute into 3.66666666 Hertz and then converted that Hertz to rad/s to get 23.0383461 rad/sec. The sampling frequency of the chip that takes data is 30Hz so I converted that to rad/s to get 188.495559 rad/s. After looking up some stuff online I found some functions for a bandpass filter that I wanted to make into a lowpass. Here is the link the bandpass code, so I converted it to be this: from scipy.signal import butter, lfilter from scipy.signal import freqs def butter_lowpass(cutOff, fs, order=5): nyq = 0.5 * fs normalCutoff = cutOff / nyq b, a = butter(order, normalCutoff, btype='low', analog = True) return b, a def butter_lowpass_filter(data, cutOff, fs, order=4): b, a = butter_lowpass(cutOff, fs, order=order) y = lfilter(b, a, data) return y cutOff = 23.1 #cutoff frequency in rad/s fs = 188.495559 #sampling frequency in rad/s order = 20 #order of filter #print sticker_data.ps1_dxdt2 y = butter_lowpass_filter(data, cutOff, fs, order) plt.plot(y) I am very confused by this though because I am pretty sure the butter function takes in the cutoff and sampling frequency in rad/s but I seem to be getting a weird output. Is it actually in Hz? 
Secondly, what is the purpose of these two lines: nyq = 0.5 * fs normalCutoff = cutOff / nyq I know it's something about normalization but I thought the nyquist was 2 times the sampling requency, not one half. And why are you using the nyquist as a normalizer? Can someone explain more about how to create filters with these functions? I plotted the filter using: w, h = signal.freqs(b, a) plt.plot(w, 20 * np.log10(abs(h))) plt.xscale('log') plt.title('Butterworth filter frequency response') plt.xlabel('Frequency [radians / second]') plt.ylabel('Amplitude [dB]') plt.margins(0, 0.1) plt.grid(which='both', axis='both') plt.axvline(100, color='green') # cutoff frequency plt.show() and got this which clearly does not cut-off at 23 rad/s:", "output": "A few comments: The Nyquist frequency is half the sampling rate. You are working with regularly sampled data, so you want a digital filter, not an analog filter. This means you should not use analog=True in the call to butter, and you should use scipy.signal.freqz (not freqs) to generate the frequency response. One goal of those short utility functions is to allow you to leave all your frequencies expressed in Hz. You shouldn't have to convert to rad/sec. As long as you express your frequencies with consistent units, the fs parameter of the SciPy functions will take care of the scaling for you. Here's my modified version of your script, followed by the plot that it generates. import numpy as np from scipy.signal import butter, lfilter, freqz import matplotlib.pyplot as plt def butter_lowpass(cutoff, fs, order=5): return butter(order, cutoff, fs=fs, btype='low', analog=False) def butter_lowpass_filter(data, cutoff, fs, order=5): b, a = butter_lowpass(cutoff, fs, order=order) y = lfilter(b, a, data) return y # Filter requirements. order = 6 fs = 30.0 # sample rate, Hz cutoff = 3.667 # desired cutoff frequency of the filter, Hz # Get the filter coefficients so we can check its frequency response. b, a = butter_lowpass(cutoff, fs, order) # Plot the frequency response. w, h = freqz(b, a, fs=fs, worN=8000) plt.subplot(2, 1, 1) plt.plot(w, np.abs(h), 'b') plt.plot(cutoff, 0.5*np.sqrt(2), 'ko') plt.axvline(cutoff, color='k') plt.xlim(0, 0.5*fs) plt.title(\"Lowpass Filter Frequency Response\") plt.xlabel('Frequency [Hz]') plt.grid() # Demonstrate the use of the filter. # First make some data to be filtered. T = 5.0 # seconds n = int(T * fs) # total number of samples t = np.linspace(0, T, n, endpoint=False) # \"Noisy\" data. We want to recover the 1.2 Hz signal from this. data = np.sin(1.2*2*np.pi*t) + 1.5*np.cos(9*2*np.pi*t) + 0.5*np.sin(12.0*2*np.pi*t) # Filter the data, and plot both the original and filtered signals. y = butter_lowpass_filter(data, cutoff, fs, order) plt.subplot(2, 1, 2) plt.plot(t, data, 'b-', label='data') plt.plot(t, y, 'g-', linewidth=2, label='filtered data') plt.xlabel('Time [sec]') plt.grid() plt.legend() plt.subplots_adjust(hspace=0.35) plt.show()"} +{"question_id": 11170827, "score": 114, "creation_date": 1340466383, "tags": ["python", "version", "virtualenv"], "instruction": "How do I tell a Python script to use a particular version\n\nHow do I, in the main.py module (presumably), tell Python which interpreter to use? What I mean is: if I want a particular script to use version 3 of Python to interpret the entire program, how do I do that? Bonus: How would this affect a virtualenv? 
Am I right in thinking that if I create a virtualenv for my program and then tell it to use a different version of Python, will I encounter some conflicts?", "output": "You can add a shebang line the to the top of the script: #!/usr/bin/env python2.7 But that will only work when executing as ./my_program.py. If you execute as python my_program.py, then the whatever Python version that which python returns will be used. In re: to virtualenv use: virtualenv -p /usr/bin/python3.2 or whatever to set it up to use that Python executable."} +{"question_id": 66159432, "score": 114, "creation_date": 1613063483, "tags": ["python", "python-3.x", "switch-statement", "python-3.10", "structural-pattern-matching"], "instruction": "How to use values stored in variables as case patterns?\n\nI'm trying to understand the new structural pattern matching syntax in Python 3.10. I understand that it is possible to match on literal values like this: def handle(retcode): match retcode: case 200: print('success') case 404: print('not found') case _: print('unknown') handle(404) # not found However, if I refactor and move these values to module-level variables, it results in an error because the statements now represent structures or patterns rather than values: SUCCESS = 200 NOT_FOUND = 404 def handle(retcode): match retcode: case SUCCESS: print('success') case NOT_FOUND: print('not found') case _: print('unknown') handle(404) # File \"\", line 6 # case SUCCESS: # ^ # SyntaxError: name capture 'SUCCESS' makes remaining patterns unreachable Is there any way to use the match statement to match values that are stored within variables?", "output": "If the constant you're testing against is a dotted name, then it should be treated as a constant instead of as the name of the variable to put the capture in (see PEP 636 # Matching against constants and enums): class Codes: SUCCESS = 200 NOT_FOUND = 404 def handle(retcode): match retcode: case Codes.SUCCESS: print('success') case Codes.NOT_FOUND: print('not found') case _: print('unknown') Although, given how python is trying to implement pattern-matching, I think that for situations like this it's probably safer and clearer code to just use an if/elif/else tower when checking against constant values."} +{"question_id": 2578752, "score": 114, "creation_date": 1270476236, "tags": ["python", "matplotlib", "nan", "colormap", "imshow"], "instruction": "How can I plot NaN values as a special color with imshow?\n\nI am trying to use imshow in matplotlib to plot data as a heatmap, but some of the values are NaNs. I'd like the NaNs to be rendered as a special color not found in the colormap. example: import numpy as np import matplotlib.pyplot as plt f = plt.figure() ax = f.add_subplot(111) a = np.arange(25).reshape((5,5)).astype(float) a[3,:] = np.nan ax.imshow(a, interpolation='nearest') f.canvas.draw() The resultant image is unexpectedly all blue (the lowest color in the jet colormap). However, if I do the plotting like this: ax.imshow(a, interpolation='nearest', vmin=0, vmax=24) --then I get something better, but the NaN values are drawn the same color as vmin... Is there a graceful way that I can set NaNs to be drawn with a special color (eg: gray or transparent)?", "output": "With newer versions of Matplotlib, it is not necessary to use a masked array anymore. 
For example, let\u2019s generate an array where every 7th value is a NaN: arr = np.arange(100, dtype=float).reshape(10, 10) arr[~(arr % 7).astype(bool)] = np.nan .cm.get_cmap() is replaced by .colormaps.get_cmap('viridis') in matplotlib v3.7.0 Set the color with .set_bad. import matplotlib as mpl import matplotlib.pyplot as plt import numpy as np arr = np.arange(100, dtype=float).reshape(10, 10) arr[~(arr % 7).astype(bool)] = np.nan cmap = mpl.colormaps.get_cmap('viridis') # viridis is the default colormap for imshow cmap.set_bad(color='red') plt.imshow(arr, cmap=cmap) .cm.get_cmap() is deprecated We can modify the current colormap and plot the array with the following lines: current_cmap = mpl.cm.get_cmap() current_cmap.set_bad(color='red') plt.imshow(arr)"} +{"question_id": 2046603, "score": 114, "creation_date": 1263268193, "tags": ["python", "function", "subprocess", "popen", "python-multiprocessing"], "instruction": "Is it possible to run function in a subprocess without threading or writing a separate file/script.\n\nimport subprocess def my_function(x): return x + 100 output = subprocess.Popen(my_function, 1) #I would like to pass the function object and its arguments print output #desired output: 101 I have only found documentation on opening subprocesses using separate scripts. Does anyone know how to pass function objects or even an easy way to pass function code?", "output": "I think you're looking for something more like the multiprocessing module: http://docs.python.org/library/multiprocessing.html#the-process-class The subprocess module is for spawning processes and doing things with their input/output - not for running functions. Here is a multiprocessing version of your code: from multiprocessing import Process, Queue # must be a global function def my_function(q, x): q.put(x + 100) if __name__ == '__main__': queue = Queue() p = Process(target=my_function, args=(queue, 1)) p.start() p.join() # this blocks until the process terminates result = queue.get() print result"} +{"question_id": 11474391, "score": 114, "creation_date": 1342196373, "tags": ["python", "c++", "terminal", "console"], "instruction": "Is there go up line character? (Opposite of \\n)\n\nI would like to overwrite something on a line above in a serial console. Is there a character that allows me to move up?", "output": "Most terminals understand ANSI escape codes. The relevant \"Control Sequence Introducer\" commands for this use case: \"\\033[F\" \u2013 move cursor to the beginning of the previous line \"\\033[A\" \u2013 move cursor up one line Example (Python): print(\"\\033[FMy text overwriting the previous line.\")"} +{"question_id": 64522040, "score": 114, "creation_date": 1603616892, "tags": ["python", "python-typing"], "instruction": "Dynamically create Literal alias from list of valid values\n\nI have a function which validates its argument to accept only values from a given list of valid options. Typing-wise, I reflect this behavior using a Literal type alias, like so: from typing import Literal VALID_ARGUMENTS = ['foo', 'bar'] Argument = Literal['foo', 'bar'] def func(argument: 'Argument') -> None: if argument not in VALID_ARGUMENTS: raise ValueError( f'argument must be one of {VALID_ARGUMENTS}' ) # ... This is a violation of the DRY principle, because I have to rewrite the list of valid arguments in the definition of my Literal type, even if it is already stored in the variable VALID_ARGUMENTS. How can I create the Argument Literal type dynamically, given the VALID_ARGUMENTS variable? 
The following things do not work: from typing import Literal, Union, NewType Argument = Literal[*VALID_ARGUMENTS] # SyntaxError: invalid syntax Argument = Literal[VALID_ARGUMENTS] # Parameters to generic types must be types Argument = Literal[Union[VALID_ARGUMENTS]] # TypeError: Union[arg, ...]: each arg must be a type. Got ['foo', 'bar']. Argument = NewType( 'Argument', Union[ Literal[valid_argument] for valid_argument in VALID_ARGUMENTS ] ) # Expected type 'Type[_T]', got 'list' instead Can it be done at all?", "output": "Go the other way around, and build VALID_ARGUMENTS from Argument: Argument = typing.Literal['foo', 'bar'] VALID_ARGUMENTS: typing.Tuple[Argument, ...] = typing.get_args(Argument) I've used a tuple for VALID_ARGUMENTS here, but if for some reason you really prefer a list, you can get one: VALID_ARGUMENTS: typing.List[Argument] = list(typing.get_args(Argument)) It's possible at runtime to build Argument from VALID_ARGUMENTS, but doing so is incompatible with static analysis, which is the primary use case of type annotations. Doing so is also considered semantically invalid - the spec forbids parameterizing Literal with dynamically computed parameters. The runtime implementation simply doesn't have the information it would need to validate this. Building VALID_ARGUMENTS from Argument is the way to go."} +{"question_id": 30053329, "score": 114, "creation_date": 1430829849, "tags": ["python", "pandas", "numpy", "dataframe", "nan"], "instruction": "Elegant way to create empty pandas DataFrame with NaN of type float\n\nI want to create a Pandas DataFrame filled with NaNs. During my research I found an answer: import pandas as pd df = pd.DataFrame(index=range(0,4),columns=['A']) This code results in a DataFrame filled with NaNs of type \"object\". So they cannot be used later on for example with the interpolate() method. Therefore, I created the DataFrame with this complicated code (inspired by this answer): import pandas as pd import numpy as np dummyarray = np.empty((4,1)) dummyarray[:] = np.nan df = pd.DataFrame(dummyarray) This results in a DataFrame filled with NaN of type \"float\", so it can be used later on with interpolate(). Is there a more elegant way to create the same result?", "output": "Simply pass the desired value as first argument, like 0, math.inf or, here, np.nan. The constructor then initializes and fills the value array to the size specified by arguments index and columns: >>> import numpy as np >>> import pandas as pd >>> df = pd.DataFrame(np.nan, index=[0, 1, 2, 3], columns=['A', 'B']) >>> df A B 0 NaN NaN 1 NaN NaN 2 NaN NaN 3 NaN NaN >>> df.dtypes A float64 B float64 dtype: object"} +{"question_id": 4591125, "score": 114, "creation_date": 1294121810, "tags": ["python"], "instruction": "Is it safe to replace '==' with 'is' to compare Boolean-values\n\nI did several Boolean Comparisons: >>> (True or False) is True True >>> (True or False) == True True It sounds like == and is are interchangeable for Boolean-values. Sometimes it's more clear to use is I want to know that: Are True and False pre-allocated in python? Is bool(var) always return the same True(or False) with the pre-allocated True(or False)? Is it safe to replace == with is to compare Boolean-values? It's not about Best-Practice. I just want to know the Truth.", "output": "It's rare to need to directly compare booleans. If you are doing something like: if some_bool == True: ... ...just change it to: if some_bool: ... No is or == needed. 
As commenters have pointed out, there are valid reasons to compare booleans. If both booleans are unknown and you want to know if one is equal to the other, you should use == or != rather than is or is not (the reason is explained below). Note that this is logically equivalent to xnor and xor respectively, which don't exist as logical operators in Python. Internally, there should only ever be two boolean literal objects (see also the C API), and bool(x) is True should be True if bool(x) == True for any Python program. Two caveats: This does not mean that x is True if x == True, however (eg. x = 1). This is true for the usual implementation of Python (CPython) but might not be true in other implementations. Hence == is a more reliable comparison."} +{"question_id": 33227473, "score": 114, "creation_date": 1445311563, "tags": ["python", "matplotlib", "plot", "seaborn", "boxplot"], "instruction": "How to set the range of y-axis for a seaborn boxplot\n\nFrom the official seaborn documentation, I learned that you can create a boxplot as below: import seaborn as sns sns.set_style(\"whitegrid\") tips = sns.load_dataset(\"tips\") ax = sns.boxplot(x=\"day\", y=\"total_bill\", data=tips) My question is: how do I limit the range of y-axis of this plot? For example, I want the y-axis to be within [10, 40]. Is there any easy way to do this?", "output": "It is standard matplotlib.pyplot: import matplotlib.pyplot as plt plt.ylim(10, 40) Or simpler, as mwaskom comments below: ax.set(ylim=(10, 40))"} +{"question_id": 22127569, "score": 114, "creation_date": 1393763568, "tags": ["python", "pandas", "pivot", "reshape", "melt"], "instruction": "Opposite of melt in python pandas\n\nI cannot figure out how to do \"reverse melt\" using Pandas in python. This is my starting data label type value 0 x a 1 1 x b 2 2 x c 3 3 y a 4 4 y b 5 5 y c 6 6 z a 7 7 z b 8 8 z c 9 This is the output I would like to have: label a b c x 1 2 3 y 4 5 6 z 7 8 9 I'm sure there is an easy way to do this, but I don't know how.", "output": "there are a few ways: using .pivot: >>> origin.pivot(index='label', columns='type')['value'] type a b c label x 1 2 3 y 4 5 6 z 7 8 9 [3 rows x 3 columns] using pivot_table: >>> origin.pivot_table(values='value', index='label', columns='type') value type a b c label x 1 2 3 y 4 5 6 z 7 8 9 [3 rows x 3 columns] or .groupby followed by .unstack: >>> origin.groupby(['label', 'type'])['value'].aggregate('mean').unstack() type a b c label x 1 2 3 y 4 5 6 z 7 8 9 [3 rows x 3 columns]"} +{"question_id": 29772337, "score": 113, "creation_date": 1429619541, "tags": ["python", "mysql"], "instruction": "Python MySQL connector - unread result found when using fetchone\n\nI am inserting JSON data into a MySQL database I am parsing the JSON and then inserting it into a MySQL db using the python connector Through trial, I can see the error is associated with this piece of code for steps in result['routes'][0]['legs'][0]['steps']: query = ('SELECT leg_no FROM leg_data WHERE travel_mode = %s AND Orig_lat = %s AND Orig_lng = %s AND Dest_lat = %s AND Dest_lng = %s AND time_stamp = %s') if steps['travel_mode'] == \"pub_tran\": travel_mode = steps['travel_mode'] Orig_lat = steps['var_1']['dep']['lat'] Orig_lng = steps['var_1']['dep']['lng'] Dest_lat = steps['var_1']['arr']['lat'] Dest_lng = steps['var_1']['arr']['lng'] time_stamp = leg['_sent_time_stamp'] if steps['travel_mode'] ==\"a_pied\": query = ('SELECT leg_no FROM leg_data WHERE travel_mode = %s AND Orig_lat = %s AND Orig_lng = %s AND Dest_lat = %s AND Dest_lng = 
%s AND time_stamp = %s') travel_mode = steps['travel_mode'] Orig_lat = steps['var_2']['lat'] Orig_lng = steps['var_2']['lng'] Dest_lat = steps['var_2']['lat'] Dest_lng = steps['var_2']['lng'] time_stamp = leg['_sent_time_stamp'] cursor.execute(query,(travel_mode, Orig_lat, Orig_lng, Dest_lat, Dest_lng, time_stamp)) leg_no = cursor.fetchone()[0] print(leg_no) I have inserted higher level details and am now searching the database to associate this lower level information with its parent. The only way to find this unique value is to search via the origin and destination coordinates with the time_stamp. I believe the logic is sound and by printing the leg_no immediately after this section, I can see values which appear at first inspection to be correct However, when added to the rest of the code, it causes subsequent sections where more data is inserted using the cursor to fail with this error - raise errors.InternalError(\"Unread result found.\") mysql.connector.errors.InternalError: Unread result found. The issue seems similar to MySQL Unread Result with Python Is the query too complex and needs splitting or is there another issue? If the query is indeed too complex, can anyone advise how best to split this? EDIT As per @Gord's help, Ive tried to dump any unread results cursor.execute(query,(leg_travel_mode, leg_Orig_lat, leg_Orig_lng, leg_Dest_lat, leg_Dest_lng)) leg_no = cursor.fetchone()[0] try: cursor.fetchall() except mysql.connector.errors.InterfaceError as ie: if ie.msg == 'No result set to fetch from.': pass else: raise cursor.execute(query,(leg_travel_mode, leg_Orig_lat, leg_Orig_lng, leg_Dest_lat, leg_Dest_lng, time_stamp)) But, I still get raise errors.InternalError(\"Unread result found.\") mysql.connector.errors.InternalError: Unread result found. [Finished in 3.3s with exit code 1] scratches head EDIT 2 - when I print the ie.msg, I get - No result set to fetch from", "output": "All that was required was for buffered to be set to true! cursor = cnx.cursor(buffered=True) The reason is that without a buffered cursor, the results are \"lazily\" loaded, meaning that \"fetchone\" actually only fetches one row from the full result set of the query. When you will use the same cursor again, it will complain that you still have n-1 results (where n is the result set amount) waiting to be fetched. However, when you use a buffered cursor the connector fetches ALL rows behind the scenes and you just take one from the connector so the mysql db won't complain."} +{"question_id": 5836335, "score": 113, "creation_date": 1304104240, "tags": ["python", "random", "numpy"], "instruction": "Consistently create same random numpy array\n\nI am waiting for another developer to finish a piece of code that will return an np array of shape (100,2000) with values of either -1,0, or 1. In the meantime, I want to randomly create an array of the same characteristics so I can get a head start on my development and testing. The thing is that I want this randomly created array to be the same each time, so that I'm not testing against an array that keeps changing its value each time I re-run my process. I can create my array like this, but is there a way to create it so that it's the same each time. I can pickle the object and unpickle it, but wondering if there's another way. r = np.random.randint(3, size=(100, 2000)) - 1", "output": "Simply seed the random number generator with a fixed value, e.g. numpy.random.seed(42) This way, you'll always get the same random number sequence. 
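For reference, a reproducible sketch matching the shape and values described in the question (assumes NumPy >= 1.17; the Generator API used here also sidesteps the global-state concern discussed next):
import numpy as np
rng = np.random.default_rng(0)  # independent, explicitly seeded generator
r = rng.integers(3, size=(100, 2000)) - 1  # entries in {-1, 0, 1}, identical on every run
print(r.shape)  # (100, 2000)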
This function will seed the global default random number generator, and any call to a function in numpy.random will use and alter its state. This is fine for many simple use cases, but it's a form of global state with all the problems global state brings. For a cleaner solution, see Robert Kern's answer below."} +{"question_id": 63491221, "score": 113, "creation_date": 1597854368, "tags": ["python", "virtualenv"], "instruction": "ModuleNotFoundError: No module named 'virtualenv.seed.embed.via_app_data' when I created new env by virtualenv\n\nI was creating a new virtual environment on Ubuntu 20.04: $ virtualenv my_env But it gave an error: ModuleNotFoundError: No module named 'virtualenv.seed.embed.via_app_data' Other info: $ virtualenv --version virtualenv 20.0.17 from /usr/lib/python3/dist-packages/virtualenv/__init__.py", "output": "Try to create the virtual environment using directly venv module python3 -m venv my_env"} +{"question_id": 54895002, "score": 113, "creation_date": 1551219113, "tags": ["python", "python-3.x", "docker", "pytest"], "instruction": "ModuleNotFoundError with pytest\n\nI want my tests folder separate to my application code. My project structure is like so myproject/ myproject/ myproject.py moduleone.py tests/ myproject_test.py myproject.py from moduleone import ModuleOne class MyProject(object) .... myproject_test.py from myproject.myproject import MyProject import pytest ... I use myproject.myproject since I use the command python -m pytest from the project root directory ./myproject/ However, then the imports within those modules fail with E ModuleNotFoundError: No module named 'moduleone' I am running Python 3.7 and have read that since 3.3, empty __init__ files are no longer needed which means my project becomes an implicit namespace package However, I have tried adding an __init__.py file in myproject/myproject/ and also tried adding a conftest.py file in myproject/ but neither works I have read answers that say to mess with the paths and then upvoted comments in other questions saying not to. What is the correct way and what am I missing? EDIT; Possibly related, I used a requirements.txt to install pytest using pip. Could this be related? And if so, what is the correct way to install pytest in this case? EDIT 2: One of the paths in sys.path is /usr/src/app/ which is a docker volume lined to /my/local/path/myproject/. Should the volume be /my/local/path/myproject/myproject/ instead?", "output": "In 2023.02, according to the document of pytest, you can simply add following config to your pyproject.toml to solve this problem [tool.pytest.ini_options] pythonpath = \"src\" addopts = [ \"--import-mode=importlib\", ]"} +{"question_id": 12381756, "score": 113, "creation_date": 1347428026, "tags": ["python", "django", "model", "django-orm"], "instruction": "How to update() a single model instance retrieved by get() on Django ORM?\n\nI have a function which currently calls Models.object.get(), which returns either 0 or 1 model objects: if it returns 0, I create a new model instance in the except DoesNotExist clause of the function. Otherwise, I would like to update the fields in the pre-existing instance, without creating a new one. I was originally attempting to call .update() on the instance which was found, but .update() seems to be only callable on a QuerySets. 
How do I get around changing a dozen fields, without calling .filter() and comparing the lengths to know if I have to create or update a pre-existing instance?", "output": "With the advent of Django 1.7, there is now a new update_or_create QuerySet method, which should do exactly what you want. Just be careful of potential race conditions if uniqueness is not enforced at the database level. Example from the documentation: obj, created = Person.objects.update_or_create( first_name='John', last_name='Lennon', defaults={'first_name': 'Bob'}, ) The update_or_create method tries to fetch an object from database based on the given kwargs. If a match is found, it updates the fields passed in the defaults dictionary. Pre-Django 1.7: Change the model field values as appropriate, then call .save() to persist the changes: try: obj = Model.objects.get(field=value) obj.field = new_value obj.save() except Model.DoesNotExist: obj = Model.objects.create(field=new_value) # do something else with obj if need be"} +{"question_id": 3505831, "score": 113, "creation_date": 1282070807, "tags": ["python", "string", "numbers", "digits"], "instruction": "in python how do I convert a single digit number into a double digits string?\n\nSo say I have a = 5 I want to print it as a string '05'", "output": "In python 3.6, the fstring or \"formatted string literal\" mechanism was introduced. f\"{a:02}\" is the equivalent of the .format format below, but a little bit more terse. python 3 before 3.6 prefers a somewhat more verbose formatting system: \"{0:0=2d}\".format(a) You can take shortcuts here, the above is probably the most verbose variant. The full documentation is available here: http://docs.python.org/3/library/string.html#string-formatting print \"%02d\"%a is the python 2 variant The relevant doc link for python2 is: http://docs.python.org/2/library/string.html#format-specification-mini-language"} +{"question_id": 3942820, "score": 113, "creation_date": 1287150549, "tags": ["python", "unit-testing", "file"], "instruction": "How to do unit testing of functions writing files using Python's 'unittest'\n\nI have a Python function that writes an output file to disk. I want to write a unit test for it using Python's unittest module. How should I assert equality of files? I would like to get an error if the file content differs from the expected one + list of differences. As in the output of the Unix diff command. Is there an official or recommended way of doing that?", "output": "The simplest thing is to write the output file, then read its contents, read the contents of the gold (expected) file, and compare them with simple string equality. If they are the same, delete the output file. If they are different, raise an assertion. This way, when the tests are done, every failed test will be represented with an output file, and you can use a third-party tool to diff them against the gold files (Beyond Compare is wonderful for this). If you really want to provide your own diff output, remember that the Python stdlib has the difflib module. The new unittest support in Python 3.1 includes an assertMultiLineEqual method that uses it to show diffs, similar to this: def assertMultiLineEqual(self, first, second, msg=None): \"\"\"Assert that two multi-line strings are equal. If they aren't, show a nice diff. 
\"\"\" self.assertTrue(isinstance(first, str), 'First argument is not a string') self.assertTrue(isinstance(second, str), 'Second argument is not a string') if first != second: message = ''.join(difflib.ndiff(first.splitlines(True), second.splitlines(True))) if msg: message += \" : \" + msg self.fail(\"Multi-line strings are unequal:\\n\" + message)"} +{"question_id": 22373927, "score": 113, "creation_date": 1394702170, "tags": ["python", "warnings", "traceback"], "instruction": "Get Traceback of warnings\n\nIn numpy we can do np.seterr(invalid='raise') to get a traceback for warnings raising an error instead (see this post). Is there a general way for tracing warnings? Can I make python to give a traceback, when a warning is raised?", "output": "You can get what you want by assigning to warnings.showwarning. The warnings module documentation itself recommends that you do that, so it's not that you're being tempted by the dark side of the source. :) You may replace this function with an alternative implementation by assigning to warnings.showwarning. You can define a new function that does what warning.showwarning normaly does and additionally it prints the stack. Then you place it instead of the original: import traceback import warnings import sys def warn_with_traceback(message, category, filename, lineno, file=None, line=None): log = file if hasattr(file,'write') else sys.stderr traceback.print_stack(file=log) log.write(warnings.formatwarning(message, category, filename, lineno, line)) warnings.showwarning = warn_with_traceback After this, every warning will print the stack trace as well as the warning message. Take into account, however, that if the warning is ignored because it is not the first one, nothing will happen, so you still need to execute: warnings.simplefilter(\"always\") You can get a similar control that the one numpy.seterr gives through the warning module's filters If what you want is python to report every a warning every time it is triggered and not only the first time, you can include something like: import warnings warnings.simplefilter(\"always\") You can get other behaviours by passing different strings as arguments. Using the same function you can also specify different behaviours for warnings depending on the module that raised them, the message they provide, the warning class, the line of code that is causing it and so on... You can check the list in the module documentation As an example, you can set all the warnings to raise exceptions, except the DeprecationWarnings that should be ignored completely: import warnings warnings.simplefilter(\"error\") warnings.simplefilter(\"ignore\", DeprecationWarning) This way you get the full traceback for each warning raised as error (only the first one, since execution will stop... but you can address them one by one, and create a filter to ignore the ones you don't want to hear about again..."} +{"question_id": 25341945, "score": 113, "creation_date": 1408208584, "tags": ["python", "string", "macos", "date", "datetime"], "instruction": "Check if string has date, any format\n\nHow do I check if a string can be parsed to a date? Jan 19, 1990 January 19, 1990 Jan 19,1990 01/19/1990 01/19/90 1990 Jan 1990 January1990 These are all valid dates. If there's any concern regarding the lack of space in between stuff in item #3 and the last item above, that can be easily remedied via automatically inserting a space in between letters/characters and numbers, if so needed. 
But first, the basics: I tried putting it in an if statement: if datetime.strptime(item, '%Y') or datetime.strptime(item, '%b %d %y') or datetime.strptime(item, '%b %d %Y') or datetime.strptime(item, '%B %d %y') or datetime.strptime(item, '%B %d %Y'): But that's in a try-except block, and keeps returning something like this: 16343 time data 'JUNE1890' does not match format '%Y' Unless, it met the first condition in the if statement. To clarify, I don't actually need the value of the date - I just want to know if it is. Ideally, it would've been something like this: if item is date: print date else: print \"Not a date\" Is there any way to do this?", "output": "The parse function in dateutils.parser is capable of parsing many date string formats to a datetime object. pip install python-dateutil If you simply want to know whether a particular string could represent or contain a valid date, you could try the following simple function: from dateutil.parser import parse def is_date(string, fuzzy=False): \"\"\" Return whether the string can be interpreted as a date. :param string: str, string to check for date :param fuzzy: bool, ignore unknown tokens in string if True \"\"\" try: parse(string, fuzzy=fuzzy) return True except ValueError: return False Then you have: >>> is_date(\"1990-12-1\") True >>> is_date(\"2005/3\") True >>> is_date(\"Jan 19, 1990\") True >>> is_date(\"today is 2019-03-27\") False >>> is_date(\"today is 2019-03-27\", fuzzy=True) True >>> is_date(\"Monday at 12:01am\") True >>> is_date(\"xyz_not_a_date\") False >>> is_date(\"yesterday\") False Custom parsing parse might recognise some strings as dates which you don't want to treat as dates. For example: Parsing \"12\" and \"1999\" will return a datetime object representing the current date with the day and year substituted for the number in the string \"23, 4\" and \"23 4\" will be parsed as datetime.datetime(2023, 4, 16, 0, 0). \"Friday\" will return the date of the nearest Friday in the future. Similarly \"August\" corresponds to the current date with the month changed to August. Also parse is not locale aware, so does not recognise months or days of the week in languages other than English. Both of these issues can be addressed to some extent by using a custom parserinfo class, which defines how month and day names are recognised: from dateutil.parser import parserinfo class CustomParserInfo(parserinfo): # three months in Spanish for illustration MONTHS = [(\"Enero\", \"Enero\"), (\"Feb\", \"Febrero\"), (\"Marzo\", \"Marzo\")] An instance of this class can then be used with parse: >>> parse(\"Enero 1990\") # ValueError: Unknown string format >>> parse(\"Enero 1990\", parserinfo=CustomParserInfo()) datetime.datetime(1990, 1, 27, 0, 0)"} +{"question_id": 29947844, "score": 113, "creation_date": 1430320434, "tags": ["python", "set"], "instruction": "Opposite of set.intersection in python?\n\nIn Python you can use a.intersection(b) to find the items common to both sets. Is there a way to do the disjoint opposite version of this? Items that are not common to both a and b; the unique items in a unioned with the unique items in b?", "output": "You are looking for the symmetric difference; all elements that appear only in set a or in set b, but not both: a.symmetric_difference(b) From the set.symmetric_difference() method documentation: Return a new set with elements in either the set or other but not both. 
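A short added illustration with throwaway values (not part of the quoted documentation): >>> a = {1, 2, 3, 4} >>> b = {3, 4, 5} >>> a.symmetric_difference(b) {1, 2, 5} >>> b.symmetric_difference(a) {1, 2, 5} The operation is symmetric, so the order of the two sets does not matter.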
You can use the ^ operator too, if both a and b are sets: a ^ b while set.symmetric_difference() takes any iterable for the other argument. The output is the equivalent of (a | b) - (a & b), the union of both sets minus the intersection of both sets. Producing the output takes O(M+N) time for sets of length M and N, respectively; M steps to copy set a then N steps to alter that set based on each value in b: def symmetric_difference(a, b): result = set(a) for elem in b: try: result.remove(elem) except KeyError: result.add(elem) return result There are in-place variants too, where set a is altered directly; use a.symmetric_difference_update(b) or a ^= b. The in-place variant takes O(N) time, so it depends on the size of set b only: def symmetric_difference_update(a, b): for elem in b: try: a.remove(elem) except KeyError: a.add(elem) # no return, a has been updated in-place"} +{"question_id": 26394748, "score": 113, "creation_date": 1413422427, "tags": ["python", "python-3.x", "dictionary", "key", "nltk"], "instruction": "NLTK python error: \"TypeError: 'dict_keys' object is not subscriptable\"\n\nI'm following instructions for a class homework assignment and I'm supposed to look up the top 200 most frequently used words in a text file. Here's the last part of the code: fdist1 = FreqDist(NSmyText) vocab=fdist1.keys() vocab[:200] But when I press enter after the vocab 200 line, it returns: Traceback (most recent call last): File \"\", line 1, in TypeError: 'dict_keys' object is not subscriptable Any suggestions on how to fix this so it can correctly return an answer?", "output": "Looks like you are using Python 3. In Python 3 dict.keys() returns an iterable but not indexable object. The most simple (but not so efficient) solution would be: vocab = list(fdist1.keys())[:200] In some situations it is desirable to continue working with an iterator object instead of a list. This can be done with itertools.islice(): import itertools vocab_iterator = itertools.islice(fdist1.keys(), 200)"} +{"question_id": 15306897, "score": 113, "creation_date": 1362800634, "tags": ["python", "django", "django-models", "django-views", "reverse-foreign-key"], "instruction": "Django reverse lookup of foreign keys\n\nI have a venue, this venue has many events happening there. My models look like this: class Venue(models.Model): title = models.CharField(max_length=200) date_published = models.DateTimeField('published date',default=datetime.now, blank=True) venue_latitude = models.CharField(max_length=200) venue_longitude = models.CharField(max_length=200) venue_address = models.CharField(max_length=200) venue_city = models.CharField(max_length=200) venue_state = models.CharField(max_length=200) venue_country = models.CharField(max_length=200) description = models.TextField() def __unicode__(self): return u'%s' % (self.title) class Event(models.Model): title = models.CharField(max_length=200) date_published = models.DateTimeField('published date',default=datetime.now, blank=True) date_start = models.DateTimeField('start date') date_end = models.DateTimeField('end date') def __unicode__(self): return self.title description = models.TextField() price = models.IntegerField(null=True, blank=True) venue = models.ForeignKey(Venue) I'd like to display all the events that are happening at a certain venue. How can I do that? 
My current view looks like: def detail(request, venue_id): venue = get_object_or_404(Venue, pk=venue_id) return render(request, 'venue-detail.html', {'venue': venue})", "output": "You can use events = venue.event_set to go the other way. Note that venue.event_set is a manager object, like Event.objects, so you can call .all, .filter, .exclude and similar on it to get a queryset. See the Django documentation"} +{"question_id": 36699155, "score": 113, "creation_date": 1460995794, "tags": ["python", "matplotlib"], "instruction": "How to get color of most recent plotted line in Python's plt\n\nI plot a line without specifying the color (think: plt.plot(x,y)). Say the color comes out blue. Question: How do I obtain this color from the plt object so that I can put it into a variable? Seems like this is close (and potentially the solution): p = plt.plot(x,y) color = p[0].get_color() Updated question: I am not sure I understand the \"0\" index: Does p[0] always access the most recent plotted line?", "output": "In your example, p is a list of Line2D object. In that example you have only one line object, p[0]. The following is an example plotting three lines. As more line is added, it is appended to the p. So if you want the color of the last plot, it will be p[-1].get_color(). import numpy as np import matplotlib.pyplot as plt x = np.arange(10) y = np.arange(10) p = plt.plot(x,y, x,y*2, x,y*3) # make three line plots type(p) # list type(p[0]) # p[0].get_color() # 'b' p[1].get_color() # 'g' p[2].get_color() # 'r'"} +{"question_id": 5286390, "score": 113, "creation_date": 1299973392, "tags": ["python", "unit-testing", "mocking"], "instruction": "Mocking Functions Using Python Mock\n\nI am trying to Mock a function (that returns some external content) using the python mock module. I'm having some trouble mocking functions that are imported into a module. For example, in util.py I have def get_content(): return \"stuff\" I want to mock util.get_content so that it returns something else. I am trying this: util.get_content=Mock(return_value=\"mocked stuff\") If get_content gets invoked inside another module, it never actually seems to return the mocked object. Am I missing something in terms of how to use Mock? Note that if I invoke the following, things work correctly: >>> util.get_content=Mock(return_value=\"mocked stuff\") >>> util.get_content() \"mocked stuff\" However, if get_content is called from inside another module, it invokes the original function instead of the mocked version: >>> from mymodule import MyObj >>> util.get_content=Mock(return_value=\"mocked stuff\") >>> m=MyObj() >>> m.func() \"stuff\" Contents of mymodule.py from util import get_content class MyObj: def func(): get_content() So I guess my question is - how do I get invoke the Mocked version of a function from inside a module that I call? It appears that the from module import function may be to blame here, in that it doesn't point to the Mocked function.", "output": "I think I have a workaround, though it's still not quite clear on how to solve the general case In mymodule, if I replace from util import get_content class MyObj: def func(): get_content() with import util class MyObj: def func(): util.get_content() The Mock seems to get invoked. It looks like the namespaces need to match (which makes sense). 
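As an added side note (a sketch based on the mock library's 'where to patch' guidance, not part of the original workaround): mock.patch takes the name in the namespace where it is looked up, so patching 'mymodule.get_content' mocks the function as seen by MyObj without touching util at all: >>> from unittest import mock # on Python 2 this was the standalone 'mock' package >>> with mock.patch('mymodule.get_content', return_value='mocked stuff'): ... m = MyObj() ... m.func() ... Inside the with block, every lookup of get_content through mymodule sees the mock; the module, class and function names here are the ones from the question.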
However, the weird thing is that I would expect import mymodule mymodule.get_content = mock.Mock(return_value=\"mocked stuff\") to do the trick in the original case where I am using the from/import syntax (which now pulls in get_content into mymodule). But this still refers to the unmocked get_content. Turns out the namespace matters - just need to keep that in mind when writing your code."} +{"question_id": 17201172, "score": 113, "creation_date": 1371675406, "tags": ["python", "matplotlib"], "instruction": "A logarithmic colorbar in matplotlib scatter plot\n\nI would like to make the colors of the points on the scatter plot correspond to the value of the void fraction, but on a logarithmic scale to amplify differences. I did this, but now when I do plt.colorbar(), it displays the log of the void fraction, when I really want the actual void fraction. How can I make a log scale on the colorbar with the appropriate labels of the void fraction, which belongs to [0.00001,1]? Here is an image of the plot I have now, but the void fraction colorbar is not appropriately labeled to correspond to the true void fraction, instead of the log of it. fig = plt.figure() plt.scatter(x,y,edgecolors='none',s=marker_size,c=np.log(void_fraction)) plt.colorbar() plt.title('Colorbar: void fraction') Thanks for your help.", "output": "There is now a section of the documentation describing how color mapping and normalization works The way that matplotlib does color mapping is in two steps, first a Normalize function (wrapped up by the sub-classes of matplotlib.colors.Normalize) which maps the data you hand in to [0, 1]. The second step maps values in [0,1] -> RGBA space. You just need to use the LogNorm normalization class, passed in with the norm kwarg. plt.scatter(x,y,edgecolors='none',s=marker_size,c=void_fraction, norm=matplotlib.colors.LogNorm()) When you want to scale/tweak data for plotting, it is better to let matplotlib do the transformations than to do it your self. Normalize doc LogNorm doc matplotlib.color doc"} +{"question_id": 13299427, "score": 112, "creation_date": 1352415681, "tags": ["python", "variables", "reference", "pass-by-reference"], "instruction": "Python functions call by reference\n\nIn some languages you can pass a parameter by reference or value by using a special reserved word like ref or val. When you pass a parameter to a Python function it never alters the value of the parameter on leaving the function.The only way to do this is by using the global reserved word (or as i understand it currently). Example 1: k = 2 def foo (n): n = n * n #clarity regarding comment below square = n return square j = foo(k) print j print k would show >>4 >>2 showing k to be unchanged. In this example the variable n is never changed Example 2: n = 0 def foo(): global n n = n * n return n In this example the variable n is changed. Is there any way in Python to call a function and tell Python that the parameter is either a value or reference parameter instead of using global?", "output": "You can not change an immutable object, like str or tuple, inside a function in Python, but you can do things like: def foo(y): y[0] = y[0]**2 x = [5] foo(x) print x[0] # prints 25 That is a weird way to go about it, however, unless you need to always square certain elements in an array. 
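One more added illustration (the helper name rebind below is purely for demonstration): the key distinction is between mutating the object a parameter refers to, as foo does above, and rebinding the parameter name itself, which the caller never sees: >>> def rebind(y): ... y = [99] # rebinds only the local name y ... >>> x = [5] >>> rebind(x) >>> x [5]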
Note that in Python, you can also return more than one value, making some of the use cases for pass by reference less important: def foo(x, y): return x**2, y**2 a = 2 b = 3 a, b = foo(a, b) # a == 4; b == 9 When you return values like that, they are being returned as a Tuple which is in turn unpacked. edit: Another way to think about this is that, while you can't explicitly pass variables by reference in Python, you can modify the properties of objects that were passed in. In my example (and others) you can modify members of the list that was passed in. You would not, however, be able to reassign the passed in variable entirely. For instance, see the following two pieces of code look like they might do something similar, but end up with different results: def clear_a(x): x = [] def clear_b(x): while x: x.pop() z = [1,2,3] clear_a(z) # z will not be changed clear_b(z) # z will be emptied"} +{"question_id": 33513522, "score": 112, "creation_date": 1446610211, "tags": ["python", "macos", "pyaudio"], "instruction": "when installing pyaudio, pip cannot find portaudio.h in /usr/local/include\n\nI'm using mac osx 10.10 As the PyAudio Homepage said, I install the PyAudio using brew install portaudio pip install pyaudio the installation of portaudio seems successful, I can find headers and libs in /usr/local/include and /usr/local/lib but when I try to install pyaudio, it gives me an error that src/_portaudiomodule.c:29:10: fatal error: 'portaudio.h' file not found #include \"portaudio.h\" ^ 1 error generated. error: command 'cc' failed with exit status 1 actually it is in /usr/local/include why can't it find the file? some answers to similar questions are not working for me(like using virtualenv, or compile it manually), and I want to find a simple way to solve this.", "output": "Since pyAudio has portAudio as a dependency, you first have to install portaudio. #for Mac brew install portaudio Then try: pip install pyAudio. If the problem persists after installing portAudio, you can specify the directory path where the compiler will be able to find the source programs (e.g: portaudio.h). Since the headers should be in the /usr/local/include directory: pip install --global-option='build_ext' --global-option='-I/usr/local/include' --global-option='-L/usr/local/lib' pyaudio"} +{"question_id": 7831371, "score": 112, "creation_date": 1319086935, "tags": ["python", "database", "sqlite"], "instruction": "Is there a way to get a list of column names in sqlite?\n\nI want to get a list of column names from a table in a database. Using pragma I get a list of tuples with a lot of unneeded information. Is there a way to get only the column names? So I might end up with something like this: [Column1, Column2, Column3, Column4] The reason why I absolutely need this list is because I want to search for a column name in the list and get the index because the index is used in a lot of my code. Is there a way of getting a list like this? 
Thanks", "output": "You can use sqlite3 and pep-249 import sqlite3 connection = sqlite3.connect('~/foo.sqlite') cursor = connection.execute('select * from bar') cursor.description is a sequence of 7-item sequences whose first element is the column name: names = list(map(lambda x: x[0], cursor.description)) Alternatively you could use a list comprehension: names = [description[0] for description in cursor.description]"} +{"question_id": 1119722, "score": 112, "creation_date": 1247494781, "tags": ["python", "base62"], "instruction": "Base 62 conversion\n\nHow would you convert an integer to base 62 (like hexadecimal, but with these digits: 0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ)? I have been trying to find a good Python library for it, but they all seems to be occupied with converting strings. The Python base64 module only accepts strings and turns a single digit into four characters. I was looking for something akin to what URL shorteners use.", "output": "There is no standard module for this, but I have written my own functions to achieve that. BASE62 = \"0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ\" def encode(num, alphabet): \"\"\"Encode a positive number into Base X and return the string. Arguments: - `num`: The number to encode - `alphabet`: The alphabet to use for encoding \"\"\" if num == 0: return alphabet[0] arr = [] arr_append = arr.append # Extract bound-method for faster access. _divmod = divmod # Access to locals is faster. base = len(alphabet) while num: num, rem = _divmod(num, base) arr_append(alphabet[rem]) arr.reverse() return ''.join(arr) def decode(string, alphabet=BASE62): \"\"\"Decode a Base X encoded string into the number Arguments: - `string`: The encoded string - `alphabet`: The alphabet to use for decoding \"\"\" base = len(alphabet) strlen = len(string) num = 0 idx = 0 for char in string: power = (strlen - (idx + 1)) num += alphabet.index(char) * (base ** power) idx += 1 return num Notice the fact that you can give it any alphabet to use for encoding and decoding. If you leave the alphabet argument out, you are going to get the 62 character alphabet defined on the first line of code, and hence encoding/decoding to/from 62 base. PS - For URL shorteners, I have found that it's better to leave out a few confusing characters like 0Ol1oI etc. Thus I use this alphabet for my URL shortening needs - \"23456789abcdefghijkmnpqrstuvwxyzABCDEFGHJKLMNPQRSTUVWXYZ\""} +{"question_id": 31569384, "score": 112, "creation_date": 1437583964, "tags": ["python", "pandas"], "instruction": "Set value for particular cell in pandas DataFrame with iloc\n\nI have a question similar to this and this. The difference is that I have to select row by position, as I do not know the index. I want to do something like df.iloc[0, 'COL_NAME'] = x, but iloc does not allow this kind of access. If I do df.iloc[0]['COL_NAME'] = x the warning about chained indexing appears.", "output": "For mixed position and index, use .ix. BUT you need to make sure that your index is not of integer, otherwise it will cause confusions. 
df.ix[0, 'COL_NAME'] = x Update: Alternatively, try df.iloc[0, df.columns.get_loc('COL_NAME')] = x Example: import pandas as pd import numpy as np # your data # ======================== np.random.seed(0) df = pd.DataFrame(np.random.randn(10, 2), columns=['col1', 'col2'], index=np.random.randint(1,100,10)).sort_index() print(df) col1 col2 10 1.7641 0.4002 24 0.1440 1.4543 29 0.3131 -0.8541 32 0.9501 -0.1514 33 1.8676 -0.9773 36 0.7610 0.1217 56 1.4941 -0.2052 58 0.9787 2.2409 75 -0.1032 0.4106 76 0.4439 0.3337 # .iloc with get_loc # =================================== df.iloc[0, df.columns.get_loc('col2')] = 100 df col1 col2 10 1.7641 100.0000 24 0.1440 1.4543 29 0.3131 -0.8541 32 0.9501 -0.1514 33 1.8676 -0.9773 36 0.7610 0.1217 56 1.4941 -0.2052 58 0.9787 2.2409 75 -0.1032 0.4106 76 0.4439 0.3337"} +{"question_id": 44723183, "score": 112, "creation_date": 1498226083, "tags": ["python", "pandas", "dataframe"], "instruction": "Set value on an entire column of a pandas dataframe\n\nI'm trying to set the entire column of a dataframe to a specific value. In [1]: df Out [1]: issueid industry 0 001 xxx 1 002 xxx 2 003 xxx 3 004 xxx 4 005 xxx From what I've seen, loc is the best practice when replacing values in a dataframe (or isn't it?): In [2]: df.loc[:,'industry'] = 'yyy' However, I still received this much talked-about warning message: A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_index,col_indexer] = value instead If I do In [3]: df['industry'] = 'yyy' I got the same warning message. Any ideas? Working with Python 3.5.2 and pandas 0.18.1. EDIT Jan 2023: Given the volume of visits on this question, it's worth stating that my original question was really more about dataframe copy-versus-slice than \"setting value to an entire column\". On copy-versus-slice: My current understanding is that, in general, if you want to modify a subset of a dataframe after slicing, you should create the subset by .copy(). If you only want a view of the slice, no copy() needed. On setting value to an entire column: simply do df[col_name] = col_value", "output": "Python can do unexpected things when new objects are defined from existing ones. You stated in a comment above that your dataframe is defined along the lines of df = df_all.loc[df_all['issueid']==specific_id,:]. In this case, df is really just a stand-in for the rows stored in the df_all object: a new object is NOT created in memory. To avoid these issues altogether, I often have to remind myself to use the copy module, which explicitly forces objects to be copied in memory so that methods called on the new objects are not applied to the source object. I had the same problem as you, and avoided it using the deepcopy function. In your case, this should get rid of the warning message: from copy import deepcopy df = deepcopy(df_all.loc[df_all['issueid']==specific_id,:]) df['industry'] = 'yyy' EDIT: Also see David M.'s excellent comment below! df = df_all.loc[df_all['issueid']==specific_id,:].copy() df['industry'] = 'yyy'"} +{"question_id": 452969, "score": 112, "creation_date": 1232179848, "tags": ["java", "python", "class", "instantiation"], "instruction": "Does python have an equivalent to Java Class.forName()?\n\nI have the need to take a string argument and create an object of the class named in that string in Python. In Java, I would use Class.forName().newInstance(). Is there an equivalent in Python? Thanks for the responses. 
To answer those who want to know what I'm doing: I want to use a command line argument as the class name, and instantiate it. I'm actually programming in Jython and instantiating Java classes, hence the Java-ness of the question. getattr() works great. Thanks much.", "output": "Reflection in python is a lot easier and far more flexible than it is in Java. I recommend reading this tutorial (on archive.org) There's no direct function (that I know of) which takes a fully qualified class name and returns the class, however you have all the pieces needed to build that, and you can connect them together. One bit of advice though: don't try to program in Java style when you're in python. If you can explain what it is that you're trying to do, maybe we can help you find a more pythonic way of doing it. Here's a function that does what you want: def get_class( kls ): parts = kls.split('.') module = \".\".join(parts[:-1]) m = __import__( module ) for comp in parts[1:]: m = getattr(m, comp) return m You can use the return value of this function as if it were the class itself. Here's a usage example: >>> D = get_class(\"datetime.datetime\") >>> D >>> D.now() datetime.datetime(2009, 1, 17, 2, 15, 58, 883000) >>> a = D( 2010, 4, 22 ) >>> a datetime.datetime(2010, 4, 22, 0, 0) >>> How does that work? We're using __import__ to import the module that holds the class, which required that we first extract the module name from the fully qualified name. Then we import the module: m = __import__( module ) In this case, m will only refer to the top level module. For example, if your class lives in foo.baz module, then m will be the module foo. We can easily obtain a reference to foo.baz using getattr( m, 'baz' ) To get from the top level module to the class, we have to recursively use getattr on the parts of the class name. Say for example, if your class name is foo.baz.bar.Model then we do this: m = __import__( \"foo.baz.bar\" ) #m is package foo m = getattr( m, \"baz\" ) #m is package baz m = getattr( m, \"bar\" ) #m is module bar m = getattr( m, \"Model\" ) #m is class Model This is what's happening in this loop: for comp in parts[1:]: m = getattr(m, comp) At the end of the loop, m will be a reference to the class. This means that m is actually the class itself, you can do for instance: a = m() #instantiate a new instance of the class b = m( arg1, arg2 ) # pass arguments to the constructor"} +{"question_id": 9884213, "score": 112, "creation_date": 1332828735, "tags": ["python", "loops"], "instruction": "Looping from 1 to infinity in Python\n\nIn C, I would do this: int i; for (i = 0;; i++) if (thereIsAReasonToBreak(i)) break; How can I achieve something similar in Python?", "output": "Using itertools.count: import itertools for i in itertools.count(start=1): if there_is_a_reason_to_break(i): break In Python 2, range() and xrange() were limited to sys.maxsize. In Python 3 range() can go much higher, though not to infinity: import sys for i in range(sys.maxsize**10): # you could go even higher if you really want if there_is_a_reason_to_break(i): break So it's probably best to use count()."} +{"question_id": 30914462, "score": 112, "creation_date": 1434627916, "tags": ["python", "matplotlib", "xticks", "yticks"], "instruction": "How to force integer tick labels\n\nMy python script uses matplotlib to plot a 2D \"heat map\" of an x, y, z dataset. My x- and y-values represent amino acid residues in a protein and can therefore only be integers. 
When I zoom into the plot, it looks like this: As I said, float values on the x-y axes do not make sense with my data and I therefore want it to look like this: Any ideas how to achieve this? This is the code that generates the plot: def plotDistanceMap(self): # Read on x,y,z x = self.currentGraph['xData'] y = self.currentGraph['yData'] X, Y = numpy.meshgrid(x, y) Z = self.currentGraph['zData'] # Define colormap cmap = colors.ListedColormap(['blue', 'green', 'orange', 'red']) cmap.set_under('white') cmap.set_over('white') bounds = [1,15,50,80,100] norm = colors.BoundaryNorm(bounds, cmap.N) # Draw surface plot img = self.axes.pcolor(X, Y, Z, cmap=cmap, norm=norm) self.axes.set_xlim(x.min(), x.max()) self.axes.set_ylim(y.min(), y.max()) self.axes.set_xlabel(self.currentGraph['xTitle']) self.axes.set_ylabel(self.currentGraph['yTitle']) # Cosmetics #matplotlib.rcParams.update({'font.size': 12}) xminorLocator = MultipleLocator(10) yminorLocator = MultipleLocator(10) self.axes.xaxis.set_minor_locator(xminorLocator) self.axes.yaxis.set_minor_locator(yminorLocator) self.axes.tick_params(direction='out', length=6, width=1) self.axes.tick_params(which='minor', direction='out', length=3, width=1) self.axes.xaxis.labelpad = 15 self.axes.yaxis.labelpad = 15 # Draw colorbar colorbar = self.figure.colorbar(img, boundaries = [0,1,15,50,80,100], spacing = 'proportional', ticks = [15,50,80,100], extend = 'both') colorbar.ax.set_xlabel('Angstrom') colorbar.ax.xaxis.set_label_position('top') colorbar.ax.xaxis.labelpad = 20 self.figure.tight_layout() self.canvas.draw()", "output": "This should be simpler: (from https://scivision.co/matplotlib-force-integer-labeling-of-axis/) import matplotlib.pyplot as plt from matplotlib.ticker import MaxNLocator #... ax = plt.figure().gca() #... ax.xaxis.set_major_locator(MaxNLocator(integer=True)) Read the official docs: https://matplotlib.org/stable/api/ticker_api.html#matplotlib.ticker.MaxNLocator"} +{"question_id": 25668092, "score": 112, "creation_date": 1409840892, "tags": ["python", "flask", "sqlalchemy", "flask-sqlalchemy"], "instruction": "Flask sqlalchemy many-to-many insert data\n\nI am trying to make a many to many relation here in Flask-SQLAlchemy, but it seems that I don't know how to fill the \"many to many identifier database\". Could you please help me understand what I am doing wrong and how it is supposed to look? class User(db.Model): __tablename__ = 'users' user_id = db.Column(db.Integer, primary_key=True) user_fistName = db.Column(db.String(64)) user_lastName = db.Column(db.String(64)) user_email = db.Column(db.String(128), unique=True) class Class(db.Model): __tablename__ = 'classes' class_id = db.Column(db.Integer, primary_key=True) class_name = db.Column(db.String(128), unique=True) and then my identifier database: student_identifier = db.Table('student_identifier', db.Column('class_id', db.Integer, db.ForeignKey('classes.class_id')), db.Column('user_id', db.Integer, db.ForeignKey('users.user_id')) ) so far it looks like this when I try to insert the data into the database. 
# User user1 = User( user_fistName='John', user_lastName='Doe', user_email='john@doe.es') user2 = User( user_fistName='Jack', user_lastName='Doe', user_email='jack@doe.es') user3 = User( user_fistName='Jane', user_lastName='Doe', user_email='jane@doe.es') db.session.add_all([user1, user2, user3]) db.session.commit() # Class cl1 = Class(class_name='0A') cl2 = Class(class_name='0B') cl3 = Class(class_name='0C') cl4 = Class(class_name='Math') cl5 = Class(class_name='Spanish') db.session.add_all([cl1, cl2, cl3, cl4, cl5]) db.session.commit() Now my problem is, how do I add to the many to many database, since I really can't create a 'student_identifier' object? If I could it could perhaps have looked like this: # Student Identifier sti1 = StiClass(class_id=cl1.class_id, class_name=user1.user_id) sti2 = StiClass(class_id=cl3.class_id, class_name=user1.user_id) sti3 = StiClass(class_id=cl4.class_id, class_name=user1.user_id) sti4 = StiClass(class_id=cl2.class_id, class_name=user2.user_id) db.session.add_all([sti1, sti2, sti3, sti4]) db.session.commit() How I am supposed to insert into a many to many table with ORM?", "output": "You don't need to add anything directly to your association table, SQLAlchemy will do that. This is more or less from SQLAlchemy documentations: association_table = db.Table('association', db.Model.metadata, db.Column('left_id', db.Integer, db.ForeignKey('left.id')), db.Column('right_id', db.Integer, db.ForeignKey('right.id')) ) class Parent(db.Model): __tablename__ = 'left' id = db.Column(db.Integer, primary_key=True) children = db.relationship(\"Child\", secondary=association_table) class Child(db.Model): __tablename__ = 'right' id = db.Column(db.Integer, primary_key=True) p = Parent() c = Child() p.children.append(c) db.session.add(p) db.session.commit() Therefore your sample would be like this: student_identifier = db.Table('student_identifier', db.Column('class_id', db.Integer, db.ForeignKey('classes.class_id')), db.Column('user_id', db.Integer, db.ForeignKey('students.user_id')) ) class Student(db.Model): __tablename__ = 'students' user_id = db.Column(db.Integer, primary_key=True) user_fistName = db.Column(db.String(64)) user_lastName = db.Column(db.String(64)) user_email = db.Column(db.String(128), unique=True) class Class(db.Model): __tablename__ = 'classes' class_id = db.Column(db.Integer, primary_key=True) class_name = db.Column(db.String(128), unique=True) students = db.relationship(\"Student\", secondary=student_identifier) s = Student() c = Class() c.students.append(s) db.session.add(c) db.session.commit()"} +{"question_id": 12093594, "score": 112, "creation_date": 1345730992, "tags": ["python", "scipy", "signal-processing", "digital-filter"], "instruction": "How to implement band-pass Butterworth filter with Scipy.signal.butter\n\nUPDATE: I found a Scipy Recipe based in this question! So, for anyone interested, go straight to: Contents \u00bb Signal processing \u00bb Butterworth Bandpass I'm having a hard time to achieve what seemed initially a simple task of implementing a Butterworth band-pass filter for 1-D numpy array (time-series). The parameters I have to include are the sample_rate, cutoff frequencies IN HERTZ and possibly order (other parameters, like attenuation, natural frequency, etc. are more obscure to me, so any \"default\" value would do). 
What I have now is this, which seems to work as a high-pass filter but I'm no way sure if I'm doing it right: def butter_highpass(interval, sampling_rate, cutoff, order=5): nyq = sampling_rate * 0.5 stopfreq = float(cutoff) cornerfreq = 0.4 * stopfreq # (?) ws = cornerfreq/nyq wp = stopfreq/nyq # for bandpass: # wp = [0.2, 0.5], ws = [0.1, 0.6] N, wn = scipy.signal.buttord(wp, ws, 3, 16) # (?) # for hardcoded order: # N = order b, a = scipy.signal.butter(N, wn, btype='high') # should 'high' be here for bandpass? sf = scipy.signal.lfilter(b, a, interval) return sf The docs and examples are confusing and obscure, but I'd like to implement the form presented in the commend marked as \"for bandpass\". The question marks in the comments show where I just copy-pasted some example without understanding what is happening. I am no electrical engineering or scientist, just a medical equipment designer needing to perform some rather straightforward bandpass filtering on EMG signals.", "output": "You could skip the use of buttord, and instead just pick an order for the filter and see if it meets your filtering criterion. To generate the filter coefficients for a bandpass filter, give butter() the filter order, the cutoff frequencies Wn=[lowcut, highcut], the sampling rate fs (expressed in the same units as the cutoff frequencies) and the band type btype=\"band\". Here's a script that defines a couple convenience functions for working with a Butterworth bandpass filter. When run as a script, it makes two plots. One shows the frequency response at several filter orders for the same sampling rate and cutoff frequencies. The other plot demonstrates the effect of the filter (with order=6) on a sample time series. from scipy.signal import butter, lfilter def butter_bandpass(lowcut, highcut, fs, order=5): return butter(order, [lowcut, highcut], fs=fs, btype='band') def butter_bandpass_filter(data, lowcut, highcut, fs, order=5): b, a = butter_bandpass(lowcut, highcut, fs, order=order) y = lfilter(b, a, data) return y if __name__ == \"__main__\": import numpy as np import matplotlib.pyplot as plt from scipy.signal import freqz # Sample rate and desired cutoff frequencies (in Hz). fs = 5000.0 lowcut = 500.0 highcut = 1250.0 # Plot the frequency response for a few different orders. plt.figure(1) plt.clf() for order in [3, 6, 9]: b, a = butter_bandpass(lowcut, highcut, fs, order=order) w, h = freqz(b, a, fs=fs, worN=2000) plt.plot(w, abs(h), label=\"order = %d\" % order) plt.plot([0, 0.5 * fs], [np.sqrt(0.5), np.sqrt(0.5)], '--', label='sqrt(0.5)') plt.xlabel('Frequency (Hz)') plt.ylabel('Gain') plt.grid(True) plt.legend(loc='best') # Filter a noisy signal. 
T = 0.05 nsamples = T * fs t = np.arange(0, nsamples) / fs a = 0.02 f0 = 600.0 x = 0.1 * np.sin(2 * np.pi * 1.2 * np.sqrt(t)) x += 0.01 * np.cos(2 * np.pi * 312 * t + 0.1) x += a * np.cos(2 * np.pi * f0 * t + .11) x += 0.03 * np.cos(2 * np.pi * 2000 * t) plt.figure(2) plt.clf() plt.plot(t, x, label='Noisy signal') y = butter_bandpass_filter(x, lowcut, highcut, fs, order=6) plt.plot(t, y, label='Filtered signal (%g Hz)' % f0) plt.xlabel('time (seconds)') plt.hlines([-a, a], 0, T, linestyles='--') plt.grid(True) plt.axis('tight') plt.legend(loc='upper left') plt.show() Here are the plots that are generated by this script:"} +{"question_id": 4270301, "score": 112, "creation_date": 1290623746, "tags": ["python", "matplotlib", "scatter-plot"], "instruction": "Multiple datasets on the same scatter plot\n\nI want to plot multiple data sets on the same scatter plot: cases = scatter(x[:4], y[:4], s=10, c='b', marker=\"s\") controls = scatter(x[4:], y[4:], s=10, c='r', marker=\"o\") show() The above only shows the most recent scatter() I've also tried: plt = subplot(111) plt.scatter(x[:4], y[:4], s=10, c='b', marker=\"s\") plt.scatter(x[4:], y[4:], s=10, c='r', marker=\"o\") show()", "output": "You need a reference to an Axes object to keep drawing on the same subplot. import matplotlib.pyplot as plt x = range(100) y = range(100,200) fig = plt.figure() ax1 = fig.add_subplot(111) ax1.scatter(x[:4], y[:4], s=10, c='b', marker=\"s\", label='first') ax1.scatter(x[40:],y[40:], s=10, c='r', marker=\"o\", label='second') plt.legend(loc='upper left') plt.show()"} +{"question_id": 51079503, "score": 112, "creation_date": 1530178651, "tags": ["python", "python-3.7", "python-dataclasses"], "instruction": "Dataclasses and property decorator\n\nI've been reading up on Python 3.7's dataclass as an alternative to namedtuples (what I typically use when having to group data in a structure). I was wondering if dataclass is compatible with the property decorator to define getter and setter functions for the data elements of the dataclass. If so, is this described somewhere? Or are there examples available?", "output": "It sure does work: from dataclasses import dataclass @dataclass class Test: _name: str=\"schbell\" @property def name(self) -> str: return self._name @name.setter def name(self, v: str) -> None: self._name = v t = Test() print(t.name) # schbell t.name = \"flirp\" print(t.name) # flirp print(t) # Test(_name='flirp') In fact, why should it not? In the end, what you get is just a good old class, derived from type: print(type(t)) # print(type(Test)) # Maybe that's why properties are nowhere mentioned specifically. However, the PEP-557's Abstract mentions the general usability of well-known Python class features: Because Data Classes use normal class definition syntax, you are free to use inheritance, metaclasses, docstrings, user-defined methods, class factories, and other Python class features."} +{"question_id": 6260089, "score": 112, "creation_date": 1307413766, "tags": ["python", "list", "loops", "for-loop"], "instruction": "Strange result when removing item from a list while iterating over it in Python\n\nI've got this code: numbers = list(range(1, 50)) for i in numbers: if i < 20: numbers.remove(i) print(numbers) This code should go through all the elements of the list and remove those elements that are less than 20. 
But, the result I'm getting is: [2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49] As you can see, the code removed all even numbers less than 20, but left the odd ones. Looks like I'm doing something wrong with the remove.", "output": "You're modifying the list while you iterate over it. That means that the first time through the loop, i == 1, so 1 is removed from the list. Then the for loop goes to the second item in the list, which is not 2, but 3! Then that's removed from the list, and then the for loop goes on to the third item in the list, which is now 5. And so on. Perhaps it's easier to visualize like so, with a ^ pointing to the value of i: [1, 2, 3, 4, 5, 6...] ^ That's the state of the list initially; then 1 is removed and the loop goes to the second item in the list: [2, 3, 4, 5, 6...] ^ [2, 4, 5, 6...] ^ And so on. There's no good way to alter a list's length while iterating over it. The best you can do is something like this: numbers = [n for n in numbers if n >= 20] or this, for in-place alteration (the thing in parens is a generator expression, which is implicitly converted into a tuple before slice-assignment): numbers[:] = (n for n in numbers if n >= 20) If you want to perform an operation on n before removing it, one trick you could try is this: for i, n in enumerate(numbers): if n < 20: print(\"do something\") numbers[i] = None numbers = [n for n in numbers if n is not None]"} +{"question_id": 1072569, "score": 112, "creation_date": 1246510403, "tags": ["python", "file"], "instruction": "See if two files have the same content in python\n\nPossible Duplicates: Finding duplicate files and removing them. In Python, is there a concise way of comparing whether the contents of two text files are the same? What is the easiest way to see if two files are the same content-wise in Python. One thing I can do is md5 each file and compare. Is there a better way?", "output": "Yes, I think hashing the file would be the best way if you have to compare several files and store hashes for later comparison. As hash can clash, a byte-by-byte comparison may be done depending on the use case. Generally byte-by-byte comparison would be sufficient and efficient, which filecmp module already does + other things too. See http://docs.python.org/library/filecmp.html e.g. >>> import filecmp >>> filecmp.cmp('file1.txt', 'file1.txt') True >>> filecmp.cmp('file1.txt', 'file2.txt') False Note that by default, filecmp does not compare the contents of the files, to do so, add a third parameter shallow=False. Speed consideration: Usually if only two files have to be compared, hashing them and comparing them would be slower instead of simple byte-by-byte comparison if done efficiently. e.g. code below tries to time hash vs byte-by-byte Disclaimer: this is not the best way of timing or comparing two algo. and there is need for improvements but it does give rough idea. If you think it should be improved do tell me I will change it. 
import random import string import hashlib import time def getRandText(N): return \"\".join([random.choice(string.printable) for i in xrange(N)]) N=1000000 randText1 = getRandText(N) randText2 = getRandText(N) def cmpHash(text1, text2): hash1 = hashlib.md5() hash1.update(text1) hash1 = hash1.hexdigest() hash2 = hashlib.md5() hash2.update(text2) hash2 = hash2.hexdigest() return hash1 == hash2 def cmpByteByByte(text1, text2): return text1 == text2 for cmpFunc in (cmpHash, cmpByteByByte): st = time.time() for i in range(10): cmpFunc(randText1, randText2) print cmpFunc.func_name,time.time()-st and the output is cmpHash 0.234999895096 cmpByteByByte 0.0"} +{"question_id": 17106288, "score": 112, "creation_date": 1371205285, "tags": ["python", "matplotlib"], "instruction": "How to forget previous plots - how can I flush/refresh?\n\nHow do you get matplotlib.pyplot to \"forget\" previous plots I am trying to plot multiple time using matplotlib.pyplot The code looks like this: def plottest(): import numpy as np import matplotlib.pyplot as plt a=np.random.rand(10,) b=np.random.rand(10,) c=np.random.rand(10,) plt.plot(a,label='a') plt.plot(b,label='b') plt.plot(c,label='c') plt.legend(loc='upper left') plt.ylabel('mag') plt.xlabel('element)') plt.show() e=np.random.rand(10,) f=np.random.rand(10,) g=np.random.rand(10,) plt.plot(e,label='e') plt.plot(f,label='f') plt.plot(g,label='g') plt.legend(loc='upper left') plt.ylabel('mag') plt.xlabel('element)') plt.show() Unfortunately I keep getting the same plot (actually from some other code which I ran and completed a while ago) no matter what I do. Similar code has worked previously for me. I have looked at these questions: How to \"clean the slate\"? Matplotlib pyplot show() doesn't work once closed (python) matplotlib pyplot show() .. blocking or not? and tried using plt.show(), plt.clf() and plt.close to no avail. Any ideas?", "output": "I would rather use plt.clf() after every plt.show() to just clear the current figure instead of closing and reopening it, keeping the window size and giving you a better performance and much better memory usage. Similarly, you could do plt.cla() to just clear the current axes. To clear a specific axes, useful when you have multiple axes within one figure, you could do for example: fig, axes = plt.subplots(nrows=2, ncols=2) axes[0, 1].clear()"} +{"question_id": 8222855, "score": 112, "creation_date": 1321943342, "tags": ["python", "string"], "instruction": "Check if string is upper, lower, or mixed case in Python\n\nI want to classify a list of string in Python depending on whether they are upper case, lower case, or mixed case How can I do this?", "output": "There are a number of \"is methods\" on strings. 
islower() and isupper() should meet your needs: >>> 'hello'.islower() True >>> [m for m in dir(str) if m.startswith('is')] ['isalnum', 'isalpha', 'isdigit', 'islower', 'isspace', 'istitle', 'isupper'] Here's an example of how to use those methods to classify a list of strings: >>> words = ['The', 'quick', 'BROWN', 'Fox', 'jumped', 'OVER', 'the', 'Lazy', 'DOG'] >>> [word for word in words if word.islower()] ['quick', 'jumped', 'the'] >>> [word for word in words if word.isupper()] ['BROWN', 'OVER', 'DOG'] >>> [word for word in words if not word.islower() and not word.isupper()] ['The', 'Fox', 'Lazy']"} +{"question_id": 7113032, "score": 112, "creation_date": 1313695860, "tags": ["python", "function", "arguments", "overloading"], "instruction": "Overloaded functions in Python\n\nIs it possible to have overloaded functions in Python? In C# I would do something like: void myfunction (int first, string second) { // Some code } void myfunction (int first, string second, float third) { // Some different code } And then when I call the function it would differentiate between the two based on the number of arguments. Is it possible to do something similar in Python?", "output": "EDIT For the new single dispatch generic functions in Python 3.4, see http://www.python.org/dev/peps/pep-0443/ You generally don't need to overload functions in Python. Python is dynamically typed, and supports optional arguments to functions. def myfunction(first, second, third = None): if third is None: #just use first and second else: #use all three myfunction(1, 2) # third will be None, so enter the 'if' clause myfunction(3, 4, 5) # third isn't None, it's 5, so enter the 'else' clause"} +{"question_id": 24347450, "score": 112, "creation_date": 1403401704, "tags": ["python", "pip", "python-wheel"], "instruction": "How do you add additional files to a wheel?\n\nHow do control what files are included in a wheel? It appears MANIFEST.in isn't used by python setup.py bdist_wheel. UPDATE: I was wrong about the difference between installing from a source tarball vs a wheel. The source distribution includes files specified in MANIFEST.in, but the installed package only has python files. Steps are needed to identify additional files that should be installed, whether the install is via source distribution, egg, or wheel. Namely, package_data is needed for additional package files, and data_files for files outside your package like command line scripts or system config files. Original Question I have a project where I've been using python setup.py sdist to build my package, MANIFEST.in to control the files included and excluded, and pyroma and check-manifest to confirm my settings. I recently converted it to dual Python 2 / 3 code, and added a setup.cfg with [bdist_wheel] universal = 1 I can build a wheel with python setup.py bdist_wheel, and it appears to be a universal wheel as desired. However, it doesn't include all of the files specified in MANIFEST.in. What gets installed? I dug deeper, and now know more about packaging and wheel. Here's what I learned: I upload two package files to the multigtfs project on PyPi: multigtfs-0.4.2.tar.gz - the source tar ball, which includes all the files in MANIFEST.in. multigtfs-0.4.2-py2.py3-none-any.whl - The binary distribution in question. I created two new virtual environments, both with Python 2.7.5, and installed each package (pip install multigtfs-0.4.2.tar.gz). The two environments are almost identical. They have different .pyc files, which are the \"compiled\" Python files. 
There are log files which record the different paths on disk. The install from the source tar ball includes a folder multigtfs-0.4.2-py27.egg-info, detailing the installation, and the wheel install has a multigtfs-0.4.2.dist-info folder, with the details of that process. However, from the point of view of code using the multigtfs project, there is no difference between the two installation methods. Explicitly, neither has the .zip files used by my test, so the test suite will fail: $ django-admin startproject demo $ cd demo $ pip install psycopg2 # DB driver for PostGIS project $ createdb demo # Create PostgreSQL database $ psql -d demo -c \"CREATE EXTENSION postgis\" # Make it a PostGIS database $ vi demo/settings.py # Add multigtfs to INSTALLED_APPS, # Update DATABASE to set ENGINE to django.contrib.gis.db.backends.postgis # Update DATABASE to set NAME to test $ ./manage.py test multigtfs.tests # Run the tests ... IOError: [Errno 2] No such file or directory: u'/Users/john/.virtualenvs/test/lib/python2.7/site-packages/multigtfs/tests/fixtures/test3.zip' Specifying additional files Using the suggestions from the answers, I added some additional directives to setup.py: from __future__ import unicode_literals # setup.py now requires some funky binary strings ... setup( name='multigtfs', packages=find_packages(), package_data={b'multigtfs': ['test/fixtures/*.zip']}, include_package_data=True, ... ) This installs the zip files (as well as the README) to the folder, and tests now run correctly. Thanks for the suggestions!", "output": "Have you tried using package_data in your setup.py? MANIFEST.in seems targetted for python versions <= 2.6, I'm not sure if higher versions even look at it. After exploring https://github.com/pypa/sampleproject, their MANIFEST.in says: # If using Python 2.6 or less, then have to include package data, even though # it's already declared in setup.py include sample/*.dat which seems to imply this method is outdated. Meanwhile, in setup.py they declare: setup( name='sample', ... # If there are data files included in your packages that need to be # installed, specify them here. If using Python 2.6 or less, then these # have to be included in MANIFEST.in as well. include_package_data=True, package_data={ 'sample': ['package_data.dat'], }, ... ) (I'm not sure why they chose a wildcard in MANIFEST.in and a filename in setup.py. They refer to the same file) Which, along with being simpler, again seems to imply that the package_data route is superior to the MANIFEST.in method. Well, unless you have to support 2.6 that is, in which case my prayers go out to you."} +{"question_id": 28954093, "score": 112, "creation_date": 1425946719, "tags": ["python", "datetime", "time", "pandas"], "instruction": "How to add/subtract time (hours, minutes, etc.) from a Pandas DataFrame.Index whos objects are of type datetime.time?\n\nI have a DataFrame whose index values are of type datetime.time. There is no method in DataFrame.Index to shift the time. datetime.time has replace but that will only work on individual items. Here's an example: In[526]: dfa.index[:5] Out[526]: Index([21:12:19, 21:12:20, 21:12:21, 21:12:21, 21:12:22], dtype='object') In[527]: type(dfa.index[0]) Out[527]: datetime.time", "output": "Liam's link looks great, but also check out pandas.Timedelta - looks like it plays nicely with NumPy's and Python's time deltas. 
https://pandas.pydata.org/pandas-docs/stable/timedeltas.html pd.date_range('2014-01-01', periods=10) + pd.Timedelta(days=1)"} +{"question_id": 5518435, "score": 112, "creation_date": 1301689439, "tags": ["python"], "instruction": "Python: fastest way to create a list of n lists\n\nSo I was wondering how to best create a list of blank lists: [[],[],[]...] Because of how Python works with lists in memory, this doesn't work: [[]]*n This does create [[],[],...] but each element is the same list: d = [[]]*n d[0].append(1) #[[1],[1],...] Something like a list comprehension works: d = [[] for x in xrange(0,n)] But this uses the Python VM for looping. Is there any way to use an implied loop (taking advantage of it being written in C)? d = [] map(lambda n: d.append([]),xrange(0,10)) This is actually slower. :(", "output": "The probably only way which is marginally faster than d = [[] for x in xrange(n)] is from itertools import repeat d = [[] for i in repeat(None, n)] It does not have to create a new int object in every iteration and is about 15 % faster on my machine. Edit: Using NumPy, you can avoid the Python loop using d = numpy.empty((n, 0)).tolist() but this is actually 2.5 times slower than the list comprehension."} +{"question_id": 59359911, "score": 112, "creation_date": 1576511106, "tags": ["python", "python-3.x", "string", "formatting", "f-string"], "instruction": "Why is f'{{{74}}}' the same as f'{{74}}' with f-Strings?\n\nf-Strings are available from Python 3.6 and are very useful for formatting strings: >>> n='you' >>> f'hello {n}, how are you?' 'hello you, how are you?' Reading more about them in Python 3's f-Strings: An Improved String Formatting Syntax (Guide). I found an interesting pattern: Note that using triple braces will result in there being only single braces in your string: >>> f\"{{{74}}}\" '{74}' However, you can get more braces to show if you use more than triple braces: >>> f\"{{{{74}}}}\" '{{74}}' And this is exactly the case: >>> f'{74}' '74' >>> f'{{74}}' '{74}' Now if we pass from two { to three, the result is the same: >>> f'{{{74}}}' '{74}' # same as f'{{74}}' ! So we need up to 4! ({{{{) to get two braces as an output: >>> f'{{{{74}}}}' '{{74}}' Why is this? What happens with two braces to have Python require an extra one from that moment on?", "output": "Double braces escape the braces, so that no interpolation happens: {{ \u279d {, and }} \u279d }. And 74 remains an unchanged string, '74'. With triple braces, the outer double braces are escaped, same as above. The inner braces, on the other hand, lead to regular string interpolation of the value 74. That is, the string f'{{{74}}}' is equivalent to f'{{ {74} }}', but without spaces (or, equivalently, to '{' + f'{74}' + '}'). You can see the difference when replacing the numeric constant by a variable: In [1]: x = 74 In [2]: f'{{x}}' Out[2]: '{x}' In [3]: f'{{{x}}}' Out[3]: '{74}'"} +{"question_id": 3310049, "score": 111, "creation_date": 1279809615, "tags": ["python", "multithreading", "mutex"], "instruction": "Proper use of mutexes in Python\n\nI am starting with multi-threads in python (or at least it is possible that my script creates multiple threads). would this algorithm be the right usage of a Mutex? I haven't tested this code yet and it probably won't even work. I just want processData to run in a thread (one at time) and the main while loop to keep running, even if there is a thread in queue. 
from threading import Thread from win32event import CreateMutex mutex = CreateMutex(None, False, \"My Crazy Mutex\") while(1) t = Thread(target=self.processData, args=(some_data,)) t.start() mutex.lock() def processData(self, data) while(1) if mutex.test() == False: do some stuff break Edit: re-reading my code I can see that it is grossly wrong. but hey, that's why I am here asking for help.", "output": "I don't know why you're using the Window's Mutex instead of Python's. Using the Python methods, this is pretty simple: from threading import Thread, Lock mutex = Lock() def processData(data): with mutex: print('Do some stuff') while True: t = Thread(target = processData, args = (some_data,)) t.start() But note, because of the architecture of CPython (namely the Global Interpreter Lock) you'll effectively only have one thread running at a time anyway--this is fine if a number of them are I/O bound, although you'll want to release the lock as much as possible so the I/O bound thread doesn't block other threads from running. An alternative, for Python 2.6 and later, is to use Python's multiprocessing package. It mirrors the threading package, but will create entirely new processes which can run simultaneously. It's trivial to update your example: from multiprocessing import Process, Lock mutex = Lock() def processData(data): with mutex: print('Do some stuff') if __name__ == '__main__': while True: p = Process(target = processData, args = (some_data,)) p.start()"} +{"question_id": 6631128, "score": 111, "creation_date": 1310163365, "tags": ["python", "while-loop", "variable-assignment"], "instruction": "Assign variable in while loop condition in Python?\n\nI just came across this piece of code while 1: line = data.readline() if not line: break #... and thought, there must be a better way to do this, than using an infinite loop with break. So I tried: while line = data.readline(): #... and, obviously, got an error. Is there any way to avoid using a break in that situation? Edit: Ideally, you'd want to avoid saying readline twice... IMHO, repeating is even worse than just a break, especially if the statement is complex.", "output": "Starting Python 3.8, and the introduction of assignment expressions (PEP 572) (:= operator), it's now possible to capture the condition value (data.readline()) of the while loop as a variable (line) in order to re-use it within the body of the loop: while line := data.readline(): do_smthg(line)"} +{"question_id": 526179, "score": 111, "creation_date": 1234117879, "tags": ["python", "django", "session", "sessionid", "django-sessions"], "instruction": "How to find out the request.session sessionid and use it as a variable in Django?\n\nI'm aware that you can get session variables using request.session['variable_name'], but there doesn't seem to be a way to grab the session id(key) as a variable in a similar way. Is this documented anywhere? I can't find it.", "output": "request.session.session_key Note the key will only exist if there is a session, no key, no session. You can use this to test if a session exists. If you want to create a session, call create."} +{"question_id": 8136788, "score": 111, "creation_date": 1321362385, "tags": ["python", "escaping"], "instruction": "Decode escaped characters in URL\n\nI have a list containing URLs with escaped characters in them. 
Those characters have been set by urllib2.urlopen when it recovers the html page: http://www.sample1webpage.com/index.php?title=%E9%A6%96%E9%A1%B5&action=edit http://www.sample1webpage.com/index.php?title=%E9%A6%96%E9%A1%B5&action=history http://www.sample1webpage.com/index.php?title=%E9%A6%96%E9%A1%B5&variant=zh Is there a way to transform them back to their unescaped form in python? P.S.: The URLs are encoded in utf-8", "output": "Using urllib package (import urllib) : Python 2.7 From official documentation : urllib.unquote(string) Replace %xx escapes by their single-character equivalent. Example: unquote('/%7Econnolly/') yields '/~connolly/'. Python 3 From official documentation : urllib.parse.unquote(string, encoding='utf-8', errors='replace') [\u2026] Example: unquote('/El%20Ni%C3%B1o/') yields '/El Ni\u00f1o/'."} +{"question_id": 51601756, "score": 111, "creation_date": 1532982100, "tags": ["python", "concurrent.futures", "tqdm"], "instruction": "Use tqdm with concurrent.futures?\n\nI have a multithreaded function that I would like a status bar for using tqdm. Is there an easy way to show a status bar with ThreadPoolExecutor? It is the parallelization part that is confusing me. import concurrent.futures def f(x): return f**2 my_iter = range(1000000) def run(f,my_iter): with concurrent.futures.ThreadPoolExecutor() as executor: function = list(executor.map(f, my_iter)) return results run(f, my_iter) # wrap tqdr around this function?", "output": "You can wrap tqdm around the executor as the following to track the progress: list(tqdm(executor.map(f, iter), total=len(iter)) Here is your example: import time import concurrent.futures from tqdm import tqdm def f(x): time.sleep(0.001) # to visualize the progress return x**2 def run(f, my_iter): with concurrent.futures.ThreadPoolExecutor() as executor: results = list(tqdm(executor.map(f, my_iter), total=len(my_iter))) return results my_iter = range(100000) run(f, my_iter) And the result is like this: 16%|\u2588\u2588\u258f | 15707/100000 [00:00<00:02, 31312.54it/s]"} +{"question_id": 31674557, "score": 111, "creation_date": 1438081774, "tags": ["python", "for-loop", "pandas", "dataframe"], "instruction": "How to append rows in a pandas dataframe in a for loop?\n\nI have the following for loop: for i in links: data = urllib2.urlopen(str(i)).read() data = json.loads(data) data = pd.DataFrame(data.items()) data = data.transpose() data.columns = data.iloc[0] data = data.drop(data.index[[0]]) Each dataframe so created has most columns in common with the others but not all of them. Moreover, they all have just one row. What I need to to is to add to the dataframe all the distinct columns and each row from each dataframe produced by the for loop I tried pandas concatenate or similar but nothing seemed to work. Any idea? 
Thanks.", "output": "Suppose your data looks like this: import pandas as pd import numpy as np np.random.seed(2015) df = pd.DataFrame([]) for i in range(5): data = dict(zip(np.random.choice(10, replace=False, size=5), np.random.randint(10, size=5))) data = pd.DataFrame(data.items()) data = data.transpose() data.columns = data.iloc[0] data = data.drop(data.index[[0]]) df = df.append(data) print('{}\\n'.format(df)) # 0 0 1 2 3 4 5 6 7 8 9 # 1 6 NaN NaN 8 5 NaN NaN 7 0 NaN # 1 NaN 9 6 NaN 2 NaN 1 NaN NaN 2 # 1 NaN 2 2 1 2 NaN 1 NaN NaN NaN # 1 6 NaN 6 NaN 4 4 0 NaN NaN NaN # 1 NaN 9 NaN 9 NaN 7 1 9 NaN NaN Then it could be replaced with np.random.seed(2015) data = [] for i in range(5): data.append(dict(zip(np.random.choice(10, replace=False, size=5), np.random.randint(10, size=5)))) df = pd.DataFrame(data) print(df) In other words, do not form a new DataFrame for each row. Instead, collect all the data in a list of dicts, and then call df = pd.DataFrame(data) once at the end, outside the loop. Each call to df.append requires allocating space for a new DataFrame with one extra row, copying all the data from the original DataFrame into the new DataFrame, and then copying data into the new row. All that allocation and copying makes calling df.append in a loop very inefficient. The time cost of copying grows quadratically with the number of rows. Not only is the call-DataFrame-once code easier to write, its performance will be much better -- the time cost of copying grows linearly with the number of rows."} +{"question_id": 56161595, "score": 111, "creation_date": 1557985799, "tags": ["python", "asynchronous", "python-asyncio"], "instruction": "How to use `async for` in Python?\n\nI mean what do I get from using async for. Here is the code I write with async for, AIter(10) could be replaced with get_range(). But the code runs like sync not async. import asyncio async def get_range(): for i in range(10): print(f\"start {i}\") await asyncio.sleep(1) print(f\"end {i}\") yield i class AIter: def __init__(self, N): self.i = 0 self.N = N def __aiter__(self): return self async def __anext__(self): i = self.i print(f\"start {i}\") await asyncio.sleep(1) print(f\"end {i}\") if i >= self.N: raise StopAsyncIteration self.i += 1 return i async def main(): async for p in AIter(10): print(f\"finally {p}\") if __name__ == \"__main__\": asyncio.run(main()) The result I excepted should be : start 1 start 2 start 3 ... end 1 end 2 ... finally 1 finally 2 ... However, the real result is: start 0 end 0 finally 0 start 1 end 1 finally 1 start 2 end 2 I know I could get the excepted result by using asyncio.gather or asyncio.wait. But it is hard for me to understand what I got by use async for here instead of simple for. What is the right way to use async for if I want to loop over several Feature object and use them as soon as one is finished. For example: async for f in feature_objects: data = await f with open(\"file\", \"w\") as fi: fi.write()", "output": "But it is hard for me to understand what I got by use async for here instead of simple for. The underlying misunderstanding is expecting async for to automatically parallelize the iteration. It doesn't do that, it simply allows sequential iteration over an async source. For example, you can use async for to iterate over lines coming from a TCP stream, messages from a websocket, or database records from an async DB driver. 
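For instance, here is a minimal runnable sketch of that kind of sequential consumption, using a hand-rolled async generator as the source (ticker and consume are made-up names for illustration, not part of asyncio):
import asyncio

async def ticker(n):
    for i in range(n):
        await asyncio.sleep(1)   # the point where other tasks get a chance to run
        yield i

async def consume():
    async for value in ticker(3):
        print(value)             # values arrive one at a time, in order

asyncio.run(consume())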
The iteration being async means that you can run it in parallel with other async tasks (including other such iterations) in the same event loop. Ordinary for is incapable of async iteration, at least not without blocking the thread it's running in. This is because for calls __next__ as a blocking function and doesn't await its result. And you cannot manually await elements obtained by for because for expects __next__ to signal the end of iteration by raising StopIteration. If __next__ is a coroutine, the StopIteration exception won't be visible before awaiting it. This is why async for was introduced, not just in Python, but also in other languages with async/await and generalized for. In other words, while ordinary for foo in bar(): ... desugars to something like: __it = bar().__iter__() while True: try: foo = __it.__next__() # await missing except StopIteration: break ... ...async for foo in bar(): ... desugars to: __ait = bar().__aiter__() while True: try: foo = await __ait.__anext__() # await present except StopAsyncIteration: break ... If you want to run the loop iterations in parallel, you need to start them as parallel coroutines and use asyncio.as_completed or equivalent to retrieve their results as they come: async def x(i): print(f\"start {i}\") await asyncio.sleep(1) print(f\"end {i}\") return i # run x(0)..x(10) concurrently and process results as they arrive for f in asyncio.as_completed([x(i) for i in range(10)]): result = await f # ... do something with the result ... If you don't care about reacting to results immediately as they arrive, but you need them all, you can make it even simpler by using asyncio.gather: # run x(0)..x(10) concurrently and process results when all are done results = await asyncio.gather(*[x(i) for i in range(10)])"} +{"question_id": 59572174, "score": 111, "creation_date": 1578015123, "tags": ["python", "import", "module"], "instruction": "no module named 'dotenv' python 3.8\n\nI've tried reinstalling both dotenv and python-dotenv but I'm still getting the same error. I do have the .env file in the same directory as this script. #bot.py import os import discord from dotenv import load_dotenv load_dotenv() token=os.getenv('DISCORD_TOKEN') client = discord.Client() @client.event async def on_ready(): print(f'{client.user} has connected to Discord!') client.run(token)", "output": "in your installation manager if it's Ubuntu or Debian try: apt install python3-dotenv you can also try sudo pip3 install dotenv to install via pip. Whatever you do remember to include explicitly the missing 3 part. Debian/Ubuntu have separate packages and as of the present time python means python2 and python3 means python3 in their apt repositories. However, when it comes to your locally installed python binary on your system which python binary it defaults to using may vary depending on what /usr/bin/python is symlinked to on your system. Some systems it's symlinked to something like python2.7 and other's it may be something like python3.5. Similar issues exist with locally installed pip. Hence, why using the '3' is important when installing or searching for python packages"} +{"question_id": 15301999, "score": 111, "creation_date": 1362771792, "tags": ["python", "python-2.7"], "instruction": "Default arguments with *args and **kwargs\n\nIn Python 2.x (I use 2.7), which is the proper way to use default arguments with *args and **kwargs? 
I've found a question on SO related to this topic, but that is for Python 3: Calling a Python function with *args,**kwargs and optional / default arguments There, they say this method works: def func(arg1, arg2, *args, opt_arg='def_val', **kwargs): #... In 2.7, it results in a SyntaxError. Is there any recommended way to define such a function? I got it working this way, but I'd guess there is a nicer solution. def func(arg1, arg2, *args, **kwargs): opt_arg ='def_val' if kwargs.__contains__('opt_arg'): opt_arg = kwargs['opt_arg'] #...", "output": "Just put the default arguments before the *args: def foo(a, b=3, *args, **kwargs): Now, b will be explicitly set if you pass it as a keyword argument or the second positional argument. Examples: foo(x) # a=x, b=3, args=(), kwargs={} foo(x, y) # a=x, b=y, args=(), kwargs={} foo(x, b=y) # a=x, b=y, args=(), kwargs={} foo(x, y, z, k) # a=x, b=y, args=(z, k), kwargs={} foo(x, c=y, d=k) # a=x, b=3, args=(), kwargs={'c': y, 'd': k} foo(x, c=y, b=z, d=k) # a=x, b=z, args=(), kwargs={'c': y, 'd': k} Note that, in particular, foo(x, y, b=z) doesn't work because b is assigned by position in that case. This code works in Python 3 too. Putting the default arg after *args in Python 3 makes it a \"keyword-only\" argument that can only be specified by name, not by position. If you want a keyword-only argument in Python 2, you can use @mgilson's solution."} +{"question_id": 5067218, "score": 111, "creation_date": 1298298990, "tags": ["python", "datetime", "timestamp", "utc"], "instruction": "get UTC timestamp in python with datetime\n\nIs there a way to get the UTC timestamp by specifying the date? What I would expect: datetime(2008, 1, 1, 0, 0, 0, 0) should result in 1199145600 Creating a naive datetime object means that there is no time zone information. If I look at the documentation for datetime.utcfromtimestamp, creating a UTC timestamp means leaving out the time zone information. So I would guess, that creating a naive datetime object (like I did) would result in a UTC timestamp. However: then = datetime(2008, 1, 1, 0, 0, 0, 0) datetime.utcfromtimestamp(float(then.strftime('%s'))) results in 2007-12-31 23:00:00 Is there still any hidden time zone information in the datetime object? What am I doing wrong?", "output": "Na\u00efve datetime versus aware datetime Default datetime objects are said to be \"na\u00efve\": they keep time information without the time zone information. Think about na\u00efve datetime as a relative number (ie: +4) without a clear origin (in fact your origin will be common throughout your system boundary). In contrast, think about aware datetime as absolute numbers (ie: 8) with a common origin for the whole world. Without timezone information you cannot convert the \"naive\" datetime towards any non-naive time representation (where does +4 targets if we don't know from where to start ?). This is why you can't have a datetime.datetime.toutctimestamp() method. (cf: http://bugs.python.org/issue1457227) To check if your datetime dt is na\u00efve, check dt.tzinfo, if None, then it's na\u00efve: datetime.now() ## DANGER: returns na\u00efve datetime pointing on local time datetime(1970, 1, 1) ## returns na\u00efve datetime pointing on user given time I have na\u00efve datetimes, what can I do ? You must make an assumption depending on your particular context: The question you must ask yourself is: was your datetime on UTC ? or was it local time ? 
If you were using UTC (you are out of trouble): import calendar def dt2ts(dt): \"\"\"Converts a datetime object to UTC timestamp naive datetime will be considered UTC. \"\"\" return calendar.timegm(dt.utctimetuple()) If you were NOT using UTC, welcome to hell. You have to make your datetime non-na\u00efve prior to using the former function, by giving them back their intended timezone. You'll need the name of the timezone and the information about if DST was in effect when producing the target na\u00efve datetime (the last info about DST is required for cornercases): import pytz ## pip install pytz mytz = pytz.timezone('Europe/Amsterdam') ## Set your timezone dt = mytz.normalize(mytz.localize(dt, is_dst=True)) ## Set is_dst accordingly Consequences of not providing is_dst: Not using is_dst will generate incorrect time (and UTC timestamp) if target datetime was produced while a backward DST was put in place (for instance changing DST time by removing one hour). Providing incorrect is_dst will of course generate incorrect time (and UTC timestamp) only on DST overlap or holes. And, when providing also incorrect time, occuring in \"holes\" (time that never existed due to forward shifting DST), is_dst will give an interpretation of how to consider this bogus time, and this is the only case where .normalize(..) will actually do something here, as it'll then translate it as an actual valid time (changing the datetime AND the DST object if required). Note that .normalize() is not required for having a correct UTC timestamp at the end, but is probably recommended if you dislike the idea of having bogus times in your variables, especially if you re-use this variable elsewhere. and AVOID USING THE FOLLOWING: (cf: Datetime Timezone conversion using pytz) dt = dt.replace(tzinfo=timezone('Europe/Amsterdam')) ## BAD !! Why? because .replace() replaces blindly the tzinfo without taking into account the target time and will choose a bad DST object. Whereas .localize() uses the target time and your is_dst hint to select the right DST object. OLD incorrect answer (thanks @J.F.Sebastien for bringing this up): Hopefully, it is quite easy to guess the timezone (your local origin) when you create your naive datetime object as it is related to the system configuration that you would hopefully NOT change between the naive datetime object creation and the moment when you want to get the UTC timestamp. This trick can be used to give an imperfect question. By using time.mktime we can create an utc_mktime: def utc_mktime(utc_tuple): \"\"\"Returns number of seconds elapsed since epoch Note that no timezone are taken into consideration. utc tuple must be: (year, month, day, hour, minute, second) \"\"\" if len(utc_tuple) == 6: utc_tuple += (0, 0, 0) return time.mktime(utc_tuple) - time.mktime((1970, 1, 1, 0, 0, 0, 0, 0, 0)) def datetime_to_timestamp(dt): \"\"\"Converts a datetime object to UTC timestamp\"\"\" return int(utc_mktime(dt.timetuple())) You must make sure that your datetime object is created on the same timezone than the one that has created your datetime. This last solution is incorrect because it makes the assumption that the UTC offset from now is the same than the UTC offset from EPOCH. 
Which is not the case for a lot of timezones (in specific moment of the year for the Daylight Saving Time (DST) offsets)."} +{"question_id": 7389759, "score": 111, "creation_date": 1315839036, "tags": ["python", "mysql", "sqlalchemy"], "instruction": "memory-efficient built-in SqlAlchemy iterator/generator?\n\nI have a ~10M record MySQL table that I interface with using SqlAlchemy. I have found that queries on large subsets of this table will consume too much memory even though I thought I was using a built-in generator that intelligently fetched bite-sized chunks of the dataset: for thing in session.query(Things): analyze(thing) To avoid this, I find I have to build my own iterator that bites off in chunks: lastThingID = None while True: things = query.filter(Thing.id < lastThingID).limit(querySize).all() if not rows or len(rows) == 0: break for thing in things: lastThingID = row.id analyze(thing) Is this normal or is there something I'm missing regarding SA built-in generators? The answer to this question seems to indicate that the memory consumption is not to be expected.", "output": "Most DBAPI implementations fully buffer rows as they are fetched - so usually, before the SQLAlchemy ORM even gets a hold of one result, the whole result set is in memory. But then, the way Query works is that it fully loads the given result set by default before returning to you your objects. The rationale here regards queries that are more than simple SELECT statements. For example, in joins to other tables that may return the same object identity multiple times in one result set (common with eager loading), the full set of rows needs to be in memory so that the correct results can be returned otherwise collections and such might be only partially populated. So Query offers an option to change this behavior through yield_per(). This call will cause the Query to yield rows in batches, where you give it the batch size. As the docs state, this is only appropriate if you aren't doing any kind of eager loading of collections so it's basically if you really know what you're doing. Also, if the underlying DBAPI pre-buffers rows, there will still be that memory overhead so the approach only scales slightly better than not using it. I hardly ever use yield_per(); instead, I use a better version of the LIMIT approach you suggest above using window functions. LIMIT and OFFSET have a huge problem that very large OFFSET values cause the query to get slower and slower, as an OFFSET of N causes it to page through N rows - it's like doing the same query fifty times instead of one, each time reading a larger and larger number of rows. With a window-function approach, I pre-fetch a set of \"window\" values that refer to chunks of the table I want to select. I then emit individual SELECT statements that each pull from one of those windows at a time. The window function approach is on the wiki and I use it with great success. Also note: not all databases support window functions; you need Postgresql, Oracle, or SQL Server. 
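To make the shape of that window-function approach concrete, here is a rough sketch of my own (a simplification, not the full wiki recipe; it assumes the mapped class Thing has an integer primary key Thing.id and uses the classic Query API):
from sqlalchemy import func

def window_ranges(session, column, windowsize):
    # label every key with a row number, then keep the first key of each window
    rownum = func.row_number().over(order_by=column).label('rownum')
    subq = session.query(column.label('key'), rownum).subquery()
    keys = [row.key for row in session.query(subq.c.key).filter(subq.c.rownum % windowsize == 1)]
    # turn the boundary keys into half-open [start, end) ranges
    return zip(keys, keys[1:] + [None])

for start, end in window_ranges(session, Thing.id, 1000):
    q = session.query(Thing).filter(Thing.id >= start)
    if end is not None:
        q = q.filter(Thing.id < end)
    for thing in q:              # each SELECT only touches one bounded window
        analyze(thing)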
IMHO using at least Postgresql is definitely worth it - if you're using a relational database, you might as well use the best."} +{"question_id": 26724275, "score": 111, "creation_date": 1415052979, "tags": ["python", "path"], "instruction": "Removing the first folder in a path\n\nI have a path which looks like /First/Second/Third/Fourth/Fifth and I would like to remove the First from it, thus obtaining Second/Third/Fourth/Fifth The only idea I could come up with is to use recursively os.path.split but this does not seem optimal. Is there a better solution?", "output": "There really is nothing in the os.path module to do this. Every so often, someone suggests creating a splitall function that returns a list (or iterator) of all of the components, but it never gained enough traction. Partly this is because every time anyone ever suggested adding new functionality to os.path, it re-ignited the long-standing dissatisfaction with the general design of the library, leading to someone proposing a new, more OO-like, API for paths to deprecated the os, clunky API. In 3.4, that finally happened, with pathlib. And it's already got functionality that wasn't in os.path. So: >>> import pathlib >>> p = pathlib.Path('/First/Second/Third/Fourth/Fifth') >>> p.parts[2:] ('Second', 'Third', 'Fourth', 'Fifth') >>> pathlib.Path(*p.parts[2:]) PosixPath('Second/Third/Fourth/Fifth') Or\u2026 are you sure you really want to remove the first component, rather than do this? >>> p.relative_to(*p.parts[:2]) PosixPath('Second/Third/Fourth/Fifth') If you need to do this in 2.6-2.7 or 3.2-3.3, there's a backport of pathlib. Of course, you can use string manipulation, as long as you're careful to normalize the path and use os.path.sep, and to make sure you handle the fiddly details with non-absolute paths or with systems with drive letters, and\u2026 Or you can just wrap up your recursive os.path.split. What exactly is \"non-optimal\" about it, once you wrap it up? It may be a bit slower, but we're talking nanoseconds here, many orders of magnitude faster than even calling stat on a file. It will have recursion-depth problems if you have a filesystem that's 1000 directories deep, but have you ever seen one? (If so, you can always turn it into a loop\u2026) It takes a few minutes to wrap it up and write good unit tests, but that's something you just do once and never worry about again. So, honestly, if you don't want to use pathlib, that's what I'd do."} +{"question_id": 21128899, "score": 111, "creation_date": 1389758510, "tags": ["python", "django", "django-models"], "instruction": "How do I make an auto increment integer field in Django?\n\nI am making an Order model for a shopping cart and I need to make a field that auto increments when the order is made: class Order(models.Model): cart = models.ForeignKey(Cart) add_date = models.DateTimeField(auto_now_add=True) order_number = models.IntegerField() enable = models.BooleanField(default=True) How do I make the IntegerField auto increment?", "output": "In Django 1 : Django model class has default field with name id which is auto increment Field. 2 : You can define your own auto increment field using AutoField field. 
class Order(models.Model): auto_increment_id = models.AutoField(primary_key=True) # primary_key = True if you do not want to use default field \"id\" given by django to your model db design +------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Table | Create Table | +------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------+ | core_order | CREATE TABLE `core_order` ( `auto_increment_id` int(11) NOT NULL AUTO_INCREMENT, PRIMARY KEY (`auto_increment_id`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 | +------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------+ 1 row in set (0.01 sec) If you want to use django's default id as increment field . class Order(models.Model): add_date = models.DateTimeField(auto_now_add=True) db design +-------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Table | Create Table | +-------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------+ | core_order | CREATE TABLE `core_order` ( `id` int(11) NOT NULL AUTO_INCREMENT, `add_date` datetime NOT NULL, PRIMARY KEY (`id`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 | +-------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------+"} +{"question_id": 14295673, "score": 111, "creation_date": 1358010405, "tags": ["python", "time", "python-datetime"], "instruction": "Convert string into datetime.time object\n\nGiven the string in this format \"HH:MM\", for example \"03:55\", that represents 3 hours and 55 minutes. I want to convert it to datetime.time object for easier manipulation. What would be the easiest way to do that?", "output": "Use datetime.datetime.strptime() and call .time() on the result: >>> datetime.datetime.strptime('03:55', '%H:%M').time() datetime.time(3, 55) The first argument to .strptime() is the string to parse, the second is the expected format."} +{"question_id": 15888648, "score": 111, "creation_date": 1365453901, "tags": ["python", "pandas"], "instruction": "Is it possible to insert a row at an arbitrary position in a dataframe using pandas?\n\nI have a DataFrame object similar to this one: onset length 1 2.215 1.3 2 23.107 1.3 3 41.815 1.3 4 61.606 1.3 ... What I would like to do is insert a row at a position specified by some index value and update the following indices accordingly. E.g.: onset length 1 2.215 1.3 2 23.107 1.3 3 30.000 1.3 # new row 4 41.815 1.3 5 61.606 1.3 ... What would be the best way to do this?", "output": "You could slice and use concat to get what you want. from pandas import DataFrame, concat line = DataFrame({\"onset\": 30.0, \"length\": 1.3}, index=[3]) df2 = concat([df.iloc[:2], line, df.iloc[2:]]).reset_index(drop=True) This will produce the dataframe in your example output. 
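If the insertion point needs to be a parameter rather than a hard-coded slice, the same idea generalizes to a tiny helper (a sketch of my own, not a pandas API; insert_row and pos are made-up names, and concat is the one imported above):
def insert_row(df, pos, row):
    # row is a one-row DataFrame; pos is the positional index where it should end up
    return concat([df.iloc[:pos], row, df.iloc[pos:]]).reset_index(drop=True)

df2 = insert_row(df, 2, line)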
As far as I'm aware, concat is the best method to achieve an insert type operation in pandas, but admittedly I'm by no means a pandas expert."} +{"question_id": 3711184, "score": 111, "creation_date": 1284483860, "tags": ["python", "python-inspect"], "instruction": "How to use inspect to get the caller's info from callee in Python?\n\nI need to get the caller info (what file/what line) from the callee. I learned that I can use the inspect module for that purpose, but not exactly how. How do you get that info with inspection? Or is there any other way to get the info? import inspect print __file__ c=inspect.currentframe() print c.f_lineno def hello(): print inspect.stack ?? what file called me in what line? hello()", "output": "The caller's frame is one frame higher than the current frame. You can use inspect.currentframe().f_back to find the caller's frame. Then use inspect.getframeinfo to get the caller's filename and line number. import inspect def hello(): previous_frame = inspect.currentframe().f_back ( filename, line_number, function_name, lines, index, ) = inspect.getframeinfo(previous_frame) return (filename, line_number, function_name, lines, index) print(hello()) # ('/home/unutbu/pybin/test.py', 10, '', ['hello()\\n'], 0)"} +{"question_id": 14939953, "score": 111, "creation_date": 1361202066, "tags": ["python", "sum", "digits"], "instruction": "Sum the digits of a number\n\nIf I want to find the sum of the digits of a number, i.e.: Input: 932 Output: 14, which is (9 + 3 + 2) What is the fastest way of doing this? I instinctively did: sum(int(digit) for digit in str(number)) and I found this online: sum(map(int, str(number))) Which is best to use for speed, and are there any other methods which are even faster?", "output": "Both lines you posted are fine, but you can do it purely in integers, and it will be the most efficient: def sum_digits(n): s = 0 while n: s += n % 10 n //= 10 return s or with divmod: def sum_digits2(n): s = 0 while n: n, remainder = divmod(n, 10) s += remainder return s Slightly faster is using a single assignment statement: def sum_digits3(n): r = 0 while n: r, n = r + n % 10, n // 10 return r > %timeit sum_digits(n) 1000000 loops, best of 3: 574 ns per loop > %timeit sum_digits2(n) 1000000 loops, best of 3: 716 ns per loop > %timeit sum_digits3(n) 1000000 loops, best of 3: 479 ns per loop > %timeit sum(map(int, str(n))) 1000000 loops, best of 3: 1.42 us per loop > %timeit sum([int(digit) for digit in str(n)]) 100000 loops, best of 3: 1.52 us per loop > %timeit sum(int(digit) for digit in str(n)) 100000 loops, best of 3: 2.04 us per loop"} +{"question_id": 2300756, "score": 111, "creation_date": 1266631896, "tags": ["python", "generator"], "instruction": "Get the nth item of a generator in Python\n\nIs there a more syntactically concise way of writing the following? gen = (i for i in xrange(10)) index = 5 for i, v in enumerate(gen): if i is index: return v It seems almost natural that a generator should have a gen[index] expression, that acts as a list, but is functionally identical to the above code.", "output": "one method would be to use itertools.islice >>> gen = (x for x in range(10)) >>> index = 5 >>> next(itertools.islice(gen, index, None)) 5"} +{"question_id": 55253498, "score": 111, "creation_date": 1553055208, "tags": ["python", "installation", "pip", "google-colaboratory"], "instruction": "How do I install a library permanently in Colab?\n\nIn Google Colaboratory, I can install a new library using !pip install package-name. 
But when I open the notebook again tomorrow, I need to re-install it every time. Is there a way to install a library permanently? No need to spend time installing every time to use?", "output": "If you want a no-authorization solution. You can use mounting with gcsfuse + service-account key embedded in your notebook. Like this: # first install gcsfuse %%capture !echo \"deb http://packages.cloud.google.com/apt gcsfuse-bionic main\" > /etc/apt/sources.list.d/gcsfuse.list !curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - !apt update !apt install gcsfuse Then get your service account credential from google cloud console and embed it in the notebook %%writefile /key.json { \"type\": \"service_account\", \"project_id\": \"kora-id\", \"private_key_id\": \"xxxxxxx\", \"private_key\": \"-----BEGIN PRIVATE KEY-----\\nxxxxxxx==\\n-----END PRIVATE KEY-----\\n\", \"client_email\": \"colab-7@kora-id.iam.gserviceaccount.com\", \"client_id\": \"100380920993833371482\", \"auth_uri\": \"https://accounts.google.com/o/oauth2/auth\", \"token_uri\": \"https://oauth2.googleapis.com/token\", \"auth_provider_x509_cert_url\": \"https://www.googleapis.com/oauth2/v1/certs\", \"client_x509_cert_url\": \"https://www.googleapis.com/robot/v1/metadata/x509/colab-7%40kora-id.iam.gserviceaccount.com\" } Then set environment to look for this credential file %env GOOGLE_APPLICATION_CREDENTIALS=/key.json You must then create (or have it already) a gcs bucket. And mount it to a made-up directory. !mkdir /content/my-bucket !gcsfuse my-bucket /content/my-bucket Then finally, install the library there. Like my above answer. import sys nb_path = '/content/my-bucket' sys.path.insert(0, nb_path) # Do this just once !pip install --target=$nb_path jdc You can now import jdc without !pip install it next time."} +{"question_id": 41585078, "score": 111, "creation_date": 1484119883, "tags": ["python", "csv"], "instruction": "How do I read and write CSV files?\n\nHow do I read the following CSV file? 1,\"A towel,\",1.0 42,\" it says, \",2.0 1337,is about the most ,-1 0,massively useful thing ,123 -2,an interstellar hitchhiker can have.,3 How do I write the following data to a CSV file? data = [ (1, \"A towel,\", 1.0), (42, \" it says, \", 2.0), (1337, \"is about the most \", -1), (0, \"massively useful thing \", 123), (-2, \"an interstellar hitchhiker can have.\", 3), ]", "output": "Here are some minimal complete examples how to read CSV files and how to write CSV files with Python. Pure Python: import csv # Define data data = [ (1, \"A towel,\", 1.0), (42, \" it says, \", 2.0), (1337, \"is about the most \", -1), (0, \"massively useful thing \", 123), (-2, \"an interstellar hitchhiker can have.\", 3), ] # Write CSV file with open(\"test.csv\", \"wt\") as fp: writer = csv.writer(fp, delimiter=\",\") # writer.writerow([\"your\", \"header\", \"foo\"]) # write header writer.writerows(data) # Read CSV file with open(\"test.csv\") as fp: reader = csv.reader(fp, delimiter=\",\", quotechar='\"') # next(reader, None) # skip the headers data_read = [row for row in reader] print(data_read) After that, the contents of data_read are [['1', 'A towel,', '1.0'], ['42', ' it says, ', '2.0'], ['1337', 'is about the most ', '-1'], ['0', 'massively useful thing ', '123'], ['-2', 'an interstellar hitchhiker can have.', '3']] Please note that CSV reads only strings. You need to convert to the column types manually. A Python 2+3 version was here before (link), but Python 2 support is dropped. 
Removing the Python 2 stuff massively simplified this answer. Related How do I write data into csv format as string (not file)? How can I use io.StringIO() with the csv module?: This is interesting if you want to serve a CSV on-the-fly with Flask, without actually storing the CSV on the server. mpu Have a look at my utility package mpu for a super simple and easy to remember one: import mpu.io data = mpu.io.read('example.csv', delimiter=',', quotechar='\"', skiprows=None) mpu.io.write('example.csv', data) Pandas import pandas as pd # Read the CSV into a pandas data frame (df) # With a df you can do many things # most important: visualize data with Seaborn df = pd.read_csv('myfile.csv', sep=',') print(df) # Or export it in many ways, e.g. a list of tuples tuples = [tuple(x) for x in df.values] # or export it as a list of dicts dicts = df.to_dict().values() See read_csv docs for more information. Please note that pandas automatically infers if there is a header line, but you can set it manually, too. If you haven't heard of Seaborn, I recommend having a look at it. Other Reading CSV files is supported by a bunch of other libraries, for example: dask.dataframe.read_csv spark.read.csv Created CSV file 1,\"A towel,\",1.0 42,\" it says, \",2.0 1337,is about the most ,-1 0,massively useful thing ,123 -2,an interstellar hitchhiker can have.,3 Common file endings .csv Working with the data After reading the CSV file to a list of tuples / dicts or a Pandas dataframe, it is simply working with this kind of data. Nothing CSV specific. Alternatives JSON: Nice for writing human-readable data; VERY commonly used (read & write) CSV: Super simple format (read & write) YAML: Nice to read, similar to JSON (read & write) pickle: A Python serialization format (read & write) MessagePack (Python package): More compact representation (read & write) HDF5 (Python package): Nice for matrices (read & write) XML: exists too *sigh* (read & write) For your application, the following might be important: Support by other programming languages Reading / writing performance Compactness (file size) See also: Comparison of data serialization formats In case you are rather looking for a way to make configuration files, you might want to read my short article Configuration files in Python"} +{"question_id": 50766461, "score": 111, "creation_date": 1528482321, "tags": ["python", "python-3.x", "difference", "namedtuple"], "instruction": "namedtuple vs NamedTuple in Python\n\nThe typing module documentation says that the two code snippets below are equivalent: from typing import NamedTuple class Employee(NamedTuple): name: str id: int and: from collections import namedtuple Employee = namedtuple('Employee', ['name', 'id']) Are they the exact same thing or, if not, what are the differences between the two implementations?", "output": "The type generated by subclassing typing.NamedTuple is equivalent to a collections.namedtuple, but with __annotations__, _field_types and _field_defaults attributes added. The generated code will behave the same, for all practical purposes, since nothing in Python currently acts on those typing related attributes (your IDE might use them, though). 
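Those extra attributes are easy to see for yourself (a quick illustrative sketch; the exact reprs vary a little between Python versions):
from typing import NamedTuple

class Employee(NamedTuple):
    name: str
    id: int

print(Employee.__annotations__)   # {'name': <class 'str'>, 'id': <class 'int'>}
print(Employee._fields)           # ('name', 'id') -- same as a collections.namedtuple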
As a developer, using the typing module for your namedtuples allows a more natural declarative interface: You can easily specify default values for the fields (edit: in Python 3.7, collections.namedtuple got a new defaults keyword so this is no longer an advantage) You don't need to repeat the type name twice (\"Employee\") You can customize the type directly (e.g. adding a docstring or some methods) As before, your class will be a subclass of tuple, and instances will be instances of tuple as usual. Interestingly, your class will not be a subclass of NamedTuple. If you want to know why, read on for more info about the implementation detail. from typing import NamedTuple class Employee(NamedTuple): name: str id: int Behaviour in Python <= 3.8 >>> issubclass(Employee, NamedTuple) False >>> isinstance(Employee(name='guido', id=1), NamedTuple) False typing.NamedTuple is a class, it uses metaclasses and a custom __new__ to handle the annotations, and then it delegates to collections.namedtuple to build and return the type. As you may have guessed from the lowercased name convention, collections.namedtuple is not a type/class - it's a factory function. It works by building up a string of Python source code, and then calling exec on this string. The generated constructor is plucked out of a namespace and included in a 3-argument invocation of the metaclass type to build and return your class. This explains the weird inheritance breakage seen above, NamedTuple uses a metaclass in order to use a different metaclass to instantiate the class object. Behaviour in Python >= 3.9 typing.NamedTuple is changed from a type (class) to a function (def) >>> issubclass(Employee, NamedTuple) TypeError: issubclass() arg 2 must be a class or tuple of classes >>> isinstance(Employee(name=\"guido\", id=1), NamedTuple) TypeError: isinstance() arg 2 must be a type or tuple of types The metaclass acrobatics are gone, now it's just a simple factory function which calls collections.namedtuple and then sets __annotations__ on the returned type. Multiple inheritance using NamedTuple is now disallowed (it did not work properly in the first place). See bpo40185 / GH-19371 for the change."} +{"question_id": 18787036, "score": 111, "creation_date": 1379077446, "tags": ["python", "package", "setup.py"], "instruction": "Difference between entry_points/console_scripts and scripts in setup.py?\n\nThere are basically two ways to install Python console scripts to my path by setup.py: setup( ... entry_points = { 'console_scripts': [ 'foo = package.module:func', ], } ) and setup( ... scripts = [ 'scripts/myscript.sh' ] ) What are the differences? I see the first approach allows me to choose nice, specific name for my script, but are there any other differences? Different original purposes, compatibility (setuptools, distutils, ...?), usage, ...? I am quite confused and a nice elaborated reply could help me (and probably also others) to properly understand all this. 
Update: Since I asked the question PyPA published these cool docs on the topic.", "output": "The docs for the (awesome) Click package suggest a few reasons to use entry points instead of scripts, including cross-platform compatibility and avoiding having the interpreter assign __name__ to __main__, which could cause code to be imported twice (if another module imports your script) Click is a nice way to implement functions for use as entry_points, btw."} +{"question_id": 18090672, "score": 110, "creation_date": 1375823862, "tags": ["python", "dictionary"], "instruction": "Convert dictionary entries into variables\n\nIs there a Pythonic way to assign the values of a dictionary to its keys, in order to convert the dictionary entries into variables? I tried this out: >>> d = {'a':1, 'b':2} >>> for key,val in d.items(): exec('exec(key)=val') exec(key)=val ^ SyntaxError: invalid syntax I am certain that the key-value pairs are correct because they were previously defined as variables by me before. I then stored these variables in a dictionary (as key-value pairs) and would like to reuse them in a different function. I could just define them all over again in the new function, but because I may have a dictionary with about 20 entries, I thought there may be a more efficient way of doing this.", "output": "This was what I was looking for: d = {'a':1, 'b':2} for key,val in d.items(): exec(key + '=val') NOTE: As noted by @divenex in the comments, this solution only creates global variables -- it will not create local variables in a function. Move the code inside a function and you will get an error. def func(): d = {'a':1, 'b':2} for key,val in d.items(): exec(key + '=val') print(a,b) func() Error message: NameError: name 'a' is not defined"} +{"question_id": 40595961, "score": 110, "creation_date": 1479150191, "tags": ["python", "themes", "spyder"], "instruction": "How to change the Spyder editor background to dark?\n\nI've just updated Spyder to version 3.1 and I'm having trouble changing the colour scheme to dark. I've been able to change the Python and iPython console's to dark but the option to change the editor to dark is not where I would expect it to be. Could anybody tell me how to change the colour scheme of the Spyder 3.1 editor to dark?", "output": "If you're using Spyder 3, please go to Tools > Preferences > Syntax Coloring and select there the dark theme you want to use. In Spyder 4, a dark theme is used by default. But if you want to select a different theme you can go to Tools > Preferences > Appearance > Syntax highlighting theme"} +{"question_id": 5854515, "score": 110, "creation_date": 1304321139, "tags": ["python", "performance", "matplotlib", "scientific-computing"], "instruction": "Interactive large plot with ~20 million sample points and gigabytes of data\n\nI have got a problem (with my RAM) here: it's not able to hold the data I want to plot. I do have sufficient HD space. Is there any solution to avoid that \"shadowing\" of my data-set? Concretely I deal with Digital Signal Processing and I have to use a high sample-rate. My framework (GNU Radio) saves the values (to avoid using too much disk space) in binary. I unpack it. Afterwards I need to plot. I need the plot zoomable, and interactive. And that is an issue. Is there any optimization potential to this, or another software/programming language (like R or so) which can handle larger data-sets? Actually I want much more data in my plots. But I have no experience with other software. 
GNUplot fails, with a similar approach to the following. I don't know R (jet). import matplotlib.pyplot as plt import matplotlib.cbook as cbook import struct \"\"\" plots a cfile cfile - IEEE single-precision (4-byte) floats, IQ pairs, binary txt - index,in-phase,quadrature in plaintext note: directly plotting with numpy results into shadowed functions \"\"\" # unpacking the cfile dataset def unpack_set(input_filename, output_filename): index = 0 # index of the samples output_filename = open(output_filename, 'wb') with open(input_filename, \"rb\") as f: byte = f.read(4) # read 1. column of the vector while byte != \"\": # stored Bit Values floati = struct.unpack('f', byte) # write value of 1. column to a variable byte = f.read(4) # read 2. column of the vector floatq = struct.unpack('f', byte) # write value of 2. column to a variable byte = f.read(4) # next row of the vector and read 1. column # delimeter format for matplotlib lines = [\"%d,\" % index, format(floati), \",\", format(floatq), \"\\n\"] output_filename.writelines(lines) index = index + 1 output_filename.close return output_filename.name # reformats output (precision configuration here) def format(value): return \"%.8f\" % value # start def main(): # specify path unpacked_file = unpack_set(\"test01.cfile\", \"test01.txt\") # pass file reference to matplotlib fname = str(unpacked_file) plt.plotfile(fname, cols=(0,1)) # index vs. in-phase # optional # plt.axes([0, 0.5, 0, 100000]) # for 100k samples plt.grid(True) plt.title(\"Signal-Diagram\") plt.xlabel(\"Sample\") plt.ylabel(\"In-Phase\") plt.show(); if __name__ == \"__main__\": main() Something like plt.swap_on_disk() could cache the stuff on my SSD ;)", "output": "So your data isn't that big, and the fact that you're having trouble plotting it points to issues with the tools. Matplotlib has lots of options and the output is fine, but it's a huge memory hog and it fundamentally assumes your data is small. But there are other options out there. So as an example, I generated a 20M data-point file 'bigdata.bin' using the following: #!/usr/bin/env python import numpy import scipy.io.numpyio npts=20000000 filename='bigdata.bin' def main(): data = (numpy.random.uniform(0,1,(npts,3))).astype(numpy.float32) data[:,2] = 0.1*data[:,2]+numpy.exp(-((data[:,1]-0.5)**2.)/(0.25**2)) fd = open(filename,'wb') scipy.io.numpyio.fwrite(fd,data.size,data) fd.close() if __name__ == \"__main__\": main() This generates a file of size ~229MB, which isn't all that big; but you've expressed that you'd like to go to even larger files, so you'll hit memory limits eventually. Let's concentrate on non-interactive plots first. The first thing to realize is that vector plots with glyphs at each point are going to be a disaster -- for each of the 20 M points, most of which are going to overlap anyway, trying to render little crosses or circles or something is going to be a diaster, generating huge files and taking tonnes of time. This, I think is what is sinking matplotlib by default. 
Gnuplot has no trouble dealing with this: gnuplot> set term png gnuplot> set output 'foo.png' gnuplot> plot 'bigdata.bin' binary format=\"%3float32\" using 2:3 with dots And even Matplotlib can be made to behave with some caution (choosing a raster back end, and using pixels to mark points): #!/usr/bin/env python import numpy import matplotlib matplotlib.use('Agg') import matplotlib.pyplot as plt datatype=[('index',numpy.float32), ('floati',numpy.float32), ('floatq',numpy.float32)] filename='bigdata.bin' def main(): data = numpy.memmap(filename, datatype, 'r') plt.plot(data['floati'],data['floatq'],'r,') plt.grid(True) plt.title(\"Signal-Diagram\") plt.xlabel(\"Sample\") plt.ylabel(\"In-Phase\") plt.savefig('foo2.png') if __name__ == \"__main__\": main() Now, if you want interactive, you're going to have to bin the data to plot, and zoom in on the fly. I don't know of any python tools that will help you do this offhand. On the other hand, plotting-big-data is a pretty common task, and there are tools that are up for the job. Paraview is my personal favourite, and VisIt is another one. They both are mainly for 3D data, but Paraview in particular does 2d as well, and is very interactive (and even has a Python scripting interface). The only trick will be to write the data into a file format that Paraview can easily read."} +{"question_id": 45865608, "score": 110, "creation_date": 1503588624, "tags": ["python", "sql", "pandas"], "instruction": "Executing an SQL query on a Pandas dataset\n\nI have a Pandas dataset called df. How can I do: df.query(\"select * from df\")", "output": "This is not what pandas.query is supposed to do. You can look at package pandasql (same like sqldf in R ) Update: Note pandasql hasn't been maintained since 2017. Use another library from an answer below. import pandas as pd import pandasql as ps df = pd.DataFrame([[1234, 'Customer A', '123 Street', np.nan], [1234, 'Customer A', np.nan, '333 Street'], [1233, 'Customer B', '444 Street', '333 Street'], [1233, 'Customer B', '444 Street', '666 Street']], columns= ['ID', 'Customer', 'Billing Address', 'Shipping Address']) q1 = \"\"\"SELECT ID FROM df \"\"\" print(ps.sqldf(q1, locals())) ID 0 1234 1 1234 2 1233 3 1233 Update 2020-07-10 update the pandasql ps.sqldf(\"select * from df\")"} +{"question_id": 2910221, "score": 110, "creation_date": 1274851037, "tags": ["python", "automation", "httpclient", "webautomation"], "instruction": "How can I login to a website with Python?\n\nHow can I do it? I was trying to enter some specified link (with urllib), but to do it, I need to log in. I have this source from the site:
Is this possible?", "output": "Maybe you want to use twill. It's quite easy to use and should be able to do what you want. It will look like the following: from twill.commands import * go('http://example.org') fv(\"1\", \"email-email\", \"blabla.com\") fv(\"1\", \"password-clear\", \"testpass\") submit('0') You can use showforms() to list all forms once you used go\u2026 to browse to the site you want to login. Just try it from the python interpreter."} +{"question_id": 51464455, "score": 110, "creation_date": 1532257661, "tags": ["python", "pygame"], "instruction": "How to disable welcome message when importing pygame\n\nWhen I import pygame, it prints the version and welcome message. The message reads: pygame 1.9.4 Hello from the pygame community. https://www.pygame.org/contribute.html Why is this printed? How can I disable this message?", "output": "As can be seen in the source code, the message is not printed if the environment variable PYGAME_HIDE_SUPPORT_PROMPT is set. So the following code could be used to import pygame without printing the message: import os os.environ['PYGAME_HIDE_SUPPORT_PROMPT'] = \"hide\" import pygame Note that the value does not have to be \"hide\" but can be anything else as well, and the environment variable can also be set in other ways to achieve the same."} +{"question_id": 25091976, "score": 110, "creation_date": 1406957271, "tags": ["python", "python-requests"], "instruction": "python requests get cookies\n\nx = requests.post(url, data=data) print x.cookies I used the requests library to get some cookies from a website, but I can only get the cookies from the Response, how to get the cookies from the Request? Thanks!", "output": "Alternatively, you can use requests.Session and observe cookies before and after a request: >>> import requests >>> session = requests.Session() >>> print(session.cookies.get_dict()) {} >>> response = session.get('http://google.com') >>> print(session.cookies.get_dict()) {'PREF': 'ID=5514c728c9215a9a:FF=0:TM=1406958091:LM=1406958091:S=KfAG0U9jYhrB0XNf', 'NID': '67=TVMYiq2wLMNvJi5SiaONeIQVNqxSc2RAwVrCnuYgTQYAHIZAGESHHPL0xsyM9EMpluLDQgaj3db_V37NjvshV-eoQdA8u43M8UwHMqZdL-S2gjho8j0-Fe1XuH5wYr9v'}"} +{"question_id": 33139020, "score": 110, "creation_date": 1444877545, "tags": ["python", "string", "python-3.x", "go"], "instruction": "Can Golang multiply strings like Python can?\n\nPython can multiply strings like so: Python 3.4.3 (default, Mar 26 2015, 22:03:40) [GCC 4.9.2] on linux Type \"help\", \"copyright\", \"credits\" or \"license\" for more information. >>> x = 'my new text is this long' >>> y = '#' * len(x) >>> y '########################' >>> Can Golang do the equivalent somehow?", "output": "It has a function instead of an operator, strings.Repeat. Here's a port of your Python example, which you can run here: package main import ( \"fmt\" \"strings\" \"unicode/utf8\" ) func main() { x := \"my new text is this long\" y := strings.Repeat(\"#\", utf8.RuneCountInString(x)) fmt.Println(x) fmt.Println(y) } Note that I've used utf8.RuneCountInString(x) instead of len(x); the former counts \"runes\" (Unicode code points), while the latter, when called on a string, counts bytes. 
In the case of \"my new text is this long\", the difference doesn't matter since all the runes are only one byte each, but it's good to get into the habit of specifying what you mean: len(\"\u0101\") //=> 2 utf8.RuneCountInString(\"\u0101\") //=> 1 An alternative to calling RuneCountInString is to convert the string to an array of runes and then call len on that: y := strings.Repeat(\"#\", len([]rune(x))) But if all you're doing with the runes is counting them, I think it's clearer to use the utf8 function. Since this was a Python comparison question, note that the Python version of len also counts different things depending on what you call it on. In Python 2, it counted bytes on plain strings and runes on Unicode strings (u'...'): Python 2.7.18 (default, Sep 10 2022, 16:30:21) >>> len('\u0101') #=> 2 >>> len(u'\u0101') #=> 1 Whereas in modern Python, plain strings are Unicode strings; if you want to count bytes, you need to encode the string into a bytearray first: Python 3.12.0 (main, Oct 13 2023, 15:35:30) >>> len('\u0101') #=> 1 >>> len('\u0101'.encode('UTF-8')) #=> 2 So Python has multiple types of string; Go has only one kind of string, but different ways of dealing with its contents. Oh, it's also worth noting that the Golang concept of a \"rune\" doesn't (and can't) solve the problem that in Unicode, the question \"How much string is one character?\" does not always have a well-defined answer. I used \"\u0101\" above as an example of a string that's two bytes long containing only one rune (specifically U+0101 LATIN SMALL LETTER A WITH MACRON). But you could get what looks like that same string (\"a\u0304\") by instead combining two runes (U+0061 LATIN SMALL LETTER A and U+0304 COMBINING MACRON), giving rune count 2 and byte len() 4. Proper Unicode processing will treat both forms as equal to each other (and convert one to the other depending on which Normalization Form is selected) but there's not really any sense in which the Platonic ideal string they're both equivalent to can be said to contain a definite number of runes."} +{"question_id": 47440077, "score": 110, "creation_date": 1511369599, "tags": ["python", "pandas", "dataframe", "nan"], "instruction": "How to check if particular value (in cell) is NaN in pandas DataFrame?\n\nLet's say I have the following pandas DataFrame: import pandas as pd import numpy as np df = pd.DataFrame({\"A\": [1, np.nan, 2], \"B\": [5, 6, 0]}) Which would look like: >>> df A B 0 1.0 5 1 NaN 6 2 2.0 0 First option I know one way to check if a particular value is NaN: >>> df.isnull().iloc[1,0] True But this checks the whole dataframe just to get one value, so I imagine it's wasteful. Second option (not working) I thought below option, using iloc, would work as well, but it doesn't: >>> df.iloc[1,0] == np.nan False However if I check that value I get: >>> df.iloc[1,0] nan So, why is the second option not working? Is it possible to check for NaN values using iloc? Editor's note: This question previously used pd.np instead of np and .ix in addition to .iloc, but since these no longer exist, they have been edited out to keep it short and clear.", "output": "Try pd.isna(): In [7]: pd.isna(df.iloc[1,0]) Out[7]: True AKA pd.isnull"} +{"question_id": 57612428, "score": 110, "creation_date": 1566486642, "tags": ["python", "python-3.x", "github", "github-actions"], "instruction": "Cloning private github repository within organisation in actions\n\nI have 2 private GitHub repositories (say A and B) in the organization (say ORG). 
Repository A has repository B in requirements.txt: -e git+git@github.com:ORG/B.git#egg=B And I have the following workflow for A (in .github/workflows/test.yml): name: Python package on: push jobs: build: runs-on: ubuntu-latest steps: - uses: actions/checkout@v1 - name: Install requirements run: | pip install -r requirements.txt - name: Test with pytest run: | pytest ./tests As B is private, it fails on installing it. Is it possible to install B while testing A in this workflow if they are in the same organization? How?", "output": "Since access tokens are bound to an account and have write access to all its private repos, it's a very bad solution. Instead, use deploy keys. Deploy keys are simply SSH keys that you can use to clone a repo. Create a new SSH key pair on your computer Put the public key in the private dependency repo's Deploy keys Put the private key in the app repo's Actions secrets Delete the keys from your computer Once it's set, you can set the private key in the GitHub Action's SSH Agent. There's no need to import a third-party GitHub Action, a 2-liner will suffice. eval `ssh-agent -s` ssh-add - <<< '${{ secrets.PRIVATE_SSH_KEY }}' pip install -r requirements.txt I found that ssh-add command here."} +{"question_id": 9455111, "score": 110, "creation_date": 1330275761, "tags": ["python", "class", "methods"], "instruction": "Define a method outside of class definition?\n\nclass MyClass: def myFunc(self): pass Can I create MyFunc() outside of the class definition, maybe even in another module?", "output": "Yes. You can define a function outside of a class and then use it in the class body as a method: def func(self): print(\"func\") class MyClass: myMethod = func You can also add a function to a class after it has been defined: class MyClass: pass def func(self): print(\"func\") MyClass.myMethod = func You can define the function and the class in different modules if you want, but I'd advise against defining the class in one module then importing it in another and adding methods to it dynamically (as in my second example), because then you'd have surprisingly different behaviour from the class depending on whether or not another module has been imported. I would point out that while this is possible in Python, it's a bit unusual. You mention in a comment that \"users are allowed to add more\" methods. That sounds odd. If you're writing a library you probably don't want users of the library to add methods dynamically to classes in the library. It's more normal for users of a library to create their own subclass that inherits from your class than to change yours directly. I'd also add a reminder that functions don't have to be in classes at all. Python isn't like Java or C# and you can just have functions that aren't part of any class. If you want to group together functions you can just put them together in the same module, and you can nest modules inside packages. Only use classes when you need to create a new data type, not just to group functions together."} +{"question_id": 14843777, "score": 110, "creation_date": 1360711548, "tags": ["python", "lambda"], "instruction": "How to write python lambda with multiple lines?\n\nIn python, how can you write a lambda function taking multiple lines. I tried d = lambda x: if x: return 1 else return 2 but I am getting errors...", "output": "Use def instead. 
def d(x): if x: return 1 else: return 2 All python functions are first order objects (they can be assigned to variables and passed around as arguments), lambda is just a convenient way to make short ones. In general, you are better off using a normal function definition if it becomes anything beyond one line of simple code. Even then, in fact, if you are assigning it to a name, I would always use def over lambda (something PEP 8 explicitly recommends as it improves debugging). lambda is really only a good idea when defining short functions that can be placed inline into the function call, for example key functions for use with sorted(). Note that, in your case, a ternary operator would do the job (lambda x: 1 if x else 2), but I'm presuming this is a simplified case and you are talking about cases where it's not reasonable to use a single expression. (As a code golf note, this could also be done in less code as lambda x: 2-bool(x) - of course, that's highly unreadable and a bad idea.)"} +{"question_id": 63216201, "score": 110, "creation_date": 1596372660, "tags": ["python", "conda", "python-3.9"], "instruction": "How to install python with conda?\n\nI'm trying to install python 3.9 in a conda enviroment. I tried creating a new conda env using the following command, conda create --name myenv python=3.9 But I got an error saying package not found because python 3.9 is not yet released So, I manually created a folder in envs folder and tried to list all envs. But I couldn't get the manually created new environment. So, how do I install python 3.9 in a conda env with all functionalities like pip working?", "output": "To create python 3.11 conda environment use the following command conda create -n py311 python=3.11 py311 - environment name Update 3 To create python 3.10 conda environment use the following command conda create -n py310 python=3.10 py310 - environment name Update 2 You can now directly create python 3.9 environment using the following command conda create -n py39 python=3.9 py39 - environment name Update 1 Python 3.9 is now available in conda-forge. To download the tar file - https://anaconda.org/conda-forge/python/3.9.0/download/linux-64/python-3.9.0-h852b56e_0_cpython.tar.bz2 Anaconda Page - https://anaconda.org/conda-forge/python As pointed out in the comments, python 3.9 is not yet there on any channels. So, it cannot be install yet via conda. Instead, you can download the python 3.9 executable and install it. Once the installation is done, a new executable will be created for python 3.9 and pip 3.9 will be created. Python: python3.7 python3.7-config python3.7m python3.7m-config python3.9 python3.9-config pip pip pip3 pip3.7 pip3.8 pip3.9 pipreqs In order to install ipython for python 3.9, pip3.9 install ipython"} +{"question_id": 51361356, "score": 110, "creation_date": 1531742434, "tags": ["python", "parquet", "dask", "pyarrow", "fastparquet"], "instruction": "A comparison between fastparquet and pyarrow?\n\nAfter some searching I failed to find a thorough comparison of fastparquet and pyarrow. I found this blog post (a basic comparison of speeds). and a github discussion that claims that files created with fastparquet do not support AWS-athena (btw is it still the case?) when/why would I use one over the other? what are the major advantages and disadvantages ? 
my specific use case is processing data with dask writing it to s3 and then reading/analyzing it with AWS-athena.", "output": "In 2024 the decision should be obvious: use pyarrow instead of fastparquet: Pandas 3.0 will require pyarrow What\u2019s new in 2.1.0 (Aug 30, 2023). fastparquet is deprecated in Dask 2024.1.0, and \"users should migrate to the pyarrow engine\" fastparquet is from the dask team. If dask itself does not want to use it, why should you? See the discussion in dask / #8900. Some picks from there: In our recent parquet benchmarking and resilience testing we generally found the pyarrow engine would scale to larger datasets better than the fastparquet engine, and more test cases would complete successfully when run with pyarrow than with fastparquet. The pyarrow library has a larger development team maintaining it and seems to have more community buy-in going forward."} +{"question_id": 50015204, "score": 110, "creation_date": 1524636919, "tags": ["python", "django", "django-orm", "manytomanyfield"], "instruction": "Direct assignment to the forward side of a many-to-many set is prohibited. Use emails_for_help.set() instead\n\nI am new to Django and didn't find any reference regarding this issue. I am getting this error when i use many to many field in Django model (models.py). I guess the issue is assigning m2m field in view(views.py) from form(forms.py). How to assign m2m field in view? (Django version 2.0, python - 3.5) models.py class User(AbstractUser): username=models.CharField(max_length=20) email = models.EmailField(_('email address'), unique=True) class Setupuser(models.Model): organization=models.CharField(max_length=200,blank=False,null=True) emails_for_help = models.ManyToManyField(User) views.py class Set_user(FormView): template_name=\"pkm_templates/set_up_user.html\" form_class = Set_User_Form success_url = '/thanks/' def form_valid(self, form): org = form.cleaned_data.get('organization') emails = form.cleaned_data.get(\"emails_for_help\") instance = Setupuser(organization=org,emails_for_help=emails) instance.save() return redirect(\"/\") forms.py class Set_User_Form(ModelForm): emails_for_help = forms.ModelMultipleChoiceField( queryset=User.objects.all(), widget=forms.CheckboxSelectMultiple ) class Meta: model = Setupuser fields = [\"organization\",\"emails_for_help\"]", "output": "You need to get the User object and then add it to emails_for_help field. You can't add an object to ManyToManyField when creating an instance. Have a look at the doc. class Set_user(FormView): template_name=\"pkm_templates/set_up_user.html\" form_class = Set_User_Form success_url = '/thanks/' def form_valid(self, form): org = form.cleaned_data.get('organization') emails = form.cleaned_data.get(\"share_email_with\") users = User.objects.filter(email__in=emails) instance = Setupuser.objects.create(organization=org) for user in users: instance.emails_for_help.add(user) return redirect(\"/\") Another way of doing this is to use .set(). class Set_user(FormView): template_name=\"pkm_templates/set_up_user.html\" form_class = Set_User_Form success_url = '/thanks/' def form_valid(self, form): org = form.cleaned_data.get('organization') emails = form.cleaned_data.get(\"share_email_with\") users = User.objects.filter(email__in=emails) instance = Setupuser.objects.create(organization=org) instance.emails_for_help.set(users) return redirect(\"/\") Or you can simply use .add() to add arbitrary number of objects. 
class Set_user(FormView): template_name=\"pkm_templates/set_up_user.html\" form_class = Set_User_Form success_url = '/thanks/' def form_valid(self, form): org = form.cleaned_data.get('organization') emails = form.cleaned_data.get(\"share_email_with\") users = User.objects.filter(email__in=emails) instance = Setupuser.objects.create(organization=org) instance.emails_for_help.add(*users) return redirect(\"/\")"} +{"question_id": 24196932, "score": 109, "creation_date": 1402627570, "tags": ["python", "unix", "network-programming", "nic"], "instruction": "How can I get the IP address from a NIC (network interface controller) in Python?\n\nWhen an error occurs in a Python script on Unix, an email is sent. I have been asked to add {Testing Environment} to the subject line of the email if the IP address is 192.168.100.37 which is the testing server. This way we can have one version of a script and a way to tell if the email is coming from messed up data on the testing server. However, when I google I keep finding this code: import socket socket.gethostbyname(socket.gethostname()) However, that's giving me the IP address of 127.0.1.1. When I use ifconfig I get this eth0 Link encap:Ethernet HWaddr 00:1c:c4:2c:c8:3e inet addr:192.168.100.37 Bcast:192.168.100.255 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:75760697 errors:0 dropped:411180 overruns:0 frame:0 TX packets:23166399 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:59525958247 (59.5 GB) TX bytes:10142130096 (10.1 GB) Interrupt:19 Memory:f0500000-f0520000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:25573544 errors:0 dropped:0 overruns:0 frame:0 TX packets:25573544 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:44531490070 (44.5 GB) TX bytes:44531490070 (44.5 GB) Firstly, I don't know where it got 127.0.1.1 from, but either way that's not what I want. When I google I keep coming to the same syntax, Bash scripts or netifaces and I'm trying to use standard libraries. So how can I get the IP address of eth0 in Python?", "output": "Two methods: Method #1 (use external package) You need to ask for the IP address that is bound to your eth0 interface. This is available from the netifaces package import netifaces as ni ip = ni.ifaddresses('eth0')[ni.AF_INET][0]['addr'] print(ip) # should print \"192.168.100.37\" You can also get a list of all available interfaces via ni.interfaces() Method #2 (no external package) Here's a way to get the IP address without using a python package: import socket import fcntl import struct def get_ip_address(ifname): s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) return socket.inet_ntoa(fcntl.ioctl( s.fileno(), 0x8915, # SIOCGIFADDR struct.pack('256s', ifname[:15]) )[20:24]) get_ip_address('eth0') # '192.168.0.110' Note: detecting the IP address to determine what environment you are using is quite a hack. Almost all frameworks provide a very simple way to set/modify an environment variable to indicate the current environment. Try and take a look at your documentation for this. 
It should be as simple as doing if app.config['ENV'] == 'production': # send production email else: # send development email"} +{"question_id": 16283799, "score": 109, "creation_date": 1367253403, "tags": ["python", "csv", "curl", "output", "python-2.x"], "instruction": "How to read a CSV file from a URL with Python?\n\nwhen I do curl to a API call link http://example.com/passkey=wedsmdjsjmdd curl 'http://example.com/passkey=wedsmdjsjmdd' I get the employee output data on a csv file format, like: \"Steve\",\"421\",\"0\",\"421\",\"2\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"\",\"421\",\"0\",\"421\",\"2\" how can parse through this using python. I tried: import csv cr = csv.reader(open('http://example.com/passkey=wedsmdjsjmdd',\"rb\")) for row in cr: print row but it didn't work and I got an error http://example.com/passkey=wedsmdjsjmdd No such file or directory: Thanks!", "output": "You need to replace open with urllib.urlopen or urllib2.urlopen. e.g. import csv import urllib2 url = 'http://winterolympicsmedals.com/medals.csv' response = urllib2.urlopen(url) cr = csv.reader(response) for row in cr: print row This would output the following Year,City,Sport,Discipline,NOC,Event,Event gender,Medal 1924,Chamonix,Skating,Figure skating,AUT,individual,M,Silver 1924,Chamonix,Skating,Figure skating,AUT,individual,W,Gold ... The original question is tagged \"python-2.x\", but for a Python 3 implementation (which requires only minor changes) see below."} +{"question_id": 6332577, "score": 109, "creation_date": 1307979103, "tags": ["python", "outlook"], "instruction": "Send Outlook Email Via Python?\n\nI am using Outlook 2003. What is the best way to send email (through Outlook 2003) using Python?", "output": "For a solution that uses outlook see TheoretiCAL's answer. Otherwise, use the smtplib that comes with python. Note that this will require your email account allows smtp, which is not necessarily enabled by default. SERVER = \"smtp.example.com\" FROM = \"yourEmail@example.com\" TO = [\"listOfEmails\"] # must be a list SUBJECT = \"Subject\" TEXT = \"Your Text\" # Prepare actual message message = \"\"\"From: %s\\r\\nTo: %s\\r\\nSubject: %s\\r\\n\\ %s \"\"\" % (FROM, \", \".join(TO), SUBJECT, TEXT) # Send the mail import smtplib server = smtplib.SMTP(SERVER) server.sendmail(FROM, TO, message) server.quit() EDIT: this example uses reserved domains like described in RFC2606 SERVER = \"smtp.example.com\" FROM = \"johnDoe@example.com\" TO = [\"JaneDoe@example.com\"] # must be a list SUBJECT = \"Hello!\" TEXT = \"This is a test of emailing through smtp of example.com.\" # Prepare actual message message = \"\"\"From: %s\\r\\nTo: %s\\r\\nSubject: %s\\r\\n\\ %s \"\"\" % (FROM, \", \".join(TO), SUBJECT, TEXT) # Send the mail import smtplib server = smtplib.SMTP(SERVER) server.login(\"MrDoe\", \"PASSWORD\") server.sendmail(FROM, TO, message) server.quit() For it to actually work with gmail, Mr. Doe will need to go to the options tab in gmail and set it to allow smtp connections. Note the addition of the login line to authenticate to the remote server. 
The original version does not include this, an oversight on my part."} +{"question_id": 62856818, "score": 109, "creation_date": 1594525020, "tags": ["python", "pycharm", "fastapi"], "instruction": "How can I run the FastAPI server using Pycharm?\n\nI have a simple API function as below, from fastapi import FastAPI app = FastAPI() @app.get(\"/\") async def read_root(): return {\"Hello\": \"World\"} I am starting the server using uvicorn command as, uvicorn main:app Since we are not calling any python file directly, it is not possible to call uvicorn command from Pycharm. So, How can I run the fast-api server using Pycharm?", "output": "Method-1: Run FastAPI by calling uvicorn.run(...) In this case, your minimal code will be as follows, # main.py import uvicorn from fastapi import FastAPI app = FastAPI() @app.get(\"/\") async def read_root(): return {\"Hello\": \"World\"} if __name__ == \"__main__\": uvicorn.run(app, host=\"0.0.0.0\", port=8000) Normally, you'll start the server by running the following command, python main.py Pycharm Setup For this setup, and now, you can set the script path in Pycharm's config Notes Script Path: path to the FastAPI script Python Interpreter: Choose your interpreter/virtual environment Working Directory: Your FastAPI project root Method-2: Run FastAPI by calling uvicorn command In this case, your minimal code will be as follows, # main.py from fastapi import FastAPI app = FastAPI() @app.get(\"/\") async def read_root(): return {\"Hello\": \"World\"} Normally, you'll start the server by running the following command, uvicorn main:app --reload Pycharm Setup For this setup, and now, you can set the script path in Pycharm's config Notes Module name: set to uvicorn [Optional] Script: Path to uvicorn binary. You will get the path by executing the command, which uvicorn , inside your environment. (See this image) Parameters: The actual parameters of uvicorn command Python Interpreter: Choose your interpreter/virtual environment Working Directory: Your FastAPI project root"} +{"question_id": 69100275, "score": 109, "creation_date": 1631092630, "tags": ["python", "python-3.x", "pip", "setuptools"], "instruction": "Error while downloading the requirements using pip install (setup command: use_2to3 is invalid.)\n\nversion pip 21.2.4 python 3.6 The command: pip install -r requirements.txt The content of my requirements.txt: mongoengine==0.19.1 numpy==1.16.2 pylint pandas==1.1.5 fawkes The command is failing with this error ERROR: Command errored out with exit status 1: command: /Users/*/Desktop/ml/*/venv/bin/python -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '\"'\"'/private/var/folders/kn/0y92g7x55qs7c42tln4gwhtm0000gp/T/pip-install-soh30mel/mongoengine_89e68f8427244f1bb3215b22f77a619c/setup.py'\"'\"'; __file__='\"'\"'/private/var/folders/kn/0y92g7x55qs7c42tln4gwhtm0000gp/T/pip-install-soh30mel/mongoengine_89e68f8427244f1bb3215b22f77a619c/setup.py'\"'\"';f = getattr(tokenize, '\"'\"'open'\"'\"', open)(__file__) if os.path.exists(__file__) else io.StringIO('\"'\"'from setuptools import setup; setup()'\"'\"');code = f.read().replace('\"'\"'\\r\\n'\"'\"', '\"'\"'\\n'\"'\"');f.close();exec(compile(code, __file__, '\"'\"'exec'\"'\"'))' egg_info --egg-base /private/var/folders/kn/0y92g7x55qs7c42tln4gwhtm0000gp/T/pip-pip-egg-info-97994d6e cwd: /private/var/folders/kn/0y92g7x55qs7c42tln4gwhtm0000gp/T/pip-install-soh30mel/mongoengine_89e68f8427244f1bb3215b22f77a619c/ Complete output (1 lines): error in mongoengine setup command: use_2to3 is invalid. 
---------------------------------------- WARNING: Discarding https://*/pypi/packages/mongoengine-0.19.1.tar.gz#md5=68e613009f6466239158821a102ac084 (from https://*/pypi/simple/mongoengine/). Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output. ERROR: Could not find a version that satisfies the requirement mongoengine==0.19.1 (from versions: 0.15.0, 0.19.1) ERROR: No matching distribution found for mongoengine==0.19.1", "output": "It looks like setuptools>=58 breaks support for use_2to3: setuptools changelog for v58 So you should update setuptools to setuptools<58 or avoid using packages with use_2to3 in the setup parameters. I was having the same problem, pip==19.3.1"} +{"question_id": 1388450, "score": 109, "creation_date": 1252315957, "tags": ["python", "matplotlib"], "instruction": "Giving graphs a subtitle\n\nI want to give my graph a title in big 18pt font, then a subtitle below it in smaller 10pt font. How can I do this in matplotlib? It appears the title() function only takes one single string with a single fontsize attribute. There has to be a way to do this, but how?", "output": "I don't think there is anything built-in, but you can do it by leaving more space above your axes and using figtext: axes([.1,.1,.8,.7]) figtext(.5,.9,'Foo Bar', fontsize=18, ha='center') figtext(.5,.85,'Lorem ipsum dolor sit amet, consectetur adipiscing elit',fontsize=10,ha='center') ha is short for horizontalalignment."} +{"question_id": 9390126, "score": 109, "creation_date": 1329892169, "tags": ["python", "if-statement"], "instruction": "Pythonic way to check if something exists?\n\nI was wondering if there is a pythonic way to check if something does not exist. Here's how I do it if its true: var = 1 if var: print 'it exists' but when I check if something does not exist, I often do something like this: var = 2 if var: print 'it exists' else: print 'nope it does not' Seems like a waste if all I care about is kn Is there a way to check if something does not exist without the else?", "output": "EAFP style, \"easier to ask forgiveness than permission\": try: var except NameError: var_exists = False else: var_exists = True LBYL style, \"look before you leap\": var_exists = 'var' in locals() or 'var' in globals() Prefer the first style (EAFP) when coding in Python, because it is generally more reliable."} +{"question_id": 37840812, "score": 109, "creation_date": 1466007708, "tags": ["python", "datetime", "numpy", "pandas", "int"], "instruction": "Pandas: Subtracting two date columns and the result being an integer\n\nI have two columns in a Pandas data frame that are dates. I am looking to subtract one column from another and the result being the difference in numbers of days as an integer. 
A peek at the data: df_test.head(10) Out[20]: First_Date Second Date 0 2016-02-09 2015-11-19 1 2016-01-06 2015-11-30 2 NaT 2015-12-04 3 2016-01-06 2015-12-08 4 NaT 2015-12-09 5 2016-01-07 2015-12-11 6 NaT 2015-12-12 7 NaT 2015-12-14 8 2016-01-06 2015-12-14 9 NaT 2015-12-15 I have created a new column successfully with the difference: df_test['Difference'] = df_test['First_Date'].sub(df_test['Second Date'], axis=0) df_test.head() Out[22]: First_Date Second Date Difference 0 2016-02-09 2015-11-19 82 days 1 2016-01-06 2015-11-30 37 days 2 NaT 2015-12-04 NaT 3 2016-01-06 2015-12-08 29 days 4 NaT 2015-12-09 NaT However I am unable to get a numeric version of the result: df_test['Difference'] = df_test[['Difference']].apply(pd.to_numeric) df_test.head() Out[25]: First_Date Second Date Difference 0 2016-02-09 2015-11-19 7.084800e+15 1 2016-01-06 2015-11-30 3.196800e+15 2 NaT 2015-12-04 NaN 3 2016-01-06 2015-12-08 2.505600e+15 4 NaT 2015-12-09 NaN", "output": "How about: df_test['Difference'] = (df_test['First_Date'] - df_test['Second Date']).dt.days This will return difference as int if there are no missing values(NaT) and float if there is. Pandas have a rich documentation on Time series / date functionality and Time deltas"} +{"question_id": 55522395, "score": 109, "creation_date": 1554401064, "tags": ["python", "matplotlib", "shapely"], "instruction": "How do I plot Shapely polygons and objects using Matplotlib?\n\nI want to use Shapely for my computational geometry project. I need to be able to visualize and display polygons, lines, and other geometric objects for this. I've tried to use Matplotlib for this but I am having trouble with it. from shapely.geometry import Polygon import matplotlib.pyplot as plt polygon1 = Polygon([(0,5), (1,1), (3,0), ]) plt.plot(polygon1) plt.show() I would like to be able to display this polygon in a plot. How would I change my code to do this?", "output": "Use: import matplotlib.pyplot as plt x,y = polygon1.exterior.xy plt.plot(x,y) Or, more succinctly: plt.plot(*polygon1.exterior.xy)"} +{"question_id": 40225683, "score": 109, "creation_date": 1477335832, "tags": ["python", "pandas", "dataframe", "multi-index"], "instruction": "How to simply add a column level to a pandas dataframe\n\nlet's say I have a dataframe that looks like this: df = pd.DataFrame({'A': range(5), 'B': range(5)}, index=list('abcde')) df Out[92]: A B a 0 0 b 1 1 c 2 2 d 3 3 e 4 4 Asumming that this dataframe already exist, how can I simply add a level 'C' to the column index so I get this: df Out[92]: A B C C a 0 0 b 1 1 c 2 2 d 3 3 e 4 4 I saw SO anwser like this python/pandas: how to combine two dataframes into one with hierarchical column index? but this concat different dataframe instead of adding a column level to an already existing dataframe.", "output": "As suggested by @StevenG himself, a better answer: df.columns = pd.MultiIndex.from_product(df.columns.levels + [['C']]) print(df) # A B # C C # a 0 0 # b 1 1 # c 2 2 # d 3 3 # e 4 4"} +{"question_id": 4216985, "score": 109, "creation_date": 1290096961, "tags": ["python", "linux", "url", "operating-system"], "instruction": "Call to operating system to open url?\n\nWhat can I use to call the OS to open a URL in whatever browser the user has as default? 
Not worried about cross-OS compatibility; if it works in linux thats enough for me!", "output": "Here is how to open the user's default browser with a given url: import webbrowser url = \"https://www.google.com/\" webbrowser.open(url, new=0, autoraise=True) Here is the documentation about this functionality. It's part of Python's stdlibs: http://docs.python.org/library/webbrowser.html I have tested this successfully on Linux, Ubuntu 10.10."} +{"question_id": 42063716, "score": 109, "creation_date": 1486371311, "tags": ["python", "pandas", "sum"], "instruction": "Pandas: sum up multiple columns into one column without last column\n\nIf I have a dataframe similar to this one Apples Bananas Grapes Kiwis 2 3 nan 1 1 3 7 nan nan nan 2 3 I would like to add a column like this Apples Bananas Grapes Kiwis Fruit Total 2 3 nan 1 6 1 3 7 nan 11 nan nan 2 3 5 I guess you could use df['Apples'] + df['Bananas'] and so on, but my actual dataframe is much larger than this. I was hoping a formula like df['Fruit Total']=df[-4:-1].sum could do the trick in one line of code. That didn't work however. Is there any way to do it without explicitly summing up all columns?", "output": "You can first select by iloc and then sum: df['Fruit Total']= df.iloc[:, -4:-1].sum(axis=1) print (df) Apples Bananas Grapes Kiwis Fruit Total 0 2.0 3.0 NaN 1.0 5.0 1 1.0 3.0 7.0 NaN 11.0 2 NaN NaN 2.0 3.0 2.0 For sum all columns use: df['Fruit Total']= df.sum(axis=1)"} +{"question_id": 1739913, "score": 109, "creation_date": 1258342648, "tags": ["python", "string"], "instruction": "What is the max length of a Python string?\n\nIf it is environment-independent, what is the theoretical maximum number of characters in a Python string?", "output": "With a 64-bit Python installation, and (say) 64 GB of memory, a Python string of around 63 GB should be quite feasible, if not maximally fast. If you can upgrade your memory beyond 64 GB, your maximum feasible strings should get proportionally longer. (I don't recommend relying on virtual memory to extend that by much, or your runtimes will get simply ridiculous;-). With a typical 32-bit Python installation, the total memory you can use in your application is limited to something like 2 or 3 GB (depending on OS and configuration), so the longest strings you can use will be much smaller than in 64-bit installations with high amounts of RAM."} +{"question_id": 5802108, "score": 109, "creation_date": 1303897379, "tags": ["python", "datetime", "timezone", "pytz"], "instruction": "How to check if a datetime object is localized with pytz?\n\nI want to store a datetime object with a localized UTC timezone. The method that stores the datetime object can be given a non-localized datetime (naive) object or an object that already has been localized. How do I determine if localization is needed? Code with missing if condition: class MyClass: def set_date(self, d): # what do i check here? # if(d.tzinfo): self.date = d.astimezone(pytz.utc) # else: self.date = pytz.utc.localize(d)", "output": "How do I determine if localization is needed? From datetime docs: a datetime object d is aware iff: d.tzinfo is not None and d.tzinfo.utcoffset(d) is not None d is naive iff: d.tzinfo is None or d.tzinfo.utcoffset(d) is None Though if d is a datetime object representing time in UTC timezone then you could use in both cases: self.date = d.replace(tzinfo=pytz.utc) It works regardless d is timezone-aware or naive. 
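(Editor's aside, not part of the answer above.) The aware-vs-naive rule quoted from the datetime docs can be wrapped in a tiny helper; this is just a sketch using the standard-library timezone rather than pytz, and the function name is made up for illustration.

    from datetime import datetime, timezone

    def is_aware(d: datetime) -> bool:
        # mirrors the datetime docs: aware iff tzinfo is set and utcoffset() is not None
        return d.tzinfo is not None and d.tzinfo.utcoffset(d) is not None

    print(is_aware(datetime(2024, 1, 1)))                         # False -> needs localizing
    print(is_aware(datetime(2024, 1, 1, tzinfo=timezone.utc)))    # True  -> astimezone is safe

In the setter from the question, such a check would pick between the astimezone branch and the localize branch.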
Note: don't use datetime.replace() method with a timezone with a non-fixed utc offset (it is ok to use it with UTC timezone but otherwise you should use tz.localize() method)."} +{"question_id": 43241221, "score": 109, "creation_date": 1491424803, "tags": ["python", "python-3.x", "asynchronous", "python-asyncio", "aiohttp"], "instruction": "How can I wrap a synchronous function in an async coroutine?\n\nI'm using aiohttp to build an API server that sends TCP requests off to a seperate server. The module that sends the TCP requests is synchronous and a black box for my purposes. So my problem is that these requests are blocking the entire API. I need a way to wrap the module requests in an asynchronous coroutine that won't block the rest of the API. So, just using sleep as a simple example, is there any way to somehow wrap time-consuming synchronous code in a non-blocking coroutine, something like this: async def sleep_async(delay): # After calling sleep, loop should be released until sleep is done yield sleep(delay) return 'I slept asynchronously'", "output": "Eventually I found an answer in this thread. The method I was looking for is run_in_executor. This allows a synchronous function to be run asynchronously without blocking an event loop. In the sleep example I posted above, it might look like this: import asyncio from time import sleep async def sleep_async(loop, delay): # None uses the default executor (ThreadPoolExecutor) await loop.run_in_executor(None, sleep, delay) return 'I slept asynchronously' Also see the following answer -> How do we call a normal function where a coroutine is expected?"} +{"question_id": 18175489, "score": 109, "creation_date": 1376247095, "tags": ["python", "soap", "python-requests"], "instruction": "Sending SOAP request using Python Requests\n\nIs it possible to use Python's requests library to send a SOAP request?", "output": "It is indeed possible. Here is an example calling the Weather SOAP Service using plain requests lib: import requests url=\"http://wsf.cdyne.com/WeatherWS/Weather.asmx?WSDL\" #headers = {'content-type': 'application/soap+xml'} headers = {'content-type': 'text/xml'} body = \"\"\" \"\"\" response = requests.post(url,data=body,headers=headers) print response.content Some notes: The headers are important. Most SOAP requests will not work without the correct headers. application/soap+xml is probably the more correct header to use (but the weatherservice prefers text/xml This will return the response as a string of xml - you would then need to parse that xml. For simplicity I have included the request as plain text. But best practise would be to store this as a template, then you can load it using jinja2 (for example) - and also pass in variables. For example: from jinja2 import Environment, PackageLoader env = Environment(loader=PackageLoader('myapp', 'templates')) template = env.get_template('soaprequests/WeatherSericeRequest.xml') body = template.render() Some people have mentioned the suds library. Suds is probably the more correct way to be interacting with SOAP, but I often find that it panics a little when you have WDSLs that are badly formed (which, TBH, is more likely than not when you're dealing with an institution that still uses SOAP ;) ). 
You can do the above with suds like so: from suds.client import Client url=\"http://wsf.cdyne.com/WeatherWS/Weather.asmx?WSDL\" client = Client(url) print client ## shows the details of this service result = client.service.GetWeatherInformation() print result Note: when using suds, you will almost always end up needing to use the doctor. Finally, a little bonus for debugging SOAP; TCPdump is your friend. On Mac, you can run TCPdump like so: sudo tcpdump -As 0 This can be helpful for inspecting the requests that actually go over the wire. The above two code snippets are also available as gists: SOAP Request with requests SOAP Request with suds"} +{"question_id": 36787603, "score": 109, "creation_date": 1461310118, "tags": ["python", "python-3.x", "python-internals"], "instruction": "What exactly is __weakref__ in Python?\n\nSurprisingly, there's no explicit documentation for __weakref__. Weak references are explained here. __weakref__ is also shortly mentioned in the documentation of __slots__. But I could not find anything about __weakref__ itself. What exactly is __weakref__? - Is it just a member acting as a flag: If present, the object may be weakly-referenced? - Or is it a function/variable that can be overridden/assigned to get a desired behavior? How?", "output": "__weakref__ is just an opaque object that references all the weak references to the current object. In actual fact it's an instance of weakref (or sometimes weakproxy) which is both a weak reference to the object and part of a doubly linked list to all weak references for that object. It's just an implementation detail that allows the garbage collector to inform weak references that its referent has been collected, and to not allow access to its underlying pointer anymore. The weak reference can't rely on checking the reference count of the object it refers to. This is because that memory may have been reclaimed and is now being used by another object. Best case scenario the VM will crash, worst case the weak reference will allow access to an object it wasn't originally referring to. This is why the garbage collector must inform the weak reference its referent is no longer valid. See weakrefobject.h for the structure and C-API for this object. And the implementation detail is here"} +{"question_id": 24988131, "score": 109, "creation_date": 1406518982, "tags": ["python", "dictionary", "pandas", "dataframe", "multi-index"], "instruction": "Nested dictionary to multiindex dataframe where dictionary keys are column labels\n\nSay I have a dictionary that looks like this: dictionary = {'A' : {'a': [1,2,3,4,5], 'b': [6,7,8,9,1]}, 'B' : {'a': [2,3,4,5,6], 'b': [7,8,9,1,2]}} and I want a dataframe that looks something like this: A B a b a b 0 1 6 2 7 1 2 7 3 8 2 3 8 4 9 3 4 9 5 1 4 5 1 6 2 Is there a convenient way to do this? If I try: In [99]: DataFrame(dictionary) Out[99]: A B a [1, 2, 3, 4, 5] [2, 3, 4, 5, 6] b [6, 7, 8, 9, 1] [7, 8, 9, 1, 2] I get a dataframe where each element is a list. What I need is a multiindex where each level corresponds to the keys in the nested dict and the rows corresponding to each element in the list as shown above. I think I can work a very crude solution but I'm hoping there might be something a bit simpler.", "output": "Pandas wants the MultiIndex values as tuples, not nested dicts. 
The simplest thing is to convert your dictionary to the right format before trying to pass it to DataFrame: >>> reform = {(outerKey, innerKey): values for outerKey, innerDict in dictionary.items() for innerKey, values in innerDict.items()} >>> reform {('A', 'a'): [1, 2, 3, 4, 5], ('A', 'b'): [6, 7, 8, 9, 1], ('B', 'a'): [2, 3, 4, 5, 6], ('B', 'b'): [7, 8, 9, 1, 2]} >>> pandas.DataFrame(reform) A B a b a b 0 1 6 2 7 1 2 7 3 8 2 3 8 4 9 3 4 9 5 1 4 5 1 6 2 [5 rows x 4 columns]"} +{"question_id": 58018300, "score": 109, "creation_date": 1568923633, "tags": ["python", "docker", "caching", "pip", "continuous-integration"], "instruction": "Using a pip cache directory in docker builds\n\nI'm hoping to get my pip install instructions inside my docker builds as fast as possible. I've read many posts explaining how adding your requirements.txt before the rest of the app helps you take advantage of Docker's own image cache if your requirements.txt hasn't changed. But this is no help at all when dependencies do change, even slightly. The next step would be if we could use a consistent pip cache directory. By default, pip will cache downloaded packages in ~/.cache/pip (on Linux), and so if you're ever installing the same version of a module that has been installed before anywhere on the system, it shouldn't need to go and download it again, but instead simply use the cached version. If we could leverage a shared cache directory for docker builds, this could help speed up dependency installs a lot. However, there doesn't appear to be any simple way to mount a volume while running docker build. The build environment seems to be basically impenetrable. I found one article suggesting a genius but complex method of running an rsync server on the host and then, with a hack inside the build to get the host IP, rsyncing the pip cache in from the host. But I'm not relishing the idea of running an rsync server in Jenkins (which isn't the most secure platform at the best of times). Does anyone know if there's any other way to achieve a shared cache volume more simply?", "output": "I suggest you to use buildkit, also see this. Dockerfile: # syntax=docker/dockerfile:1 FROM python:3.6-alpine RUN --mount=type=cache,target=/root/.cache/pip pip install pyyaml NOTE: # syntax = docker/dockerfile:experimental is a must\uff0cyou have to add it at the beginning of Dockerfile to enable this feature. 1. The first execute build: export DOCKER_BUILDKIT=1 docker build --progress=plain -t abc:1 . --no-cache The first log: #9 [stage-0 2/2] RUN --mount=type=cache,target=/root/.cache/pip pip install... 
#9 digest: sha256:55b70da1cbbe4d424f8c50c0678a01e855510bbda9d26f1ac5b983808f3bf4a5 #9 name: \"[stage-0 2/2] RUN --mount=type=cache,target=/root/.cache/pip pip install pyyaml\" #9 started: 2019-09-20 03:11:35.296107357 +0000 UTC #9 1.955 Collecting pyyaml #9 3.050 Downloading https://files.pythonhosted.org/packages/e3/e8/b3212641ee2718d556df0f23f78de8303f068fe29cdaa7a91018849582fe/PyYAML-5.1.2.tar.gz (265kB) #9 5.006 Building wheels for collected packages: pyyaml #9 5.007 Building wheel for pyyaml (setup.py): started #9 5.249 Building wheel for pyyaml (setup.py): finished with status 'done' #9 5.250 Created wheel for pyyaml: filename=PyYAML-5.1.2-cp36-cp36m-linux_x86_64.whl size=44104 sha256=867daf35eab43c2d047ad737ea1e9eaeb4168b87501cd4d62c533f671208acaa #9 5.250 Stored in directory: /root/.cache/pip/wheels/d9/45/dd/65f0b38450c47cf7e5312883deb97d065e030c5cca0a365030 #9 5.267 Successfully built pyyaml #9 5.274 Installing collected packages: pyyaml #9 5.309 Successfully installed pyyaml-5.1.2 #9completed: 2019-09-20 03:11:42.221146294 +0000 UTC #9 duration: 6.925038937s From above, you can see the first time, the build will download pyyaml from internet. 2. The second execute build: docker build --progress=plain -t abc:1 . --no-cache The second log: #9 [stage-0 2/2] RUN --mount=type=cache,target=/root/.cache/pip pip install... #9 digest: sha256:55b70da1cbbe4d424f8c50c0678a01e855510bbda9d26f1ac5b983808f3bf4a5 #9 name: \"[stage-0 2/2] RUN --mount=type=cache,target=/root/.cache/pip pip install pyyaml\" #9 started: 2019-09-20 03:16:58.588157354 +0000 UTC #9 1.786 Collecting pyyaml #9 2.234 Installing collected packages: pyyaml #9 2.270 Successfully installed pyyaml-5.1.2 #9completed: 2019-09-20 03:17:01.933398002 +0000 UTC #9 duration: 3.345240648s From above, you can see the build no longer download package from internet, just use the cache. NOTE, this is not the traditional docker build cache as I have use --no-cache, it's /root/.cache/pip which I mount into build. 3. The third execute build which delete buildkit cache: docker builder prune docker build --progress=plain -t abc:1 . --no-cache The third log: #9 [stage-0 2/2] RUN --mount=type=cache,target=/root/.cache/pip pip install... #9 digest: sha256:55b70da1cbbe4d424f8c50c0678a01e855510bbda9d26f1ac5b983808f3bf4a5 #9 name: \"[stage-0 2/2] RUN --mount=type=cache,target=/root/.cache/pip pip install pyyaml\" #9 started: 2019-09-20 03:19:07.434792944 +0000 UTC #9 1.894 Collecting pyyaml #9 2.740 Downloading https://files.pythonhosted.org/packages/e3/e8/b3212641ee2718d556df0f23f78de8303f068fe29cdaa7a91018849582fe/PyYAML-5.1.2.tar.gz (265kB) #9 3.319 Building wheels for collected packages: pyyaml #9 3.319 Building wheel for pyyaml (setup.py): started #9 3.560 Building wheel for pyyaml (setup.py): finished with status 'done' #9 3.560 Created wheel for pyyaml: filename=PyYAML-5.1.2-cp36-cp36m-linux_x86_64.whl size=44104 sha256=cea5bc4689e231df7915c2fc3abca225d4ee2e869a7540682aacb6d42eb17053 #9 3.560 Stored in directory: /root/.cache/pip/wheels/d9/45/dd/65f0b38450c47cf7e5312883deb97d065e030c5cca0a365030 #9 3.580 Successfully built pyyaml #9 3.585 Installing collected packages: pyyaml #9 3.622 Successfully installed pyyaml-5.1.2 #9completed: 2019-09-20 03:19:12.530742712 +0000 UTC #9 duration: 5.095949768s From above, you can see if delete buildkit cache, the package download again. In a word, it will give you a shared cache between several times build, and this cache will only be mounted when image build. 
But, the image self will not have these cache, so avoid a lots of intermediate layer in image. EDIT for folks who are using docker compose and are lazy to read the comments...: You can also do this with docker-compose if you set COMPOSE_DOCKER_CLI_BUILD=1. For example: COMPOSE_DOCKER_CLI_BUILD=1 DOCKER_BUILDKIT=1 docker-compose build \u2013 UPDATE according to folk's question 2020/09/02: I don't know from which version (my version now is 19.03.11), if not specify mode for cache directory, the cache won't be reused by next time build. Don't know the detail reason, but you could add mode=0755, to Dockerfile to make it work again: Dockerfile: # syntax = docker/dockerfile:experimental FROM python:3.6-alpine RUN --mount=type=cache,mode=0755,target=/root/.cache/pip pip install pyyaml UPDATE according to folk's question 2023/04/23: Q: Where is the cache exactly on the host? A: The cache on host is maintained by docker with an overlay. You could use next command docker buildx du --verbose and find a entry type Type: exec.cachemount, after that you got the ID: ntpjzcz8hhx31b80nwxji05hn: ID: ntpjzcz8hhx31b80nwxji05hn Created at: 2023-04-23 01:36:41.102680066 +0000 UTC Mutable: true Reclaimable: true Shared: false Size: 3.601MB Description: cached mount /root/.cache/pip from exec /bin/sh -c pip install pyyaml Usage count: 2 Last used: 7 minutes ago Type: exec.cachemount Afterwards, go to /var/lib/docker/overlay2/ntpjzcz8hhx31b80nwxji05hn/diff/cache/wheels to find the cached pyyaml (depends on the ID you got from above). For my station, it looks like next: root@shdebian1:/var/lib/docker/overlay2/ntpjzcz8hhx31b80nwxji05hn/diff/cache/wheels/81/5a/02/b3447894318b70e3cbff3cb4f1a50d9d50a848185358de1d71# ls PyYAML-6.0-cp36-cp36m-linux_x86_64.whl"} +{"question_id": 18026980, "score": 109, "creation_date": 1375482160, "tags": ["python", "github", "setuptools"], "instruction": "Python setuptools: How can I list a private repository under install_requires?\n\nI am creating a setup.py file for a project which depends on private GitHub repositories. The relevant parts of the file look like this: from setuptools import setup setup(name='my_project', ..., install_requires=[ 'public_package', 'other_public_package', 'private_repo_1', 'private_repo_2', ], dependency_links=[ 'https://github.com/my_account/private_repo_1/master/tarball/', 'https://github.com/my_account/private_repo_2/master/tarball/', ], ..., ) I am using setuptools instead of distutils because the latter does not support the install_requires and dependency_links arguments per this answer. The above setup file fails to access the private repos with a 404 error - which is to be expected since GitHub returns a 404 to unauthorized requests for a private repository. However, I can't figure out how to make setuptools authenticate. Here are some things I've tried: Use git+ssh:// instead of https:// in dependency_links as I would if installing the repo with pip. This fails because setuptools doesn't recognize this protocol (\"unknown url type: git+ssh\"), though the distribute documentation says it should. Ditto git+https and git+http. https://:@github.com/... - still get a 404. (This method doesn't work with curl or wget from the command line either - though curl -u -O does work.) Upgrading setuptools (0.9.7) and virtualenv (1.10) to the latest versions. Also tried installing distribute though this overview says it was merged back into setuptools. Either way, no dice. 
Currently I just have setup.py print out a warning that the private repos must be downloaded separately. This is obviously less than ideal. I feel like there's something obvious that I'm missing, but can't think what it might be. :) Duplicate-ish question with no answers here.", "output": "I was trying to get this to work for installing with pip, but the above was not working for me. From [1] I understood the PEP508 standard should be used, from [2] I retrieved an example which actually does work (at least for my case). Please note; this is with pip 20.0.2 on Python 3.7.4 setup( name='', ... install_requires=[ '', # Private repository ' @ git+ssh://git@github.com//@', # Public repository ' @ git+https://github.com//@', ], ) After specifying my package this way installation works fine (also with -e settings and without the need to specify --process-dependency-links). References [1] https://github.com/pypa/pip/issues/4187 [2] https://github.com/pypa/pip/issues/5566"} +{"question_id": 33312175, "score": 109, "creation_date": 1445638417, "tags": ["python", "regex"], "instruction": "matching any character including newlines in a Python regex subexpression, not globally\n\nI want to use re.MULTILINE but NOT re.DOTALL, so that I can have a regex that includes both an \"any character\" wildcard and the normal . wildcard that doesn't match newlines. Is there a way to do this? What should I use to match any character in those instances that I want to include newlines?", "output": "To match a newline, or \"any symbol\" without re.S/re.DOTALL, you may use any of the following: (?s). - the inline modifier group with s flag on sets a scope where all . patterns match any char including line break chars Any of the following work-arounds: [\\s\\S] [\\w\\W] [\\d\\D] The main idea is that the opposite shorthand classes inside a character class match any symbol there is in the input string. Comparing it to (.|\\s) and other variations with alternation, the character class solution is much more efficient as it involves much less backtracking (when used with a * or + quantifier). Compare the small example: it takes (?:.|\\n)+ 45 steps to complete, and it takes [\\s\\S]+ just 2 steps. See a Python demo where I am matching a line starting with 123 and up to the first occurrence of 3 at the start of a line and including the rest of that line: import re text = \"\"\"abc 123 def 356 more text...\"\"\" print( re.findall(r\"^123(?s:.*?)^3.*\", text, re.M) ) # => ['123\\ndef\\n356'] print( re.findall(r\"^123[\\w\\W]*?^3.*\", text, re.M) ) # => ['123\\ndef\\n356']"} +{"question_id": 29147442, "score": 109, "creation_date": 1426775846, "tags": ["python", "pylint"], "instruction": "How to fix pylint logging-not-lazy?\n\nI am using prospector to examine my code. Pylint returned a logging-not-lazy warning about my debug message. Line: 31 pylint: logging-not-lazy / Specify string format arguments as logging function parameters (col 16) Line: 42 pylint: logging-not-lazy / Specify string format arguments as logging function parameters (col 12) My code is: logging.debug(\"detect mimetypes faild because %s\" % e ) How do I fix logging-not-lazy in pylint?", "output": "This means, that you should rewrite your code as: logging.debug(\"detect mimetypes faild because %s\", e) According to https://docs.python.org/2/library/logging.html Logger.debug(msg, *args, **kwargs) ... Logs a message with level DEBUG on this logger. 
The msg is the message format string, and the args are the arguments which are merged into msg using the string formatting operator. (Note that this means that you can use keywords in the format string, together with a single dictionary argument.) ..."} +{"question_id": 12660870, "score": 109, "creation_date": 1349004292, "tags": ["python", "syntax"], "instruction": "Associativity of \"in\" in Python?\n\nI'm making a Python parser, and this is really confusing me: >>> 1 in [] in 'a' False >>> (1 in []) in 'a' TypeError: 'in ' requires string as left operand, not bool >>> 1 in ([] in 'a') TypeError: 'in ' requires string as left operand, not list How exactly does in work in Python, with regards to associativity, etc.? Why do no two of these expressions behave the same way?", "output": "1 in [] in 'a' is evaluated as (1 in []) and ([] in 'a').\u00b9 Since the first condition (1 in []) is False, the whole condition is evaluated as False; ([] in 'a') is never actually evaluated, so no error is raised. We can see how Python executes each statement using the dis module: >>> from dis import dis >>> dis(\"1 in [] in 'a'\") 1 0 LOAD_CONST 0 (1) 2 BUILD_LIST 0 4 DUP_TOP 6 ROT_THREE 8 CONTAINS_OP 0 # `in` is the contains operator 10 JUMP_IF_FALSE_OR_POP 18 # skip to 18 if the first # comparison is false 12 LOAD_CONST 1 ('a') # 12-16 are never executed 14 CONTAINS_OP 0 # so no error here (14) 16 RETURN_VALUE >> 18 ROT_TWO 20 POP_TOP 22 RETURN_VALUE >>> dis(\"(1 in []) in 'a'\") 1 0 LOAD_CONST 0 (1) 2 LOAD_CONST 1 (()) 4 CONTAINS_OP 0 # perform 1 in [] 6 LOAD_CONST 2 ('a') # now load 'a' 8 CONTAINS_OP 0 # check if result of (1 in []) is in 'a' # throws Error because (False in 'a') # is a TypeError 10 RETURN_VALUE >>> dis(\"1 in ([] in 'a')\") 1 0 LOAD_CONST 0 (1) 2 BUILD_LIST 0 4 LOAD_CONST 1 ('a') 6 CONTAINS_OP 0 # perform ([] in 'a'), which is # incorrect, so it throws a TypeError 8 CONTAINS_OP 0 # if no Error then this would # check if 1 is in the result of ([] in 'a') 10 RETURN_VALUE Except that [] is only evaluated once. This doesn't matter in this example but if you (for example) replaced [] with a function that returned a list, that function would only be called once (at most). The documentation explains also this."} +{"question_id": 34370599, "score": 109, "creation_date": 1450527188, "tags": ["python", "dictionary", "set"], "instruction": "Difference between dict and set (python)\n\nSo, I know that this, a = {} # dict constructs an empty dictionary. Now, I also picked up that this, b = {1, 2, 3} # set creates a set. This can easily be verified, as, >>>print(type(a)) >>>print(type(b)) While I understand what it does, I fail to see why we use 'set notation' for empty dictionaries. I tried to find some more information about the logic behind this in the set and dict sections of the manual, but sadly, I got nothing out of it. Could anyone explain to me why we do this in this way? Is it for historical reasons, or am I missing something blatantly obvious?", "output": "There were no set literals in Python 2, historically curly braces were only used for dictionaries. Sets could be produced from lists (or any iterables): set([1, 2, 3]) set([i for i in range(1, 3)]) To create an empty set, use my_set = set() Python 3 introduced set literals and comprehensions (see PEP-3100) which allowed us to avoid intermediate lists: {1, 2, 3} {i for i in range(1, 3)} The empty set form, however, was reserved for dictionaries due to backwards compatibility. References from [Python-3000] sets in P3K? 
states: I'm sure we can work something out --- I agree, {} for empty set and {:} for empty dict would be ideal, were it not for backward compatibility. I liked the \"special empty object\" idea when I first wrote the PEP (i.e., have {} be something that could turn into either a set or dict), but one of the instructors here convinced me that it would just lead to confusion in newcomers' minds (as well as being a pain to implement). The following message describes these rules better: I think Guido had the best solution. Use set() for empty sets, use {} for empty dicts, use {genexp} for set comprehensions/displays, use {1,2,3} for explicit set literals, and use {k1:v1, k2:v2} for dict literals. We can always add {/} later if demand exceeds distaste."} +{"question_id": 36250353, "score": 109, "creation_date": 1459099625, "tags": ["python", "python-import", "python-module", "shadowing"], "instruction": "Importing a library from (or near) a script with the same name raises \"AttributeError: module has no attribute\" or an ImportError or NameError\n\nI have a script named requests.py that needs to use the third-party requests package. The script either can't import the package, or can't access its functionality. Why isn't this working, and how do I fix it? Trying a plain import and then using the functionality results in an AttributeError: import requests res = requests.get('http://www.google.ca') print(res) Traceback (most recent call last): File \"/Users/me/dev/rough/requests.py\", line 1, in import requests File \"/Users/me/dev/rough/requests.py\", line 3, in requests.get('http://www.google.ca') AttributeError: module 'requests' has no attribute 'get' In more recent versions of Python, the error message instead reads AttributeError: partially initialized module 'requests' has no attribute 'get' (most likely due to a circular import). Using from-import of a specific name results in an ImportError: from requests import get res = get('http://www.google.ca') print(res) Traceback (most recent call last): File \"requests.py\", line 1, in from requests import get File \"/Users/me/dev/rough/requests.py\", line 1, in from requests import get ImportError: cannot import name 'get' In more recent versions of Python, the error message instead reads ImportError: cannot import name 'get' from partially initialized module 'requests' (most likely due to a circular import) (/Users/me/dev/rough/requests.py). Using from-import for a module inside the package results in a different ImportError: from requests.auth import AuthBase Traceback (most recent call last): File \"requests.py\", line 1, in from requests.auth import AuthBase File \"/Users/me/dev/rough/requests.py\", line 1, in from requests.auth import AuthBase ImportError: No module named 'requests.auth'; 'requests' is not a package Using a star-import and then using the functionality raises a NameError: from requests import * res = get('http://www.google.ca') print(res) Traceback (most recent call last): File \"requests.py\", line 1, in from requests import * File \"/Users/me/dev/rough/requests.py\", line 3, in res = get('http://www.google.ca') NameError: name 'get' is not defined From Python 3.13 onwards the error message becomes very clear, from the documentation: A common mistake is to write a script with the same name as a standard library module. 
When this results in errors, we now display a more helpful error message: $ python random.py Traceback (most recent call last): File \"/home/me/random.py\", line 1, in import random File \"/home/me/random.py\", line 3, in print(random.randint(5)) ^^^^^^^^^^^^^^ AttributeError: module 'random' has no attribute 'randint' (consider renaming '/home/me/random.py' since it has the same name as the standard library module named 'random' and prevents importing that standard library module) Similarly, if a script has the same name as a third-party module that it attempts to import and this results in errors, we also display a more helpful error message: $ python numpy.py Traceback (most recent call last): File \"/home/me/numpy.py\", line 1, in import numpy as np File \"/home/me/numpy.py\", line 3, in np.array([1, 2, 3]) ^^^^^^^^ AttributeError: module 'numpy' has no attribute 'array' (consider renaming '/home/me/numpy.py' if it has the same name as a library you intended to import) For cases where you name your module the same as an existing one on purpose and want to handle that situation, see How can I import from the standard library, when my project has a module with the same name? (How can I control where Python looks for modules?)", "output": "This happens because your local module named requests.py shadows the installed requests module you are trying to use. The current directory is prepended to sys.path, so the local name takes precedence over the installed name. An extra debugging tip when this comes up is to look at the Traceback carefully, and realize that the name of your script in question is matching the module you are trying to import: Notice the name you used in your script: File \"/Users/me/dev/rough/requests.py\", line 1, in The module you are trying to import: requests Rename your module to something else to avoid the name collision. Python may generate a requests.pyc file next to your requests.py file (in the __pycache__ directory in Python 3). Remove that as well after your rename, as the interpreter will still reference that file, re-producing the error. However, the pyc file in __pycache__ should not affect your code if the py file has been removed. In the example, renaming the file to my_requests.py, removing requests.pyc, and running again successfully prints . Note: This doesn't only happen when naming your file as the module you are trying to import. This can also happen if you name your file the same as a module imported by a module you import directly. For example, having a file called copy.py and trying to import pandas from there, will give ImportError: cannot import name 'copy' from 'copy' That is because pandas imports copy. There is no magic solution here as you can't know all the modules' names in the world, but a rule of thumb is to try to make names of modules as unique as possible and try to change the name whenever you get such error."} +{"question_id": 32554527, "score": 109, "creation_date": 1442178221, "tags": ["python", "list", "indexing", "typeerror"], "instruction": "TypeError: list indices must be integers or slices, not str\n\nI've got two lists that I want to merge into a single array and finally put it in a csv file. 
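An aside on the name-shadowing answer above (not part of the original post; a minimal sketch): a quick way to confirm which file Python actually imported is to print the module's __file__ attribute.

import requests

# If this prints a path inside your own project rather than site-packages,
# a local requests.py is shadowing the installed library.
print(requests.__file__)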
How I can avoid this error : def fill_csv(self, array_urls, array_dates, csv_file_path): result_array = [] array_length = str(len(array_dates)) # We fill the CSV file file = open(csv_file_path, \"w\") csv_file = csv.writer(file, delimiter=';', lineterminator='\\n') # We merge the two arrays in one for i in array_length: result_array[i][0].append(array_urls[i]) result_array[i][1].append(array_dates[i]) i += 1 csv_file.writerows(result_array) And got : File \"C:\\Users\\--\\gcscan.py\", line 63, in fill_csv result_array[i][0].append(array_urls[i]) TypeError: list indices must be integers or slices, not str How can my count work ?", "output": "First, array_length should be an integer and not a string: array_length = len(array_dates) Second, your for loop should be constructed using range: for i in range(array_length): # Use `xrange` for python 2. Third, i will increment automatically, so delete the following line: i += 1 Note, one could also just zip the two lists given that they have the same length: import csv dates = ['2020-01-01', '2020-01-02', '2020-01-03'] urls = ['www.abc.com', 'www.cnn.com', 'www.nbc.com'] csv_file_patch = '/path/to/filename.csv' with open(csv_file_patch, 'w') as fout: csv_file = csv.writer(fout, delimiter=';', lineterminator='\\n') result_array = zip(dates, urls) csv_file.writerows(result_array)"} +{"question_id": 3217682, "score": 108, "creation_date": 1278732075, "tags": ["python", "regex", "django", "validation", "email"], "instruction": "How to validate an Email address in Django?\n\nI have written a function for adding emails to newsletter base. Until I've added checking validity of sent email it was working flawlessly. Now each time I'm getting \"Wrong email\" in return. Can anybody see any errors here ? The regex used is : \\b[\\w\\.-]+@[\\w\\.-]+\\.\\w{2,4}\\b and it is 100% valid (http://gskinner.com/RegExr/), but I may be using it wrong, or it may be some logic error : def newsletter_add(request): if request.method == \"POST\": try: e = NewsletterEmails.objects.get(email = request.POST['email']) message = _(u\"Email is already added.\") type = \"error\" except NewsletterEmails.DoesNotExist: if validateEmail(request.POST['email']): try: e = NewsletterEmails(email = request.POST['email']) except DoesNotExist: pass message = _(u\"Email added.\") type = \"success\" e.save() else: message = _(u\"Wrong email\") type = \"error\" import re def validateEmail(email): if len(email) > 6: if re.match('\\b[\\w\\.-]+@[\\w\\.-]+\\.\\w{2,4}\\b', email) != None: return 1 return 0", "output": "UPDATE 2017: the code below is 7 years old and was since modified, fixed and expanded. For anyone wishing to do this now, the correct code lives around here. Here is part of django.core.validators you may find interesting :) class EmailValidator(RegexValidator): def __call__(self, value): try: super(EmailValidator, self).__call__(value) except ValidationError, e: # Trivial case failed. 
Try for possible IDN domain-part if value and u'@' in value: parts = value.split(u'@') domain_part = parts[-1] try: parts[-1] = parts[-1].encode('idna') except UnicodeError: raise e super(EmailValidator, self).__call__(u'@'.join(parts)) else: raise email_re = re.compile( r\"(^[-!#$%&'*+/=?^_`{}|~0-9A-Z]+(\\.[-!#$%&'*+/=?^_`{}|~0-9A-Z]+)*\" # dot-atom r'|^\"([\\001-\\010\\013\\014\\016-\\037!#-\\[\\]-\\177]|\\\\[\\001-011\\013\\014\\016-\\177])*\"' # quoted-string r')@(?:[A-Z0-9](?:[A-Z0-9-]{0,61}[A-Z0-9])?\\.)+[A-Z]{2,6}\\.?$', re.IGNORECASE) # domain validate_email = EmailValidator(email_re, _(u'Enter a valid e-mail address.'), 'invalid') so if you don't want to use forms and form fields, you can import email_re and use it in your function, or even better - import validate_email and use it, catching possible ValidationError. def validateEmail( email ): from django.core.validators import validate_email from django.core.exceptions import ValidationError try: validate_email( email ) return True except ValidationError: return False And here is Mail::RFC822::Address regexp used in PERL, if you really need to be that paranoid."} +{"question_id": 21837208, "score": 108, "creation_date": 1392663787, "tags": ["python"], "instruction": "Check if a number is odd or even in Python\n\nI'm trying to make a program which checks if a word is a palindrome and I've gotten so far and it works with words that have an even amount of numbers. I know how to make it do something if the amount of letters is odd but I just don't know how to find out if a number is odd. Is there any simple way to find if a number is odd or even? Just for reference, this is my code: a = 0 while a == 0: print(\"\\n \\n\" * 100) print(\"Please enter a word to check if it is a palindrome: \") word = input(\"?: \") wordLength = int(len(word)) finalWordLength = int(wordLength / 2) firstHalf = word[:finalWordLength] secondHalf = word[finalWordLength + 1:] secondHalf = secondHalf[::-1] print(firstHalf) print(secondHalf) if firstHalf == secondHalf: print(\"This is a palindrom\") else: print(\"This is not a palindrom\") print(\"Press enter to restart\") input()", "output": "if num % 2 == 0: pass # Even else: pass # Odd The % sign is like division only it checks for the remainder, so if the number divided by 2 has a remainder of 0 it's even otherwise odd. Or reverse them for a little speed improvement, since any number above 0 is also considered \"True\" you can skip needing to do any equality check: if num % 2: pass # Odd else: pass # Even"} +{"question_id": 48016351, "score": 108, "creation_date": 1514510785, "tags": ["python", "python-3.x", "anaconda"], "instruction": "How to make new anaconda env from yml file\n\nI installed anaconda in C:\\Program Files\\Anaconda3. Every time to create a new env, I just do cmd and write: conda create --name envname python=3.5 But how can i install a new env from the \"environments.yml\" file", "output": "The above answers did not work for me with conda 4.7.12, but this (from the Anaconda documentation) did: conda env create -f environment.yml"} +{"question_id": 40718760, "score": 108, "creation_date": 1479726737, "tags": ["python", "pycharm", "pytest"], "instruction": "Unable to debug in pycharm with pytest\n\nI cannot debug in PyCharm using py.test. All the test suite is running ok in \"Debug mode\" but it doesn't stop on breakpoints. I also have py.test as the default test runner. Maybe this is not important, but debugging works correctly in my Django server. Any ideas? 
picture of enable_breakpoints_and_the_mode_of_pycharm_is_debug References: pycharm-enabling-disabling-and-removing-breakpoints Run/Debug Configuration: py.test", "output": "For my situation, I found what the problem is: If there is --cov in pytest.ini, then breakpoints in pycharm won't work, after deleting all --cov in pytest.ini, the breakpoints in pycharm can work. Reason: \"The coverage module and pycharm's debugger use the same tracing api (sys.settrace) - they don't work together. \" -- https://github.com/pytest-dev/pytest-cov/issues/131 References: How to debug py.test in PyCharm when coverage is enabled"} +{"question_id": 40182944, "score": 108, "creation_date": 1477073258, "tags": ["python", "assert", "raise"], "instruction": "What's the difference between raise, try, and assert?\n\nI have been learning Python and the raise function* and assert are really similar (what I realised is that both of them crash the app, unlike try - except) and I can't see a situation where you would use raise or assert over try. So, what is the difference between raise, try, and assert? * Editor's note: It's a statement, not a function.", "output": "Assert Used when you want to \"stop\" the script based on a certain condition and return something to help debug faster: list_ = [\"a\",\"b\",\"x\"] assert \"x\" in list_, \"x is not in the list\" print(\"passed\") #>> prints passed list_ = [\"a\",\"b\",\"c\"] assert \"x\" in list_, \"x is not in the list\" print(\"passed\") #>> Traceback (most recent call last): File \"python\", line 2, in AssertionError: x is not in the list Raise Two reasons where this is useful: 1. To be used with try and except blocks Raise an error of your choosing - could be custom like below and doesn't stop the script if you pass or continue the script; or can be predefined errors raise ValueError() class Custom_error(BaseException): pass try: print(\"hello\") raise Custom_error print(\"world\") except Custom_error: print(\"found it not stopping now\") print(\"im outside\") Output: hello found it not stopping now im outside Notice it didn't stop? We can stop it using just exit(1) in the except block. 2. Re-raise raise can also be used to re-raise the current error to pass it up the stack to see if something else can handle it. except SomeError as e: if not can_handle(e): raise handle(e) Try/Except blocks Does exactly what you think: tries something, and if an error comes up, catches it and deals with it however you like. No example since there's one above."} +{"question_id": 52357542, "score": 108, "creation_date": 1537124984, "tags": ["python", "numpy", "tensorflow", "attributeerror", "tensor"], "instruction": "AttributeError: 'Tensor' object has no attribute 'numpy'\n\nI downloaded this code from GitHub. predicted_id = tf.multinomial(tf.exp(predictions), num_samples=1)[0][0].numpy() But I get an error that says: AttributeError: 'Tensor' object has no attribute 'numpy' What is wrong, and how do I fix it?", "output": "I suspect the place where you copied the code from had eager execution enabled, i.e. had invoked tf.enable_eager_execution() at the start of the program. You could do the same. UPDATE: Note that eager execution is enabled by default in TensorFlow 2.0. So the answer above applies only to TensorFlow 1.x"} +{"question_id": 53312590, "score": 108, "creation_date": 1542257357, "tags": ["python", "windows"], "instruction": "How can I check all the installed Python versions on Windows?\n\nPlease note I'm not asking \"how to check which version of Python did I install\". 
I've installed several versions of Pythons on my Windows computer, for example Python 2.7-64, Python 2.7-32, and Python 3.7-32. Python 3 includes \"py\" and \"pyw\" which helps me to easily start different Pythons, for example: \"py -2.7\" starts Python 2.7-64 \"py -2.7-32\" starts Python 2.7-32 \"py -3.7-32\" starts Python 3.7-32 What I'm wondering is, how to check how many different versions of Python did I install on my Windows PC and what versions are they? PyCharm is able to find it but, for one thing, I don't know if it is a complete list, and for another, I wonder if there is any tool provided by Python or the operating system can do it.", "output": "I just got the answer. By typing \"py -h\" or \"py --help\" I got the help message: C:\\Users\\admin>py -h Python Launcher for Windows Version 3.7.1150.1013 usage: py [launcher-args] [python-args] script [script-args] Launcher arguments: -2 : Launch the latest Python 2.x version -3 : Launch the latest Python 3.x version -X.Y : Launch the specified Python version The above all default to 64 bit if a matching 64 bit python is present. -X.Y-32: Launch the specified 32bit Python version -X-32 : Launch the latest 32bit Python X version -X.Y-64: Launch the specified 64bit Python version -X-64 : Launch the latest 64bit Python X version -0 --list : List the available pythons -0p --list-paths : List with paths Which tells me that \"-0\" (zero, not letter \"O\") lists the available pythons: C:\\Users\\admin>py -0 Installed Pythons found by py Launcher for Windows -3.7-64 * -3.7-32 -2.7-64 -2.7-32 While \"-0p\" lists not only the versions, but also the paths: C:\\Users\\admin>py -0p Installed Pythons found by py Launcher for Windows -3.7-64 C:\\Users\\admin\\AppData\\Local\\Programs\\Python\\Python37\\python.exe * -3.7-32 C:\\Users\\admin\\AppData\\Local\\Programs\\Python\\Python37-32\\python.exe -2.7-64 C:\\Python27_64\\python.exe -2.7-32 C:\\Python27_32\\python.exe To install a Python version that is not listed there run py install followed by the version number, e.g. py install 3.14"} +{"question_id": 53583199, "score": 108, "creation_date": 1543774597, "tags": ["java", "python", "macos", "apache-spark", "pyspark"], "instruction": "Spark Error - Unsupported class file major version\n\nI'm trying to install Spark on my Mac. I've used home-brew to install spark 2.4.0 and Scala. I've installed PySpark in my anaconda environment and am using PyCharm for development. I've exported to my bash profile: export SPARK_VERSION=`ls /usr/local/Cellar/apache-spark/ | sort | tail -1` export SPARK_HOME=\"/usr/local/Cellar/apache-spark/$SPARK_VERSION/libexec\" export PYTHONPATH=$SPARK_HOME/python/:$PYTHONPATH export PYTHONPATH=$SPARK_HOME/python/lib/py4j-0.9-src.zip:$PYTHONPATH However I'm unable to get it to work. I suspect this is due to java version from reading the traceback. I would really appreciate some help fixed the issue. Please comment if there is any information I could provide that is helpful beyond the traceback. 
I am getting the following error: Traceback (most recent call last): File \"\", line 4, in File \"/anaconda3/envs/coda/lib/python3.6/site-packages/pyspark/rdd.py\", line 816, in collect sock_info = self.ctx._jvm.PythonRDD.collectAndServe(self._jrdd.rdd()) File \"/anaconda3/envs/coda/lib/python3.6/site-packages/py4j/java_gateway.py\", line 1257, in __call__ answer, self.gateway_client, self.target_id, self.name) File \"/anaconda3/envs/coda/lib/python3.6/site-packages/py4j/protocol.py\", line 328, in get_return_value format(target_id, \".\", name), value) py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe. : java.lang.IllegalArgumentException: Unsupported class file major version 55", "output": "Edit Spark 3.0 supports Java 11, so you'll need to upgrade Spark runs on Java 8/11, Scala 2.12, Python 2.7+/3.4+ and R 3.1+. Java 8 prior to version 8u92 support is deprecated as of Spark 3.0.0 Original answer Until Spark supports Java 11, or higher (which would be hopefully be mentioned at the latest documentation when it is), you have to add in a flag to set your Java version to Java 8. As of Spark 2.4.x Spark runs on Java 8, Python 2.7+/3.4+ and R 3.1+. For the Scala API, Spark 2.4.4 uses Scala 2.12. You will need to use a compatible Scala version (2.12.x) On Mac/Unix, see asdf-java for installing different Javas On a Mac, I am able to do this in my .bashrc, export JAVA_HOME=$(/usr/libexec/java_home -v 1.8) On Windows, checkout Chocolately, but seriously just use WSL2 or Docker to run Spark. You can also set this in spark-env.sh rather than set the variable for your whole profile. And, of course, this all means you'll need to install Java 8 in addition to your existing Java 11"} +{"question_id": 36470343, "score": 108, "creation_date": 1460016761, "tags": ["python", "python-3.x", "matplotlib"], "instruction": "How to draw a line with matplotlib?\n\nI cannot find a way to draw an arbitrary line with matplotlib Python library. It allows to draw horizontal and vertical lines (with matplotlib.pyplot.axhline and matplotlib.pyplot.axvline, for example), but i do not see how to draw a line through two given points (x1, y1) and (x2, y2). Is there a way? Is there a simple way?", "output": "As of matplotlib 3.3, you can do this with plt.axline((x1, y1), (x2, y2))."} +{"question_id": 3241929, "score": 108, "creation_date": 1279058755, "tags": ["python", "image", "image-processing", "colors"], "instruction": "How to find the dominant/most common color in an image?\n\nI'm looking for a way to find the most dominant color/tone in an image using python. Either the average shade or the most common out of RGB will do. I've looked at the Python Imaging library, and could not find anything relating to what I was looking for in their manual, and also briefly at VTK. I did however find a PHP script which does what I need, here (login required to download). The script seems to resize the image to 150*150, to bring out the dominant colors. However, after that, I am fairly lost. I did consider writing something that would resize the image to a small size then check every other pixel or so for it's image, though I imagine this would be very inefficient (though implementing this idea as a C python module might be an idea). However, after all of that, I am still stumped. Is there an easy, efficient way to find the dominant color in an image?", "output": "Here's code making use of Pillow and Scipy's cluster package. 
For simplicity I've hardcoded the filename as \"image.jpg\". Resizing the image is for speed: if you don't mind the wait, comment out the resize call. When run on this sample image, it usually says the dominant colour is #d8c865, which corresponds roughly to the bright yellowish area to the lower left of the two peppers. I say \"usually\" because the clustering algorithm used has a degree of randomness to it. There are various ways you could change this, but for your purposes it may suit well. (Check out the options on the kmeans2() variant if you need deterministic results.) from __future__ import print_function import binascii import struct from PIL import Image import numpy as np import scipy import scipy.misc import scipy.cluster NUM_CLUSTERS = 5 print('reading image') im = Image.open('image.jpg') im = im.resize((150, 150)) # optional, to reduce time ar = np.asarray(im) shape = ar.shape ar = ar.reshape(scipy.product(shape[:2]), shape[2]).astype(float) print('finding clusters') codes, dist = scipy.cluster.vq.kmeans(ar, NUM_CLUSTERS) print('cluster centres:\\n', codes) vecs, dist = scipy.cluster.vq.vq(ar, codes) # assign codes counts, bins = scipy.histogram(vecs, len(codes)) # count occurrences index_max = scipy.argmax(counts) # find most frequent peak = codes[index_max] colour = binascii.hexlify(bytearray(int(c) for c in peak)).decode('ascii') print('most frequent is %s (#%s)' % (peak, colour)) Note: when I expand the number of clusters to find from 5 to 10 or 15, it frequently gave results that were greenish or bluish. Given the input image, those are reasonable results too... I can't tell which colour is really dominant in that image either, so I don't fault the algorithm! Also a small bonus: save the reduced-size image with only the N most-frequent colours: # bonus: save image using only the N most common colours import imageio c = ar.copy() for i, code in enumerate(codes): c[scipy.r_[scipy.where(vecs==i)],:] = code imageio.imwrite('clusters.png', c.reshape(*shape).astype(np.uint8)) print('saved clustered image')"} +{"question_id": 48174935, "score": 108, "creation_date": 1515524100, "tags": ["python", "macos", "virtual", "environment", "conda"], "instruction": "Conda: Creating a virtual environment\n\nI'm trying to create a virtual environment. I've followed steps from both Conda and Medium. Everything works fine until I need to source the new environment: conda info -e # conda environments: # base * /Users/fwrenn/anaconda3 test_env /Users/fwrenn/anaconda3/envs/test_env source ~/anaconda3/bin/activate test_env _CONDA_ROOT=/Users/fwrenn/anaconda3: Command not found. Badly placed ()'s. I can't figure out the problem. Searching on here has solutions that say adding lines to your bash_profile file, but I don't work in Bash, only C shell (csh). It looks like it's unable to build the directory path in activate. My particulars: OS X Output of python --version: Python 3.6.3 :: Anaconda custom (64-bit) Output of conda --version: conda 4.4.7", "output": "I was able to solve my problem. Executing the source activate test_env command wasn't picking up my .bash_profile, and I normally work in tcsh. Simply starting a subprocess in Bash was enough to get activate working. I guess I assumed, incorrectly, that the activate command would start a child process in Bash and use Bash environment variables. 
> conda info -e > # conda environments: > # > base * ~/anaconda3 > test_env ~/anaconda3/envs/test_env > bash ~$ source ~/anaconda3/bin/activate test_env (test_env) ~$ (test_env) ~$ conda info -e # conda environments: # test_env * ~/anaconda3/envs/test_env root ~/anaconda3"} +{"question_id": 45281297, "score": 108, "creation_date": 1500901130, "tags": ["python", "pandas", "date", "group-by"], "instruction": "group by week in pandas\n\nI'm having this data frame: Name Date Quantity Apple 07/11/17 20 orange 07/14/17 20 Apple 07/14/17 70 Orange 07/25/17 40 Apple 07/20/17 30 I want to aggregate this by Name and Date to get sum of quantities Details: Date: Group, the result should be at the beginning of the week (or just on Monday) Quantity: Sum, if two or more records have same Name and Date (if falls on same interval) The desired output is given below: Name Date Quantity Apple 07/10/17 90 orange 07/10/17 20 Apple 07/17/17 30 orange 07/24/17 40", "output": "First, convert column date to_datetime and subtract one week as we want the sum for the week ahead of the date and not the week before that date. Then use groupby with Grouper by W-MON and aggregate sum: df['Date'] = pd.to_datetime(df['Date']) - pd.to_timedelta(7, unit='d') df = df.groupby(['Name', pd.Grouper(key='Date', freq='W-MON')])['Quantity'] .sum() .reset_index() .sort_values('Date') print (df) Name Date Quantity 0 Apple 2017-07-10 90 3 orange 2017-07-10 20 1 Apple 2017-07-17 30 2 Orange 2017-07-24 40"} +{"question_id": 38336090, "score": 108, "creation_date": 1468346894, "tags": ["python", "unit-testing", "mocking", "patch"], "instruction": "Mocking a global variable\n\nI've been trying to implement some unit tests for a module. An example module named alphabet.py is as follows: import database def length_letters(): return len(letters) def contains_letter(letter): return letter in letters letters = database.get('letters') # returns a list of letters I'd like to mock the response from a database with some values of my choice, but the code below doesn't seem to work. import unittests import alphabet from unittest.mock import patch class TestAlphabet(unittest.TestCase): @patch('alphabet.letters') def setUp(self, mock_letters): mock_letters.return_value = ['a', 'b', 'c'] def test_length_letters(self): self.assertEqual(3, alphabet.length_letters()) def test_contains_letter(self): self.assertTrue(alphabet.contains_letter('a')) I have seen many examples in which 'patch' is applied to methods and classes, but not to variables. I prefer not to patch the method database.get because I may use it again with different parameters later on, so I would need a different response. What am I doing wrong here?", "output": "Try this: import unittest import alphabet from unittest import mock class TestAlphabet(unittest.TestCase): def setUp(self): self.mock_letters = mock.patch.object( alphabet, 'letters', new=['a', 'b', 'c'] ) def test_length_letters(self): with self.mock_letters: self.assertEqual(3, alphabet.length_letters()) def test_contains_letter(self): with self.mock_letters: self.assertTrue(alphabet.contains_letter('a')) You need to apply the mock while the individual tests are actually running, not just in setUp(). We can create the mock in setUp(), and apply it later with a with ...
Context Manager. Note that letters is a plain list rather than a callable, so it is patched with new= instead of return_value= (patching with return_value= would replace the list with a MagicMock and both tests would fail)."} +{"question_id": 17957890, "score": 108, "creation_date": 1375220244, "tags": ["python", "numpy", "pandas"], "instruction": "pandas select from Dataframe using startswith\n\nThis works (using Pandas 12 dev) table2=table[table['SUBDIVISION'] =='INVERNESS'] Then I realized I needed to select the field using \"starts with\" Since I was missing a bunch. So per the Pandas doc as near as I could follow I tried criteria = table['SUBDIVISION'].map(lambda x: x.startswith('INVERNESS')) table2 = table[criteria] And got AttributeError: 'float' object has no attribute 'startswith' So I tried an alternate syntax with the same result table[[x.startswith('INVERNESS') for x in table['SUBDIVISION']]] Reference http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing Section 4: List comprehensions and map method of Series can also be used to produce more complex criteria: What am I missing?", "output": "You can use the str.startswith DataFrame method to give more consistent results: In [11]: s = pd.Series(['a', 'ab', 'c', 11, np.nan]) In [12]: s Out[12]: 0 a 1 ab 2 c 3 11 4 NaN dtype: object In [13]: s.str.startswith('a', na=False) Out[13]: 0 True 1 True 2 False 3 False 4 False dtype: bool and the boolean indexing will work just fine (I prefer to use loc, but it works just the same without): In [14]: s.loc[s.str.startswith('a', na=False)] Out[14]: 0 a 1 ab dtype: object It looks like at least one of your elements in the Series/column is a float, which doesn't have a startswith method, hence the AttributeError; the list comprehension should raise the same error..."} +{"question_id": 29158220, "score": 108, "creation_date": 1426817358, "tags": ["python", "tkinter"], "instruction": "Tkinter understanding mainloop\n\nTill now, I used to end my Tkinter programs with: tk.mainloop(), or nothing would show up! See example: from Tkinter import * import random import time tk = Tk() tk.title = \"Game\" tk.resizable(0,0) tk.wm_attributes(\"-topmost\", 1) canvas = Canvas(tk, width=500, height=400, bd=0, highlightthickness=0) canvas.pack() class Ball: def __init__(self, canvas, color): self.canvas = canvas self.id = canvas.create_oval(10, 10, 25, 25, fill=color) self.canvas.move(self.id, 245, 100) def draw(self): pass ball = Ball(canvas, \"red\") tk.mainloop() However, when I tried the next step in this program (making the ball move over time), the book I am reading from says to do the following. So I changed the draw function to: def draw(self): self.canvas.move(self.id, 0, -1) and added the following code to my program: while 1: ball.draw() tk.update_idletasks() tk.update() time.sleep(0.01) But I noticed that adding this block of code made the use of tk.mainloop() useless, since everything would show up even without it!!! At this moment I should mention that my book never talks about tk.mainloop() (maybe because it uses Python 3) but I learned about it searching the web, since my programs didn't work by copying the book's code! So I tried doing the following, which would not work!!! while 1: ball.draw() tk.mainloop() time.sleep(0.01) What's going on? What does tk.mainloop() do? What do tk.update_idletasks() and tk.update() do, and how does that differ from tk.mainloop()? Should I use the above loop, tk.mainloop(), or both in my programs?", "output": "tk.mainloop() blocks. It means that execution of your Python commands halts there. You can see that by writing: while 1: ball.draw() tk.mainloop() print(\"hello\") #NEW CODE time.sleep(0.01) You will never see the output from the print statement.
Because there is no loop, the ball doesn't move. On the other hand, the methods update_idletasks() and update() here: while True: ball.draw() tk.update_idletasks() tk.update() ...do not block; after those methods finish, execution will continue, so the while loop will execute over and over, which makes the ball move. An infinite loop containing the method calls update_idletasks() and update() can act as a substitute for calling tk.mainloop(). Note that the whole while loop can be said to block just like tk.mainloop() because nothing after the while loop will execute. However, tk.mainloop() is not a substitute for just the lines: tk.update_idletasks() tk.update() Rather, tk.mainloop() is a substitute for the whole while loop: while True: tk.update_idletasks() tk.update() Response to comment: Here is what the tcl docs say: Update idletasks This subcommand of update flushes all currently-scheduled idle events from Tcl's event queue. Idle events are used to postpone processing until \u201cthere is nothing else to do\u201d, with the typical use case for them being Tk's redrawing and geometry recalculations. By postponing these until Tk is idle, expensive redraw operations are not done until everything from a cluster of events (e.g., button release, change of current window, etc.) are processed at the script level. This makes Tk seem much faster, but if you're in the middle of doing some long running processing, it can also mean that no idle events are processed for a long time. By calling update idletasks, redraws due to internal changes of state are processed immediately. (Redraws due to system events, e.g., being deiconified by the user, need a full update to be processed.) APN As described in Update considered harmful, use of update to handle redraws not handled by update idletasks has many issues. Joe English in a comp.lang.tcl posting describes an alternative: So update_idletasks() causes some subset of events to be processed that update() causes to be processed. From the update docs: update ?idletasks? The update command is used to bring the application \u201cup to date\u201d by entering the Tcl event loop repeatedly until all pending events (including idle callbacks) have been processed. If the idletasks keyword is specified as an argument to the command, then no new events or errors are processed; only idle callbacks are invoked. This causes operations that are normally deferred, such as display updates and window layout calculations, to be performed immediately. KBK (12 February 2000) -- My personal opinion is that the [update] command is not one of the best practices, and a programmer is well advised to avoid it. I have seldom if ever seen a use of [update] that could not be more effectively programmed by another means, generally appropriate use of event callbacks. By the way, this caution applies to all the Tcl commands (vwait and tkwait are the other common culprits) that enter the event loop recursively, with the exception of using a single [vwait] at global level to launch the event loop inside a shell that doesn't launch it automatically. The commonest purposes for which I've seen [update] recommended are: Keeping the GUI alive while some long-running calculation is executing. See Countdown program for an alternative. 2) Waiting for a window to be configured before doing things like geometry management on it. The alternative is to bind on events such as that notify the process of a window's geometry. See Centering a window for an alternative. What's wrong with update? 
There are several answers. First, it tends to complicate the code of the surrounding GUI. If you work the exercises in the Countdown program, you'll get a feel for how much easier it can be when each event is processed on its own callback. Second, it's a source of insidious bugs. The general problem is that executing [update] has nearly unconstrained side effects; on return from [update], a script can easily discover that the rug has been pulled out from under it. There's further discussion of this phenomenon over at Update considered harmful. ..... Is there any chance I can make my program work without the while loop? Yes, but things get a little tricky. You might think something like the following would work: class Ball: def __init__(self, canvas, color): self.canvas = canvas self.id = canvas.create_oval(10, 10, 25, 25, fill=color) self.canvas.move(self.id, 245, 100) def draw(self): while True: self.canvas.move(self.id, 0, -1) ball = Ball(canvas, \"red\") ball.draw() tk.mainloop() The problem is that ball.draw() will cause execution to enter an infinite loop in the draw() method, so tk.mainloop() will never execute, and your widgets will never display. In gui programming, infinite loops have to be avoided at all costs in order to keep the widgets responsive to user input, e.g. mouse clicks. So, the question is: how do you execute something over and over again without actually creating an infinite loop? Tkinter has an answer for that problem: a widget's after() method: from Tkinter import * import random import time tk = Tk() tk.title = \"Game\" tk.resizable(0,0) tk.wm_attributes(\"-topmost\", 1) canvas = Canvas(tk, width=500, height=400, bd=0, highlightthickness=0) canvas.pack() class Ball: def __init__(self, canvas, color): self.canvas = canvas self.id = canvas.create_oval(10, 10, 25, 25, fill=color) self.canvas.move(self.id, 245, 100) def draw(self): self.canvas.move(self.id, 0, -1) self.canvas.after(1, self.draw) #(time_delay, method_to_execute) ball = Ball(canvas, \"red\") ball.draw() #Changed per Bryan Oakley's comment tk.mainloop() The after() method doesn't block (it actually creates another thread of execution), so execution continues on in your python program after after() is called, which means tk.mainloop() executes next, so your widgets get configured and displayed. The after() method also allows your widgets to remain responsive to other user input. Try running the following program, and then click your mouse on different spots on the canvas: from Tkinter import * import random import time root = Tk() root.title = \"Game\" root.resizable(0,0) root.wm_attributes(\"-topmost\", 1) canvas = Canvas(root, width=500, height=400, bd=0, highlightthickness=0) canvas.pack() class Ball: def __init__(self, canvas, color): self.canvas = canvas self.id = canvas.create_oval(10, 10, 25, 25, fill=color) self.canvas.move(self.id, 245, 100) self.canvas.bind(\"\", self.canvas_onclick) self.text_id = self.canvas.create_text(300, 200, anchor='se') self.canvas.itemconfig(self.text_id, text='hello') def canvas_onclick(self, event): self.canvas.itemconfig( self.text_id, text=\"You clicked at ({}, {})\".format(event.x, event.y) ) def draw(self): self.canvas.move(self.id, 0, -1) self.canvas.after(50, self.draw) ball = Ball(canvas, \"red\") ball.draw() #Changed per Bryan Oakley's comment. 
root.mainloop()"} +{"question_id": 34008010, "score": 108, "creation_date": 1448917588, "tags": ["python", "string", "algorithm", "time-complexity", "string-concatenation"], "instruction": "Is the time-complexity of iterative string append actually O(n^2), or O(n)?\n\nI am working on a problem out of CTCI. The third problem of chapter 1 has you take a string such as 'Mr John Smith ' and asks you to replace the intermediary spaces with %20: 'Mr%20John%20Smith' The author offers this solution in Python, calling it O(n): def urlify(string, length): '''function replaces single spaces with %20 and removes trailing spaces''' counter = 0 output = '' for char in string: counter += 1 if counter > length: return output elif char == ' ': output = output + '%20' elif char != ' ': output = output + char return output My question: I understand that this is O(n) in terms of scanning through the actual string from left to right. But aren't strings in Python immutable? If I have a string and I add another string to it with the + operator, doesn't it allocate the necessary space, copy over the original, and then copy over the appending string? If I have a collection of n strings each of length 1, then that takes: 1 + 2 + 3 + 4 + 5 + ... + n = n(n+1)/2 or O(n^2) time, yes? Or am I mistaken in how Python handles appending? Alternatively, if you'd be willing to teach me how to fish: How would I go about finding this out for myself? I've been unsuccessful in my attempts to Google an official source. I found https://wiki.python.org/moin/TimeComplexity but this doesn't have anything on strings.", "output": "In CPython, the standard implementation of Python, there's an implementation detail that makes this usually O(n), implemented in the code the bytecode evaluation loop calls for + or += with two string operands. If Python detects that the left argument has no other references, it calls realloc to attempt to avoid a copy by resizing the string in place. This is not something you should ever rely on, because it's an implementation detail and because if realloc ends up needing to move the string frequently, performance degrades to O(n^2) anyway. Without the weird implementation detail, the algorithm is O(n^2) due to the quadratic amount of copying involved. Code like this would only make sense in a language with mutable strings, like C++, and even in C++ you'd want to use +=."} +{"question_id": 11992448, "score": 108, "creation_date": 1345138026, "tags": ["python", "pragma"], "instruction": "Pragmas in python\n\nI'm reading bottle.py source code. It's a web framework, with only 3000+ lines python code. So cool. I found some code like this: class ServerAdapter(object): quiet = False def __init__(self, host='127.0.0.1', port=8080, **config): self.options = config self.host = host self.port = int(port) def run(self, handler): # pragma: no cover pass ... What does the # pragma: no cover mean? I can't find any introduce about the pragma syntax in the python documentations.", "output": "It is apparenly related to the coverage.py: Coverage.py is a tool for measuring code coverage of Python programs. It monitors your program, noting which parts of the code have been executed, then analyzes the source to identify code that could have been executed but was not. 
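As a hedged illustration of the pragma discussed here (not from the original answer; the function name is made up), an exclusion typically looks like this in practice:

def start_server(port):  # pragma: no cover
    # coverage.py will not report these lines as missed,
    # even though the test suite never exercises them
    print("serving on port", port)

if __name__ == "__main__":  # pragma: no cover
    start_server(8080)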
That exact # pragma: no cover is the hint that the part of code should be ignored by the tool -- see Excluding code from coverage ."} +{"question_id": 5060659, "score": 108, "creation_date": 1298242335, "tags": ["python", "regex"], "instruction": "regexes: How to access multiple matches of a group?\n\nI am putting together a fairly complex regular expression. One part of the expression matches strings such as '+a', '-57' etc. A + or a - followed by any number of letters or numbers. I want to match 0 or more strings matching this pattern. This is the expression I came up with: ([\\+-][a-zA-Z0-9]+)* If I were to search the string '-56+a' using this pattern I would expect to get two matches: +a and -56 However, I only get the last match returned: >>> m = re.match(\"([\\+-][a-zA-Z0-9]+)*\", '-56+a') >>> m.groups() ('+a',) Looking at the python docs I see that: If a group matches multiple times, only the last match is accessible: >>> m = re.match(r\"(..)+\", \"a1b2c3\") # Matches 3 times. >>> m.group(1) # Returns only the last match. 'c3' So, my question is: how do you access multiple group matches?", "output": "Drop the * from your regex (so it matches exactly one instance of your pattern). Then use either re.findall(...) or re.finditer (see here) to return all matches. It sounds like you're essentially building a recursive descent parser. For relatively simple parsing tasks, it is quite common and entirely reasonable to do that by hand. If you're interested in a library solution (in case your parsing task may become more complicated later on, for example), have a look at pyparsing."} +{"question_id": 33866695, "score": 107, "creation_date": 1448268149, "tags": ["python", "macos", "postgresql", "psycopg2"], "instruction": "Error Installing Psycopg2 on MacOS 10.9.5\n\nI'm trying to install Psycopg2 on my Macbook, but I am getting an error. I found a lot of the same questions on StackOverflow but no answer seems to work. I'm using: OS: MacOS 10.9.5 Python Version: 3.4.3 My error code is: Running setup.py egg_info for package psycopg2 Error: pg_config executable not found. Please add the directory containing pg_config to the PATH or specify the full executable path with the option: python setup.py build_ext --pg-config /path/to/pg_config build ... or with the pg_config option in 'setup.cfg'. Complete output from command python setup.py egg_info: running egg_info writing pip-egg-info/psycopg2.egg-info/PKG-INFO writing top-level names to pip-egg-info/psycopg2.egg-info/top_level.txt writing dependency_links to pip-egg-info/psycopg2.egg-info/dependency_links.txt warning: manifest_maker: standard file '-c' not found Error: pg_config executable not found. Please add the directory containing pg_config to the PATH or specify the full executable path with the option: python setup.py build_ext --pg-config /path/to/pg_config build ... or with the pg_config option in 'setup.cfg'. ---------------------------------------- Command python setup.py egg_info failed with error code 1 in /Users/sg/build/psycopg2 Storing complete log in /Users/sg/Library/Logs/pip.log", "output": "I ran pip install psycopg2-binary and it worked like charm More info about the binary package Python 3 pip3 install psycopg2-binary"} +{"question_id": 2224742, "score": 107, "creation_date": 1265662053, "tags": ["python", "datetime"], "instruction": "Most recent previous business day in Python\n\nI need to subtract business days from the current date. 
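An illustrative aside for the re.findall suggestion in the group-matching answer above (a minimal sketch, not the original author's code):

import re

# With the trailing * removed, each signed token is returned as its own match.
print(re.findall(r"[+-][a-zA-Z0-9]+", "-56+a"))  # ['-56', '+a']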
I currently have some code which needs always to be running on the most recent business day. So that may be today if we're Monday thru Friday, but if it's Saturday or Sunday then I need to set it back to the Friday before the weekend. I currently have some pretty clunky code to do this: lastBusDay = datetime.datetime.today() if datetime.date.weekday(lastBusDay) == 5: #if it's Saturday lastBusDay = lastBusDay - datetime.timedelta(days = 1) #then make it Friday elif datetime.date.weekday(lastBusDay) == 6: #if it's Sunday lastBusDay = lastBusDay - datetime.timedelta(days = 2); #then make it Friday Is there a better way? Can I tell timedelta to work in weekdays rather than calendar days for example?", "output": "Use pandas! import datetime # BDay is business day, not birthday... from pandas.tseries.offsets import BDay today = datetime.datetime.today() print(today - BDay(4)) Since today is Thursday, Sept 26, that will give you an output of: datetime.datetime(2013, 9, 20, 14, 8, 4, 89761)"} +{"question_id": 3038459, "score": 107, "creation_date": 1276529089, "tags": ["python", "django", "django-templates"], "instruction": "What is the path that Django uses for locating and loading templates?\n\nI'm following this tutorial on a Windows 7 environment. My settings file has this definition: TEMPLATE_DIRS = ( 'C:/django-project/myapp/mytemplates/admin' ) I got the base_template from the template admin/base_site.html from within the default Django admin template directory in the source code of Django itself (django/contrib/admin/templates) into an admin subdirectory of myapp directory as the tutorial instructed, but it doesn't seem to take affect for some reason. Any clue of what might be the problem?", "output": "I know this isn't in the Django tutorial, and shame on them, but it's better to set up relative paths for your path variables. You can set it up like so: import os.path PROJECT_PATH = os.path.realpath(os.path.dirname(__file__)) ... MEDIA_ROOT = os.path.join(PROJECT_PATH, 'media/') TEMPLATE_DIRS = [ os.path.join(PROJECT_PATH, 'templates/'), ] This way you can move your Django project and your path roots will update automatically. This is useful when you're setting up your production server. Second, there's something suspect to your TEMPLATE_DIRS path. It should point to the root of your template directory. Also, it should also end in a trailing /. I'm just going to guess here that the .../admin/ directory is not your template root. If you still want to write absolute paths you should take out the reference to the admin template directory. TEMPLATE_DIRS = [ 'C:/django-project/myapp/mytemplates/', ] With that being said, the template loaders by default should be set up to recursively traverse into your app directories to locate template files. TEMPLATE_LOADERS = [ 'django.template.loaders.filesystem.load_template_source', 'django.template.loaders.app_directories.load_template_source', # 'django.template.loaders.eggs.load_template_source', ] You shouldn't need to copy over the admin templates unless if you specifically want to overwrite something. You will have to run a syncdb if you haven't run it yet. You'll also need to statically server your media files if you're hosting django through runserver."} +{"question_id": 65806330, "score": 107, "creation_date": 1611133698, "tags": ["python", "amazon-web-services", "docker", "dockerfile", "aws-codebuild"], "instruction": "toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading\n\nWhy does this happen, when I want to build an image from a Dockerfile in CodeCommit with CodeBuild? I get this Error: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit", "output": "Try not to pull the images from the docker hub because docker has throttling for pulling the images. Use ECR(Elastic Container Registry) for private images and Amazon ECR Public Gallery for public docker images. Advice for customers dealing with Docker Hub rate limits, and a Coming Soon announcement for the advice from AWS for handling this. Update: Docker Hub will only allow an unauthenticated 10/pulls per hour starting March 1st"} +{"question_id": 50501787, "score": 107, "creation_date": 1527140613, "tags": ["python", "pandas"], "instruction": "Python Pandas User Warning: Sorting because non-concatenation axis is not aligned\n\nI'm doing some code practice and applying merging of data frames while doing this getting user warning /usr/lib64/python2.7/site-packages/pandas/core/frame.py:6201: FutureWarning: Sorting because non-concatenation axis is not aligned. A future version of pandas will change to not sort by default. To accept the future behavior, pass 'sort=True'. To retain the current behavior and silence the warning, pass sort=False On these lines of code: Can you please help to get the solution of this warning. placement_video = [self.read_sql_vdx_summary, self.read_sql_video_km] placement_video_summary = reduce(lambda left, right: pd.merge(left, right, on='PLACEMENT', sort=False), placement_video) placement_by_video = placement_video_summary.loc[:, [\"PLACEMENT\", \"PLACEMENT_NAME\", \"COST_TYPE\", \"PRODUCT\", \"VIDEONAME\", \"VIEW0\", \"VIEW25\", \"VIEW50\", \"VIEW75\", \"VIEW100\", \"ENG0\", \"ENG25\", \"ENG50\", \"ENG75\", \"ENG100\", \"DPE0\", \"DPE25\", \"DPE50\", \"DPE75\", \"DPE100\"]] # print (placement_by_video) placement_by_video[\"Placement# Name\"] = placement_by_video[[\"PLACEMENT\", \"PLACEMENT_NAME\"]].apply(lambda x: \".\".join(x), axis=1) placement_by_video_new = placement_by_video.loc[:, [\"PLACEMENT\", \"Placement# Name\", \"COST_TYPE\", \"PRODUCT\", \"VIDEONAME\", \"VIEW0\", \"VIEW25\", \"VIEW50\", \"VIEW75\", \"VIEW100\", \"ENG0\", \"ENG25\", \"ENG50\", \"ENG75\", \"ENG100\", \"DPE0\", \"DPE25\", \"DPE50\", \"DPE75\", \"DPE100\"]] placement_by_km_video = [placement_by_video_new, self.read_sql_km_for_video] placement_by_km_video_summary = reduce(lambda left, right: pd.merge(left, right, on=['PLACEMENT', 'PRODUCT'], sort=False), placement_by_km_video) #print (list(placement_by_km_video_summary)) #print(placement_by_km_video_summary) #exit() # print(placement_by_video_new) \"\"\"Conditions for 25%view\"\"\" mask17 = placement_by_km_video_summary[\"PRODUCT\"].isin(['Display', 'Mobile']) mask18 = placement_by_km_video_summary[\"COST_TYPE\"].isin([\"CPE\", \"CPM\", \"CPCV\"]) mask19 = placement_by_km_video_summary[\"PRODUCT\"].isin([\"InStream\"]) mask20 = placement_by_km_video_summary[\"COST_TYPE\"].isin([\"CPE\", \"CPM\", \"CPE+\", \"CPCV\"]) mask_video_video_completions = placement_by_km_video_summary[\"COST_TYPE\"].isin([\"CPCV\"]) mask21 = placement_by_km_video_summary[\"COST_TYPE\"].isin([\"CPE+\"]) mask22 = placement_by_km_video_summary[\"COST_TYPE\"].isin([\"CPE\", \"CPM\"]) mask23 = placement_by_km_video_summary[\"PRODUCT\"].isin(['Display', 'Mobile', 'InStream']) mask24 = 
placement_by_km_video_summary[\"COST_TYPE\"].isin([\"CPE\", \"CPM\", \"CPE+\"]) choice25video_eng = placement_by_km_video_summary[\"ENG25\"] choice25video_vwr = placement_by_km_video_summary[\"VIEW25\"] choice25video_deep = placement_by_km_video_summary[\"DPE25\"] placement_by_km_video_summary[\"25_pc_video\"] = np.select([mask17 & mask18, mask19 & mask20, mask17 & mask21], [choice25video_eng, choice25video_vwr, choice25video_deep]) \"\"\"Conditions for 50%view\"\"\" choice50video_eng = placement_by_km_video_summary[\"ENG50\"] choice50video_vwr = placement_by_km_video_summary[\"VIEW50\"] choice50video_deep = placement_by_km_video_summary[\"DPE50\"] placement_by_km_video_summary[\"50_pc_video\"] = np.select([mask17 & mask18, mask19 & mask20, mask17 & mask21], [choice50video_eng, choice50video_vwr, choice50video_deep]) \"\"\"Conditions for 75%view\"\"\" choice75video_eng = placement_by_km_video_summary[\"ENG75\"] choice75video_vwr = placement_by_km_video_summary[\"VIEW75\"] choice75video_deep = placement_by_km_video_summary[\"DPE75\"] placement_by_km_video_summary[\"75_pc_video\"] = np.select([mask17 & mask18, mask19 & mask20, mask17 & mask21], [choice75video_eng, choice75video_vwr, choice75video_deep]) \"\"\"Conditions for 100%view\"\"\" choice100video_eng = placement_by_km_video_summary[\"ENG100\"] choice100video_vwr = placement_by_km_video_summary[\"VIEW100\"] choice100video_deep = placement_by_km_video_summary[\"DPE100\"] choicecompletions = placement_by_km_video_summary['COMPLETIONS'] placement_by_km_video_summary[\"100_pc_video\"] = np.select([mask17 & mask22, mask19 & mask24, mask17 & mask21, mask23 & mask_video_video_completions], [choice100video_eng, choice100video_vwr, choice100video_deep, choicecompletions]) \"\"\"conditions for 0%view\"\"\" choice0video_eng = placement_by_km_video_summary[\"ENG0\"] choice0video_vwr = placement_by_km_video_summary[\"VIEW0\"] choice0video_deep = placement_by_km_video_summary[\"DPE0\"] placement_by_km_video_summary[\"Views\"] = np.select([mask17 & mask18, mask19 & mask20, mask17 & mask21], [choice0video_eng, choice0video_vwr, choice0video_deep]) #print (placement_by_km_video_summary) #exit() #final Table placement_by_video_summary = placement_by_km_video_summary.loc[:, [\"PLACEMENT\", \"Placement# Name\", \"PRODUCT\", \"VIDEONAME\", \"COST_TYPE\", \"Views\", \"25_pc_video\", \"50_pc_video\", \"75_pc_video\",\"100_pc_video\", \"ENGAGEMENTS\",\"IMPRESSIONS\", \"DPEENGAMENTS\"]] #placement_by_km_video = [placement_by_video_summary, self.read_sql_km_for_video] #placement_by_km_video_summary = reduce(lambda left, right: pd.merge(left, right, on=['PLACEMENT', 'PRODUCT']), #placement_by_km_video) #print(placement_by_video_summary) #exit() # dup_col =[\"IMPRESSIONS\",\"ENGAGEMENTS\",\"DPEENGAMENTS\"] # placement_by_video_summary.loc[placement_by_video_summary.duplicated(dup_col),dup_col] = np.nan # print (\"Dhar\",placement_by_video_summary) '''adding views based on conditions''' #filter maximum value from videos placement_by_video_summary_new = placement_by_km_video_summary.loc[ placement_by_km_video_summary.reset_index().groupby(['PLACEMENT', 'PRODUCT'])['Views'].idxmax()] #print (placement_by_video_summary_new) #exit() # print (placement_by_video_summary_new) # mask22 = (placement_by_video_summary_new.PRODUCT.str.upper ()=='DISPLAY') & (placement_by_video_summary_new.COST_TYPE=='CPE') placement_by_video_summary_new.loc[mask17 & mask18, 'Views'] = placement_by_video_summary_new['ENGAGEMENTS'] placement_by_video_summary_new.loc[mask19 & mask20, 'Views'] = 
placement_by_video_summary_new['IMPRESSIONS'] placement_by_video_summary_new.loc[mask17 & mask21, 'Views'] = placement_by_video_summary_new['DPEENGAMENTS'] #print (placement_by_video_summary_new) #exit() placement_by_video_summary = placement_by_video_summary.drop(placement_by_video_summary_new.index).append( placement_by_video_summary_new).sort_index() placement_by_video_summary[\"Video Completion Rate\"] = placement_by_video_summary[\"100_pc_video\"] / \\ placement_by_video_summary[\"Views\"] placement_by_video_final = placement_by_video_summary.loc[:, [\"Placement# Name\", \"PRODUCT\", \"VIDEONAME\", \"Views\", \"25_pc_video\", \"50_pc_video\", \"75_pc_video\", \"100_pc_video\", \"Video Completion Rate\"]]", "output": "tl;dr: concat and append currently sort the non-concatenation index (e.g. columns if you're adding rows) if the columns don't match. In pandas 0.23 this started generating a warning; pass the parameter sort=True to silence it. In the future the default will change to not sort, so it's best to specify either sort=True or False now, or better yet ensure that your non-concatenation indices match. The warning is new in pandas 0.23.0: In a future version of pandas pandas.concat() and DataFrame.append() will no longer sort the non-concatenation axis when it is not already aligned. The current behavior is the same as the previous (sorting), but now a warning is issued when sort is not specified and the non-concatenation axis is not aligned, link. More information from linked very old github issue, comment by smcinerney : When concat'ing DataFrames, the column names get alphanumerically sorted if there are any differences between them. If they're identical across DataFrames, they don't get sorted. This sort is undocumented and unwanted. Certainly the default behavior should be no-sort. After some time the parameter sort was implemented in pandas.concat and DataFrame.append: sort : boolean, default None Sort non-concatenation axis if it is not already aligned when join is 'outer'. The current default of sorting is deprecated and will change to not-sorting in a future version of pandas. Explicitly pass sort=True to silence the warning and sort. Explicitly pass sort=False to silence the warning and not sort. This has no effect when join='inner', which already preserves the order of the non-concatenation axis. So if both DataFrames have the same columns in the same order, there is no warning and no sorting: df1 = pd.DataFrame({\"a\": [1, 2], \"b\": [0, 8]}, columns=['a', 'b']) df2 = pd.DataFrame({\"a\": [4, 5], \"b\": [7, 3]}, columns=['a', 'b']) print (pd.concat([df1, df2])) a b 0 1 0 1 2 8 0 4 7 1 5 3 df1 = pd.DataFrame({\"a\": [1, 2], \"b\": [0, 8]}, columns=['b', 'a']) df2 = pd.DataFrame({\"a\": [4, 5], \"b\": [7, 3]}, columns=['b', 'a']) print (pd.concat([df1, df2])) b a 0 0 1 1 8 2 0 7 4 1 3 5 But if the DataFrames have different columns, or the same columns in a different order, pandas returns a warning if no parameter sort is explicitly set (sort=None is the default value): df1 = pd.DataFrame({\"a\": [1, 2], \"b\": [0, 8]}, columns=['b', 'a']) df2 = pd.DataFrame({\"a\": [4, 5], \"b\": [7, 3]}, columns=['a', 'b']) print (pd.concat([df1, df2])) FutureWarning: Sorting because non-concatenation axis is not aligned. 
a b 0 1 0 1 2 8 0 4 7 1 5 3 print (pd.concat([df1, df2], sort=True)) a b 0 1 0 1 2 8 0 4 7 1 5 3 print (pd.concat([df1, df2], sort=False)) b a 0 0 1 1 8 2 0 7 4 1 3 5 If the DataFrames have different columns, but the first columns are aligned - they will be correctly assigned to each other (columns a and b from df1 with a and b from df2 in the example below) because they exist in both. For other columns that exist in one but not both DataFrames, missing values are created. Lastly, if you pass sort=True, columns are sorted alphanumerically. If sort=False and the second DafaFrame has columns that are not in the first, they are appended to the end with no sorting: df1 = pd.DataFrame({\"a\": [1, 2], \"b\": [0, 8], 'e':[5, 0]}, columns=['b', 'a','e']) df2 = pd.DataFrame({\"a\": [4, 5], \"b\": [7, 3], 'c':[2, 8], 'd':[7, 0]}, columns=['c','b','a','d']) print (pd.concat([df1, df2])) FutureWarning: Sorting because non-concatenation axis is not aligned. a b c d e 0 1 0 NaN NaN 5.0 1 2 8 NaN NaN 0.0 0 4 7 2.0 7.0 NaN 1 5 3 8.0 0.0 NaN print (pd.concat([df1, df2], sort=True)) a b c d e 0 1 0 NaN NaN 5.0 1 2 8 NaN NaN 0.0 0 4 7 2.0 7.0 NaN 1 5 3 8.0 0.0 NaN print (pd.concat([df1, df2], sort=False)) b a e c d 0 0 1 5.0 NaN NaN 1 8 2 0.0 NaN NaN 0 7 4 NaN 2.0 7.0 1 3 5 NaN 8.0 0.0 In your code: placement_by_video_summary = placement_by_video_summary.drop(placement_by_video_summary_new.index) .append(placement_by_video_summary_new, sort=True) .sort_index()"} +{"question_id": 31633635, "score": 107, "creation_date": 1437885088, "tags": ["python"], "instruction": "What is the meaning of \"int(a[::-1])\" in Python?\n\nI cannot understand this. I have seen this in people's code. But cannot figure out what it does. This is in Python. str(int(a[::-1]))", "output": "Assuming a is a string. The Slice notation in python has the syntax - list[::] So, when you do a[::-1], it starts from the end towards the first taking each element. So it reverses a. This is applicable for lists/tuples as well. Example - >>> a = '1234' >>> a[::-1] '4321' Then you convert it to int and then back to string (Though not sure why you do that) , that just gives you back the string."} +{"question_id": 1740726, "score": 107, "creation_date": 1258358419, "tags": ["python", "evaluation"], "instruction": "Turn string into operator\n\nHow can I turn a string such as \"+\" into the operator plus?", "output": "Use a lookup table: import operator ops = { \"+\": operator.add, \"-\": operator.sub } # etc. print(ops[\"+\"](1,1)) # prints 2"} +{"question_id": 28056171, "score": 107, "creation_date": 1421792255, "tags": ["python", "pandas"], "instruction": "How to build and fill pandas dataframe from for loop?\n\nHere is a simple example of the code I am running, and I would like the results put into a pandas dataframe (unless there is a better option): for p in game.players.passing(): print p, p.team, p.passing_att, p.passer_rating() R.Wilson SEA 29 55.7 J.Ryan SEA 1 158.3 A.Rodgers GB 34 55.8 Using this code: d = [] for p in game.players.passing(): d = [{'Player': p, 'Team': p.team, 'Passer Rating': p.passer_rating()}] pd.DataFrame(d) I can get: Passer Rating Player Team 0 55.8 A.Rodgers GB Which is a 1x3 dataframe, and I understand why it is only one row but I can't figure out how to make it multi-row with the columns in the correct order. Ideally the solution would be able to deal with n number of rows (based on p) and it would be wonderful (although not essential) if the number of columns would be set by the number of stats requested. 
Any suggestions? Thanks in advance!", "output": "Try this using list comprehension: import pandas as pd df = pd.DataFrame( [p, p.team, p.passing_att, p.passer_rating()] for p in game.players.passing() )"} +{"question_id": 27032052, "score": 107, "creation_date": 1416459277, "tags": ["python", "datetime", "pandas"], "instruction": "How do I properly set the Datetimeindex for a Pandas datetime object in a dataframe?\n\nI have a pandas dataframe: lat lng alt days date time 0 40.003834 116.321462 211 39745.175405 2008-10-24 04:12:35 1 40.003783 116.321431 201 39745.175463 2008-10-24 04:12:40 2 40.003690 116.321429 203 39745.175521 2008-10-24 04:12:45 3 40.003589 116.321427 194 39745.175579 2008-10-24 04:12:50 4 40.003522 116.321412 190 39745.175637 2008-10-24 04:12:55 5 40.003509 116.321484 188 39745.175694 2008-10-24 04:13:00 For which I am trying to convert the df['date'] and df['time'] columns into a datetime. I can do: df['Datetime'] = pd.to_datetime(df['date']+df['time']) df = df.set_index(['Datetime']) del df['date'] del df['time'] And I get: lat lng alt days Datetime 2008-10-2404:12:35 40.003834 116.321462 211 39745.175405 2008-10-2404:12:40 40.003783 116.321431 201 39745.175463 2008-10-2404:12:45 40.003690 116.321429 203 39745.175521 2008-10-2404:12:50 40.003589 116.321427 194 39745.175579 2008-10-2404:12:55 40.003522 116.321412 190 39745.175637 But then if I try: df.between_time(time(1),time(22,59,59))['lng'].std() I get an error - 'TypeError: Index must be DatetimeIndex' So, I've also tried setting the DatetimeIndex: df['Datetime'] = pd.to_datetime(df['date']+df['time']) #df = df.set_index(['Datetime']) df = df.set_index(pd.DatetimeIndex(df['Datetime'])) del df['date'] del df['time'] And this throws an error also - 'DateParseError: unknown string format' How do I create the datetime column and DatetimeIndex correctly so that df.between_time() works right?", "output": "To simplify Kirubaharan's answer a bit: df['Datetime'] = pd.to_datetime(df['date'] + ' ' + df['time']) df = df.set_index('Datetime') And to get rid of unwanted columns (as OP did but did not specify per se in the question): df = df.drop(['date','time'], axis=1)"} +{"question_id": 7896495, "score": 107, "creation_date": 1319580522, "tags": ["python"], "instruction": "python: how to check if a line is an empty line\n\nTrying to figure out how to write an if cycle to check if a line is empty. The file has many strings, and one of these is a blank line to separate from the other statements (not a \"\"; is a carriage return followed by another carriage return I think) new statement asdasdasd asdasdasdasd new statement asdasdasdasd asdasdasdasd Since I am using the file input module, is there a way to check if a line is empty? Using this code it seems to work, thanks everyone! for line in x: if line == '\\n': print \"found an end of line\" x.close()", "output": "If you want to ignore lines with only whitespace: if line.strip(): ... do something The empty string is a False value. Or if you really want only empty lines: if line in ['\\n', '\\r\\n']: ... do something"} +{"question_id": 8315389, "score": 107, "creation_date": 1322589370, "tags": ["python", "callstack"], "instruction": "How do I print functions as they are called?\n\nIn debugging a Python script, I'd really like to know the entire call stack for my entire program. 
An ideal situation would be if there were a command-line flag for python that would cause Python to print all function names as they are called (I checked man Python2.7, but didn't find anything of this sort). Because of the number of functions in this script, I'd prefer not to add a print statement to the beginning of each function and/or class, if possible. An intermediate solution would be to use PyDev's debugger, place a couple breakpoints and check the call stack for given points in my program, so I'll use this approach for the time being. I'd still prefer to see a complete list of all functions called throughout the life of the program, if such a method exists.", "output": "You can do this with a trace function (props to Spacedman for improving the original version of this to trace returns and use some nice indenting): def tracefunc(frame, event, arg, indent=[0]): if event == \"call\": indent[0] += 2 print(\"-\" * indent[0] + \"> call function\", frame.f_code.co_name) elif event == \"return\": print(\"<\" + \"-\" * indent[0], \"exit function\", frame.f_code.co_name) indent[0] -= 2 return tracefunc import sys sys.setprofile(tracefunc) main() # or whatever kicks off your script Note that a function's code object usually has the same name as the associated function, but not always, since functions can be created dynamically. Unfortunately, Python doesn't track the function objects on the stack (I've sometimes fantasized about submitting a patch for this). Still, this is certainly \"good enough\" in most cases. If this becomes an issue, you could extract the \"real\" function name from the source code\u2014Python does track the filename and line number\u2014or ask the garbage collector find out which function object refers to the code object. There could be more than one function sharing the code object, but any of their names might be good enough. Coming back to revisit this four years later, it behooves me to mention that in Python 2.6 and later, you can get better performance by using sys.setprofile() rather than sys.settrace(). The same trace function can be used; it's just that the profile function is called only when a function is entered or exited, so what's inside the function executes at full speed."} +{"question_id": 47240308, "score": 107, "creation_date": 1510418573, "tags": ["python", "numpy", "neural-network", "numpy-random"], "instruction": "Differences between numpy.random.rand vs numpy.random.randn in Python\n\nWhat are the differences between numpy.random.rand and numpy.random.randn? From the documentation, I know the only difference between them is the probabilistic distribution each number is drawn from, but the overall structure (dimension) and data type used (float) is the same. I have a hard time debugging a neural network because of this. Specifically, I am trying to re-implement the Neural Network provided in the Neural Network and Deep Learning book by Michael Nielson. The original code can be found here. My implementation was the same as the original; however, I instead defined and initialized weights and biases with numpy.random.rand in the init function, rather than the numpy.random.randn function as shown in the original. However, my code that uses random.rand to initialize weights and biases does not work. The network won't learn and the weights and biases will not change. 
What is the difference(s) between the two random functions that cause this weirdness?", "output": "First, as you see from the documentation numpy.random.randn generates samples from the normal distribution, while numpy.random.rand from a uniform distribution (in the range [0,1)). Second, why did the uniform distribution not work? The main reason is the activation function, especially in your case where you use the sigmoid function. The plot of the sigmoid looks like the following: So you can see that if your input is away from 0, the slope of the function decreases quite fast and as a result you get a tiny gradient and tiny weight update. And if you have many layers - those gradients get multiplied many times in the back pass, so even \"proper\" gradients after multiplications become small and stop making any influence. So if you have a lot of weights which bring your input to those regions you network is hardly trainable. That's why it is a usual practice to initialize network variables around zero value. This is done to ensure that you get reasonable gradients (close to 1) to train your net. However, uniform distribution is not something completely undesirable, you just need to make the range smaller and closer to zero. As one of good practices is using Xavier initialization. In this approach you can initialize your weights with: Normal distribution. Where mean is 0 and var = sqrt(2. / (in + out)), where in - is the number of inputs to the neurons and out - number of outputs. Uniform distribution in range [-sqrt(6. / (in + out)), +sqrt(6. / (in + out))]"} +{"question_id": 3090175, "score": 107, "creation_date": 1277179090, "tags": ["python", "numbers", "max"], "instruction": "Find the greatest (largest, maximum) number in a list of numbers\n\nHow can I easily find the greatest number in a given list of numbers? See also How do I find the maximum (larger, greater) of 2 numbers? - in that special case, the two values can also be compared directly.", "output": "What about max() highest = max(1, 2, 3) # or max([1, 2, 3]) for lists"} +{"question_id": 48437189, "score": 107, "creation_date": 1516863263, "tags": ["python", "progress-bar", "enumerate", "tqdm"], "instruction": "Python enumerate() tqdm progress-bar when reading a file?\n\nI can't see the tqdm progress bar when I use this code to iterate my opened file: with open(file_path, 'r') as f: for i, line in enumerate(tqdm(f)): print(\"line #: %s\" % i) for j in tqdm(range(line_size)): ... What's the right way to use tqdm here?", "output": "Avoid printing inside the loop when using tqdm. Also, use tqdm only on the first for-loop, and not on the inner for-loop. from tqdm import tqdm with open(file_path, 'r') as f: for i, line in enumerate(tqdm(f)): for j in range(line_size): ... Some notes on using enumerate and its usage in tqdm are available here."} +{"question_id": 27440861, "score": 107, "creation_date": 1418377638, "tags": ["python", "django", "django-models"], "instruction": "Django Model MultipleChoice\n\nI know there isn't MultipleChoiceField for a Model, you can only use it on Forms. Today I face an issue when analyzing a new project related with Multiple Choices. I would like to have a field like a CharField with choices with the option of multiple choice. I solved this issue other times by creating a CharField and managed the multiple choices in the form with a forms.MultipleChoiceField and store the choices separated by commas. 
In this project, due to configuration, I cannot do it as I mention above, I need to do it in the Models, and I prefer NOT to edit the Django admin form neither use forms. I need a Model Field with multiple choices option Have someone solved anything like this via Models ? Maybe overriding some of the models function or using a custom widget... I don't know, I'm kinda lost here. Edit I'm aware off simple choices, I would like to have something like: class MODEL(models.Model): MY_CHOICES = ( ('a', 'Hola'), ('b', 'Hello'), ('c', 'Bonjour'), ('d', 'Boas'), ) ... ... my_field = models.CharField(max_length=1, choices=MY_CHOICES) ... but with the capability of saving multiple choices not only 1 choice.", "output": "You need to think about how you are going to store the data at a database level. This will dictate your solution. Presumably, you want a single column in a table that is storing multiple values. This will also force you to think about how you will serialize - for example, you can't simply do comma separated if you need to store strings that might contain commas. However, you are probably best off using a solution like django-multiselectfield"} +{"question_id": 23102833, "score": 107, "creation_date": 1397633609, "tags": ["python", "web-scraping", "beautifulsoup"], "instruction": "How to scrape a website which requires login using python and beautifulsoup?\n\nIf I want to scrape a website that requires login with password first, how can I start scraping it with python using beautifulsoup4 library? Below is what I do for websites that do not require login. from bs4 import BeautifulSoup import urllib2 url = urllib2.urlopen(\"http://www.python.org\") content = url.read() soup = BeautifulSoup(content) How should the code be changed to accommodate login? Assume that the website I want to scrape is a forum that requires login. An example is http://forum.arduino.cc/index.php", "output": "You can use mechanize: import mechanize from bs4 import BeautifulSoup import urllib2 import cookielib ## http.cookiejar in python3 cj = cookielib.CookieJar() br = mechanize.Browser() br.set_cookiejar(cj) br.open(\"https://id.arduino.cc/auth/login/\") br.select_form(nr=0) br.form['username'] = 'username' br.form['password'] = 'password.' br.submit() print br.response().read() Or urllib - Login to website using urllib2"} +{"question_id": 27616778, "score": 107, "creation_date": 1419323949, "tags": ["python", "argparse", "case-insensitive"], "instruction": "Case insensitive argparse choices\n\nIs it possible to check argparse choices in case-insensitive manner? import argparse choices = [\"win64\", \"win32\"] parser = argparse.ArgumentParser() parser.add_argument(\"-p\", choices=choices) print(parser.parse_args([\"-p\", \"Win32\"])) results in: usage: choices.py [-h] [-p {win64,win32}] choices.py: error: argument -p: invalid choice: 'Win32' (choose from 'win64','win32')", "output": "Transform the argument into lowercase by using type=str.lower for the -p switch, i.e. parser.add_argument(\"-p\", choices=choices, type=str.lower) This solution was pointed out by chepner in a comment. The solution I proposed earlier was type=lambda s: s.lower() which is also valid, but it's simpler to just use str.lower."} +{"question_id": 35990313, "score": 107, "creation_date": 1457965899, "tags": ["python", "dynamic", "pylint"], "instruction": "Avoid Pylint warning E1101: 'Instance of .. has no .. member' for class with dynamic attributes\n\nImagine a function which dynamically adds attributes to an object using setattr. 
The reason for doing so is that I want to map some external structure (e.g. a given parameter tree) to an object: my_object = SomeClass() apply_structure(my_object, some_descriptor) my_object.device1.enabled = True Technically this works but of course Pylint rightly complains about 'device1' being not a member of SomeClass. I could disable the warning but that would be bad (because I still want to get the warning in all cases when the attribute does not exist because of misspelling, etc). Is there a common and legal (Pylint-proof) way to dynamically add members to an object that not leads to warnings? Alternatively: Can I disable Pylint for just one object rather than a line/block/file? Explanation: You might wonder why I should equip an object with member attributes dynamically when I plan to access these attributes in a hard-coded way later. The reason is: I have a dynamic part of the program (where the decoration happens) and a static part which is specialized for a certain scenario. So I could also create a static class for this scenario but that would be overkill in a lot of situations. The following specialized code might allow access to some parameter of a device which might be attached to some bus: class MyDeviceHandler: on_get_some_subtree_element(self): return _some_internal_value on_set_some_subtree_element(self, value): _some_internal_value = value dev = MyDeviceHandler() decorate_object_with_device_structure(dev, 'some/attached/device') dev.some.subtree.element = 5 # <--- will call the set-callback x = dev.some.subtree.element # <--- will call the get-callback So the structure behind 'some/attached/device' might be arbitrary and very complex and I don't want to reproduce it in a class structure. One way to get rid of this warning would be to create/access a dict based tree: dev['some']['subtree']['element'] = 5 But this is harder to write and not nice to read - I would only do this to quieten Pylint.", "output": "Used when an object (variable, function, \u2026) is accessed for a non-existent member. False positives: This message may report object members that are created dynamically, but exist at the time they are accessed. A commentator mentions that it can be disabled on a single line at the top of the file with # pylint: disable=no-member. I also found that you can use # pylint: disable=E1101 based on this reddit entry. IMPORTANT Adding the comment at the top of the file disables the check for all code in the file. Instead, you can add the comment to the end of the line where the check fails. This way the check is only disabled for that specific line."} +{"question_id": 10973614, "score": 107, "creation_date": 1339376602, "tags": ["python", "json", "list", "types", "type-conversion"], "instruction": "Convert JSON array to Python list\n\nimport json array = '{\"fruits\": [\"apple\", \"banana\", \"orange\"]}' data = json.loads(array) That is my JSON array, but I would want to convert all the values in the 'fruits' string to a Python list. What would be the correct way of doing this?", "output": "import json array = '{\"fruits\": [\"apple\", \"banana\", \"orange\"]}' data = json.loads(array) print(data['fruits']) # the print displays: # ['apple', 'banana', 'orange'] You had everything you needed. data will be a dict, and data['fruits'] will be a list"} +{"question_id": 6731119, "score": 107, "creation_date": 1310982490, "tags": ["python", "debugging", "pdb"], "instruction": "How do I skip a loop with pdb?\n\nHow can I skip over a loop using pdb.set_trace()? 
For example, pdb.set_trace() for i in range(5): print(i) print('Done!') pdb prompts before the loop. I input a command. All 1-5 values are returned and then I'd like to be prompted with pdb again before the print('Done!') executes.", "output": "Try the until statement. Go to the last line of the loop (with next or n) and then use until or unt. This will take you to the next line, right after the loop. https://pymotw.com/3/pdb/index.html has a good explanation"} +{"question_id": 34884567, "score": 107, "creation_date": 1453228974, "tags": ["python", "multiple-inheritance", "super", "diamond-problem"], "instruction": "python multiple inheritance passing arguments to constructors using super\n\nConsider the following snippet of python code class A(object): def __init__(self, a): self.a = a class B(A): def __init__(self, a, b): super(B, self).__init__(a) self.b = b class C(A): def __init__(self, a, c): super(C, self).__init__(a) self.c = c class D(B, C): def __init__(self, a, b, c, d): #super(D,self).__init__(a, b, c) ??? self.d = d I am wondering how can I pass a, b and c to corresponding base classes' constructors.", "output": "Well, when dealing with multiple inheritance in general, your base classes (unfortunately) should be designed for multiple inheritance. Classes B and C in your example aren't, and thus you couldn't find a proper way to apply super in D. One of the common ways of designing your base classes for multiple inheritance, is for the middle-level base classes to accept extra args in their __init__ method, which they are not intending to use, and pass them along to their super call. Here's one way to do it in python: # Multiple inheritance in Python 3 # Define class A with attribute a class A: def __init__(self, a): self.a = a def method_a(self): print(f\"Method from A: a = {self.a}\") # Define class B that inherits from A with additional attribute b class B(A): def __init__(self, b, **kwargs): super().__init__(**kwargs) # Call A's __init__ self.b = b def method_b(self): print(f\"Method from B: b = {self.b}\") # Define class C that inherits from A with additional attribute c class C(A): def __init__(self, c, **kwargs): super().__init__(**kwargs) # Call A's __init__ self.c = c def method_c(self): print(f\"Method from C: c = {self.c}\") # Define class D that inherits from both B and C with additional attribute d class D(B, C): def __init__(self, a, b, c, d): super().__init__(a=a, b=b, c=c) # super() will follow the MRO self.d = d def method_d(self): print(f\"Method from D: d = {self.d}\") # Example usage if __name__ == \"__main__\": obj = D(a=\"Value from A\", b=\"Value from B\", c=\"Value from C\", d=\"Value from D\") # Calling methods from A, B, C, and D obj.method_a() # From A obj.method_b() # From B obj.method_c() # From C obj.method_d() # From D # Print the Method Resolution Order (MRO) for class D print(\"\\nMRO of class D:\") for cls in D.mro(): print(cls) This can be viewed as disappointing, but that's just the way it is."} +{"question_id": 16467479, "score": 107, "creation_date": 1368120116, "tags": ["python", "unicode", "python-3.x"], "instruction": "Normalizing Unicode\n\nIs there a standard way, in Python, to normalize a unicode string, so that it only comprehends the simplest unicode entities that can be used to represent it ? I mean, something which would translate a sequence like ['LATIN SMALL LETTER A', 'COMBINING ACUTE ACCENT'] to ['LATIN SMALL LETTER A WITH ACUTE'] ? 
See where is the problem: >>> import unicodedata >>> char = \"\u00e1\" >>> len(char) 1 >>> [ unicodedata.name(c) for c in char ] ['LATIN SMALL LETTER A WITH ACUTE'] But now: >>> char = \"a\u0301\" >>> len(char) 2 >>> [ unicodedata.name(c) for c in char ] ['LATIN SMALL LETTER A', 'COMBINING ACUTE ACCENT'] I could, of course, iterate over all the chars and do manual replacements, etc., but it is not efficient, and I'm pretty sure I would miss half of the special cases, and do mistakes.", "output": "The unicodedata module offers a .normalize() function, you want to normalize to the NFC form. An example using the same U+0061 LATIN SMALL LETTER - U+0301 A COMBINING ACUTE ACCENT combination and U+00E1 LATIN SMALL LETTER A WITH ACUTE code points you used: >>> print(ascii(unicodedata.normalize('NFC', '\\u0061\\u0301'))) '\\xe1' >>> print(ascii(unicodedata.normalize('NFD', '\\u00e1'))) 'a\\u0301' (I used the ascii() function here to ensure non-ASCII codepoints are printed using escape syntax, making the differences clear). NFC, or 'Normal Form Composed' returns composed characters, NFD, 'Normal Form Decomposed' gives you decomposed, combined characters. The additional NFKC and NFKD forms deal with compatibility codepoints; e.g. U+2160 ROMAN NUMERAL ONE is really just the same thing as U+0049 LATIN CAPITAL LETTER I but present in the Unicode standard to remain compatible with encodings that treat them separately. Using either NFKC or NFKD form, in addition to composing or decomposing characters, will also replace all 'compatibility' characters with their canonical form. Here is an example using the U+2167 ROMAN NUMERAL EIGHT codepoint; using the NFKC form replaces this with a sequence of ASCII V and I characters: >>> unicodedata.normalize('NFC', '\\u2167') '\u2167' >>> unicodedata.normalize('NFKC', '\\u2167') 'VIII' Note that there is no guarantee that composed and decomposed forms are commutative; normalizing a combined character to NFC form, then converting the result back to NFD form does not always result in the same character sequence. The Unicode standard maintains a list of exceptions; characters on this list are composable, but not decomposable back to their combined form, for various reasons. Also see the documentation on the Composition Exclusion Table."} +{"question_id": 29269370, "score": 107, "creation_date": 1427332101, "tags": ["python", "concurrency", "task", "python-3.4", "python-asyncio"], "instruction": "How to properly create and run concurrent tasks using python's asyncio module?\n\nI am trying to properly understand and implement two concurrently running Task objects using Python 3's relatively new asyncio module. In a nutshell, asyncio seems designed to handle asynchronous processes and concurrent Task execution over an event loop. It promotes the use of await (applied in async functions) as a callback-free way to wait for and use a result, without blocking the event loop. (Futures and callbacks are still a viable alternative.) It also provides the asyncio.Task() class, a specialized subclass of Future designed to wrap coroutines. Preferably invoked by using the asyncio.ensure_future() method. The intended use of asyncio tasks is to allow independently running tasks to run 'concurrently' with other tasks within the same event loop. My understanding is that Tasks are connected to the event loop which then automatically keeps driving the coroutine between await statements. 
I like the idea of being able to use concurrent Tasks without needing to use one of the Executor classes, but I haven't found much elaboration on implementation. This is how I'm currently doing it: import asyncio print('running async test') async def say_boo(): i = 0 while True: await asyncio.sleep(0) print('...boo {0}'.format(i)) i += 1 async def say_baa(): i = 0 while True: await asyncio.sleep(0) print('...baa {0}'.format(i)) i += 1 # wrap in Task object # -> automatically attaches to event loop and executes boo = asyncio.ensure_future(say_boo()) baa = asyncio.ensure_future(say_baa()) loop = asyncio.get_event_loop() loop.run_forever() In the case of trying to concurrently run two looping Tasks, I've noticed that unless the Task has an internal await expression, it will get stuck in the while loop, effectively blocking other tasks from running (much like a normal while loop). However, as soon the Tasks have to (a)wait, they seem to run concurrently without an issue. Thus, the await statements seem to provide the event loop with a foothold for switching back and forth between the tasks, giving the effect of concurrency. Example output with internal await: running async test ...boo 0 ...baa 0 ...boo 1 ...baa 1 ...boo 2 ...baa 2 Example output without internal await: ...boo 0 ...boo 1 ...boo 2 ...boo 3 ...boo 4 Questions Does this implementation pass for a 'proper' example of concurrent looping Tasks in asyncio? Is it correct that the only way this works is for a Task to provide a blocking point (await expression) in order for the event loop to juggle multiple tasks? Edit 2022 UPDATE: Please note that the asyncio API has changed fairly substantially since this question was asked. See the newly marked as correct answer which now shows the correct use of the API given Python 3.10. I still recommend the answer from @dano for broader knowledge of how this works under the hood.", "output": "The functions asyncio.ensure_future and asyncio.get_event_loop are deprecated in Python 3.10. You can run the two coroutines say_boo and say_baa concurrently through asyncio.create_task: async def main(): boo = asyncio.create_task(say_boo()) baa = asyncio.create_task(say_baa()) await boo await baa asyncio.run(main()) You can also use asyncio.gather async def main(): await asyncio.gather(say_boo(), say_baa()) asyncio.run(main())"} +{"question_id": 16849483, "score": 107, "creation_date": 1369972325, "tags": ["python", "matplotlib"], "instruction": "Which is the recommended way to plot: matplotlib or pylab?\n\nI can plot in Python using either: import matplotlib matplotlib.pyplot.plot(...) Or: import pylab pylab.plot(...) Both of these use matplotlib. Which is recommend as the correct method to plot? Why?", "output": "The official documentation clearly recommends to use matplotlib.pyplot. The API documentation still mentions the pylab module but advises against using it.1 Since heavily importing into the global namespace may result in unexpected behavior, the use of pylab is strongly discouraged. Use matplotlib.pyplot instead. The Quick start guide also disapproves using the pylab module. You may find older examples that use the pylab interface, via from pylab import *. This approach is strongly deprecated. 1. 
As far as I can see this recommendation was added on February 7th, 2020"} +{"question_id": 5602918, "score": 106, "creation_date": 1302324041, "tags": ["python", "sql", "database", "sqlalchemy"], "instruction": "Select NULL Values in SQLAlchemy\n\nHere's my (PostgreSQL) table -- test=> create table people (name varchar primary key, marriage_status varchar) ; test=> insert into people values ('Ken', 'married'); test=> insert into people values ('May', 'single'); test=> insert into people values ('Joe', NULL); I want to select all people that are not known to be married, i.e., including those with NULL marriage_status. This does not work -- test=> select * from people where marriage_status != 'married' ; name | marriage_status ------+----------------- May | single (1 row) Of course this does -- test=> select * from people where marriage_status != 'married' or marriage_status is NULL ; name | marriage_status ------+----------------- May | single Joe | The problem is that I'm accessing it from SQLAlchemy with -- ...filter(or_(people.marriage_status!='married', people.marriage_status is None)) which gets translated to -- SELECT people.name as name, people.marriage_status as marriage_status FROM people WHERE people.marriage_status != %(status_1)s OR False sqlalchemy.engine.base.Engine.... {'status_1': 'married'} And does not work -- test=> select * from people where marriage_status != 'married' or False; name | marriage_status ------+----------------- May | single (1 row) neither does -- test=> select * from people where marriage_status != 'married' or NULL; name | marriage_status ------+----------------- May | single (1 row) How should I select NULL values through SQLAlchemy?", "output": "For SQLAlchemy 0.7.9 and newer Please use the answer from @jsnow. !!! For SQLAlchemy 0.7.8 and older (as indicated by @augurar): Because sqlalchemy uses magic methods (operator overloading) to create SQL constructs, it can only handle operator such as != or ==, but is not able to work with is (which is a very valid Python construct). Therefore, to make it work with sqlalchemy, you should use: ...filter(or_(people.marriage_status!='married', people.marriage_status == None)) , basically replace the is None with == None. In this case your query will be translated properly to the following SQL: SELECT people.name AS people_name, people.marriage_status AS people_marriage_status FROM people WHERE people.marriage_status IS NULL OR people.marriage_status != ? See IS NULL in the documentation."} +{"question_id": 21514354, "score": 106, "creation_date": 1391365107, "tags": ["python", "python-3.x", "django", "django-models", "django-users"], "instruction": "AbstractUser vs AbstractBaseUser in Django?\n\nThe use of AbstractUser and AbstractBaseUser looks quite similar. from django.contrib.auth.models import AbstractUser, AbstractBaseUser What is the difference between the two?", "output": "The documentation explains this fully. AbstractUser is a full User model, complete with fields, as an abstract class so that you can inherit from it and add your own profile fields and methods. 
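For instance, a minimal sketch of the AbstractUser route (the 'accounts' app label and the nickname field here are made up for illustration, not part of Django itself):
# models.py of a hypothetical 'accounts' app
from django.db import models
from django.contrib.auth.models import AbstractUser

class User(AbstractUser):
    # username, email, first_name, last_name, password handling, etc. are inherited
    nickname = models.CharField(max_length=64, blank=True)

# settings.py
AUTH_USER_MODEL = 'accounts.User'
You keep the stock username/password machinery and only add what you need.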
AbstractBaseUser only contains the authentication functionality, but no actual fields: you have to supply them when you subclass."} +{"question_id": 2029295, "score": 106, "creation_date": 1262971122, "tags": ["python", "django", "django-models"], "instruction": "Django DateField default options\n\nI have a model which has a date time field: date = models.DateField(_(\"Date\"), default=datetime.now()) When I check the app in the built in django admin, the DateField also has the time appended to it, so that if you try to save it an error is returned. How do I make the default just the date? (datetime.today() isn't working either)", "output": "Your mistake is using the datetime class instead of the date class. You meant to do this: from datetime import date date = models.DateField(_(\"Date\"), default=date.today) If you only want to capture the current date the proper way to handle this is to use the auto_now_add parameter: date = models.DateField(_(\"Date\"), auto_now_add=True) However, the modelfield docs clearly state that auto_now_add and auto_now will always use the current date and are not a default value that you can override."} +{"question_id": 2974022, "score": 106, "creation_date": 1275654188, "tags": ["python", "syntax", "dictionary"], "instruction": "Is it possible to assign the same value to multiple keys in a dict object at once?\n\nIn Python, I need a dictionary object which looks like: {'a': 10, 'b': 20, 'c': 10, 'd': 10, 'e': 20} I've been able to get this successfully by combining the dict.update() and dict.fromkeys() functions like so: myDict = {} myDict.update(dict.fromkeys(['a', 'c', 'd'], 10)) myDict.update(dict.fromkeys(['b', 'e'], 20)) However, because the code is being written for novice users who may need to make add keys/values on occasion, I'd prefer a simple bare-bones (Perl-like) syntax such as: myDict = {} myDict['a', 'c', 'd'] = 10 myDict['b', 'e'] = 20 This, however, gives me: myDict = {('a', 'c', 'd'): 10, ('b', 'e'): 20} Is there a way I can simplify my first example (using dict.update() and dict.fromkeys()) further, and get the dict object I'm looking for? Or, alternatively, if I have a dict with tuples as in my second example, is there an easy way for me to do a lookup such as myDict['c'] or myDict.get('c') and get the value 10?", "output": "I would say what you have is very simple, you could slightly improve it to be: my_dict = dict.fromkeys(['a', 'c', 'd'], 10) my_dict.update(dict.fromkeys(['b', 'e'], 20)) If your keys are tuple you could do: >>> my_dict = {('a', 'c', 'd'): 10, ('b', 'e'): 20} >>> next(v for k, v in my_dict.items() if 'c' in k) # use .iteritems() python-2.x 10 This is, of course, will return first encountered value, key for which contains given element."} +{"question_id": 33975128, "score": 106, "creation_date": 1448736628, "tags": ["python", "pandas", "dataframe"], "instruction": "pandas get the row-wise minimum value of two or more columns\n\nHow can I reference the minimum value of two dataframes as part of a pandas dataframe equation? I tried using the python min() function which did not work. I am looking for something along the lines of this: data['eff'] = pd.DataFrame([data['flow_h'], data['flow_c']]).min() *Cp* (data[' Thi'] - data[' Tci']) I also tried to use pandas min() function, which is also not working. min_flow = pd.DataFrame([data['flow_h'], data['flow_c']]).min() InvalidIndexError: Reindexing only valid with uniquely valued Index objects I was confused by this error. 
The data columns are just numbers and a name, I wasn't sure where the index comes into play. import pandas as pd import numpy as np np.random.seed(365) rows = 10 flow = {'flow_c': [np.random.randint(100) for _ in range(rows)], 'flow_d': [np.random.randint(100) for _ in range(rows)], 'flow_h': [np.random.randint(100) for _ in range(rows)]} data = pd.DataFrame(flow) # display(data) flow_c flow_d flow_h 0 82 36 43 1 52 48 12 2 33 28 77 3 91 99 11 4 44 95 27 5 5 94 64 6 98 3 88 7 73 39 92 8 26 39 62 9 56 74 50", "output": "If you are trying to get the row-wise mininum of two or more columns, use pandas.DataFrame.min. Note that by default axis=0; specifying axis=1 is necessary. data['min_c_h'] = data[['flow_h','flow_c']].min(axis=1) # display(data) flow_c flow_d flow_h min_c_h 0 82 36 43 43 1 52 48 12 12 2 33 28 77 33 3 91 99 11 11 4 44 95 27 27 5 5 94 64 5 6 98 3 88 88 7 73 39 92 73 8 26 39 62 26 9 56 74 50 50"} +{"question_id": 65209934, "score": 106, "creation_date": 1607482221, "tags": ["python", "serialization", "fastapi", "pydantic"], "instruction": "Pydantic enum field does not get converted to string\n\nI am trying to restrict one field in a class to an enum. However, when I try to get a dictionary out of class, it doesn't get converted to string. Instead it retains the enum. I checked pydantic documentation, but couldn't find anything relevant to my problem. This code is representative of what I actually need. from enum import Enum from pydantic import BaseModel class S(str, Enum): am = 'am' pm = 'pm' class K(BaseModel): k: S z: str a = K(k='am', z='rrrr') print(a.dict()) # {'k': , 'z': 'rrrr'} I'm trying to get the .dict() method to return {'k': 'am', 'z': 'rrrr'}", "output": "You need to use use_enum_values option of model config: use_enum_values whether to populate models with the value property of enums, rather than the raw enum. This may be useful if you want to serialise model.dict() later (default: False) from enum import Enum from pydantic import BaseModel class S(str, Enum): am='am' pm='pm' class K(BaseModel): k:S z:str class Config: use_enum_values = True # <-- a = K(k='am', z='rrrr') print(a.dict())"} +{"question_id": 29233283, "score": 106, "creation_date": 1427201878, "tags": ["python", "pandas", "plot"], "instruction": "Plotting multiple lines, in different colors, with pandas dataframe\n\nI have a dataframe that looks like the following color x y 0 red 0 0 1 red 1 1 2 red 2 2 3 red 3 3 4 red 4 4 5 red 5 5 6 red 6 6 7 red 7 7 8 red 8 8 9 red 9 9 10 blue 0 0 11 blue 1 1 12 blue 2 4 13 blue 3 9 14 blue 4 16 15 blue 5 25 16 blue 6 36 17 blue 7 49 18 blue 8 64 19 blue 9 81 I ultimately want two lines, one blue, one red. The red line should essentially be y=x and the blue line should be y=x^2 When I do the following: df.plot(x='x', y='y') The output is this: Is there a way to make pandas know that there are two sets? And group them accordingly. I'd like to be able to specify the column color as the set differentiator", "output": "Another simple way is to use the pandas.DataFrame.pivot function to format the data. Use pandas.DataFrame.plot to plot. Providing the colors in the 'color' column exist in matplotlib: List of named colors, they can be passed to the color parameter. 
# sample data df = pd.DataFrame([['red', 0, 0], ['red', 1, 1], ['red', 2, 2], ['red', 3, 3], ['red', 4, 4], ['red', 5, 5], ['red', 6, 6], ['red', 7, 7], ['red', 8, 8], ['red', 9, 9], ['blue', 0, 0], ['blue', 1, 1], ['blue', 2, 4], ['blue', 3, 9], ['blue', 4, 16], ['blue', 5, 25], ['blue', 6, 36], ['blue', 7, 49], ['blue', 8, 64], ['blue', 9, 81]], columns=['color', 'x', 'y']) # pivot the data into the correct shape df = df.pivot(index='x', columns='color', values='y') # display(df) color blue red x 0 0 0 1 1 1 2 4 2 3 9 3 4 16 4 5 25 5 6 36 6 7 49 7 8 64 8 9 81 9 # plot the pivoted dataframe; if the column names aren't colors, remove color=df.columns df.plot(color=df.columns, figsize=(5, 3))"} +{"question_id": 13530998, "score": 106, "creation_date": 1353680619, "tags": ["python", "pointers", "variables", "identifier"], "instruction": "Are Python variables pointers? Or else, what are they?\n\nVariables in Python are just pointers, as far as I know. Based on this rule, I can assume that the result for this code snippet: i = 5 j = i j = 3 print(i) would be 3. But I got an unexpected result for me, and it was 5. Moreover, my Python book does cover this example: i = [1,2,3] j = i i[0] = 5 print(j) The result would be [5,2,3]. What am I understanding wrong?", "output": "We call them references. They work like this i = 5 # create int(5) instance, bind it to i j = i # bind j to the same int as i j = 3 # create int(3) instance, bind it to j print i # i still bound to the int(5), j bound to the int(3) (Sidenote: Small ints are interned, but that isn't important to this explanation) i = [1,2,3] # create the list instance, and bind it to i j = i # bind j to the same list as i i[0] = 5 # change the first item of i print j # j is still bound to the same list as i"} +{"question_id": 57518050, "score": 106, "creation_date": 1565920283, "tags": ["python", "anaconda", "conda"], "instruction": "Conda install and update do not work also solving environment get errors\n\nI am using anaconda as below: (base) C:\\Users\\xxx>conda info active environment : base active env location : C:\\Users\\xxx\\Documents\\ANACONDA shell level : 1 user config file : C:\\Users\\xxx\\.condarc populated config files : C:\\Users\\xxx\\.condarc conda version : 4.7.11 conda-build version : 3.18.9 python version : 3.6.9.final.0 virtual packages : base environment : C:\\Users\\xxx\\Documents\\ANACONDA (writable) channel URLs : https://repo.anaconda.com/pkgs/main/win-64 https://repo.anaconda.com/pkgs/main/noarch https://repo.anaconda.com/pkgs/free/win-64 https://repo.anaconda.com/pkgs/free/noarch https://repo.anaconda.com/pkgs/r/win-64 https://repo.anaconda.com/pkgs/r/noarch https://repo.anaconda.com/pkgs/msys2/win-64 https://repo.anaconda.com/pkgs/msys2/noarch package cache : C:\\Users\\xxx\\Documents\\ANACONDA\\pkgs C:\\Users\\xxx\\.conda\\pkgs C:\\Users\\xxx\\AppData\\Local\\conda\\conda\\pkgs envs directories : C:\\Users\\xxx\\Documents\\ANACONDA\\envs C:\\Users\\xxx\\.conda\\envs C:\\Users\\xxx\\AppData\\Local\\conda\\conda\\envs platform : win-64 user-agent : conda/4.7.11 requests/2.22.0 CPython/3.6.9 Windows/10 Windows/10.0.16299 administrator : False netrc file : None offline mode : False Now I have 2 issues that stop my work. 1) I cannot use conda install for any package. It will give me the error in solving environment list this: failed with initial frozen solve. Retrying with flexible solve. then it will fail again and give message like this: Found conflicts! Looking for incompatible packages. This can take several minutes. 
Press CTRL-C to abort. Even after the checking for incompatible packages, it didn't give me the solution. 2) When I want to upgrade or downgrade conda by the command: conda update -n base conda or conda install conda = 4.6.11 It will give me errors again in the solving environment, and I think this is related to the first issue. Now I cannot use conda for anything, please advise and thank you!", "output": "I ran into the same problem and I couldn't find a solution, but I did find a workaround. If you create an env and activate that env and then do the install, it seems to work just fine. If you don't need a lot of libraries I would try that. Commands are: Create env conda create --name myenv Activate the env conda activate myenv"} +{"question_id": 8270981, "score": 106, "creation_date": 1322233761, "tags": ["python", "statistics", "matplotlib"], "instruction": "How to highlight specific x-value ranges\n\nI'm making a visualization of historical stock data for a project, and I'd like to highlight regions of drops. For instance, when the stock is experiencing significant drawdown, I would like to highlight it with a red region. Can I do this automatically, or will I have to draw a rectangle or something?", "output": "Have a look at axvspan (and axhspan for highlighting a region of the y-axis). import matplotlib.pyplot as plt plt.plot(range(10)) plt.axvspan(3, 6, color='red', alpha=0.5) plt.show() If you're using dates, then you'll need to convert your min and max x values to matplotlib dates. Use matplotlib.dates.date2num for datetime objects or matplotlib.dates.datestr2num for various string timestamps. import matplotlib.pyplot as plt import matplotlib.dates as mdates import datetime as dt t = mdates.drange(dt.datetime(2011, 10, 15), dt.datetime(2011, 11, 27), dt.timedelta(hours=2)) y = np.sin(t) fig, ax = plt.subplots() ax.plot_date(t, y, 'b-') ax.axvspan(*mdates.datestr2num(['10/27/2011', '11/2/2011']), color='red', alpha=0.5) fig.autofmt_xdate() plt.show()"} +{"question_id": 57168148, "score": 106, "creation_date": 1563897781, "tags": ["python", "amazon-web-services", "aws-api-gateway"], "instruction": "Unable to resolve \" not a valid key=value pair (missing equal-sign) in Authorization header\" when POSTing to api gateway\n\nI created an api-gateway to put data in my s3 bucket. When I test it in console it works with no problem. Even when I test my token in the authorizer test it returns an \"Allow\", so there's nothing wrong with my token. My token validation is ^Bearer [-0-9a-zA-z\\.]*$ so my python code for generating my header looks like this: headers = { \"Authorization\": \"Bearer \" + token, \"Content-type\": \"application/json\" } The rest of my code is: response = requests.post(url, headers=headers, data={\"id\":\"0678a93d-ee8c-4db5-a831-1e311be4f04b\", \"test\":\"12345\"}) print(response.text) The error message I get is \"{\"message\":\"'{My Token}' not a valid key=value pair (missing equal-sign) in Authorization header: 'Bearer {My Token}'.\"}\" My url looks like this: https://my-api-gateway.amazonaws.com/MyStage, and I am using a {proxy+} in my resources. I noticed if I change my header from Content-type to Accept, it gives me the same error, but if I also change my url to https://my-api-gateway.amazonaws.com/MyStage/any-arbitrary-string/, I get a {\"response\":{\"status\":\"VALID\", \"message\": \"success\"}} but the file does not show up in my s3 bucket. How do I resolve this?", "output": "I resolved it. 
I changed my method to come from the root resource (instead of the unnecessary {proxy+}, and also noticed that my python method was incorrect. I had response = requests.post(url, headers=headers, data=my_json), but data only accepts a string. I have to use either requests.post(url, headers=headers, json=my_json) or requests.post(url, headers=headers,data=json.dumps(my_json))"} +{"question_id": 6189956, "score": 106, "creation_date": 1306855919, "tags": ["python", "string", "decimal"], "instruction": "Easy way of finding decimal places\n\nIs there an easy way or integrated function to find out the decimal places of a floating point number? The number is parsed from a string, so one way is to count the digits after the . sign, but that looks quite clumsy to me. Is there a possibility to get the information needed out of a float or Decimal object?", "output": "To repeat what others have said (because I had already typed it out!), I'm not even sure such a value would be meaningful in the case of a floating point number, because of the difference between the decimal and binary representation; often a number representable by a finite number of decimal digits will have only an infinite-digit representation in binary. In the case of a decimal.Decimal object, you can retrieve the exponent using the as_tuple method, which returns a namedtuple with sign, digits, and exponent attributes: >>> d = decimal.Decimal('56.4325') >>> d.as_tuple().exponent -4 >>> d = decimal.Decimal('56.43256436') >>> d.as_tuple().exponent -8 >>> d = decimal.Decimal(str(56.4325)) >>> d.as_tuple().exponent -4 The negation of the exponent is the number of digits after the decimal point, unless the exponent is greater than 0."} +{"question_id": 19359556, "score": 106, "creation_date": 1381751294, "tags": ["python", "python-2.7"], "instruction": "ConfigParser reads capital keys and make them lower case\n\nI found one interesting observation. I had written one config file read program as, import ConfigParser class ConfReader(object): ConfMap = dict() def __init__(self): self.config = ConfigParser.ConfigParser() self.config.read('./Config.ini') self.__loadConfigMap() def __loadConfigMap(self): for sec in self.config.sections(): for key,value in self.config.items(sec): print 'key = ', key, 'Value = ', value keyDict = str(sec) + '_' + str(key) print 'keyDict = ' + keyDict self.ConfMap[keyDict] = value def getValue(self, key): value = '' try: print ' Key = ', key value = self.ConfMap[key] except KeyError as KE: print 'Key', KE , ' didn\\'t found in configuration.' return value class MyConfReader(object): objConfReader = ConfReader() def main(): print MyConfReader().objConfReader.getValue('DB2.poolsize') print MyConfReader().objConfReader.getValue('DB_NAME') if __name__=='__main__': main() And my Config.ini file looks like, [DB] HOST_NAME=localhost NAME=temp USER_NAME=postgres PASSWORD=mandy The __loadConfigMap() works just fine. But while reading the key and values, it is making the keys lower case. I didn't understand the reason. Can any one please explain why it is so?", "output": "ConfigParser.ConfigParser() is documented to behave this way, in the Mapping Protocol Access section: By default, all keys in sections are accessible in a case-insensitive manner. E.g. for option in parser[\"section\"] yields only optionxform\u2019ed option key names. This means lowercased keys by default. That's because this module parses Windows INI files which are expected to be parsed case-insensitively. 
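A quick sketch of that default behaviour run against the Config.ini from the question (the printed ordering is only indicative):
import ConfigParser

config = ConfigParser.ConfigParser()
config.read('./Config.ini')
print config.sections()   # ['DB'] - section names are left untouched
print config.items('DB')  # [('host_name', 'localhost'), ('name', 'temp'), ('user_name', 'postgres'), ('password', 'mandy')]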
You can disable this behaviour by replacing the ConfigParser.optionxform() function: self.config = ConfigParser.ConfigParser() self.config.optionxform = str str passes through the options unchanged."} +{"question_id": 54367975, "score": 106, "creation_date": 1548428998, "tags": ["python", "python-3.x"], "instruction": "Python 3.7 logging: f-strings vs %\n\nI'm running into a performance problem in a project, and I narrowed it down to some of the log lines. It seems that f-strings are calculated even when my logging facility is above the level of the line that is logging. Consider this example to demonstrate the issue: import logging logging.basicConfig(level=logging.INFO) logger = logging.getLogger('MyLogger') class MyClass: def __init__(self, name: str) -> None: self._name = name def __str__(self) -> str: print('GENERATING STRING') return self._name c = MyClass('foo') logger.debug(f'Created: {c}') When this example is run, I get \"GENERATING STRING\" printed to screen, indicating that the __str__ method is being ran even though my logging level is set to INFO and the log line is for DEBUG. From what I can tell today, the solution is to use the following vs an f-string. logger.debug('Created: %s', c) There are three things going through my head right now. Most of the examples and docs I read seem to be pretty old. This project is Python 3.7+ only (not worried about being backwards compatible). I have a lot of lines of code to update. I'm curious to know what others do in this situation. Is the %s the best (most modern) approach? Is there a more modern way that I should be logging as demonstrated above? I have a lot of code to update (fix), and I'm hoping to align with modern best practices.", "output": "IMO, using %s in your strings is NOT the most modern approach. Definitely, most developers will prefer to use f-strings because it is more convenient and easy to read (and write). But, you interestingly find a specific case where you may not want to use an f-string. If you need to avoid automatic call of __str__() method because of optimisation issue, then it is probably a good enough reason to use %s instead of f-strings. But, this could also indicate that something may be done in your program to reduce the complexity of __str__(). Most of the time it shouldn't take so much time or resources to calculate a string representation for an object..."} +{"question_id": 11111064, "score": 106, "creation_date": 1340149770, "tags": ["python", "logging", "python-logging"], "instruction": "How to set different levels for different python log handlers\n\nI've read a few posts on this but I'm still confused. I have this logging setup: import logging class MongoHandler(logging.Handler): def __init__(self): logging.Handler.__init__(self) from pymongo import Connection self.db = Connection('db_server').db_name def emit(self, record): try: self.db.Logging.save(record.__dict__) except: print 'Logging Error: Unable to save log entry to db' mh = MongoHandler() sh = logging.StreamHandler() formatter = logging.Formatter('%(asctime)s - %(threadName)s - %(levelname)s - %(message)s') sh.setFormatter(formatter) log = logging.getLogger('DeviceMonitor_%s' % hostname) log.addHandler(mh) log.addHandler(sh) log.setLevel(logging.INFO) I want to be able to set a different level for the StreamHandler and the MongoHandler. 
Is that possible or do I need to have a second Logger obj?", "output": "You can set a different logging level for each logging handler but it seems you will have to set the logger's level to the \"lowest\". In the example below I set the logger to DEBUG, the stream handler to INFO and the TimedRotatingFileHandler to DEBUG. So the file has DEBUG entries and the stream outputs only INFO. You can't direct only DEBUG to one and only INFO to another handler. For that you'll need another logger. import logging import logging.handlers logger = logging.getLogger(\"mylog\") formatter = logging.Formatter( '%(asctime)s | %(name)s | %(levelname)s: %(message)s') logger.setLevel(logging.DEBUG) stream_handler = logging.StreamHandler() stream_handler.setLevel(logging.INFO) stream_handler.setFormatter(formatter) logFilePath = \"my.log\" file_handler = logging.handlers.TimedRotatingFileHandler( filename=logFilePath, when='midnight', backupCount=30) file_handler.setFormatter(formatter) file_handler.setLevel(logging.DEBUG) logger.addHandler(file_handler) logger.addHandler(stream_handler) logger.info(\"Started\") try: x = 14 y = 0 z = x / y except Exception as ex: logger.error(\"Operation failed.\") logger.debug( \"Encountered {0} when trying to perform calculation.\".format(ex)) logger.info(\"Ended\")"} +{"question_id": 34414326, "score": 106, "creation_date": 1450782049, "tags": ["python", "unit-testing"], "instruction": "Why is assertDictEqual needed if dicts can be compared by `==`?\n\nI have always used assertDictEqual, because sometimes when I didn't use it I got information that equal dicts are not the same. But I know that dicts can be compared by == operator: >>> {'a':1, 'b':2, 'c': [1,2]} == {'b':2, 'a':1, 'c': [1,2]} True Where may I actually need assertDictEqual?", "output": "Basically, it allows unittest to give you more information about why the test failed (\"diagnostics\", to use the language from \"Growing Object-Oriented Software Guided by Tests\" by Steve Freeman and Nat Pryce). Compare these two tests: import unittest class DemoTest(unittest.TestCase): D1 = {'a': 1, 'b': 2, 'c': [1, 2]} D2 = {'a': 1, 'b': 2, 'c': [1]} def test_not_so_useful(self): self.assertTrue(self.D1 == self.D2) def test_useful(self): self.assertDictEqual(self.D1, self.D2) if __name__ == \"__main__\": unittest.main() And their outputs: ====================================================================== FAIL: test_not_so_useful (__main__.DemoTest) ---------------------------------------------------------------------- Traceback (most recent call last): File \"path/to/file.py\", line 10, in test_not_so_useful self.assertTrue(self.D1 == self.D2) AssertionError: False is not true vs. ====================================================================== FAIL: test_useful (__main__.DemoTest) ---------------------------------------------------------------------- Traceback (most recent call last): File \"path/to/file.py\", line 13, in test_useful self.assertDictEqual(self.D1, self.D2) AssertionError: {'a': 1, 'b': 2, 'c': [1, 2]} != {'a': 1, 'b': 2, 'c': [1]} - {'a': 1, 'b': 2, 'c': [1, 2]} ? --- + {'a': 1, 'b': 2, 'c': [1]} In the latter, you can see exactly what the difference was, you don't have to work it out yourself. 
Note that you can just use the standard assertEqual instead of assertDictEqual, with the same result; per the docs ...it\u2019s usually not necessary to invoke these methods directly."} +{"question_id": 4024056, "score": 106, "creation_date": 1288099219, "tags": ["python", "multithreading", "asynchronous", "twisted"], "instruction": "Threads vs. Async\n\nI have been reading up on the threaded model of programming versus the asynchronous model from this really good article. http://krondo.com/blog/?p=1209 However, the article mentions the following points. An async program will simply outperform a sync program by switching between tasks whenever there is a I/O. Threads are managed by the operating system. I remember reading that threads are managed by the operating system by moving around TCBs between the Ready-Queue and the Waiting-Queue(amongst other queues). In this case, threads don't waste time on waiting either do they? In light of the above mentioned, what are the advantages of async programs over threaded programs?", "output": "It is very difficult to write code that is thread safe. With asyncronous code, you know exactly where the code will shift from one task to the next and race conditions are therefore much harder to come by. Threads consume a fair amount of data since each thread needs to have its own stack. With async code, all the code shares the same stack and the stack is kept small due to continuously unwinding the stack between tasks. Threads are OS structures and are therefore more memory for the platform to support. There is no such problem with asynchronous tasks. Update 2022: Many languages now support stackless co-routines (async/await). This allows us to write a task almost synchronously while yielding to other tasks (awaiting) at set places (sleeping or waiting for networking or other threads)"} +{"question_id": 72157296, "score": 106, "creation_date": 1651970217, "tags": ["python", "python-typing"], "instruction": "What is the difference between type-hinting a variable as an Iterable versus a Sequence?\n\nI don't understand the difference when hinting Iterable and Sequence. What is the main difference between those two and when to use which? I think set is an Iterable but not Sequence, are there any built-in data type that is Sequence but not Iterable? def foo(baz: Sequence[float]): ... # What is the difference? def bar(baz: Iterable[float]): ...", "output": "The Sequence and Iterable abstract base classes (can also be used as type annotations) mostly* follow Python's definition of sequence and iterable. To be specific: Iterable is any object that defines __iter__ or __getitem__. Sequence is any object that defines __getitem__ and __len__. By definition, any sequence is an iterable. The Sequence class also defines other methods such as __contains__, __reversed__ that calls the two required methods. Some examples: list, tuple, str are the most common sequences. Some built-in iterables are not sequences. For example, reversed returns a reversed object (or list_reverseiterator for lists) that cannot be subscripted. * Iterable does not exactly conform to Python's definition of iterables \u2014 it only checks if the object defines __iter__, and does not work for objects that's only iterable via __getitem__ (see this table for details). 
The gold standard of checking if an object is iterable is using the iter builtin."} +{"question_id": 29245848, "score": 106, "creation_date": 1427246075, "tags": ["python", "python-3.x", "pandas"], "instruction": "what are all the dtypes that pandas recognizes?\n\nFor pandas, would anyone know, if any datatype apart from (i) float64, int64 (and other variants of np.number like float32, int8 etc.) (ii) bool (iii) datetime64, timedelta64 such as string columns, always have a dtype of object ? Alternatively, I want to know, if there are any datatype apart from (i), (ii) and (iii) in the list above that pandas does not make it's dtype an object?", "output": "EDIT Feb 2020 following pandas 1.0.0 release Pandas mostly uses NumPy arrays and dtypes for each Series (a dataframe is a collection of Series, each which can have its own dtype). NumPy's documentation further explains dtype, data types, and data type objects. In addition, the answer provided by @lcameron05 provides an excellent description of the numpy dtypes. Furthermore, the pandas docs on dtypes have a lot of additional information. The main types stored in pandas objects are float, int, bool, datetime64[ns], timedelta[ns], and object. In addition these dtypes have item sizes, e.g. int64 and int32. By default integer types are int64 and float types are float64, REGARDLESS of platform (32-bit or 64-bit). The following will all result in int64 dtypes. Numpy, however will choose platform-dependent types when creating arrays. The following WILL result in int32 on 32-bit platform. One of the major changes to version 1.0.0 of pandas is the introduction of pd.NA to represent scalar missing values (rather than the previous values of np.nan, pd.NaT or None, depending on usage). Pandas extends NumPy's type system and also allows users to write their on extension types. The following lists all of pandas extension types. 1) Time zone handling Kind of data: tz-aware datetime (note that NumPy does not support timezone-aware datetimes). Data type: DatetimeTZDtype Scalar: Timestamp Array: arrays.DatetimeArray String Aliases: 'datetime64[ns, ]' 2) Categorical data Kind of data: Categorical Data type: CategoricalDtype Scalar: (none) Array: Categorical String Aliases: 'category' 3) Time span representation Kind of data: period (time spans) Data type: PeriodDtype Scalar: Period Array: arrays.PeriodArray String Aliases: 'period[]', 'Period[]' 4) Sparse data structures Kind of data: sparse Data type: SparseDtype Scalar: (none) Array: arrays.SparseArray String Aliases: 'Sparse', 'Sparse[int]', 'Sparse[float]' 5) IntervalIndex Kind of data: intervals Data type: IntervalDtype Scalar: Interval Array: arrays.IntervalArray String Aliases: 'interval', 'Interval', 'Interval[]', 'Interval[datetime64[ns, ]]', 'Interval[timedelta64[]]' 6) Nullable integer data type Kind of data: nullable integer Data type: Int64Dtype, ... 
Scalar: (none) Array: arrays.IntegerArray String Aliases: 'Int8', 'Int16', 'Int32', 'Int64', 'UInt8', 'UInt16', 'UInt32', 'UInt64' 7) Working with text data Kind of data: Strings Data type: StringDtype Scalar: str Array: arrays.StringArray String Aliases: 'string' 8) Boolean data with missing values Kind of data: Boolean (with NA) Data type: BooleanDtype Scalar: bool Array: arrays.BooleanArray String Aliases: 'boolean'"} +{"question_id": 6039605, "score": 105, "creation_date": 1305690061, "tags": ["python", "typeerror", "callable", "shadowing"], "instruction": "Why does code like `str = str(...)` cause a TypeError, but only the second time?\n\nI have some code like: def example(parameter): global str str = str(parameter) print(str) example(1) example(2) The first call to example works, but then the second time around I get an error like: Traceback (most recent call last): File \"test.py\", line 7, in example(2) File \"test.py\", line 3, in example str = str(parameter) TypeError: 'str' object is not callable Why does this happen, and how can I fix it? If you are in an interactive session and encountered a problem like this, and you want to fix the problem without restarting the interpreter, see How to restore a builtin that I overwrote by accident?.", "output": "Where the code says: global str str = str(parameter) You are redefining what str() means. str is the built-in Python name of the string type, and you don't want to change it. Use a different name for the local variable, and remove the global statement. Note that if you used code like this at the Python REPL, then the assignment to the global str will persist until you do something about it. You can restart the interpreter, or del str. The latter works because str is not actually a defined global variable by default - instead, it's normally found in a fallback (the builtins standard library module, which is specially imported at startup and given the global name __builtins__)."} +{"question_id": 46914222, "score": 105, "creation_date": 1508858294, "tags": ["python", "unit-testing", "pytest"], "instruction": "How can I assert lists equality with pytest\n\nI'm trying to make some unit tests with pytest. I was thinking about doing things like that: actual = b_manager.get_b(complete_set) assert actual is not None assert actual.columns == ['bl', 'direction', 'day'] The first assertion in ok but with the second I have an value error. ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() I assume it is not the right way to assert the equality of two different lists with pytest. How can I assert that the dataframe columns (a list) is equal to the expected one? Thanks", "output": "You could do a list comprehension to check equality of all values. If you call all on the list comprehensions result, it will return True if all parameters are equal. 
actual = ['bl', 'direction', 'day'] expected = ['bl', 'direction', 'day'] assert len(actual) == len(expected) assert all([a == b for a, b in zip(actual, expected)]) print(all([a == b for a, b in zip(actual, expected)])) >>> True"} +{"question_id": 13184414, "score": 105, "creation_date": 1351799282, "tags": ["python", "appdata"], "instruction": "How can I get the path to the %APPDATA% directory in Python?\n\nHow can I get the path to the %APPDATA% directory in Python?", "output": "If you want AppData\\Roaming import os print(os.getenv('APPDATA')) If you are looking for AppData\\Local, then use import os print(os.getenv('LOCALAPPDATA')) For AppData\\Local\\Temp you can get it in this way, which also makes your code portable accross platforms import tempfile print(tempfile.gettempdir()) For the differences: Roaming is the folder that would be synchronized with a server if you logged into a domain with a roaming profile (enabling you to log into any computer in a domain and access your favorites, documents, etc. Firefox stores its information here, so you could even have the same bookmarks between computers with a roaming profile. Local is the folder that is specific to that computer - any information here would not be synchronized with a server. This folder is equivalent in Windows XP to C:\\Documents and Settings\\User\\Local Settings\\Application Data. See also this answer."} +{"question_id": 16444726, "score": 105, "creation_date": 1368027429, "tags": ["python", "binary", "floating-point"], "instruction": "Binary representation of float in Python (bits not hex)\n\nHow can I get a string of 0s and 1s, according to the bits of the IEEE 754 representation of a 32 bit float? For example, given an input 1.00, the result should be '00111111100000000000000000000000'.", "output": "You can do that with the struct package: import struct def binary(num): return ''.join('{:0>8b}'.format(c) for c in struct.pack('!f', num)) That packs it as a network byte-ordered float, and then converts each of the resulting bytes into an 8-bit binary representation and concatenates them out: >>> binary(1) '00111111100000000000000000000000' Edit: There was a request to expand the explanation. I'll expand this using intermediate variables to comment each step. def binary(num): # Struct can provide us with the float packed into bytes. The '!' ensures that # it's in network byte order (big-endian) and the 'f' says that it should be # packed as a float. Alternatively, for double-precision, you could use 'd'. packed = struct.pack('!f', num) print 'Packed: %s' % repr(packed) # For each character in the returned string, we'll turn it into its corresponding # integer code point # # [62, 163, 215, 10] = [ord(c) for c in '>\\xa3\\xd7\\n'] integers = [ord(c) for c in packed] print 'Integers: %s' % integers # For each integer, we'll convert it to its binary representation. binaries = [bin(i) for i in integers] print 'Binaries: %s' % binaries # Now strip off the '0b' from each of these stripped_binaries = [s.replace('0b', '') for s in binaries] print 'Stripped: %s' % stripped_binaries # Pad each byte's binary representation's with 0's to make sure it has all 8 bits: # # ['00111110', '10100011', '11010111', '00001010'] padded = [s.rjust(8, '0') for s in stripped_binaries] print 'Padded: %s' % padded # At this point, we have each of the bytes for the network byte ordered float # in an array as binary strings. 
Now we just concatenate them to get the total # representation of the float: return ''.join(padded) And the result for a few examples: >>> binary(1) Packed: '?\\x80\\x00\\x00' Integers: [63, 128, 0, 0] Binaries: ['0b111111', '0b10000000', '0b0', '0b0'] Stripped: ['111111', '10000000', '0', '0'] Padded: ['00111111', '10000000', '00000000', '00000000'] '00111111100000000000000000000000' >>> binary(0.32) Packed: '>\\xa3\\xd7\\n' Integers: [62, 163, 215, 10] Binaries: ['0b111110', '0b10100011', '0b11010111', '0b1010'] Stripped: ['111110', '10100011', '11010111', '1010'] Padded: ['00111110', '10100011', '11010111', '00001010'] '00111110101000111101011100001010'"} +{"question_id": 44429199, "score": 105, "creation_date": 1496906578, "tags": ["python", "numpy", "pytorch"], "instruction": "How to load a list of numpy arrays to pytorch dataset loader?\n\nI have a huge list of numpy arrays, where each array represents an image and I want to load it using torch.utils.data.Dataloader object. But the documentation of torch.utils.data.Dataloader mentions that it loads data directly from a folder. How do I modify it for my cause? I am new to pytorch and any help would be greatly appreciated. my numpy array for a single image looks something like this. The image is RBG image. [[[ 70 82 94] [ 67 81 93] [ 66 82 94] ..., [182 182 188] [183 183 189] [188 186 192]] [[ 66 80 92] [ 62 78 91] [ 64 79 95] ..., [176 176 182] [178 178 184] [180 180 186]] [[ 62 82 93] [ 62 81 96] [ 65 80 99] ..., [169 172 177] [173 173 179] [172 172 178]] ...,", "output": "I think what DataLoader actually requires is an input that subclasses Dataset. You can either write your own dataset class that subclasses Datasetor use TensorDataset as I have done below: import torch import numpy as np from torch.utils.data import TensorDataset, DataLoader my_x = [np.array([[1.0,2],[3,4]]),np.array([[5.,6],[7,8]])] # a list of numpy arrays my_y = [np.array([4.]), np.array([2.])] # another list of numpy arrays (targets) tensor_x = torch.Tensor(my_x) # transform to torch tensor tensor_y = torch.Tensor(my_y) my_dataset = TensorDataset(tensor_x,tensor_y) # create your datset my_dataloader = DataLoader(my_dataset) # create your dataloader Works for me."} +{"question_id": 48959098, "score": 105, "creation_date": 1519443232, "tags": ["python", "python-2.7"], "instruction": "How to create a new text file using Python\n\nIf I try to open a file that doesn't exist yet it will create it on the same directory from where the program is being executed. The problem comes that when I try to open it, I get this error: IOError: [Errno 2] No such file or directory: 'C:\\Users\\myusername\\PycharmProjects\\Tests\\copy.txt'. I even tried specifying a path as you can see in the error. 
import os THIS_FOLDER = os.path.dirname(os.path.abspath(__file__)) my_file = os.path.join(THIS_FOLDER, 'copy.txt')", "output": "Looks like you forgot the mode parameter when calling open, try w: with open(\"copy.txt\", \"w\") as file: file.write(\"Your text goes here\") The default value is r and will fail if the file does not exist 'r' open for reading (default) 'w' open for writing, truncating the file first Other interesting options are 'x' open for exclusive creation, failing if the file already exists 'a' open for writing, appending to the end of the file if it exists See Doc for Python 2.7 or Python 3.6"} +{"question_id": 28968660, "score": 105, "creation_date": 1426003855, "tags": ["python", "mongodb", "dictionary", "mongodb-query", "pymongo"], "instruction": "How to convert a pymongo.cursor.Cursor into a dict?\n\nI am using pymongo to query for all items in a region (actually it is to query for all venues in a region on a map). I used db.command(SON()) before to search in a spherical region, which can return me a dictionary and in the dictionary there is a key called results which contains the venues. Now I need to search in a square area and I am suggested to use db.places.find, however, this returns me a pymongo.cursor.Cursor class and I have no idea how to extract the venue results from it. Does anyone know whether I should convert the cursor into a dict and extract the results out, or use another method to query for items in a square region? BTW, db is pymongo.database.Database class The code is: >>> import pymongo >>> db = pymongo.MongoClient(host).PSRC >>> resp = db.places.find({\"loc\": {\"$within\": {\"$box\": [[ll_lng,ll_lat], [ur_lng,ur_lat]]}}}) >>> for doc in resp: >>> print(doc) I have values of ll_lng, ll_lat, ur_lng and ur_lat, use these values but it prints nothing from this codes", "output": "The find method returns a Cursor instance, which allows you to iterate over all matching documents. To get the first document that matches the given criteria, you need to use find_one. The result of find_one is a dictionary. You can always use the list constructor to return a list of all the documents in the collection but bear in mind that this will load all the data in memory and may not be what you want. You should do that if you need to reuse the cursor and have a good reason not to use rewind() Demo using find: >>> import pymongo >>> conn = pymongo.MongoClient() >>> db = conn.test #test is my database >>> col = db.spam #Here spam is my collection >>> cur = col.find() >>> cur >>> for doc in cur: ... print(doc) # or do something with the document ... {'a': 1, '_id': ObjectId('54ff30faadd8f30feb90268f'), 'b': 2} {'a': 1, 'c': 3, '_id': ObjectId('54ff32a2add8f30feb902690'), 'b': 2} Demo using find_one: >>> col.find_one() {'a': 1, '_id': ObjectId('54ff30faadd8f30feb90268f'), 'b': 2}"} +{"question_id": 33038686, "score": 105, "creation_date": 1444394743, "tags": ["python", "apache-spark", "dataframe", "pyspark", "apache-spark-sql"], "instruction": "Add an empty column to Spark DataFrame\n\nAs mentioned in many other locations on the web, adding a new column to an existing DataFrame is not straightforward. Unfortunately it is important to have this functionality (even though it is inefficient in a distributed environment) especially when trying to concatenate two DataFrames using unionAll. What is the most elegant workaround for adding a null column to a DataFrame to facilitate a unionAll? 
My version goes like this: from pyspark.sql.types import StringType from pyspark.sql.functions import UserDefinedFunction to_none = UserDefinedFunction(lambda x: None, StringType()) new_df = old_df.withColumn('new_column', to_none(df_old['any_col_from_old']))", "output": "All you need here is importing StringType and using lit and cast: from pyspark.sql.types import StringType from pyspark.sql.functions import lit new_df = old_df.withColumn('new_column', lit(None).cast(StringType())) A full example: df = sc.parallelize([row(1, \"2\"), row(2, \"3\")]).toDF() df.printSchema() # root # |-- foo: long (nullable = true) # |-- bar: string (nullable = true) new_df = df.withColumn('new_column', lit(None).cast(StringType())) new_df.printSchema() # root # |-- foo: long (nullable = true) # |-- bar: string (nullable = true) # |-- new_column: string (nullable = true) new_df.show() # +---+---+----------+ # |foo|bar|new_column| # +---+---+----------+ # | 1| 2| null| # | 2| 3| null| # +---+---+----------+ A Scala equivalent can be found here: Create new Dataframe with empty/null field values"} +{"question_id": 52593803, "score": 105, "creation_date": 1538405749, "tags": ["python", "dictionary", "types"], "instruction": "How to set the python type hinting for a dictionary variable?\n\nLet say I have a dictionary: from typing import Dict v = { 'height': 5, 'width': 14, 'depth': 3 } result = do_something(v) def do_something(value: Dict[???]): # do stuff How do I declare the dictionary type in do_something?", "output": "Dict takes two \"arguments\", the type of its keys and the type of its values. For a dict that maps strings to integers, use def do_something(value: Dict[str, int]): The documentation could probably be a little more explicit, though."} +{"question_id": 20170251, "score": 105, "creation_date": 1385258695, "tags": ["python", "infinite-loop"], "instruction": "How to run a script forever?\n\nI need to run my Python program forever in an infinite loop.. Currently I am running it like this - #!/usr/bin/python import time # some python code that I want # to keep on running # Is this the right way to run the python program forever? # And do I even need this time.sleep call? while True: time.sleep(5) Is there any better way of doing it? Or do I even need time.sleep call? Any thoughts?", "output": "Yes, you can use a while True: loop that never breaks to run Python code continually. However, you will need to put the code you want to run continually inside the loop: #!/usr/bin/python while True: # some python code that I want # to keep on running Also, time.sleep is used to suspend the operation of a script for a period of time. So, since you want yours to run continually, I don't see why you would use it."} +{"question_id": 2806897, "score": 105, "creation_date": 1273530665, "tags": ["python", "root", "privileges"], "instruction": "What is the best way for checking if the user of a script has root-like privileges?\n\nI have a Python script that will be doing a lot of things that would require root-level privileges, such as moving files in /etc, installing with apt-get, and so on. I currently have: if os.geteuid() != 0: exit(\"You need to have root privileges to run this script.\\nPlease try again, this time using 'sudo'. Exiting.\") Is this the best way to do the check? 
Are there other best practices?", "output": "Under the EAFP (Easier to Ask Forgiveness than Permission) principle: import errno try: os.rename('/etc/foo', '/etc/bar') except IOError as e: if e[0] == errno.EPERM: sys.exit(\"You need root permissions to do this, laterz!\") If you are concerned about the non-portability of os.geteuid() you probably shouldn't be mucking with /etc anyway."} +{"question_id": 21500354, "score": 105, "creation_date": 1391270969, "tags": ["python", "django", "unit-testing", "django-unittest"], "instruction": "How to see which tests were run during Django's manage.py test command\n\nAfter tests execution is finished using Django's manage.py test command only number of passed tests is printed to the console. (virtualenv) G:\\Project\\>python manage.py test Creating test database for alias 'default'... True .. ---------------------------------------------------------------------- Ran 2 tests in 0.017s OK Destroying test database for alias 'default'... Is there any way to see: which tests were actually executed from what module in what order I haven't found any solution in the doc.", "output": "You can pass -v 2 to the test command: python manage.py test -v 2 After running this command you'll get something like this (I'm using django 2, feel free to ignore migrations/database stuff): Creating test database for alias 'default' ('file:memorydb_default?mode=memory&cache=shared')... Operations to perform: Synchronize unmigrated apps: messages, staticfiles Apply all migrations: admin, auth, contenttypes, sessions Synchronizing apps without migrations: Creating tables... Running deferred SQL... Running migrations: Applying contenttypes.0001_initial... OK ... Applying sessions.0001_initial... OK System check identified no issues (0 silenced). test_equal_hard (polls.tests.TestHard) ... ok <--------+ test_equal_simple (polls.tests.TestSimple) ... ok <--------+ | | That's your tests! >----------------------------+ By the way, v stands for verbosity (You can also use --verbosity=2): python manage.py test --verbosity=2 Here's the excerpt from the python manage.py test --help: -v {0,1,2,3}, --verbosity {0,1,2,3} Verbosity level; 0=minimal output, 1=normal output, 2=verbose output, 3=very verbose output"} +{"question_id": 19211828, "score": 105, "creation_date": 1381081258, "tags": ["python", "list", "set", "any"], "instruction": "Using any() and all() to check if a list contains one set of values or another\n\nMy code is for a Tic Tac Toe game and checking for a draw state but I think this question could be more useful in a general sense. I have a list that represents the board, it looks like this: board = [1,2,3,4,5,6,7,8,9] When a player makes a move, the integer they moved on is replaced with their marker ('x' or 'o'). I already have checks in place to look for a winning state. What I can't do is check for a draw state, where none of the list values are integers but a winning state has not been set. The code I have so far: if any(board) != playerOne or any(board) != playerTwo: print 'continue' elif all(board) == playerOne or playerTwo: print 'Draw' The if statement works, the elif does not. I think the problem is my 'or' operator. What I want to check for is: if the every item on the board is either playerOne marker or playerTwo marker. If I were to make the code: elif all(board) == playerOne or all(board) == playerTwo: I would be checking to see if every place on the board was playerOne or every place on the board is playerTwo, which it won't be. 
So how do I check if the board is taken up by a combination of playerOne markers and playerTwo markers?", "output": "Generally speaking: all and any are functions that take some iterable and return True, if in the case of all, no values in the iterable are falsy; in the case of any, at least one value is truthy. A value x is falsy iff bool(x) == False. A value x is truthy iff bool(x) == True. Any non-boolean elements in the iterable are perfectly acceptable \u2014 bool(x) maps, or coerces, any x according to these rules: 0, 0.0, None, [], (), [], set(), and other empty collections are mapped to False all other values are mapped to True. The docstring for bool uses the terms 'true'/'false' for 'truthy'/'falsy', and True/False for the concrete boolean values. For example: if all(x > 0 for x in xs) or any(x > 100 for x in xs): # if nothing is zero or something is over a hundred \u2026 In your specific code samples: You\u2019ve slightly misunderstood how these functions work. The following does something completely different from what you thought: if any(foobars) == big_foobar: ...because any(foobars) would first be evaluated to either True or False, and then that boolean value would be compared to big_foobar, which generally always gives you False (unless big_foobar coincidentally happened to be the same boolean value). Note: the iterable can be a list, but it can also be a generator or a generator expression (\u2248 lazily evaluated/generated list), or any other iterator. What you want instead is: if any(x == big_foobar for x in foobars): which basically first constructs an iterable that yields a sequence of booleans\u2014for each item in foobars, it compares the item to the value held by big_foobar, and (lazily) emits the resulting boolean into the resulting sequence of booleans: tmp = (x == big_foobar for x in foobars) then any walks over all items in tmp and returns True as soon as it finds the first truthy element. It's as if you did the following: In [1]: foobars = ['big', 'small', 'medium', 'nice', 'ugly'] In [2]: big_foobar = 'big' In [3]: any(['big' == big_foobar, 'small' == big_foobar, 'medium' == big_foobar, 'nice' == big_foobar, 'ugly' == big_foobar]) Out[3]: True Note: As DSM pointed out, any(x == y for x in xs) is equivalent to y in xs but the latter is more readable, quicker to write and runs faster. Some examples: In [1]: any(x > 5 for x in range(4)) Out[1]: False In [2]: all(isinstance(x, int) for x in range(10)) Out[2]: True In [3]: any(x == 'Erik' for x in ['Erik', 'John', 'Jane', 'Jim']) Out[3]: True In [4]: all([True, True, True, False, True]) Out[4]: False See also: http://docs.python.org/2/library/functions.html#all"} +{"question_id": 56859803, "score": 105, "creation_date": 1562100188, "tags": ["python", "pytorch"], "instruction": "ModuleNotFoundError: No module named 'tools.nnwrap'\n\nI tried to install torch using: pip install torch Installation started, but after a few seconds I got the error: from tools.nnwrap import generate_wrappers as generate_nn_wrappers ModuleNotFoundError: No module named 'tools.nnwrap' OS: Windows", "output": "Anyone who is looking for the solution refer below: It seems command to install torch not is working as expected, instead, you can try to install PyTorch using below command. It's working and solved my above-mentioned issue. 
Run below command(for below-specified OS, package-manager, Language): # for OS: Windows, package-manager: pip, Language: python3.6 (below command is valid for only mentioned python 3.6) pip3 install https://download.pytorch.org/whl/cu90/torch-1.1.0-cp36-cp36m-win_amd64.whl pip3 install https://download.pytorch.org/whl/cu90/torchvision-0.3.0-cp36-cp36m-win_amd64.whl For another version/type of the software (OS, package, Language) installed, the command must be generated from the below-mentioned link. https://pytorch.org/get-started/locally/ Also, look for the Python version in your IDE(If you are using PyCharm) from the terminal using the command: python. If it returns 32bit this could happen, instead install Python 64-bit."} +{"question_id": 25239650, "score": 105, "creation_date": 1407748285, "tags": ["python", "http", "python-requests", "python-sockets"], "instruction": "Python requests speed up using keep-alive\n\nIn the HTTP protocol you can send many requests in one socket using keep-alive and then receive the response from server at once, so that will significantly speed up whole process. Is there any way to do this in python requests lib? Or are there any other ways to speed this up that well using requests lib?", "output": "Yes, there is. Use requests.Session and it will do keep-alive by default. I guess I should include a quick example: import logging import requests logging.basicConfig(level=logging.DEBUG) s = requests.Session() s.get('http://httpbin.org/cookies/set/sessioncookie/123456789') s.get('http://httpbin.org/cookies/set/anothercookie/123456789') r = s.get(\"http://httpbin.org/cookies\") print(r.text) You will note that these log message occur INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): httpbin.org DEBUG:requests.packages.urllib3.connectionpool:\"GET /cookies/set/sessioncookie/123456789 HTTP/1.1\" 302 223 DEBUG:requests.packages.urllib3.connectionpool:\"GET /cookies HTTP/1.1\" 200 55 DEBUG:requests.packages.urllib3.connectionpool:\"GET /cookies/set/anothercookie/123456789 HTTP/1.1\" 302 223 DEBUG:requests.packages.urllib3.connectionpool:\"GET /cookies HTTP/1.1\" 200 90 DEBUG:requests.packages.urllib3.connectionpool:\"GET /cookies HTTP/1.1\" 200 90 If you wait a little while, and repeat the last get call INFO:requests.packages.urllib3.connectionpool:Resetting dropped connection: httpbin.org DEBUG:requests.packages.urllib3.connectionpool:\"GET /cookies HTTP/1.1\" 200 90 Note that it resets the dropped connection, i.e. reestablishing the connection to the server to make the new request."} +{"question_id": 41215365, "score": 105, "creation_date": 1482118077, "tags": ["python", "python-3.x", "string-formatting", "python-3.6", "f-string"], "instruction": "What are use cases for nested f-strings\n\nThanks to David Beazley's tweet, I've recently found out that the new Python 3.6 f-strings can also be nested: >>> price = 478.23 >>> f\"{f'${price:0.2f}':*>20s}\" '*************$478.23' Or: >>> x = 42 >>> f'''-{f\"\"\"*{f\"+{f'.{x}.'}+\"}*\"\"\"}-''' '-*+.42.+*-' While I am surprised that this is possible, I am missing on how practical is that, when would nesting f-strings be useful? What use cases can this cover? 
Note: The PEP itself does not mention nesting f-strings, but there is a specific test case.", "output": "I don't think formatted string literals allowing nesting (by nesting, I take it to mean f'{f\"..\"}') is a result of careful consideration of possible use cases; I'm more convinced it's just allowed in order for them to conform with their specification. The specification states that they support full Python expressions* inside brackets. It's also stated that a formatted string literal is really just an expression that is evaluated at runtime (see here and here). As a result, it only makes sense to allow a formatted string literal as the expression inside another formatted string literal; forbidding it would negate the full support for Python expressions. The fact that you can't find use cases mentioned in the docs (and can only find test cases in the test suite) is because this is probably a nice (side) effect of the implementation and not its motivating use case. *Actually, with three exceptions: An empty expression is not allowed. A lambda expression must be surrounded by explicit parentheses. Nesting a string with the same quotes results in a syntax error."} +{"question_id": 65635346, "score": 105, "creation_date": 1610134391, "tags": ["python", "cors", "fastapi"], "instruction": "How can I enable CORS in FastAPI?\n\nI'm trying to enable CORS in this very basic FastAPI example, however it doesn't seem to be working. from fastapi import FastAPI from fastapi.middleware.cors import CORSMiddleware app = FastAPI() app.add_middleware( CORSMiddleware, allow_origins=['*'] ) @app.get('/') def read_main(): return {'message': 'Hello World!'} This is the response I get: curl -v http://127.0.0.1:8000 * Trying 127.0.0.1... * TCP_NODELAY set * Connected to 127.0.0.1 (127.0.0.1) port 8000 (#0) > GET / HTTP/1.1 > Host: 127.0.0.1:8000 > User-Agent: curl/7.64.1 > Accept: */* > < HTTP/1.1 200 OK < date: Fri, 08 Jan 2021 19:27:37 GMT < server: uvicorn < content-length: 26 < content-type: application/json < * Connection #0 to host 127.0.0.1 left intact {\"message\":\"Hello World!\"}*", "output": "you can find answer from this:fastapi cors then this is a very simple code to achieve it: create a python file and named it main.py. add code in this file. from fastapi import FastAPI from fastapi.middleware.cors import CORSMiddleware app = FastAPI() origins = [\"*\"] app.add_middleware( CORSMiddleware, allow_origins=origins, allow_credentials=True, allow_methods=[\"*\"], allow_headers=[\"*\"], ) @app.get(\"/\") async def main(): return {\"message\": \"Hello World\"} and run this app: uvicorn main:app --reload --host 0.0.0.0 --port 8000 if you computer ip is 192.12.12.12 you can check this link and just write a small javascript in html: "} +{"question_id": 44091886, "score": 105, "creation_date": 1495324153, "tags": ["python", "virtualenv", "python-venv"], "instruction": "What's the difference between \"virtualenv\" and \"-m venv\" in creating Python Virtual environments\n\nSorry if I sound a bit foolish. I'm confused about this: What's the difference between the two: virtualenv myvenv and -m venv myvenv The first one works well for me in creating virtual environments while the other does not. I cd into my development directory and use virtualenv myvenv and it creates the virtual environment. But if I use -m venv myvenv, it just gives errors.", "output": "venv is a package shipped directly with python 3. So you don't need to pip install anything. 
virtualenv instead is an independent library available at https://virtualenv.pypa.io/en/stable/ and can be installed with pip. They solve the same problem and work in a very similar manner. If you use python3 I suggest to avoid any \"extra\" dependencies and just stick with venv. Your error is probably because you use Python2/pip2."} +{"question_id": 54307300, "score": 105, "creation_date": 1548156570, "tags": ["python", "pandas"], "instruction": "What causes \"indexing past lexsort depth\" warning in Pandas?\n\nI'm indexing a large multi-index Pandas df using df.loc[(key1, key2)]. Sometimes I get a series back (as expected), but other times I get a dataframe. I'm trying to isolate the cases which cause the latter, but so far all I can see is that it's correlated with getting a PerformanceWarning: indexing past lexsort depth may impact performance warning. I'd like to reproduce it to post here, but I can't generate another case that gives me the same warning. Here's my attempt: def random_dates(start, end, n=10): start_u = start.value//10**9 end_u = end.value//10**9 return pd.to_datetime(np.random.randint(start_u, end_u, n), unit='s') np.random.seed(0) df = pd.DataFrame(np.random.random(3255000).reshape(465000,7)) # same shape as my data df['date'] = random_dates(pd.to_datetime('1990-01-01'), pd.to_datetime('2018-01-01'), 465000) df = df.set_index([0, 'date']) df = df.sort_values(by=[3]) # unsort indices, just in case df.index.lexsort_depth > 0 df.index.is_monotonic > False df.loc[(0.9987185534991936, pd.to_datetime('2012-04-16 07:04:34'))] # no warning So my question is: what causes this warning? How do I artificially induce it?", "output": "TL;DR: your index is unsorted and this severely impacts performance. Sort your DataFrame's index using df.sort_index() to address the warning and improve performance. I've actually written about this in detail in my writeup: Select rows in pandas MultiIndex DataFrame (under \"Question 3\"). To reproduce, mux = pd.MultiIndex.from_arrays([ list('aaaabbbbbccddddd'), list('tuvwtuvwtuvwtuvw') ], names=['one', 'two']) df = pd.DataFrame({'col': np.arange(len(mux))}, mux) col one two a t 0 u 1 v 2 w 3 b t 4 u 5 v 6 w 7 t 8 c u 9 v 10 d w 11 t 12 u 13 v 14 w 15 You'll notice that the second level is not properly sorted. Now, try to index a specific cross section: df.loc[pd.IndexSlice[('c', 'u')]] PerformanceWarning: indexing past lexsort depth may impact performance. # encoding: utf-8 col one two c u 9 You'll see the same behaviour with xs: df.xs(('c', 'u'), axis=0) PerformanceWarning: indexing past lexsort depth may impact performance. self.interact() col one two c u 9 The docs, backed by this timing test I once did seem to suggest that handling un-sorted indexes imposes a slowdown\u2014Indexing is O(N) time when it could/should be O(1). If you sort the index before slicing, you'll notice the difference: df2 = df.sort_index() df2.loc[pd.IndexSlice[('c', 'u')]] col one two c u 9 %timeit df.loc[pd.IndexSlice[('c', 'u')]] %timeit df2.loc[pd.IndexSlice[('c', 'u')]] 802 \u00b5s \u00b1 12.1 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 1000 loops each) 648 \u00b5s \u00b1 20.3 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 1000 loops each) Finally, if you want to know whether the index is sorted or not, check with MultiIndex.is_lexsorted. df.index.is_lexsorted() # False df2.index.is_lexsorted() # True As for your question on how to induce this behaviour, simply permuting the indices should suffice. 
This works if your index is unique: df2 = df.loc[pd.MultiIndex.from_tuples(np.random.permutation(df2.index))] If your index is not unique, add a cumcounted level first, df.set_index( df.groupby(level=list(range(len(df.index.levels)))).cumcount(), append=True) df2 = df.loc[pd.MultiIndex.from_tuples(np.random.permutation(df2.index))] df2 = df2.reset_index(level=-1, drop=True)"} +{"question_id": 52265120, "score": 105, "creation_date": 1536611727, "tags": ["python", "parallel-processing", "multiprocessing", "python-multiprocessing", "process-pool"], "instruction": "Python multiprocessing.Pool: AttributeError\n\nI have a method inside a class that needs to do a lot of work in a loop, and I would like to spread the work over all of my cores. I wrote the following code, which works if I use normal map(), but with pool.map() returns an error. import multiprocessing pool = multiprocessing.Pool(multiprocessing.cpu_count() - 1) class OtherClass: def run(sentence, graph): return False class SomeClass: def __init__(self): self.sentences = [[\"Some string\"]] self.graphs = [\"string\"] def some_method(self): other = OtherClass() def single(params): sentences, graph = params return [other.run(sentence, graph) for sentence in sentences] return list(pool.map(single, zip(self.sentences, self.graphs))) SomeClass().some_method() Error 1: AttributeError: Can't pickle local object 'SomeClass.some_method..single' Why can't it pickle single()? I even tried to move single() to the global module scope (not inside the class - makes it independent of the context): import multiprocessing pool = multiprocessing.Pool(multiprocessing.cpu_count() - 1) class OtherClass: def run(sentence, graph): return False def single(params): other = OtherClass() sentences, graph = params return [other.run(sentence, graph) for sentence in sentences] class SomeClass: def __init__(self): self.sentences = [[\"Some string\"]] self.graphs = [\"string\"] def some_method(self): return list(pool.map(single, zip(self.sentences, self.graphs))) SomeClass().some_method() and I get the following ... Error 2: AttributeError: Can't get attribute 'single' on module 'main' from '.../test.py'", "output": "Error 1: AttributeError: Can't pickle local object 'SomeClass.some_method..single' You solved this error yourself by moving the nested target-function single() out to the top-level. Background: Pool needs to pickle (serialize) everything it sends to its worker-processes (IPC). Pickling actually only saves the name of a function and unpickling requires re-importing the function by name. For that to work, the function needs to be defined at the top-level, nested functions won't be importable by the child and already trying to pickle them raises an exception (more). Error 2: AttributeError: Can't get attribute 'single' on module 'main' from '.../test.py' You are starting the pool before you define your function and classes, that way the child processes cannot inherit any code. Move your pool start up to the bottom and protect (why?) 
it with if __name__ == '__main__': import multiprocessing class OtherClass: def run(self, sentence, graph): return False def single(params): other = OtherClass() sentences, graph = params return [other.run(sentence, graph) for sentence in sentences] class SomeClass: def __init__(self): self.sentences = [[\"Some string\"]] self.graphs = [\"string\"] def some_method(self): return list(pool.map(single, zip(self.sentences, self.graphs))) if __name__ == '__main__': # <- prevent RuntimeError for 'spawn' # and 'forkserver' start_methods with multiprocessing.Pool(multiprocessing.cpu_count() - 1) as pool: print(SomeClass().some_method()) Appendix ...I would like to spread the work over all of my cores. Potentially helpful background on how multiprocessing.Pool is chunking work: Python multiprocessing: understanding logic behind chunksize"} +{"question_id": 4877290, "score": 105, "creation_date": 1296665577, "tags": ["python", "class", "metaprogramming", "magic-methods"], "instruction": "What is the __dict__.__dict__ attribute of a Python class?\n\n>>> class A(object): pass ... >>> A.__dict__ >>> A.__dict__.__dict__ Traceback (most recent call last): File \"\", line 1, in AttributeError: 'dictproxy' object has no attribute '__dict__' >>> A.__dict__.copy() {'__dict__': ... } >>> A.__dict__['__dict__'] # What is this object? If I do A.something = 10, this goes into A.__dict__. What is this found in A.__dict__.__dict__, and when does it contain something?", "output": "First of all A.__dict__.__dict__ is different from A.__dict__['__dict__']. The former doesn't exist and the latter is the __dict__ attribute that the instances of the class would have. It's a data descriptor object that returns the internal dictionary of attributes for the specific instance. In short, the __dict__ attribute of an object can't be stored in object's __dict__, so it's accessed through a descriptor defined in the class. To understand this, you'd have to read the documentation of the descriptor protocol. The short version: For an instance a of a class A, access to a.__dict__ is provided by A.__dict__['__dict__'] which is the same as vars(A)['__dict__']. For a class A, access to A.__dict__ is provided by type.__dict__['__dict__'] (in theory) which is the same as vars(type)['__dict__']. The long version: Both classes and objects provide access to attributes both through the attribute operator (implemented via the class or metaclass's __getattribute__), and the __dict__ attribute/protocol which is used by vars(ob). For normal objects, the __dict__ object creates a separate dict object, which stores the attributes, and __getattribute__ first tries to access it and get the attributes from there (before attempting to look for the attribute in the class by utilizing the descriptor protocol, and before calling __getattr__). The __dict__ descriptor on the class implements the access to this dictionary. a.name is equivalent to trying those in order: type(a).__dict__['name'].__get__(a, type(a)) (only if type(a).__dict__['name'] is a data descriptor), a.__dict__['name'], type(a).__dict__['name'].__get__(a, type(a)), type(a).__dict__['name']. a.__dict__ does the same but skips the second step for obvious reasons. As it's impossible for the __dict__ of an instance to be stored in itself, it's accessed through the descriptor protocol directly instead and is stored in a special field in the instance. 
A similar scenario is true for classes, although their __dict__ is a special proxy object that pretends to be a dictionary (but might not be internally), and doesn't allow you to change it or replace it with another one. This proxy allows you, among all else, to access the attributes of a class that are specific to it, and not defined in one of its bases. By default, a vars(cls) of an empty class carries three descriptors: __dict__ for storing the attributes of the instances, __weakref__ which is used internally by weakref, and __doc__ the docstring of the class. The first two might be gone if you define __slots__. Then you wouldn't have __dict__ and __weakref__ attributes, but instead you'd have a single class attribute for each slot. The attributes of the instance then wouldn't be stored in a dictionary, and access to them will be provided by the respective descriptors in the class. And lastly, the inconsistency that A.__dict__ is different from A.__dict__['__dict__'] is because the attribute __dict__ is, by exception, never looked up in vars(A), so what is true for it isn't true for practically any other attribute you'd use. For example, A.__weakref__ is the same thing as A.__dict__['__weakref__']. If this inconsistency didn't exist, using A.__dict__ would not work, and you'd have to always use vars(A) instead."} +{"question_id": 66194804, "score": 105, "creation_date": 1613301104, "tags": ["python", "python-3.x", "decorator", "python-dataclasses"], "instruction": "What does frozen mean for dataclasses?\n\nWhat's the difference between @dataclass(frozen=True) and @dataclass(frozen=False)? When should I use which?", "output": "In Python, \"frozen\" means an object cannot be modified. For example, consider set and frozenset: >>> s = set((1, 2, 3)) >>> s {1, 2, 3} >>> s.add(4) >>> s {1, 2, 3, 4} >>> fs = frozenset((1, 2, 3)) >>> fs frozenset({1, 2, 3}) >>> fs.add(4) ... AttributeError: 'frozenset' object has no attribute 'add' Likewise, creating a dataclass with frozen=True means its instances are frozen and cannot be changed. Be aware that frozen only applies to the dataclass instance itself \u2013 a frozen dataclass can contain mutable items such as lists, and a regular dataclass can contain frozen/immutable items such as tuples. The point of frozen objects is to avoid accidental modification, and to guarantee a consistent value. The former is advantageous to avoid bugs. When an object is not intended to be modified, making it frozen reveals accidental modification via an immediate error. The latter allows use as immutable object, for example the keys of a dict. A frozen dataclass is by default hashable and suitable as a dict key. from dataclasses import dataclass @dataclass(frozen=True) class Frozen: x: int y: int named_points = {Frozen(0, 0): \"Origin\"} Note that hashability does not just depend on the dataclass but is recursive \u2013 a frozen dataclass containing a list is not hashable, because the list is not hashable."} +{"question_id": 11687302, "score": 104, "creation_date": 1343390208, "tags": ["python", "pycharm"], "instruction": "PyCharm not recognizing Python files\n\nPyCharm is no longer recognizing Python files. The interpreter path is correctly set.", "output": "Please check File | Settings (Preferences on macOS) | Editor | File Types, ensure that file name or extension is not listed in Text files. 
To fix the problem remove it from the Text files and double check that .py extension is associated with Python files."} +{"question_id": 53740577, "score": 104, "creation_date": 1544609233, "tags": ["python", "machine-learning", "keras", "deep-learning"], "instruction": "\"AttributeError: 'str' object has no attribute 'decode' \" while Loading a Keras Saved Model\n\nAfter Training, I saved Both Keras whole Model and Only Weights using model.save_weights(MODEL_WEIGHTS) and model.save(MODEL_NAME) Models and Weights were saved successfully and there was no error. I can successfully load the weights simply using model.load_weights and they are good to go, but when i try to load the save model via load_model, i am getting an error. File \"C:/Users/Rizwan/model_testing/model_performance.py\", line 46, in Model2 = load_model('nasnet_RS2.h5',custom_objects={'euc_dist_keras': euc_dist_keras}) File \"C:\\Users\\Rizwan\\AppData\\Roaming\\Python\\Python36\\site-packages\\keras\\engine\\saving.py\", line 419, in load_model model = _deserialize_model(f, custom_objects, compile) File \"C:\\Users\\Rizwan\\AppData\\Roaming\\Python\\Python36\\site-packages\\keras\\engine\\saving.py\", line 321, in _deserialize_model optimizer_weights_group['weight_names']] File \"C:\\Users\\Rizwan\\AppData\\Roaming\\Python\\Python36\\site-packages\\keras\\engine\\saving.py\", line 320, in n.decode('utf8') for n in AttributeError: 'str' object has no attribute 'decode' I never received this error and i used to load any models successfully. I am using Keras 2.2.4 with tensorflow backend. Python 3.6. My Code for training is : from keras_preprocessing.image import ImageDataGenerator from keras import backend as K from keras.models import load_model from keras.callbacks import ReduceLROnPlateau, TensorBoard, ModelCheckpoint,EarlyStopping import pandas as pd MODEL_NAME = \"nasnet_RS2.h5\" MODEL_WEIGHTS = \"nasnet_RS2_weights.h5\" def euc_dist_keras(y_true, y_pred): return K.sqrt(K.sum(K.square(y_true - y_pred), axis=-1, keepdims=True)) def main(): # Here, we initialize the \"NASNetMobile\" model type and customize the final #feature regressor layer. # NASNet is a neural network architecture developed by Google. # This architecture is specialized for transfer learning, and was discovered via Neural Architecture Search. # NASNetMobile is a smaller version of NASNet. model = NASNetMobile() model = Model(model.input, Dense(1, activation='linear', kernel_initializer='normal')(model.layers[-2].output)) # model = load_model('current_best.hdf5', custom_objects={'euc_dist_keras': euc_dist_keras}) # This model will use the \"Adam\" optimizer. model.compile(\"adam\", euc_dist_keras) lr_callback = ReduceLROnPlateau(monitor='val_loss', factor=0.2, patience=5, min_lr=0.003) # This callback will log model stats to Tensorboard. tb_callback = TensorBoard() # This callback will checkpoint the best model at every epoch. mc_callback = ModelCheckpoint(filepath='current_best_mem3.h5', verbose=1, save_best_only=True) es_callback=EarlyStopping(monitor='val_loss', min_delta=0, patience=4, verbose=0, mode='auto', baseline=None, restore_best_weights=True) # This is the train DataSequence. # These are the callbacks. 
#callbacks = [lr_callback, tb_callback,mc_callback] callbacks = [lr_callback, tb_callback,es_callback] train_pd = pd.read_csv(\"./train3.txt\", delimiter=\" \", names=[\"id\", \"label\"], index_col=None) test_pd = pd.read_csv(\"./val3.txt\", delimiter=\" \", names=[\"id\", \"label\"], index_col=None) # train_pd = pd.read_csv(\"./train2.txt\",delimiter=\" \",header=None,index_col=None) # test_pd = pd.read_csv(\"./val2.txt\",delimiter=\" \",header=None,index_col=None) #model.summary() batch_size=32 datagen = ImageDataGenerator(rescale=1. / 255) train_generator = datagen.flow_from_dataframe(dataframe=train_pd, directory=\"./images\", x_col=\"id\", y_col=\"label\", has_ext=True, class_mode=\"other\", target_size=(224, 224), batch_size=batch_size) valid_generator = datagen.flow_from_dataframe(dataframe=test_pd, directory=\"./images\", x_col=\"id\", y_col=\"label\", has_ext=True, class_mode=\"other\", target_size=(224, 224), batch_size=batch_size) STEP_SIZE_TRAIN = train_generator.n // train_generator.batch_size STEP_SIZE_VALID = valid_generator.n // valid_generator.batch_size model.fit_generator(generator=train_generator, steps_per_epoch=STEP_SIZE_TRAIN, validation_data=valid_generator, validation_steps=STEP_SIZE_VALID, callbacks=callbacks, epochs=20) # we save the model. model.save_weights(MODEL_WEIGHTS) model.save(MODEL_NAME) if __name__ == '__main__': # freeze_support() here if program needs to be frozen main()", "output": "For me the solution was downgrading the h5py package (in my case to 2.10.0), apparently putting back only Keras and Tensorflow to the correct versions was not enough."} +{"question_id": 33404752, "score": 104, "creation_date": 1446084233, "tags": ["python", "string", "unicode", "special-characters", "emoji"], "instruction": "removing emojis from a string in Python\n\nI found this code in Python for removing emojis but it is not working. Can you help with other codes or fix to this? I have observed all my emjois start with \\xf but when I try to search for str.startswith(\"\\xf\") I get invalid character error. emoji_pattern = r'/[x{1F601}-x{1F64F}]/u' re.sub(emoji_pattern, '', word) Here's the error: Traceback (most recent call last): File \"test.py\", line 52, in re.sub(emoji_pattern,'',word) File \"/usr/lib/python2.7/re.py\", line 151, in sub return _compile(pattern, flags).sub(repl, string, count) File \"/usr/lib/python2.7/re.py\", line 244, in _compile raise error, v # invalid expression sre_constants.error: bad character range Each of the items in a list can be a word ['This', 'dog', '\\xf0\\x9f\\x98\\x82', 'https://t.co/5N86jYipOI'] UPDATE: I used this other code: emoji_pattern=re.compile(ur\" \" \" [\\U0001F600-\\U0001F64F] # emoticons \\ |\\ [\\U0001F300-\\U0001F5FF] # symbols & pictographs\\ |\\ [\\U0001F680-\\U0001F6FF] # transport & map symbols\\ |\\ [\\U0001F1E0-\\U0001F1FF] # flags (iOS)\\ \" \" \", re.VERBOSE) emoji_pattern.sub('', word) But this still doesn't remove the emojis and shows them! Any clue why is that?", "output": "I am updating my answer to this by @jfs because my previous answer failed to account for other Unicode standards such as Latin, Greek etc. StackOverFlow doesn't allow me to delete my previous answer hence I am updating it to match the most acceptable answer to the question. 
#!/usr/bin/env python import re text = u'This is a smiley face \\U0001f602' print(text) # with emoji def deEmojify(text): regrex_pattern = re.compile(pattern = \"[\" u\"\\U0001F600-\\U0001F64F\" # emoticons u\"\\U0001F300-\\U0001F5FF\" # symbols & pictographs u\"\\U0001F680-\\U0001F6FF\" # transport & map symbols u\"\\U0001F1E0-\\U0001F1FF\" # flags (iOS) \"]+\", flags = re.UNICODE) return regrex_pattern.sub(r'',text) print(deEmojify(text)) This was my previous answer, do not use this. def deEmojify(inputString): return inputString.encode('ascii', 'ignore').decode('ascii')"} +{"question_id": 7589012, "score": 104, "creation_date": 1317240710, "tags": ["python", "image-processing", "opencv", "computer-vision"], "instruction": "Combining Two Images with OpenCV\n\nI'm trying to use OpenCV 2.1 to combine two images into one, with the two images placed adjacent to each other. In Python, I'm doing: import numpy as np, cv img1 = cv.LoadImage(fn1, 0) img2 = cv.LoadImage(fn2, 0) h1, w1 = img1.height,img1.width h2, w2 = img2.height,img2.width # Create an array big enough to hold both images next to each other. vis = np.zeros((max(h1, h2), w1+w2), np.float32) mat1 = cv.CreateMat(img1.height,img1.width, cv.CV_32FC1) cv.Convert( img1, mat1 ) mat2 = cv.CreateMat(img2.height, img2.width, cv.CV_32FC1) cv.Convert( img2, mat2 ) # Copy both images into the composite image. vis[:h1, :w1] = mat1 vis[:h2, w1:w1+w2] = mat2 h,w = vis.shape vis2 = cv.CreateMat(h, w, cv.CV_32FC3) vis0 = cv.fromarray(vis) cv.CvtColor(vis0, vis2, cv.CV_GRAY2BGR) cv.ShowImage('test', vis2) cv.WaitKey() The two input images are: https://code.ros.org/trac/opencv/browser/trunk/opencv/samples/c/box.png?rev=2270 https://code.ros.org/trac/opencv/browser/trunk/opencv/samples/c/box_in_scene.png?rev=2270 The resulting image is: It may be hard to distinguish from the rest of the site, but most of the image is white, corresponding to where the individual images should be. The black area is where no image data was written. Why is all my image data being converted to white?", "output": "For cases where your images happen to be the same size (which is a common case for displaying image processing results), you can use numpy's concatenate to simplify your code. To stack vertically (img1 over img2): vis = np.concatenate((img1, img2), axis=0) To stack horizontally (img1 to the left of img2): vis = np.concatenate((img1, img2), axis=1) To verify: import cv2 import numpy as np img1 = cv2.imread('img1.png') img2 = cv2.imread('img2.png') vis = np.concatenate((img1, img2), axis=1) cv2.imwrite('out.png', vis) The out.png image will contain img1 on the left and img2 on the right."} +{"question_id": 28836781, "score": 104, "creation_date": 1425399877, "tags": ["python"], "instruction": "Reading column names alone in a csv file\n\nI have a csv file with the following columns: id,name,age,sex Followed by a lot of values for the above columns. I am trying to read the column names alone and put them inside a list. I am using Dictreader and this gives out the correct details: with open('details.csv') as csvfile: i=[\"name\",\"age\",\"sex\"] re=csv.DictReader(csvfile) for row in re: for x in i: print row[x] But what I want to do is, I need the list of columns, (\"i\" in the above case)to be automatically parsed with the input csv than hardcoding them inside a list. 
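A supplementary sketch for the OpenCV/numpy concatenation answer above: when the two images do not share the same height, one option (not part of the original answer) is to pad the shorter grayscale array with black rows before joining them. The helper name hstack_pad is made up for illustration and assumes 2-D (grayscale) arrays such as those returned by cv2.imread(path, 0).

import numpy as np

def hstack_pad(img1, img2):
    # Pad the shorter image with black rows so both heights match, then join side by side.
    h = max(img1.shape[0], img2.shape[0])
    def pad(img):
        extra = h - img.shape[0]
        if extra == 0:
            return img
        return np.vstack([img, np.zeros((extra, img.shape[1]), dtype=img.dtype)])
    return np.hstack([pad(img1), pad(img2)])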
with open('details.csv') as csvfile: rows=iter(csv.reader(csvfile)).next() header=rows[1:] re=csv.DictReader(csvfile) for row in re: print row for x in header: print row[x] This gives out an error Keyerrror:'name' in the line print row[x]. Where am I going wrong? Is it possible to fetch the column names using Dictreader?", "output": "You can read the header by using the next() function which return the next row of the reader\u2019s iterable object as a list. then you can add the content of the file to a list. import csv with open(\"C:/path/to/.filecsv\", \"rb\") as f: reader = csv.reader(f) i = reader.next() rest = list(reader) Now i has the column's names as a list. print i >>>['id', 'name', 'age', 'sex'] Also note that reader.next() does not work in python 3. Instead use the the inbuilt next() to get the first line of the csv immediately after reading like so: import csv with open(\"C:/path/to/.filecsv\", \"rb\") as f: reader = csv.reader(f) i = next(reader) print(i) >>>['id', 'name', 'age', 'sex']"} +{"question_id": 61922334, "score": 104, "creation_date": 1590006490, "tags": ["python", "tensorflow", "protocol-buffers", "object-detection-api", "proto"], "instruction": "How to solve \"AttributeError: module 'google.protobuf.descriptor' has no attribute '_internal_create_key\"?\n\nI encountered it while executing from object_detection.utils import label_map_util in jupyter notebook. It is actually the tensorflow object detection tutorial notebook(it comes with the tensorflow object detection api) The complete error log: AttributeError Traceback (most recent call last) in 1 from object_detection.utils import ops as utils_ops ----> 2 from object_detection.utils import label_map_util 3 from object_detection.utils import visualization_utils as vis_util ~\\AppData\\Roaming\\Python\\Python37\\site-packages\\object_detection\\utils\\label_map_util.py in 25 import tensorflow as tf 26 from google.protobuf import text_format ---> 27 from object_detection.protos import string_int_label_map_pb2 28 29 ~\\AppData\\Roaming\\Python\\Python37\\site-packages\\object_detection\\protos\\string_int_label_map_pb2.py in 19 syntax='proto2', 20 serialized_options=None, ---> 21 create_key=_descriptor._internal_create_key, 22 serialized_pb=b'\\n2object_detection/protos/string_int_label_map.proto\\x12\\x17object_detection.protos\\\"\\xc0\\x01\\n\\x15StringIntLabelMapItem\\x12\\x0c\\n\\x04name\\x18\\x01 \\x01(\\t\\x12\\n\\n\\x02id\\x18\\x02 \\x01(\\x05\\x12\\x14\\n\\x0c\\x64isplay_name\\x18\\x03 \\x01(\\t\\x12M\\n\\tkeypoints\\x18\\x04 \\x03(\\x0b\\x32:.object_detection.protos.StringIntLabelMapItem.KeypointMap\\x1a(\\n\\x0bKeypointMap\\x12\\n\\n\\x02id\\x18\\x01 \\x01(\\x05\\x12\\r\\n\\x05label\\x18\\x02 \\x01(\\t\\\"Q\\n\\x11StringIntLabelMap\\x12<\\n\\x04item\\x18\\x01 \\x03(\\x0b\\x32..object_detection.protos.StringIntLabelMapItem' 23 ) AttributeError: module 'google.protobuf.descriptor' has no attribute '_internal_create_key'", "output": "The protoc version I got through pip show protobuf and protoc --version were different. The version in pip was a bit outdated. After I upgraded the pip version with pip install --upgrade protobuf the problem was solved."} +{"question_id": 6310867, "score": 104, "creation_date": 1307732219, "tags": ["python", "hash", "set"], "instruction": "Why aren't Python sets hashable?\n\nI stumbled across a blog post detailing how to implement a powerset function in Python. 
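A small supplement to the csv answer above: DictReader already exposes the header through its fieldnames attribute, so the column list can be taken from it directly. A hedged Python 3 sketch, assuming a details.csv shaped like the one in the question:

import csv

with open("details.csv", newline="") as f:
    reader = csv.DictReader(f)
    columns = reader.fieldnames        # ['id', 'name', 'age', 'sex'] for the sample file
    for row in reader:
        for col in columns:
            print(row[col])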
So I went about trying my own way of doing it, and discovered that Python apparently cannot have a set of sets, since set is not hashable. This is irksome, since the definition of a powerset is that it is a set of sets, and I wanted to implement it using actual set operations. >>> set([ set() ]) Traceback (most recent call last): File \"\", line 1, in TypeError: unhashable type: 'set' Is there a good reason Python sets are not hashable?", "output": "Generally, only immutable objects are hashable in Python. The immutable variant of set() -- frozenset() -- is hashable."} +{"question_id": 39924826, "score": 104, "creation_date": 1475869788, "tags": ["python", "jupyter-notebook"], "instruction": "Keyboard shortcut to clear cell output in Jupyter notebook\n\nDoes anyone know what is the keyboard shortcut to clear (not toggle) the cell output in Jupyter Notebook?", "output": "For older versions of Jupyter Notebook/Lab You can setup your own shortcut in the UI (for the latest master version): This menu can be found in Help > Keyboard Shortcuts in any open notebook. For new versions of Jupyter Notebook/Lab Find and modify existing shortcuts via Settings > Settings Editor > Keyboard Shortcuts OR Add new and custom shortcuts by opening the Advanced Settings Editor by going to Settings > Settings Editor > JSON Settings Editor. Note: JSON Settings Editor link is in the upper-right corner of the settings tab. OR Editing the JSON Settings file directly. On my Debian Bookworm installation I found the shortcut settings file for Jupyter Lab at the following path: ~/.jupyter/lab/user-settings/@jupyterlab/shortcuts-extension/shortcuts.jupyterlab-settings"} +{"question_id": 22642511, "score": 104, "creation_date": 1395769878, "tags": ["python", "matplotlib"], "instruction": "Change y range to start from 0 with matplotlib\n\nI am using matplotlib to plot data. Here's a code that does something similar: import matplotlib.pyplot as plt f, ax = plt.subplots(1) xdata = [1, 4, 8] ydata = [10, 20, 30] ax.plot(xdata, ydata) plt.show(f) This shows a line in a graph with the y axis that goes from 10 to 30. While I am satisfied with the x range, I would like to change the y range to start from 0 and adjust on the ymax to show everything. My current solution is to do: ax.set_ylim(0, max(ydata)) However I am wondering if there is a way to just say: autoscale but starts from 0.", "output": "The range must be set after the plot. import matplotlib.pyplot as plt f, ax = plt.subplots(1) xdata = [1, 4, 8] ydata = [10, 20, 30] ax.plot(xdata, ydata) ax.set_ylim(ymin=0) plt.show(f) If ymin is changed before plotting, this will result in a range of [0, 1]. Edit: the ymin argument has been replaced by bottom: ax.set_ylim(bottom=0) Documentation: https://matplotlib.org/stable/api/_as_gen/matplotlib.axes.Axes.set_ylim.html You can do the same on the x axis with left and right: ax.set_xlim(left=0) Documentation: https://matplotlib.org/stable/api/_as_gen/matplotlib.axes.Axes.set_xlim.html"} +{"question_id": 3783238, "score": 104, "creation_date": 1285284949, "tags": ["python", "database-connection"], "instruction": "Python Database connection Close\n\nUsing the code below leaves me with an open connection, how do I close? 
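Building on the frozenset answer above, a minimal sketch (not taken from the original answer) of a powerset built as a set of frozensets using itertools:

from itertools import chain, combinations

def powerset(iterable):
    # frozenset elements are hashable, so the result can itself be a real set
    items = list(iterable)
    all_subsets = chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))
    return {frozenset(subset) for subset in all_subsets}

print(powerset({1, 2, 3}))   # 8 frozensets, from frozenset() up to frozenset({1, 2, 3})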
import pyodbc conn = pyodbc.connect('DRIVER=MySQL ODBC 5.1 driver;SERVER=localhost;DATABASE=spt;UID=who;PWD=testest') csr = conn.cursor() csr.close() del csr", "output": "Connections have a close method as specified in PEP-249 (Python Database API Specification v2.0): import pyodbc conn = pyodbc.connect('DRIVER=MySQL ODBC 5.1 driver;SERVER=localhost;DATABASE=spt;UID=who;PWD=testest') csr = conn.cursor() csr.close() conn.close() #<--- Close the connection Since the pyodbc connection and cursor are both context managers, nowadays it would be more convenient (and preferable) to write this as: import pyodbc conn = pyodbc.connect('DRIVER=MySQL ODBC 5.1 driver;SERVER=localhost;DATABASE=spt;UID=who;PWD=testest') with conn: crs = conn.cursor() do_stuff # conn.commit() will automatically be called when Python leaves the outer `with` statement # Neither crs.close() nor conn.close() will be called upon leaving the `with` statement!! See https://github.com/mkleehammer/pyodbc/issues/43 for an explanation for why conn.close() is not called. Note that unlike the original code, this causes conn.commit() to be called. Use the outer with statement to control when you want commit to be called. Also note that regardless of whether or not you use the with statements, per the docs, Connections are automatically closed when they are deleted (typically when they go out of scope) so you should not normally need to call [conn.close()], but you can explicitly close the connection if you wish. and similarly for cursors (my emphasis): Cursors are closed automatically when they are deleted (typically when they go out of scope), so calling [csr.close()] is not usually necessary."} +{"question_id": 30132282, "score": 104, "creation_date": 1431116591, "tags": ["python", "datetime", "pandas"], "instruction": "datetime to string with series in pandas\n\nHow should I transform from datetime to string? My attempt: dates = p.to_datetime(p.Series(['20010101', '20010331']), format = '%Y%m%d') dates.str", "output": "There is no .str accessor for datetimes and you can't do .astype(str) either. Instead, use .dt.strftime: >>> series = pd.Series(['20010101', '20010331']) >>> dates = pd.to_datetime(series, format='%Y%m%d') >>> dates.dt.strftime('%Y-%m-%d') 0 2001-01-01 1 2001-03-31 dtype: object See the docs on customizing date string formats here: strftime() and strptime() Behavior. For old pandas versions <0.17.0, one can instead can call .apply with the Python standard library's datetime.strftime: >>> dates.apply(lambda x: x.strftime('%Y-%m-%d')) 0 2001-01-01 1 2001-03-31 dtype: object"} +{"question_id": 52670012, "score": 104, "creation_date": 1538757672, "tags": ["python", "opencv", "lbph-algorithm"], "instruction": "ConvergenceWarning: Liblinear failed to converge, increase the number of iterations\n\nRunning the code of linear binary pattern for Adrian. This program runs but gives the following warning: C:\\Python27\\lib\\site-packages\\sklearn\\svm\\base.py:922: ConvergenceWarning: Liblinear failed to converge, increase the number of iterations. \"the number of iterations.\", ConvergenceWarning I am running python2.7 with opencv3.7, what should I do?", "output": "Normally when an optimization algorithm does not converge, it is usually because the problem is not well-conditioned, perhaps due to a poor scaling of the decision variables. There are a few things you can try. Normalize your training data so that the problem hopefully becomes more well conditioned, which in turn can speed up convergence. 
One possibility is to scale your data to 0 mean, unit standard deviation using Scikit-Learn's StandardScaler for an example. Note that you have to apply the StandardScaler fitted on the training data to the test data. Also, if you have discrete features, make sure they are transformed properly so that scaling them makes sense. Related to 1), make sure the other arguments, such as the regularization weight C, are set appropriately. C has to be > 0. Typically one would try various values of C in a logarithmic scale (1e-5, 1e-4, 1e-3, ..., 1, 10, 100, ...) before fine-tuning it at finer granularity within a particular interval. These days, it probably makes more sense to tune parameters using, e.g., Bayesian Optimization with a package such as Scikit-Optimize. Set max_iter to a larger value. The default is 1000. This should be your last resort. If the optimization process does not converge within the first 1000 iterations, having it converge by setting a larger max_iter typically masks other problems such as those described in 1) and 2). It might even indicate that you have some inappropriate features or strong correlations in the features. Debug those first before taking this easy way out. Set dual = True if the number of features > the number of examples and vice versa. This solves the SVM optimization problem using the dual formulation. Thanks @Nino van Hooff for pointing this out, and @JamesKo for spotting my mistake. Use a different solver, e.g., the L-BFGS solver if you are using Logistic Regression. See @5ervant's answer. Note: One should not ignore this warning. This warning came about because solving the linear SVM is just solving a quadratic optimization problem. The solver is typically an iterative algorithm that keeps a running estimate of the solution (i.e., the weight and bias for the SVM). It stops running when the solution corresponds to an objective value that is optimal for this convex optimization problem, or when it hits the maximum number of iterations set. If the algorithm does not converge, then the current estimate of the SVM's parameters is not guaranteed to be any good, hence the predictions can also be complete garbage. Edit In addition, consider the comment by @Nino van Hooff and @5ervant to use the dual formulation of the SVM. This is especially important if the number of features you have, D, is more than the number of training examples N. This is what the dual formulation of the SVM is particularly designed for and helps with the conditioning of the optimization problem. Credit to @5ervant for noticing and pointing this out. Furthermore, @5ervant also pointed out the possibility of changing the solver, in particular the use of the L-BFGS solver. Credit to him (i.e., upvote his answer, not mine). I would like to provide a quick rough explanation for those who are interested (I am :)) why this matters in this case. Second-order methods, and in particular approximate second-order methods like the L-BFGS solver, will help with ill-conditioned problems because they approximate the Hessian at each iteration and use it to scale the gradient direction. This allows them to get a better convergence rate but possibly at a higher compute cost per iteration. That is, it takes fewer iterations to finish but each iteration will be slower than a typical first-order method like gradient-descent or its variants.
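A minimal runnable sketch pulling the scaling, C, dual and max_iter suggestions above into one pipeline; the make_classification call is only a placeholder for a real training set:

from sklearn.datasets import make_classification
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=200, n_features=20, random_state=0)  # placeholder data
clf = make_pipeline(
    StandardScaler(),                              # suggestion 1: zero mean / unit variance
    LinearSVC(C=1.0, dual=False, max_iter=10000),  # tune C; here features < examples, so dual=False
)
clf.fit(X, y)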
For e.g., a typical first-order method might update the solution at each iteration like x(k + 1) = x(k) - alpha(k) * gradient(f(x(k))) where alpha(k), the step size at iteration k, depends on the particular choice of algorithm or learning rate schedule. A second order method, for e.g., Newton, will have an update equation x(k + 1) = x(k) - alpha(k) * Hessian(x(k))^(-1) * gradient(f(x(k))) That is, it uses the information of the local curvature encoded in the Hessian to scale the gradient accordingly. If the problem is ill-conditioned, the gradient will be pointing in less than ideal directions and the inverse Hessian scaling will help correct this. In particular, L-BFGS mentioned in @5ervant's answer is a way to approximate the inverse of the Hessian as computing it can be an expensive operation. However, second-order methods might converge much faster (i.e., requires fewer iterations) than first-order methods like the usual gradient-descent based solvers, which as you guys know by now sometimes fail to even converge. This can compensate for the time spent at each iteration. In summary, if you have a well-conditioned problem, or if you can make it well-conditioned through other means such as using regularization and/or feature scaling and/or making sure you have more examples than features, you probably don't have to use a second-order method. But these days with many models optimizing non-convex problems (e.g., those in DL models), second order methods such as L-BFGS methods plays a different role there and there are evidence to suggest they can sometimes find better solutions compared to first-order methods. But that is another story."} +{"question_id": 15011367, "score": 104, "creation_date": 1361477678, "tags": ["python", "module"], "instruction": "Python Module Import: Single-line vs Multi-line\n\nWhat is the difference between these: from module import a, b, c, d from module import a from module import b from module import c from module import d It makes sense always to condense code and use the first example, but code samples do the second. Is it just preference?", "output": "There is no difference at all. They both function exactly the same. However, from a stylistic perspective, one might be more preferable than the other. And on that note, the PEP-8 for imports says that you should compress from module import name1, name2 onto a single line and leave import module1 on multiple lines: Yes: import os import sys No: import sys, os Ok: from subprocess import Popen, PIPE In response to @teewuane's comment: @inspectorG4dget What if you have to import several functions from one module and it ends up making that line longer than 80 char? I know that the 80 char thing is \"when it makes the code more readable\" but I am still wondering if there is a more tidy way to do this. And I don't want to do from foo import * even though I am basically importing everything. The issue here is that doing something like the following could exceed the 80 char limit: from module import func1, func2, func3, func4, func5 To this, I have two responses (I don't see PEP8 being overly clear about this): Break it up into two imports: from module import func1, func2, func3 from module import func4, func5 Doing this has the disadvantage that if module is removed from the codebase or otherwise refactored, then both import lines will need to be deleted. This could prove to be painful. 
Split the line: To mitigate the above concern, it may be wiser to do from module import func1, func2, func3, \\ func4, func5 This would result in an error if the second line is not deleted along with the first, while still maintaining the singular import statement."} +{"question_id": 2853531, "score": 104, "creation_date": 1274137542, "tags": ["python", "coding-style", "naming-conventions", "pep8"], "instruction": "How do you PEP 8-name a class whose name is an acronym?\n\nI try to adhere to the style guide for Python code (also known as PEP 8). Accordingly, the preferred way to name a class is using CamelCase: Almost without exception, class names use the CapWords convention. Classes for internal use have a leading underscore in addition. How can I be consistent with PEP 8 if my class name is formed by two acronyms (which in proper English should be capitalized). For instance, if my class name was 'NASA JPL', what would you name it?: class NASAJPL(): # 1 class NASA_JPL(): # 2 class NasaJpl(): # 3 I am using #1, but it looks weird; #3 looks weird too, and #2 seems to violate PEP 8.", "output": "PEP-8 does cover this (at least partially): Note: When using acronyms in CapWords, capitalize all the letters of the acronym. Thus HTTPServerError is better than HttpServerError. Which I would read to mean that NASAJPL() is the recommended name according to PEP-8. Personally I'd find NasaJpl() the easiest to scan since the upper case letters easily mark word boundaries and give the name a distinctive shape."} +{"question_id": 52547805, "score": 104, "creation_date": 1538104799, "tags": ["python", "pandas", "dictionary", "dataframe"], "instruction": "How to convert dataframe to dictionary in pandas WITHOUT index\n\nI have a dataframe df as follows: | name | coverage | |-------|----------| | Jason | 25.1 | I want to convert it to a dictionary. I used the following command in pandas : dict=df.to_dict() The output of dict gave me the following: {'coverage': {0: 25.1}, 'name': {0: 'Jason'}} I do not want the 0 in my output. I believe this is captured because of the column index in my dataframe df. What can I do to eliminate 0 in my output ( I do not want index to be captured.) expected output : {'coverage': 25.1, 'name': 'Jason'}", "output": "When I see your dataset with 2 columns I see a series and not a dataframe. Try this: d = df.set_index('name')['coverage'].to_dict() which will convert your dataframe to a series and output that. However, if your intent is to have more columns and not a common key you could store them in an array instead using 'records'. d = df.to_dict('r'). ` Runnable code: import pandas as pd df = pd.DataFrame({ 'name': ['Jason'], 'coverage': [25.1] }) print(df.to_dict()) print(df.set_index('name')['coverage'].to_dict()) print(df.to_dict('records')) Returns: {'name': {0: 'Jason'}, 'coverage': {0: 25.1}} {'Jason': 25.1} [{'name': 'Jason', 'coverage': 25.1}] And one more thing, try to avoid to use variable name dict as it is reserved. Updated 2013-11-01. Now use 'records' instead of 'r' thanks to comment."} +{"question_id": 2064202, "score": 104, "creation_date": 1263474379, "tags": ["python", "class", "oop", "member-variables"], "instruction": "Private members in Python\n\nHow can I make methods and data members private in Python? Or doesn't Python support private members?", "output": "9.6. Private Variables \u201cPrivate\u201d instance variables that cannot be accessed except from inside an object, don\u2019t exist in Python. 
However, there is a convention that is followed by most Python code: a name prefixed with an underscore (e.g. _spam) should be treated as a non-public part of the API (whether it is a function, a method or a data member). It should be considered an implementation detail and subject to change without notice. Since there is a valid use-case for class-private members (namely to avoid name clashes of names with names defined by subclasses), there is limited support for such a mechanism, called name mangling. Any identifier of the form __spam (at least two leading underscores, at most one trailing underscore) is textually replaced with _classname__spam, where classname is the current class name with leading underscore(s) stripped. This mangling is done without regard to the syntactic position of the identifier, as long as it occurs within the definition of a class. So, for example, class Test: def __private_symbol(self): pass def normal_symbol(self): pass print dir(Test) will output: ['_Test__private_symbol', '__doc__', '__module__', 'normal_symbol'] __private_symbol should be considered a private method, but it would still be accessible through _Test__private_symbol."} +{"question_id": 24495695, "score": 104, "creation_date": 1404149457, "tags": ["python", "pandas"], "instruction": "Pandas: Get unique MultiIndex level values by label\n\nSay you have this MultiIndex-ed DataFrame: df = pd.DataFrame({'country':['DE','DE','FR','FR'], 'biome':['Lake','Forest','Lake','Forest'], 'area':[10,20,30,40], 'count':[7,5,2,3]}) df = df.set_index(['country','biome']) Which looks like this: area count country biome DE Lake 10 7 Forest 20 5 FR Lake 30 2 Forest 40 3 I would like to retrieve the unique values per index level. This can be accomplished using >>> df.index.levels[0] ['DE', 'FR'] >>> df.index.levels[1] ['Lake', 'Forest'] What I would really like to do, is to retrieve these lists by addressing the levels by their name, i.e. 'country' and 'biome'. The shortest two ways I could find looks like this: >>> list(set(df.index.get_level_values('country'))) ['DE', 'FR'] >>> df.index.levels[df.index.names.index('country')] ['DE', 'FR'] But non of them are very elegant. Is there a shorter and/or more performant way?", "output": "Pandas 0.23.0 finally introduced a much cleaner solution to this problem: the level argument to Index.unique(): In [3]: df.index.unique(level='country') Out[3]: Index(['DE', 'FR'], dtype='object', name='country') This is now the recommended solution. It is far more efficient because it avoids creating a complete representation of the level values in memory, and re-scanning it."} +{"question_id": 7542957, "score": 104, "creation_date": 1316912424, "tags": ["python", "multithreading"], "instruction": "is python capable of running on multiple cores?\n\nQuestion: Because of python's use of \"GIL\" is python capable running its separate threads simultaneously? Info: After reading this I came away rather uncertain on whether or not python is capable of taking advantage of a multi-core processor. As well done as python is, it feels really weird to think that it would lack such a powerful ability. So feeling uncertain, I decided to ask here. If I write a program that is multi threaded, will it be capable of executing simultaneously on multiple cores?", "output": "The answer is \"Yes, But...\" But cPython cannot when you are using regular threads for concurrency. 
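A tiny demonstration of the "still accessible through _Test__private_symbol" point made above (name mangling is a convention, not enforcement):

class Test:
    def __private_symbol(self):
        return "reached the 'private' method"

t = Test()
print(t._Test__private_symbol())   # works: the name is only mangled, not hidden
# t.__private_symbol()             # would raise AttributeError outside the class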
You can either use something like multiprocessing, celery or mpi4py to split the parallel work into another process; Or you can use something like Jython or IronPython to use an alternative interpreter that doesn't have a GIL. A softer solution is to use libraries that don't run afoul of the GIL for heavy CPU tasks, for instance numpy can do the heavy lifting while not retaining the GIL, so other python threads can proceed. You can also use the ctypes library in this way. If you are not doing CPU bound work, you can ignore the GIL issue entirely (kind of) since python won't acquire the GIL while it's waiting for IO."} +{"question_id": 44209978, "score": 104, "creation_date": 1495832099, "tags": ["python", "reactjs", "flask", "create-react-app"], "instruction": "Serving a front end created with create-react-app with Flask\n\nI have a Flask back-end with API routes which are accessed by a React single page application created using create-react-app. When using the create-react-app dev server, my Flask back end works. I would like to serve the built (using npm run build) static React app from my Flask server. Building the React app leads to the following directory structure: - build - static - css - style.[crypto].css - style.[crypto].css.map - js - main.[crypto].js - main.[crypto].js.map - index.html - service-worker.js - [more meta files] By [crypto], I mean the randomly generated strings generated at build time. Having received the index.html file, the browser then makes the following requests: - GET /static/css/main.[crypto].css - GET /static/css/main.[crypto].css - GET /service-worker.js How should I serve these files? I came up with this: from flask import Blueprint, send_from_directory static = Blueprint('static', __name__) @static.route('/') def serve_static_index(): return send_from_directory('../client/build/', 'index.html') @static.route('/static/') # serve whatever the client requested in the static folder def serve_static(path): return send_from_directory('../client/build/static/', path) @static.route('/service-worker.js') def serve_worker(): return send_from_directory('../client/build/', 'service-worker.js') This way, the static assets are successfully served. On the other hand, I could incorporate this with the built-in Flask static utilities. But I do not understand how to configure this. Is my solution robust enough? Is there a way to use built-in Flask features to serve these assets? Is there a better way to use create-react-app?", "output": "import os from flask import Flask, send_from_directory app = Flask(__name__, static_folder='react_app/build') # Serve React App @app.route('/', defaults={'path': ''}) @app.route('/') def serve(path): if path != \"\" and os.path.exists(app.static_folder + '/' + path): return send_from_directory(app.static_folder, path) else: return send_from_directory(app.static_folder, 'index.html') if __name__ == '__main__': app.run(use_reloader=True, port=5000, threaded=True) Thats what I ended up with. So bascially catch all routes, test if the path is a file => send file => else send the index.html. That way you can reload the react app from any route you wish and it does not break."} +{"question_id": 51366452, "score": 104, "creation_date": 1531759089, "tags": ["python", "python-3.x", "pathlib"], "instruction": "Is there an idiomatic way to add an extension using Python's Pathlib?\n\nI'm using Python's Pathlib and I want to take something like p = Path('/path/to/foo') And then try a couple of different extensions. 
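To illustrate the multiprocessing route mentioned above for CPU-bound work, a hedged sketch (cpu_heavy is an arbitrary stand-in for real work):

from multiprocessing import Pool

def cpu_heavy(n):
    # Each call runs in its own process with its own GIL, so all cores can be busy.
    return sum(i * i for i in range(n))

if __name__ == "__main__":           # required on platforms that spawn processes (e.g. Windows)
    with Pool(processes=4) as pool:
        totals = pool.map(cpu_heavy, [10**6] * 8)
    print(totals[:2])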
I can do for ext in ['.txt', '.md', '.note'] filename = Path(str(p) + ext) but that feels a little awkward. Is there a better way to do this?", "output": "The with_suffix method will return a new path with a different extension, either changing an existing extension or adding a new one. Examples from the docs: >>> p = PureWindowsPath('c:/Downloads/pathlib.tar.gz') >>> p.with_suffix('.bz2') PureWindowsPath('c:/Downloads/pathlib.tar.bz2') >>> p = PureWindowsPath('README') >>> p.with_suffix('.txt') PureWindowsPath('README.txt') In your case, p.with_suffix(ext) would do the job. For cases where you need to add a suffix after any existing suffixes instead of removing existing suffixes, you can use p.with_suffix(p.suffix+ext). This is kind of clunky, though, and I don't know whether I would prefer it over Path(str(p)+ext)."} +{"question_id": 7334199, "score": 104, "creation_date": 1315399956, "tags": ["python", "bottle"], "instruction": "\"getaddrinfo failed\", what does that mean?\n\nFile \"C:\\Python27\\lib\\socket.py\", line 224, in meth return getattr(self._sock,name)(*args) gaierror: [Errno 11004] getaddrinfo failed Getting this error when launching the \"Hello World\" sample from the Bottle documentation", "output": "It most likely means the hostname can't be resolved. import socket socket.getaddrinfo('localhost', 8080) If it doesn't work there, it's not going to work in the Bottle example. You can try '127.0.0.1' instead of 'localhost' in case that's the problem."} +{"question_id": 7986567, "score": 104, "creation_date": 1320263837, "tags": ["python", "matplotlib"], "instruction": "Matplotlib: how to set the current figure?\n\nThis is hopefully a simple question but I can't figure it out at the moment. I want to use matplotlib to show 2 figures and then use them interactively. I create the figures with: import matplotlib import pylab as pl f1 = pl.figure() f2 = pl.figure() and can use the MATLAB-like pyplot interface to plot and draw in both figures. With current_figure = pl.gcf() I can determine the currently active figure for the pyplot interface, depending on which figure I clicked in. Now I want to draw something to the first figure with the pyplot interface but the current figure can be either of them. So is there something like pl.set_current_figure(figure) or any workaround? (I know that I can use the object oriented interface but for interactive stuff just using commands like plot(x, y) is much nicer)", "output": "You can simply set figure f1 as the new current figure with: pl.figure(f1.number) Another option is to give names (or numbers) to figures, which might help make the code easier to read: pl.figure(\"Share values\") # ... some plots ... pl.figure(\"Profits\") # ... some plots ... pl.figure(\"Share values\") # Selects the first figure again In fact, figure \"numbers\" can be strings, which are arguably more explicit that simple numbers. PS: The pyplot equivalent of pylab.figure() is matplotlib.pyplot.figure(). PPS: figure() now accepts a Figure object, so you should be able to activate figure f1 with figure(f1)."} +{"question_id": 1313812, "score": 104, "creation_date": 1250882899, "tags": ["python", "exception"], "instruction": "Raise exception vs. return None in functions?\n\nWhat's better practice in a user-defined function in Python: raise an exception or return None? For example, I have a function that finds the most recent file in a folder. def latestpdf(folder): # list the files and sort them try: latest = files[-1] except IndexError: # Folder is empty. 
return None # One possibility raise FileNotFoundError() # Alternative else: return somefunc(latest) # In my case, somefunc parses the filename Another option is leave the exception and handle it in the caller code, but I figure it's more clear to deal with a FileNotFoundError than an IndexError. Or is it bad form to re-raise an exception with a different name?", "output": "It's really a matter of semantics. What does foo = latestpdf(d) mean? Is it perfectly reasonable that there's no latest file? Then sure, just return None. Are you expecting to always find a latest file? Raise an exception. And yes, re-raising a more appropriate exception is fine. If this is just a general function that's supposed to apply to any directory, I'd do the former and return None. If the directory is, e.g., meant to be a specific data directory that contains an application's known set of files, I'd raise an exception."} +{"question_id": 14384739, "score": 104, "creation_date": 1358443780, "tags": ["python", "rest", "flask"], "instruction": "How can I add a background thread to flask?\n\nI'm busy writing a small game server to try out flask. The game exposes an API via REST to users. It's easy for users to perform actions and query data, however I'd like to service the \"game world\" outside the app.run() loop to update game entities, etc. Given that Flask is so cleanly implemented, I'd like to see if there's a Flask way to do this.", "output": "Your additional threads must be initiated from the same app that is called by the WSGI server. The example below creates a background timer-thread that executes every 5 seconds and manipulates data structures that are also available to Flask routed functions. import threading import atexit from flask import Flask POOL_TIME = 5 #Seconds # variables that are accessible from anywhere common_data_struct = {} # lock to control access to variable data_lock = threading.Lock() # timer handler your_timer = threading.Timer(0,lambda x: None,()) def create_app(): app = Flask(__name__) def interrupt(): global your_timer your_timer.cancel() def do_stuff(): global common_data_struct global your_timer with data_lock: pass # Do your stuff with common_data_struct Here # Set the next timeout to happen your_timer = threading.Timer(POOL_TIME, do_stuff, ()) your_timer.start() def do_stuff_start(): # Do initialisation stuff here global your_timer # Create your timer your_timer = threading.Timer(POOL_TIME, do_stuff, ()) your_timer.start() # Initiate do_stuff_start() # When you kill Flask (SIGTERM), cancels the timer atexit.register(interrupt) return app app = create_app() Call it from Gunicorn with something like this: gunicorn -b 0.0.0.0:5000 --log-config log.conf --pid=app.pid myfile:app Signal termination works best on OS's other than Windows. Although this creates a new timer after each timeout, the other timers should eventually be garbage-collected."} +{"question_id": 58298774, "score": 103, "creation_date": 1570604885, "tags": ["python", "docker", "kubernetes", "dockerfile", "minikube"], "instruction": "standard_init_linux.go:211: exec user process caused \"exec format error\"\n\nI am building the Dockerfile for python script which will run in minikube windows 10 system below is my Dockerfile Building the docker using the below command docker build -t python-helloworld . 
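Returning briefly to the raise-vs-None discussion above, a hedged sketch of the "a missing file is exceptional" style, with the caller choosing how to soften it (the folder name and helper are hypothetical):

from pathlib import Path

def latest_pdf(folder):
    pdfs = sorted(Path(folder).glob("*.pdf"))
    if not pdfs:
        raise FileNotFoundError(f"no PDF files in {folder!r}")
    return pdfs[-1]

try:
    newest = latest_pdf("reports")
except FileNotFoundError:
    newest = None    # the caller can still opt into the return-None behaviour explicitly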
and loading that in minikube docker demon docker save python-helloworld | (eval $(minikube docker-env) && docker load) Docker File FROM python:3.7-alpine #add user group and ass user to that group RUN addgroup -S appgroup && adduser -S appuser -G appgroup #creates work dir WORKDIR /app #copy python script to the container folder app COPY helloworld.py /app/helloworld.py #user is appuser USER appuser ENTRYPOINT [\"python\", \"/app/helloworld.py\"] pythoncronjob.yml file (cron job file) apiVersion: batch/v1beta1 kind: CronJob metadata: name: python-helloworld spec: schedule: \"*/1 * * * *\" jobTemplate: spec: backoffLimit: 5 template: spec: containers: - name: python-helloworld image: python-helloworld imagePullPolicy: IfNotPresent command: [/app/helloworld.py] restartPolicy: OnFailure Below is the command to run this Kubernetes job kubectl create -f pythoncronjob.yml But getting the below error job is not running scuessfully but when u ran the Dockerfile alone its work fine standard_init_linux.go:211: exec user process caused \"exec format error\"", "output": "I can see that you add the command command: [/app/helloworld.py] to yaml file. so you need to (in Dockerfile): RUN chmod +x /app/helloworld.py set shebang to your py file: #!/usr/bin/env python # whatever your defualt python to run the script or setup the command the same as you did in Dockerfile"} +{"question_id": 4545661, "score": 103, "creation_date": 1293535485, "tags": ["python", "unicode"], "instruction": "UnicodeDecodeError when redirecting to file\n\nI run this snippet twice, in the Ubuntu terminal (encoding set to utf-8), once with ./test.py and then with ./test.py >out.txt: uni = u\"\\u001A\\u0BC3\\u1451\\U0001D10C\" print uni Without redirection it prints garbage. With redirection I get a UnicodeDecodeError. Can someone explain why I get the error only in the second case, or even better give a detailed explanation of what's going on behind the curtain in both cases?", "output": "The whole key to such encoding problems is to understand that there are in principle two distinct concepts of \"string\": (1) string of characters, and (2) string/array of bytes. This distinction has been mostly ignored for a long time because of the historic ubiquity of encodings with no more than 256 characters (ASCII, Latin-1, Windows-1252, Mac OS Roman,\u2026): these encodings map a set of common characters to numbers between 0 and 255 (i.e. bytes); the relatively limited exchange of files before the advent of the web made this situation of incompatible encodings tolerable, as most programs could ignore the fact that there were multiple encodings as long as they produced text that remained on the same operating system: such programs would simply treat text as bytes (through the encoding used by the operating system). The correct, modern view properly separates these two string concepts, based on the following two points: Characters are mostly unrelated to computers: one can draw them on a chalk board, etc., like for instance \u0628\u0627\u064a\u062b\u0648\u0646, \u4e2d\u87d2 and \ud83d\udc0d. \"Characters\" for machines also include \"drawing instructions\" like for example spaces, carriage return, instructions to set the writing direction (for Arabic, etc.), accents, etc. A very large character list is included in the Unicode standard; it covers most of the known characters. 
On the other hand, computers do need to represent abstract characters in some way: for this, they use arrays of bytes (numbers between 0 and 255 included), because their memory comes in byte chunks. The necessary process that converts characters to bytes is called encoding. Thus, a computer requires an encoding in order to represent characters. Any text present on your computer is encoded (until it is displayed), whether it be sent to a terminal (which expects characters encoded in a specific way), or saved in a file. In order to be displayed or properly \"understood\" (by, say, the Python interpreter), streams of bytes are decoded into characters. A few encodings (UTF-8, UTF-16,\u2026) are defined by Unicode for its list of characters (Unicode thus defines both a list of characters and encodings for these characters\u2014there are still places where one sees the expression \"Unicode encoding\" as a way to refer to the ubiquitous UTF-8, but this is incorrect terminology, as Unicode provides multiple encodings). In summary, computers need to internally represent characters with bytes, and they do so through two operations: Encoding: characters \u2192 bytes Decoding: bytes \u2192 characters Some encodings cannot encode all characters (e.g., ASCII), while (some) Unicode encodings allow you to encode all Unicode characters. The encoding is also not necessarily unique, because some characters can be represented either directly or as a combination (e.g. of a base character and of accents). Note that the concept of newline adds a layer of complication, since it can be represented by different (control) characters that depend on the operating system (this is the reason for Python's universal newline file reading mode). Some more information on Unicode, characters and code points, if you are interested: Now, what I have called \"character\" above is what Unicode calls a \"user-perceived character\". A single user-perceived character can sometimes be represented in Unicode by combining character parts (base character, accents,\u2026) found at different indexes in the Unicode list, which are called \"code points\"\u2014these codes points can be combined together to form a \"grapheme cluster\". Unicode thus leads to a third concept of string, made of a sequence of Unicode code points, that sits between byte and character strings, and which is closer to the latter. I will call them \"Unicode strings\" (like in Python 2). While Python can print strings of (user-perceived) characters, Python non-byte strings are essentially sequences of Unicode code points, not of user-perceived characters. The code point values are the ones used in Python's \\u and \\U Unicode string syntax. They should not be confused with the encoding of a character (and do not have to bear any relationship with it: Unicode code points can be encoded in various ways). This has an important consequence: the length of a Python (Unicode) string is its number of code points, which is not always its number of user-perceived characters: thus s = \"\\u1100\\u1161\\u11a8\"; print(s, \"len\", len(s)) (Python 3) gives \uac01 len 3 despite s having a single user-perceived (Korean) character (because it is represented with 3 code points\u2014even if it does not have to, as print(\"\\uac01\") shows). However, in many practical circumstances, the length of a string is its number of user-perceived characters, because many characters are typically stored by Python as a single Unicode code point. 
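A short Python 3 illustration of the encode/decode round trip and the code-point counting described above (the characters are the same ones used earlier in the answer):

s = "中蟒"                          # a string of Unicode characters
b = s.encode("utf-8")              # encoding: characters -> bytes
print(b)                           # b'\xe4\xb8\xad\xe8\x9f\x92'
print(b.decode("utf-8"))           # decoding: bytes -> characters
print(len("\u1100\u1161\u11a8"))   # 3 code points for one user-perceived Korean character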
In Python 2, Unicode strings are called\u2026 \"Unicode strings\" (unicode type, literal form u\"\u2026\"), while byte arrays are \"strings\" (str type, where the array of bytes can for instance be constructed with string literals \"\u2026\"). In Python 3, Unicode strings are simply called \"strings\" (str type, literal form \"\u2026\"), while byte arrays are \"bytes\" (bytes type, literal form b\"\u2026\"). As a consequence, something like \"\ud83d\udc0d\"[0] gives a different result in Python 2 ('\\xf0', a byte) and Python 3 (\"\ud83d\udc0d\", the first and only character). With these few key points, you should be able to understand most encoding related questions! Normally, when you print u\"\u2026\" to a terminal, you should not get garbage: Python knows the encoding of your terminal. In fact, you can check what encoding the terminal expects: % python Python 2.7.6 (default, Nov 15 2013, 15:20:37) [GCC 4.2.1 Compatible Apple LLVM 5.0 (clang-500.2.79)] on darwin Type \"help\", \"copyright\", \"credits\" or \"license\" for more information. >>> import sys >>> print sys.stdout.encoding UTF-8 If your input characters can be encoded with the terminal's encoding, Python will do so and will send the corresponding bytes to your terminal without complaining. The terminal will then do its best to display the characters after decoding the input bytes (at worst the terminal font does not have some of the characters and will print some kind of blank instead). If your input characters cannot be encoded with the terminal's encoding, then it means that the terminal is not configured for displaying these characters. Python will complain (in Python with a UnicodeEncodeError since the character string cannot be encoded in a way that suits your terminal). The only possible solution is to use a terminal that can display the characters (either by configuring the terminal so that it accepts an encoding that can represent your characters, or by using a different terminal program). This is important when you distribute programs that can be used in different environments: messages that you print should be representable in the user's terminal. Sometimes it is thus best to stick to strings that only contain ASCII characters. However, when you redirect or pipe the output of your program, then it is generally not possible to know what the input encoding of the receiving program is, and the above code returns some default encoding: None (Python 2.7) or UTF-8 (Python 3): % python2.7 -c \"import sys; print sys.stdout.encoding\" | cat None % python3.4 -c \"import sys; print(sys.stdout.encoding)\" | cat UTF-8 The encoding of stdin, stdout and stderr can however be set through the PYTHONIOENCODING environment variable, if needed: % PYTHONIOENCODING=UTF-8 python2.7 -c \"import sys; print sys.stdout.encoding\" | cat UTF-8 If the printing to a terminal does not produce what you expect, you can check the UTF-8 encoding that you put manually in is correct; for instance, your first character (\\u001A) is not printable, if I'm not mistaken. At http://wiki.python.org/moin/PrintFails, you can find a solution like the following, for Python 2.x: import codecs import locale import sys # Wrap sys.stdout into a StreamWriter to allow writing unicode. 
sys.stdout = codecs.getwriter(locale.getpreferredencoding())(sys.stdout) uni = u\"\\u001A\\u0BC3\\u1451\\U0001D10C\" print uni For Python 3, you can check one of the questions asked previously on StackOverflow."} +{"question_id": 59882714, "score": 103, "creation_date": 1579795640, "tags": ["python", "pandas", "datetime", "date-range"], "instruction": "Python generating a list of dates between two dates\n\nI want to generate a list of dates between two dates and store them in a list in string format. This list is useful to compare with other dates I have. My code is given below: from datetime import date, timedelta sdate = date(2019,3,22) # start date edate = date(2019,4,9) # end date def dates_bwn_twodates(start_date, end_date): for n in range(int ((end_date - start_date).days)): yield start_date + timedelta(n) print(dates_bwn_twodates(sdate,edate)) My present output: My expected output: ['2019-03-22',.....,'2019-04-08'] Something wrong in my code.", "output": "You can use pandas.date_range() for this: import pandas pandas.date_range(sdate,edate-timedelta(days=1),freq='d') DatetimeIndex(['2019-03-22', '2019-03-23', '2019-03-24', '2019-03-25', '2019-03-26', '2019-03-27', '2019-03-28', '2019-03-29', '2019-03-30', '2019-03-31', '2019-04-01', '2019-04-02', '2019-04-03', '2019-04-04', '2019-04-05', '2019-04-06', '2019-04-07', '2019-04-08'], dtype='datetime64[ns]', freq='D')"} +{"question_id": 23231605, "score": 103, "creation_date": 1398206517, "tags": ["python", "pandas"], "instruction": "Convert Pandas dataframe to csv string\n\nHere is an example of what I am trying to get: I have: import pandas as pd df = pd.DataFrame({'A' : [0, 1], 'B' : [1, 6]}) My goal is: ',A,B\\n0,0,1\\n1,1,6\\n' I can achieve this with lazy and horrible: df.to_csv('temp.csv') # create unnecessary file body = open('temp.csv').read() Also to_string() methods looks very promising; however, the best I can come up with is this: body = df.to_string()[1:].replace(' ', ',') + '\\n' This does not create an unnecessary file, but seems sloppy and perhaps not very reliable. Am I missing a simpler solution?", "output": "In [10]: df = pd.DataFrame({'A' : [0, 1], 'B' : [1, 6]}) In [11]: import io In [12]: s = io.StringIO() In [13]: df.to_csv(s) In [14]: s.getvalue() Out[14]: ',A,B\\n0,0,1\\n1,1,6\\n'"} +{"question_id": 15193927, "score": 103, "creation_date": 1362366075, "tags": ["python", "math", "operators", "native"], "instruction": "What do these operators mean (** , ^ , %, //)?\n\nOther than the standard +, -, *and / operators; but what does these mean (** , ^ , %, //) ? >>> 9+float(2) # addition 11.0 >>> 9-float(2) # subtraction 7.0 >>> 9*float(2) # multiplication 18.0 >>> 9/float(2) # division 4.5 >>> >>> 9**float(2) # This looks like a square, (i.e. power 2) 81.0 >>> 9**float(3) # So ** is equivalent to `math.pow(x,p)` ? 729.0 How about the ^ operator? >>> 9^int(2) # What is `^` in `x^u` , it only allows `int` for `u` 11 >>> 9^int(3) 10 >>> 9^int(4) 13 >>> 9^int(5) 12 >>> 9^int(6) 15 >>> 9^int(7) 14 >>> 9^int(8) 1 >>> 9^int(9) 0 >>> 9^int(10) 3 >>> 9^int(11) 2 >>> 9^int(12) 5 % in x%m returns a normal remainder modulus, but only if m < x, why is that so? What does % do? >>> 9%float(2) 1.0 >>> 9%float(3) 0.0 >>> 9%float(4) 1.0 >>> 9%float(5) 4.0 >>> 9%float(6) 3.0 >>> 9%float(7) 2.0 >>> 9%float(8) 1.0 >>> 9%float(9) 0.0 >>> 9%float(10) 9.0 >>> 9%float(11) 9.0 >>> 9%float(12) 9.0 How about the // operator? what does it do? 
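One more option for the DataFrame-to-CSV-string question above: in recent pandas versions, calling to_csv() without a path argument returns the CSV text directly, which avoids the StringIO buffer entirely:

import pandas as pd

df = pd.DataFrame({'A': [0, 1], 'B': [1, 6]})
body = df.to_csv()      # no path given, so the CSV is returned as a string
print(repr(body))       # ',A,B\n0,0,1\n1,1,6\n'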
>>> 9//float(2) 4.0 >>> 9//float(3) 3.0 >>> 9//float(4) 2.0 >>> 9//float(5) 1.0 >>> 9//float(6) 1.0 >>> 9//float(7) 1.0 >>> 9//float(8) 1.0 >>> 9//float(9) 1.0 >>> 9//float(1) 9.0 >>> 9//float(0.5) 18.0", "output": "**: exponentiation ^: exclusive-or (bitwise) %: modulus //: divide with integral result (discard remainder)"} +{"question_id": 7981840, "score": 103, "creation_date": 1320243540, "tags": ["python", "xml", "lxml"], "instruction": "how to remove an element in lxml\n\nI need to completely remove elements, based on the contents of an attribute, using python's lxml. Example: import lxml.etree as et xml=\"\"\" apple pear starfruit mango peach \"\"\" tree=et.fromstring(xml) for bad in tree.xpath(\"//fruit[@state=\\'rotten\\']\"): #remove this element from the tree print et.tostring(tree, pretty_print=True) I would like this to print: pear starfruit peach Is there a way to do this without storing a temporary variable and printing to it manually, as: newxml=\"\\n\" for elt in tree.xpath('//fruit[@state=\\'fresh\\']'): newxml+=et.tostring(elt) newxml+=\"\"", "output": "Use the remove method of an xmlElement : tree=et.fromstring(xml) for bad in tree.xpath(\"//fruit[@state=\\'rotten\\']\"): bad.getparent().remove(bad) # here I grab the parent of the element to call the remove directly on it print et.tostring(tree, pretty_print=True, xml_declaration=True) If I had to compare with the @Acorn version, mine will work even if the elements to remove are not directly under the root node of your xml."} +{"question_id": 40183108, "score": 103, "creation_date": 1477073938, "tags": ["python", "pip"], "instruction": "pip says: THESE PACKAGES DO NOT MATCH THE HASHES FROM THE REQUIREMENTS FILE. If you have updated the package versions, please update the hashes\n\nI am using pip to install all my python packages but get error as shown in the trace below. What is the problem and how can I solve it? usr@comp:~$ pip install flask Collecting flask Using cached Flask-0.11.1-py2.py3-none-any.whl Collecting itsdangerous>=0.21 (from flask) Using cached itsdangerous-0.24.tar.gz Collecting click>=2.0 (from flask) Using cached click-6.6.tar.gz Collecting Werkzeug>=0.7 (from flask) Using cached Werkzeug-0.11.11-py2.py3-none-any.whl Requirement already satisfied (use --upgrade to upgrade): Jinja2>=2.4 in /usr/lib/python2.7/dist-packages (from flask) Requirement already satisfied (use --upgrade to upgrade): MarkupSafe in /usr/lib/python2.7/dist-packages (from Jinja2>=2.4->flask) THESE PACKAGES DO NOT MATCH THE HASHES FROM THE REQUIREMENTS FILE. If you have updated the package versions, please update the hashes. Otherwise, examine the package contents carefully; someone may have tampered with them. Werkzeug>=0.7 from https://pypi.python.org/packages/a9/5e/41f791a3f380ec50f2c4c3ef1399d9ffce6b4fe9a7f305222f014cf4fe83/Werkzeug-0.11.11-py2.py3-none-any.whl#md5=c63a21eedce9504d223ed89358c4bdc9 (from flask): Expected md5 c63a21eedce9504d223ed89358c4bdc9 Got 13a168aafcc43354b6c79ef44bb0dc71", "output": "There is a similar problem (Why does pip fail with bad md5 hash for package?) from 2013 the solution that I tried that worked for me is this: sudo pip install --no-cache-dir flask given by attolee"} +{"question_id": 36725843, "score": 103, "creation_date": 1461088175, "tags": ["python", "pip", "openstack"], "instruction": "installing python packages without internet and using source code as .tar.gz and .whl\n\nwe are trying to install couple of python packages without internet. 
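A compact runnable recap of the four operators explained above, using the same operands as the question:

print(9 ** 2)        # 81     exponentiation
print(9 ^ 2)         # 11     bitwise XOR: 0b1001 ^ 0b0010 == 0b1011
print(9 % 2)         # 1      modulus (remainder)
print(9 // 2)        # 4      floor division (quotient, remainder discarded)
print(divmod(9, 2))  # (4, 1) quotient and remainder in one call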
For ex : python-keystoneclient For that we have the packages downloaded from https://pypi.python.org/pypi/python-keystoneclient/1.7.1 and kept it in server. However, while installing tar.gz and .whl packages , the installation is looking for dependent packages to be installed first. Since there is no internet connection in the server, it is getting failed. For ex : For python-keystoneclient we have the following dependent packages stevedore (>=1.5.0) six (>=1.9.0) requests (>=2.5.2) PrettyTable (<0.8,>=0.7) oslo.utils (>=2.0.0) oslo.serialization (>=1.4.0) oslo.i18n (>=1.5.0) oslo.config (>=2.3.0) netaddr (!=0.7.16,>=0.7.12) debtcollector (>=0.3.0) iso8601 (>=0.1.9) Babel (>=1.3) argparse pbr (<2.0,>=1.6) When i try to install packages one by one from the above list, once again its looking for nested dependency . Is there any way we could list ALL the dependent packages for installing a python module like python-keystoneclient.", "output": "This is how I handle this case: On the machine where I have access to Internet: mkdir keystone-deps pip download python-keystoneclient -d \"/home/aviuser/keystone-deps\" tar cvfz keystone-deps.tgz keystone-deps Then move the tar file to the destination machine that does not have Internet access and perform the following: tar xvfz keystone-deps.tgz cd keystone-deps pip install python_keystoneclient-2.3.1-py2.py3-none-any.whl -f ./ --no-index You may need to add --no-deps to the command as follows: pip install python_keystoneclient-2.3.1-py2.py3-none-any.whl -f ./ --no-index --no-deps"} +{"question_id": 18018033, "score": 103, "creation_date": 1375449904, "tags": ["python", "multithreading", "wxpython"], "instruction": "How to stop a looping thread in Python?\n\nWhat's the proper way to tell a looping thread to stop looping? I have a fairly simple program that pings a specified host in a separate threading.Thread class. In this class it sleeps 60 seconds, the runs again until the application quits. I'd like to implement a 'Stop' button in my wx.Frame to ask the looping thread to stop. It doesn't need to end the thread right away, it can just stop looping once it wakes up. Here is my threading class (note: I haven't implemented looping yet, but it would likely fall under the run method in PingAssets) class PingAssets(threading.Thread): def __init__(self, threadNum, asset, window): threading.Thread.__init__(self) self.threadNum = threadNum self.window = window self.asset = asset def run(self): config = controller.getConfig() fmt = config['timefmt'] start_time = datetime.now().strftime(fmt) try: if onlinecheck.check_status(self.asset): status = \"online\" else: status = \"offline\" except socket.gaierror: status = \"an invalid asset tag.\" msg =(\"{}: {} is {}. \\n\".format(start_time, self.asset, status)) wx.CallAfter(self.window.Logger, msg) And in my wxPyhton Frame I have this function called from a Start button: def CheckAsset(self, asset): self.count += 1 thread = PingAssets(self.count, asset, self) self.threads.append(thread) thread.start()", "output": "Threaded stoppable function Instead of subclassing threading.Thread, one can modify the function to allow stopping by a flag. We need an object, accessible to running function, to which we set the flag to stop running. We can use threading.currentThread() object. 
import threading import time def doit(arg): t = threading.currentThread() while getattr(t, \"do_run\", True): print (\"working on %s\" % arg) time.sleep(1) print(\"Stopping as you wish.\") def main(): t = threading.Thread(target=doit, args=(\"task\",)) t.start() time.sleep(5) t.do_run = False if __name__ == \"__main__\": main() The trick is, that the running thread can have attached additional properties. The solution builds on assumptions: the thread has a property \"do_run\" with default value True driving parent process can assign to started thread the property \"do_run\" to False. Running the code, we get following output: $ python stopthread.py working on task working on task working on task working on task working on task Stopping as you wish. Pill to kill - using Event Other alternative is to use threading.Event as function argument. It is by default False, but external process can \"set it\" (to True) and function can learn about it using wait(timeout) function. We can wait with zero timeout, but we can also use it as the sleeping timer (used below). def doit(stop_event, arg): while not stop_event.wait(1): print (\"working on %s\" % arg) print(\"Stopping as you wish.\") def main(): pill2kill = threading.Event() t = threading.Thread(target=doit, args=(pill2kill, \"task\")) t.start() time.sleep(5) pill2kill.set() t.join() Edit: I tried this in Python 3.6. stop_event.wait() blocks the event (and so the while loop) until release. It does not return a boolean value. Using stop_event.is_set() works instead. Stopping multiple threads with one pill Advantage of pill to kill is better seen, if we have to stop multiple threads at once, as one pill will work for all. The doit will not change at all, only the main handles the threads a bit differently. def main(): pill2kill = threading.Event() tasks = [\"task ONE\", \"task TWO\", \"task THREE\"] def thread_gen(pill2kill, tasks): for task in tasks: t = threading.Thread(target=doit, args=(pill2kill, task)) yield t threads = list(thread_gen(pill2kill, tasks)) for thread in threads: thread.start() time.sleep(5) pill2kill.set() for thread in threads: thread.join()"} +{"question_id": 15321431, "score": 103, "creation_date": 1362912847, "tags": ["javascript", "python", "variables", "jinja2"], "instruction": "How to pass a list from Python, by Jinja2 to JavaScript\n\nLet's say I have a Python variable: list_of_items = ['1','2','3','4','5'] and I pass it to Jinja by rendering HTML, and I also have a function in JavaScript called somefunction(variable). I am trying to pass each item of list_of_items. I tried something like this: {% for item in list_of_items %} {{item}}
{% endfor %} Is it possible to pass a list from Python to JavaScript or should I pass each item from list one by one in a loop? How can I do this?", "output": "This answer is unsafe, please refer to Mark Amery's answer below for the right way to do this. To pass some context data to javascript code, you have to serialize it in a way it will be \"understood\" by javascript (namely JSON). You also need to mark it as safe using the safe Jinja filter, to prevent your data from being htmlescaped. You can achieve this by doing something like that: The view import json @app.route('/') def my_view(): data = [1, 'foo'] return render_template('index.html', data=json.dumps(data)) The template Edit - exact answer So, to achieve exactly what you want (loop over a list of items, and pass them to a javascript function), you'd need to serialize every item in your list separately. Your code would then look like this: The view import json @app.route('/') def my_view(): data = [1, \"foo\"] return render_template('index.html', data=map(json.dumps, data)) The template {% for item in data %} {{ item }} {% endfor %} Edit 2 In my example, I use Flask, I don't know what framework you're using, but you got the idea, you just have to make it fit the framework you use. Edit 3 (Security warning) NEVER EVER DO THIS WITH USER-SUPPLIED DATA, ONLY DO THIS WITH TRUSTED DATA! Otherwise, you would expose your application to XSS vulnerabilities!"} +{"question_id": 10507230, "score": 103, "creation_date": 1336514692, "tags": ["python"], "instruction": "Insert line at middle of file with Python?\n\nIs there a way to do this? Say I have a file that's a list of names that goes like this: Alfred Bill Donald How could I insert the third name, \"Charlie\", at line x (in this case 3), and automatically send all others down one line? I've seen other questions like this, but they didn't get helpful answers. Can it be done, preferably with either a method or a loop?", "output": "This is a way of doing the trick. with open(\"path_to_file\", \"r\") as f: contents = f.readlines() contents.insert(index, value) with open(\"path_to_file\", \"w\") as f: contents = \"\".join(contents) f.write(contents) index and value are the line and value of your choice, lines starting from 0."} +{"question_id": 6205442, "score": 103, "creation_date": 1306950217, "tags": ["python", "datetime"], "instruction": "How to find datetime 10 mins after current time?\n\nI want to find out the datetime 10 mins after current time. Let's say we have from datetime import datetime now = datetime.now() new_now = datetime.strptime(now, '%a, %d %b %Y %H:%M:%S %Z') I want to find this now and new_now 10 minutes later. How can I do that?", "output": "This is a duplicate of this question. You basically just need to add a timedelta of 10 minutes to get the time you want. from datetime import datetime, timedelta now = datetime.now() now_plus_10 = now + timedelta(minutes = 10)"} +{"question_id": 14049983, "score": 103, "creation_date": 1356587410, "tags": ["python", "xpath", "selenium"], "instruction": "Selenium Webdriver finding an element in a sub-element\n\nI am trying to search for an element in a sub-element with Selenium (Version 2.28.0), but selenium does not seem to limit its search to the sub-element. Am I doing this wrong or is there a way to use element.find to search a sub-element? For an example I created a simple test webpage with this code:

<html>
<body>
<div title='div1'>
<h1>My First Heading</h1>
<p class='test'>My first paragraph.</p>
</div>
<div title='div2'>
<h1>My Second Heading</h1>
<p class='test'>My second paragraph.</p>
</div>
<div title='div3'>
<h1>My Third Heading</h1>
<p class='test'>My third paragraph.</p>
</div>
</body>
</html>

My python (Version 2.6) code looks like this: from selenium import webdriver driver = webdriver.Firefox() # Open the test page with this instance of Firefox # element2 gets the second division as a web element element2 = driver.find_element_by_xpath(\"//div[@title='div2']\") # Search second division for a paragraph with a class of 'test' and print the content print element2.find_element_by_xpath(\"//p[@class='test']\").text # expected output: \"My second paragraph.\" # actual output: \"My first paragraph.\" If I run: print element2.get_attribute('innerHTML') It returns the html from the second division. So selenium is not limiting its search to element2. I would like to be able to find a sub-element of element2. This post suggests my code should work Selenium WebDriver access a sub element but his problem was caused by a time-out issue. Can anyone help me understand what is happening here?", "output": "If you start an XPath expression with //, it begins searching from the root of document. To search relative to a particular element, you should prepend the expression with . instead: element2 = driver.find_element_by_xpath(\"//div[@title='div2']\") element2.find_element_by_xpath(\".//p[@class='test']\").text"} +{"question_id": 42220458, "score": 103, "creation_date": 1487058243, "tags": ["python", "multiprocessing", "joblib"], "instruction": "What does the delayed() function do (when used with joblib in Python)\n\nI've read through the documentation, but I don't understand what is meant by: The delayed function is a simple trick to be able to create a tuple (function, args, kwargs) with a function-call syntax. I'm using it to iterate over the list I want to operate on (allImages) as follows: def joblib_loop(): Parallel(n_jobs=8)(delayed(getHog)(i) for i in allImages) This returns my HOG features, like I want (and with the speed gain using all my 8 cores), but I'm just not sure what it is actually doing. My Python knowledge is alright at best, and it's very possible that I'm missing something basic. Any pointers in the right direction would be most appreciated", "output": "Perhaps things become clearer if we look at what would happen if instead we simply wrote Parallel(n_jobs=8)(getHog(i) for i in allImages) which, in this context, could be expressed more naturally as: Create a Parallel instance with n_jobs=8 create a generator for the list [getHog(i) for i in allImages] pass that generator to the Parallel instance What's the problem? By the time the list gets passed to the Parallel object, all getHog(i) calls have already returned - so there's nothing left to execute in Parallel! All the work was already done in the main thread, sequentially. What we actually want is to tell Python what functions we want to call with what arguments, without actually calling them - in other words, we want to delay the execution. This is what delayed conveniently allows us to do, with clear syntax. If we want to tell Python that we'd like to call foo(2, g=3) sometime later, we can simply write delayed(foo)(2, g=3). Returned is the tuple (foo, [2], {g: 3}), containing: a reference to the function we want to call, e.g.foo all arguments (short \"args\") without a keyword, e.g.t 2 all keyword arguments (short \"kwargs\"), e.g. 
g=3 So, by writing Parallel(n_jobs=8)(delayed(getHog)(i) for i in allImages), instead of the above sequence, now the following happens: A Parallel instance with n_jobs=8 gets created The list [delayed(getHog)(i) for i in allImages] gets created, evaluating to [(getHog, [img1], {}), (getHog, [img2], {}), ... ] That list is passed to the Parallel instance The Parallel instance creates 8 threads and distributes the tuples from the list to them Finally, each of those threads starts executing the tuples, i.e., they call the first element with the second and the third elements unpacked as arguments tup[0](*tup[1], **tup[2]), turning the tuple back into the call we actually intended to do, getHog(img2)."} +{"question_id": 27019079, "score": 103, "creation_date": 1416406552, "tags": ["python", "matplotlib", "seaborn", "legend"], "instruction": "Move seaborn plot legend to a different position\n\nI'm using factorplot(kind=\"bar\") with seaborn. The plot is fine except the legend is misplaced: too much to the right, text goes out of the plot's shaded area. How do I make seaborn place the legend somewhere else, such as in top-left instead of middle-right?", "output": "Building on @user308827's answer: you can use legend=False in factorplot and specify the legend through matplotlib: import seaborn as sns import matplotlib.pyplot as plt sns.set(style=\"whitegrid\") titanic = sns.load_dataset(\"titanic\") g = sns.factorplot(\"class\", \"survived\", \"sex\", data=titanic, kind=\"bar\", size=6, palette=\"muted\", legend=False) g.despine(left=True) plt.legend(loc='upper left') g.set_ylabels(\"survival probability\") plt acts on the current axes. To get axes from a FacetGrid use fig. g.fig.get_axes()[0].legend(loc='lower left')"} +{"question_id": 10814452, "score": 103, "creation_date": 1338373613, "tags": ["python", "scope", "global"], "instruction": "How can I access global variable inside class in Python\n\nI have this: g_c = 0 class TestClass(): global g_c def run(self): for i in range(10): g_c = 1 print(g_c) t = TestClass() t.run() print(g_c) how can I actually modify my global variable g_c?", "output": "By declaring it global inside the function that accesses it: g_c = 0 class TestClass(): def run(self): global g_c for i in range(10): g_c = 1 print(g_c) The Python documentation says this, about the global statement: The global statement is a declaration which holds for the entire current code block."} +{"question_id": 19009932, "score": 103, "creation_date": 1380125271, "tags": ["python", "python-3.x", "python-3.3"], "instruction": "Import arbitrary python source file. (Python 3.3+)\n\nHow can I import an arbitrary python source file (whose filename could contain any characters, and does not always ends with .py) in Python 3.3+? I used imp.load_module as follows: >>> import imp >>> path = '/tmp/a-b.txt' >>> with open(path, 'U') as f: ... mod = imp.load_module('a_b', f, path, ('.py', 'U', imp.PY_SOURCE)) ... >>> mod It still works in Python 3.3, but according to imp.load_module documentation, it is deprecated: Deprecated since version 3.3: Unneeded as loaders should be used to load modules and find_module() is deprecated. and imp module documentation recommends to use importlib: Note New programs should use importlib rather than this module. What is the proper way to load an arbitrary python source file in Python 3.3+ without using the deprecated imp.load_module function?", "output": "Found a solution from importlib test code. 
Using importlib.machinery.SourceFileLoader: >>> import importlib.machinery >>> loader = importlib.machinery.SourceFileLoader('a_b', '/tmp/a-b.txt') >>> mod = loader.load_module() >>> mod NOTE: only works in Python 3.3+. UPDATE Loader.load_module is deprecated since Python 3.4. Use Loader.exec_module instead: >>> import types >>> import importlib.machinery >>> loader = importlib.machinery.SourceFileLoader('a_b', '/tmp/a-b.txt') >>> mod = types.ModuleType(loader.name) >>> loader.exec_module(mod) >>> mod >>> import importlib.machinery >>> import importlib.util >>> loader = importlib.machinery.SourceFileLoader('a_b', '/tmp/a-b.txt') >>> spec = importlib.util.spec_from_loader(loader.name, loader) >>> mod = importlib.util.module_from_spec(spec) >>> loader.exec_module(mod) >>> mod "} +{"question_id": 50563546, "score": 103, "creation_date": 1527501007, "tags": ["python", "python-typing", "python-dataclasses"], "instruction": "Validating detailed types in Python dataclasses\n\nPython 3.7 was released a while ago, and I wanted to test some of the fancy new dataclass+typing features. Getting hints to work right is easy enough, with both native types and those from the typing module: >>> import dataclasses >>> import typing as ty >>> ... @dataclasses.dataclass ... class Structure: ... a_str: str ... a_str_list: ty.List[str] ... >>> my_struct = Structure(a_str='test', a_str_list=['t', 'e', 's', 't']) >>> my_struct.a_str_list[0]. # IDE suggests all the string methods :) But one other thing that I wanted to try was forcing the type hints as conditions during runtime, i.e. it should not be possible for a dataclass with incorrect types to exist. It can be implemented nicely with __post_init__: >>> @dataclasses.dataclass ... class Structure: ... a_str: str ... a_str_list: ty.List[str] ... ... def validate(self): ... ret = True ... for field_name, field_def in self.__dataclass_fields__.items(): ... actual_type = type(getattr(self, field_name)) ... if actual_type != field_def.type: ... print(f\"\\t{field_name}: '{actual_type}' instead of '{field_def.type}'\") ... ret = False ... return ret ... ... def __post_init__(self): ... if not self.validate(): ... raise ValueError('Wrong types') This kind of validate function works for native types and custom classes, but not those specified by the typing module: >>> my_struct = Structure(a_str='test', a_str_list=['t', 'e', 's', 't']) Traceback (most recent call last): a_str_list: '' instead of 'typing.List[str]' ValueError: Wrong types Is there a better approach to validate an untyped list with a typing-typed one? Preferably one that doesn't include checking the types of all elements in any list, dict, tuple, or set that is a dataclass' attribute. Revisiting this question after a couple of years, I've now moved to use pydantic in cases where I want to validate classes that I'd normally just define a dataclass for. I'll leave my mark with the currently accepted answer though, since it correctly answers the original question and has outstanding educational value.", "output": "Instead of checking for type equality, you should use isinstance. But you cannot use a parametrized generic type (typing.List[int]) to do so, you must use the \"generic\" version (typing.List). So you will be able to check for the container type but not the contained types. Parametrized generic types define an __origin__ attribute that you can use for that. Contrary to Python 3.6, in Python 3.7 most type hints have a useful __origin__ attribute. 
Compare: # Python 3.6 >>> import typing >>> typing.List.__origin__ >>> typing.List[int].__origin__ typing.List and # Python 3.7 >>> import typing >>> typing.List.__origin__ >>> typing.List[int].__origin__ Python 3.8 introduce even better support with the typing.get_origin() introspection function: # Python 3.8 >>> import typing >>> typing.get_origin(typing.List) >>> typing.get_origin(typing.List[int]) Notable exceptions being typing.Any, typing.Union and typing.ClassVar\u2026 Well, anything that is a typing._SpecialForm does not define __origin__. Fortunately: >>> isinstance(typing.Union, typing._SpecialForm) True >>> isinstance(typing.Union[int, str], typing._SpecialForm) False >>> typing.get_origin(typing.Union[int, str]) typing.Union But parametrized types define an __args__ attribute that store their parameters as a tuple; Python 3.8 introduce the typing.get_args() function to retrieve them: # Python 3.7 >>> typing.Union[int, str].__args__ (, ) # Python 3.8 >>> typing.get_args(typing.Union[int, str]) (, ) So we can improve type checking a bit: for field_name, field_def in self.__dataclass_fields__.items(): if isinstance(field_def.type, typing._SpecialForm): # No check for typing.Any, typing.Union, typing.ClassVar (without parameters) continue try: actual_type = field_def.type.__origin__ except AttributeError: # In case of non-typing types (such as , for instance) actual_type = field_def.type # In Python 3.8 one would replace the try/except with # actual_type = typing.get_origin(field_def.type) or field_def.type if isinstance(actual_type, typing._SpecialForm): # case of typing.Union[\u2026] or typing.ClassVar[\u2026] actual_type = field_def.type.__args__ actual_value = getattr(self, field_name) if not isinstance(actual_value, actual_type): print(f\"\\t{field_name}: '{type(actual_value)}' instead of '{field_def.type}'\") ret = False This is not perfect as it won't account for typing.ClassVar[typing.Union[int, str]] or typing.Optional[typing.List[int]] for instance, but it should get things started. Next is the way to apply this check. 
Instead of using __post_init__, I would go the decorator route: this could be used on anything with type hints, not only dataclasses: import inspect import typing from contextlib import suppress from functools import wraps def enforce_types(callable): spec = inspect.getfullargspec(callable) def check_types(*args, **kwargs): parameters = dict(zip(spec.args, args)) parameters.update(kwargs) for name, value in parameters.items(): with suppress(KeyError): # Assume un-annotated parameters can be any type type_hint = spec.annotations[name] if isinstance(type_hint, typing._SpecialForm): # No check for typing.Any, typing.Union, typing.ClassVar (without parameters) continue try: actual_type = type_hint.__origin__ except AttributeError: # In case of non-typing types (such as , for instance) actual_type = type_hint # In Python 3.8 one would replace the try/except with # actual_type = typing.get_origin(type_hint) or type_hint if isinstance(actual_type, typing._SpecialForm): # case of typing.Union[\u2026] or typing.ClassVar[\u2026] actual_type = type_hint.__args__ if not isinstance(value, actual_type): raise TypeError('Unexpected type for \\'{}\\' (expected {} but found {})'.format(name, type_hint, type(value))) def decorate(func): @wraps(func) def wrapper(*args, **kwargs): check_types(*args, **kwargs) return func(*args, **kwargs) return wrapper if inspect.isclass(callable): callable.__init__ = decorate(callable.__init__) return callable return decorate(callable) Usage being: @enforce_types @dataclasses.dataclass class Point: x: float y: float @enforce_types def foo(bar: typing.Union[int, str]): pass Appart from validating some type hints as suggested in the previous section, this approach still have some drawbacks: type hints using strings (class Foo: def __init__(self: 'Foo'): pass) are not taken into account by inspect.getfullargspec: you may want to use typing.get_type_hints and inspect.signature instead; a default value which is not the appropriate type is not validated: @enforce_type def foo(bar: int = None): pass foo() does not raise any TypeError. You may want to use inspect.Signature.bind in conjuction with inspect.BoundArguments.apply_defaults if you want to account for that (and thus forcing you to define def foo(bar: typing.Optional[int] = None)); variable number of arguments can't be validated as you would have to define something like def foo(*args: typing.Sequence, **kwargs: typing.Mapping) and, as said at the beginning, we can only validate containers and not contained objects. Update After this answer got some popularity and a library heavily inspired by it got released, the need to lift the shortcomings mentioned above is becoming a reality. So I played a bit more with the typing module and will propose a few findings and a new approach here. For starter, typing is doing a great job in finding when an argument is optional: >>> def foo(a: int, b: str, c: typing.List[str] = None): ... pass ... >>> typing.get_type_hints(foo) {'a': , 'b': , 'c': typing.Union[typing.List[str], NoneType]} This is pretty neat and definitely an improvement over inspect.getfullargspec, so better use that instead as it can also properly handle strings as type hints. But typing.get_type_hints will bail out for other kind of default values: >>> def foo(a: int, b: str, c: typing.List[str] = 3): ... pass ... >>> typing.get_type_hints(foo) {'a': , 'b': , 'c': typing.List[str]} So you may still need extra strict checking, even though such cases feels very fishy. 
Next is the case of typing hints used as arguments for typing._SpecialForm, such as typing.Optional[typing.List[str]] or typing.Final[typing.Union[typing.Sequence, typing.Mapping]]. Since the __args__ of these typing._SpecialForms is always a tuple, it is possible to recursively find the __origin__ of the hints contained in that tuple. Combined with the above checks, we will then need to filter any typing._SpecialForm left. Proposed improvements: import inspect import typing from functools import wraps def _find_type_origin(type_hint): if isinstance(type_hint, typing._SpecialForm): # case of typing.Any, typing.ClassVar, typing.Final, typing.Literal, # typing.NoReturn, typing.Optional, or typing.Union without parameters return actual_type = typing.get_origin(type_hint) or type_hint # requires Python 3.8 if isinstance(actual_type, typing._SpecialForm): # case of typing.Union[\u2026] or typing.ClassVar[\u2026] or \u2026 for origins in map(_find_type_origin, typing.get_args(type_hint)): yield from origins else: yield actual_type def _check_types(parameters, hints): for name, value in parameters.items(): type_hint = hints.get(name, typing.Any) actual_types = tuple(_find_type_origin(type_hint)) if actual_types and not isinstance(value, actual_types): raise TypeError( f\"Expected type '{type_hint}' for argument '{name}'\" f\" but received type '{type(value)}' instead\" ) def enforce_types(callable): def decorate(func): hints = typing.get_type_hints(func) signature = inspect.signature(func) @wraps(func) def wrapper(*args, **kwargs): parameters = dict(zip(signature.parameters, args)) parameters.update(kwargs) _check_types(parameters, hints) return func(*args, **kwargs) return wrapper if inspect.isclass(callable): callable.__init__ = decorate(callable.__init__) return callable return decorate(callable) def enforce_strict_types(callable): def decorate(func): hints = typing.get_type_hints(func) signature = inspect.signature(func) @wraps(func) def wrapper(*args, **kwargs): bound = signature.bind(*args, **kwargs) bound.apply_defaults() parameters = dict(zip(signature.parameters, bound.args)) parameters.update(bound.kwargs) _check_types(parameters, hints) return func(*args, **kwargs) return wrapper if inspect.isclass(callable): callable.__init__ = decorate(callable.__init__) return callable return decorate(callable) Thanks to @Aran-Fey that helped me improve this answer."} +{"question_id": 57837609, "score": 103, "creation_date": 1567891907, "tags": ["python", "python-typing"], "instruction": "python typing signature (typing.Callable) for function with kwargs\n\nI heavily use python typing support from python 3. Recently I was trying to pass a function as an argument and I do not find any help for using kwargs in typing.Callable signature. Please check the code below and the comments. import typing # some function with a type signature def fn1_as_arg_with_kwargs(a: int, b: float) -> float: return a + b # some function with a type signature def fn2_as_arg_with_kwargs(a: int, b: float) -> float: return a * b # function that get callables as arg # this works with typing def function_executor( a: int, b: float, fn: typing.Callable[[int, float], float]): return fn(a, b) # But what if I want to name my kwargs # (something like below which does not work) # ... this will help me more complex scenarios # ... 
or am I expecting a lot from python3 ;) def function_executor( a: int, b: float, fn: typing.Callable[[\"a\": int, \"b\": float], float]): return fn(a=a, b=b)", "output": "You are probably looking for Callback protocols. In short, when you want to express a callable with a complex signature, what you'll want to do is to create a custom Protocol that defines a __call__ method with the precise signature you want. For example, in your case: from typing import Protocol # Or, if you want to support Python 3.7 and below, install the typing_extensions # module via pip and do the below: from typing_extensions import Protocol class MyCallable(Protocol): def __call__(self, a: int, b: float) -> float: ... def good(a: int, b: float) -> float: ... def bad(x: int, y: float) -> float: ... def function_executor(a: int, b: float, fn: MyCallable) -> float: return fn(a=a, b=b) function_executor(1, 2.3, good) # Ok! function_executor(1, 2.3, bad) # Errors If you try type-checking this program using mypy, you'll get the following (admittedly cryptic) error on the last line: Argument 3 to \"function_executor\" has incompatible type \"Callable[[int, float], float]\"; expected \"MyCallable\" (Callback protocols are somewhat new, so hopefully the quality of the error messages will improve over time.)"} +{"question_id": 46308374, "score": 103, "creation_date": 1505849280, "tags": ["python", "validation", "keras", "training-data", "keras-2"], "instruction": "What is validation data used for in a Keras Sequential model?\n\nMy question is simple, what is the validation data passed to model.fit in a Sequential model used for? And, does it affect how the model is trained (normally a validation set is used, for example, to choose hyper-parameters in a model, but I think this does not happen here)? I am talking about the validation set that can be passed like this: # Create model model = Sequential() # Add layers model.add(...) # Train model (use 10% of training set as validation set) history = model.fit(X_train, Y_train, validation_split=0.1) # Train model (use validation data as validation set) history = model.fit(X_train, Y_train, validation_data=(X_test, Y_test)) I investigated a bit, and I saw that keras.models.Sequential.fit calls keras.models.training.fit, which creates variables like val_accand val_loss (which can be accessed from Callbacks). keras.models.training.fit also calls keras.models.training._fit_loop, which adds the validation data to the callbacks.validation_data, and also calls keras.models.training._test_loop, which will loop the validation data in batches on the self.test_function of the model. The result of this function is used to fill the values of the logs, which are the values accessible from the callbacks. After seeing all this, I feel that the validation set passed to model.fit is not used to validate anything during training, and its only use is to get feedback on how the trained model will perform in every epoch for a completely independent set. Therefore, it would be okey to use the same validation and test set, right? Could anyone confirm if the validation set in model.fit has any other goal besides being read from the callbacks?", "output": "If you want to build a solid model you have to follow that specific protocol of splitting your data into three sets: One for training, one for validation and one for final evaluation, which is the test set. 
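For example, a minimal sketch of that protocol (assuming scikit-learn is available for the splitting and that X, y and an already compiled model are defined elsewhere; the role of each set is explained below):
from sklearn.model_selection import train_test_split
# X, y and a compiled Keras model are assumed to exist already
# hold out the final test set first ...
X_rest, X_test, y_rest, y_test = train_test_split(X, y, test_size=0.2)
# ... then carve a validation set out of what is left
X_train, X_val, y_train, y_val = train_test_split(X_rest, y_rest, test_size=0.25)
# the validation set only provides val_loss / val_acc feedback during training
history = model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=10)
# the test set is evaluated exactly once, at the very end
test_scores = model.evaluate(X_test, y_test)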
The idea is that you train on your training data and tune your model with the results of metrics (accuracy, loss etc) that you get from your validation set. Your model doesn't \"see\" your validation set and isn't in any way trained on it, but you as the architect and master of the hyperparameters tune the model according to this data. Therefore it indirectly influences your model because it directly influences your design decisions. You nudge your model to work well with the validation data and that can possibly bring in a tilt. Exactly that is the reason you only evaluate your model's final score on data that neither your model nor you yourself has used \u2013 and that is the third chunk of data, your test set. Only this procedure makes sure you get an unaffected view of your models quality and ability to generalize what is has learned on totally unseen data."} +{"question_id": 41130255, "score": 103, "creation_date": 1481661949, "tags": ["python", "pandas", "dataframe", "indexing"], "instruction": "Accessing Pandas column using squared brackets vs using a dot (like an attribute)\n\nIn both the bellow cases: import pandas d = {'col1': 2, 'col2': 2.5} df = pandas.DataFrame(data=d, index=[0]) print(df['col2']) print(df.col2) Both methods can be used to index on a column and yield the same result, so is there any difference between them?", "output": "The \"dot notation\", i.e. df.col2 is the attribute access that's exposed as a convenience. You may access an index on a Series, column on a DataFrame, and an item on a Panel directly as an attribute: df['col2'] does the same: it returns a pd.Series of the column. A few caveats about attribute access: you cannot add a column (df.new_col = x won't work, worse: it will silently actually create a new attribute rather than a column - think monkey-patching here) it won't work if you have spaces in the column name or if the column name is an integer."} +{"question_id": 17134653, "score": 103, "creation_date": 1371395260, "tags": ["python"], "instruction": "Difference between class and instance methods\n\nI was reading PEP 8 (style guide), and I noticed that it suggested to use self as the first argument in an instance method, but cls as the first argument in a class method. I've used and written a few classes, but I've never encountered a class method (well, a method which passes cls as a parameter). What are some examples?", "output": "Instance methods When creating an instance method, the first parameter is always self. You can name it anything you want, but the meaning will always be the same, and you should use self since it's the naming convention. self is (usually) passed hiddenly when calling an instance method; it represents the instance calling the method. Here's an example of a class called Inst that has an instance method called introduce(): class Inst: def __init__(self, name): self.name = name def introduce(self): print(\"Hello, I am %s, and my name is \" %(self, self.name)) Now to call this method, we first need to create an instance of our class. Once we have an instance, we can call introduce() on it, and the instance will automatically be passed as self: myinst = Inst(\"Test Instance\") otherinst = Inst(\"An other instance\") myinst.introduce() # outputs: Hello, I am , and my name is Test Instance otherinst.introduce() # outputs: Hello, I am , and my name is An other instance As you see, we're not passing the parameter self. It gets hiddenly passed with the period operator. 
We're calling Inst class's instance method introduce, with the parameter of myinst or otherinst. This means that we can call Inst.introduce(myinst) and get the exact same result. Class methods The idea of a class method is very similar to an instance method, only difference being that instead of passing the instance hiddenly as a first parameter, we're now passing the class itself as a first parameter. class Cls: @classmethod def introduce(cls): print(\"Hello, I am %s!\" %cls) Since we're passing only a class to the method, no instance is involved. This means that we don't need an instance at all, and we call the class method as if it was a static function: Cls.introduce() # same as Cls.introduce(Cls) # outputs: Hello, I am Notice that again Cls is passed hiddenly, so we could also say Cls.introduce(Inst) and get output \"Hello, I am . This is particularly useful when we're inheriting a class from Cls: class SubCls(Cls): pass SubCls.introduce() # outputs: Hello, I am "} +{"question_id": 1673071, "score": 103, "creation_date": 1257332438, "tags": ["python", "list-comprehension"], "instruction": "What do backticks mean to the Python interpreter? Example: `num`\n\nI'm playing around with list comprehensions and I came across this little snippet on another site: return ''.join([`num` for num in xrange(loop_count)]) I spent a few minutes trying to replicate the function (by typing) before realising the `num` bit was breaking it. What does enclosing a statement in those characters do? From what I can see it is the equivalent of str(num). But when I timed it: return ''.join([str(num) for num in xrange(10000000)]) It takes 4.09 seconds whereas: return ''.join([`num` for num in xrange(10000000)]) takes 2.43 seconds. Both give identical results, but one is a lot slower. What is going on here? Oddly... repr() gives slightly slower results than `num`. 2.99 seconds vs 2.43 seconds. I am using Python 2.6 (haven't tried 3.0 yet).", "output": "Backticks are a deprecated alias for repr(). Don't use them any more; the syntax was removed in Python 3.0. Using backticks seems to be faster than using repr(num) or num.__repr__() in version 2.x. I guess it's because additional dictionary lookup is required in the global namespace (for repr), or in the object's namespace (for __repr__), respectively. Using the dis module proves my assumption: def f1(a): return repr(a) def f2(a): return a.__repr__() def f3(a): return `a` Disassembling shows: >>> import dis >>> dis.dis(f1) 3 0 LOAD_GLOBAL 0 (repr) 3 LOAD_FAST 0 (a) 6 CALL_FUNCTION 1 9 RETURN_VALUE >>> dis.dis(f2) 6 0 LOAD_FAST 0 (a) 3 LOAD_ATTR 0 (__repr__) 6 CALL_FUNCTION 0 9 RETURN_VALUE >>> dis.dis(f3) 9 0 LOAD_FAST 0 (a) 3 UNARY_CONVERT 4 RETURN_VALUE f1 involves a global lookup for repr, f2 an attribute lookup for __repr__, whereas the backtick operator is implemented in a separate opcode. Since there is no overhead for dictionary lookup (LOAD_GLOBAL/LOAD_ATTR) nor for function calls (CALL_FUNCTION), backticks are faster. I guess that the Python folks decided that having a separate low-level operation for repr() is not worth it, and having both repr() and backticks violates the principle \"There should be one-- and preferably only one --obvious way to do it\" so the feature was removed in Python 3.0."} +{"question_id": 25336726, "score": 103, "creation_date": 1408160567, "tags": ["python", "iterator"], "instruction": "Why can't I iterate twice over the same iterator? 
How can I \"reset\" the iterator or reuse the data?\n\nConsider the code: def test(data): for row in data: print(\"first loop\") for row in data: print(\"second loop\") When data is an iterator, for example a list iterator or a generator expression*, this does not work: >>> test(iter([1, 2])) first loop first loop >>> test((_ for _ in [1, 2])) first loop first loop This prints first loop a few times, since data is non-empty. However, it does not print second loop. Why does iterating over data work the first time, but not the second time? How can I make it work a second time? Aside from for loops, the same problem appears to occur with any kind of iteration: list/set/dict comprehensions, passing the iterator to list(), sum() or reduce(), etc. On the other hand, if data is another kind of iterable, such as a list or a range (which are both sequences), both loops run as expected: >>> test([1, 2]) first loop first loop second loop second loop >>> test(range(2)) first loop first loop second loop second loop * More examples: file objects generators created from an explicit generator function filter, map, and zip objects (in 3.x) enumerate objects csv.readers various iterators defined in the itertools standard library For general theory and terminology explanation, see What are iterator, iterable, and iteration?. To detect whether the input is an iterator or a \"reusable\" iterable, see Ensure that an argument can be iterated twice.", "output": "An iterator can only be consumed once. For example: data = [1, 2, 3] it = iter(data) next(it) # => 1 next(it) # => 2 next(it) # => 3 next(it) # => StopIteration When the iterator is supplied to a for loop instead, that last StopIteration will cause it to exit the first time. Trying to use the same iterator in another for loop will cause StopIteration again immediately, because the iterator has already been consumed. A simple way to work around this is to save all the elements to a list, which can be traversed as many times as needed. For example: data = list(it) If the iterator would iterate over many elements at roughly the same time, however, it's a better idea to create independent iterators using tee(): import itertools it1, it2 = itertools.tee(data, 2) # create as many as needed Now each one can be iterated over separately: next(it1) # => 1 next(it1) # => 2 next(it2) # => 1 next(it2) # => 2 next(it1) # => 3 next(it2) # => 3"} +{"question_id": 4103430, "score": 103, "creation_date": 1288928565, "tags": ["python", "pickle", "shelve", "object-serialization"], "instruction": "What is the difference between pickle and shelve?\n\nWhen is it appropriate to use pickle, and when is it appropriate to use shelve? That is to say, what do they do differently from each other? From my research, I understood that pickle can turn every Python object into stream of bytes which can be persisted into a file. Then why do we need shelve as well? Isn't pickle faster?", "output": "pickle is for serializing some object (or objects) as a single bytestream in a file. shelve builds on top of pickle and implements a serialization dictionary where objects are pickled, but associated with a key (some string), so you can load your shelved data file and access your pickled objects via keys. This could be more convenient were you to be serializing many objects. Here is an example of usage between the two. (should work in latest versions of Python 2.7 and Python 3.x). 
pickle Example import pickle integers = [1, 2, 3, 4, 5] with open('pickle-example.p', 'wb') as pfile: pickle.dump(integers, pfile) This will dump the integers list to a binary file called pickle-example.p. Now try reading the pickled file back. import pickle with open('pickle-example.p', 'rb') as pfile: integers = pickle.load(pfile) print(integers) The above should output [1, 2, 3, 4, 5]. shelve Example import shelve integers = [1, 2, 3, 4, 5] # If you're using Python 2.7, import contextlib and use # the line: # with contextlib.closing(shelve.open('shelf-example', 'c')) as shelf: with shelve.open('shelf-example', 'c') as shelf: shelf['ints'] = integers Notice how you add objects to the shelf via dictionary-like access. Read the object back in with code like the following: import shelve # If you're using Python 2.7, import contextlib and use # the line: # with contextlib.closing(shelve.open('shelf-example', 'r')) as shelf: with shelve.open('shelf-example', 'r') as shelf: for key in shelf.keys(): print(repr(key), repr(shelf[key])) The output will be 'ints', [1, 2, 3, 4, 5]."} +{"question_id": 72294299, "score": 103, "creation_date": 1652898582, "tags": ["python", "python-poetry"], "instruction": "Multiple top-level packages discovered in a flat-layout\n\nI am trying to install a library from the source that makes use of Poetry, but I get this error error: Multiple top-level packages discovered in a flat-layout: ['tulips', 'fixtures']. To avoid accidental inclusion of unwanted files or directories, setuptools will not proceed with this build. If you are trying to create a single distribution with multiple packages on purpose, you should not rely on automatic discovery. Instead, consider the following options: 1. set up custom discovery (`find` directive with `include` or `exclude`) 2. use a `src-layout` 3. explicitly set `py_modules` or `packages` with a list of names To find more information, look for \"package discovery\" on setuptools docs What do I need to do to fix it?", "output": "Based on this comment on a GitHub issue, adding the following lines to your pyproject.toml might solve your problem: [tool.setuptools] py-modules = [] (For my case, the other workaround provided in that comment, i.e. adding py_modules=[] as a keyword argument to the setup() function in setup.py worked) See Package Discovery and Namespace Packages for additional information."} +{"question_id": 25609153, "score": 103, "creation_date": 1409585782, "tags": ["python", "paramiko"], "instruction": "Paramiko : Error reading SSH protocol banner\n\nRecently, I made a code that connect to work station with different usernames (thanks to a private key) based on paramiko. I never had any issues with it, but today, I have that : SSHException: Error reading SSH protocol banner This is strange because it happens randomly on any connections. Is there any way to fix it ?", "output": "It depends on what you mean by \"fix\". The underlying cause, as pointed out in the comments, are congestion/lack of resources. In that way, it's similar to some HTTP codes. That's the normal cause, it could be that the ssh server is returning the wrong header data. 429 Too Many Requests, tells the client to use rate limiting, or sometimes APIs will return 503 in a similar way, if you exceed your quota. The idea being, to try again later, with a delay. You can attempt to handle this exception in your code, wait a little while, and try again. You can also edit your transport.py file, to set the banner timeout to something higher. 
If you have an application where it doesn't matter how quickly the server responds, you could set this to 60 seconds. EDIT: Editing your transport file is no longer needed as per Greg's answer. When you call connect, you can pass a banner_timeout (which solves this issue), a timeout (for the underlying TCP), and an auth_timeout (waiting for authentication response). Greg's answer has a code example with banner_timeout that you can directly lift."} +{"question_id": 16307307, "score": 102, "creation_date": 1367349220, "tags": ["python", "django", "django-models", "django-admin", "admin"], "instruction": "Django Admin Show Image from Imagefield\n\nWhile I can show an uploaded image in list_display, is it possible to do this on the model page (the page you get for changing an element)? A quick sample model would be: class Model1(models.Model): image = models.ImageField(upload_to=directory) The default admin shows the URL of the uploaded image but not the image itself. Thanks!", "output": "In addition to the answer of Michael C. O'Connor Note that since Django v.1.9 (updated - tested and worked all the way to Django 3.0) image_tag.allow_tags = True is deprecated and you should use format_html(), format_html_join(), or mark_safe() instead So if you are storing your uploaded files in your public /directory folder, your code should look like this: from django.utils.html import mark_safe Class Model1(models.Model): image = models.ImageField(upload_to=directory) def image_tag(self): return mark_safe('' % (self.image)) image_tag.short_description = 'Image' and in your admin.py add: fields = ['image_tag'] readonly_fields = ['image_tag']"} +{"question_id": 25958708, "score": 102, "creation_date": 1411298825, "tags": ["python", "django"], "instruction": "Django 1.7 - \"No migrations to apply\" when run migrate after makemigrations\n\nI use Django1.7 with Mezzanine. I create simple profile (according to Mezzanine documentation) stored in separate app \"profiles\": class RoadmapProfile(models.Model): user = models.OneToOneField(\"auth.User\") fullname = models.CharField(max_length=100, verbose_name=\"Full name\") Creation of migrations returns: Migrations for 'profiles': 0001_initial.py: - Create model RoadmapProfile When I run \"migrate profiles\": Operations to perform: Apply all migrations: profiles Running migrations: No migrations to apply. The issue is, when I try to open any page related to mezzanine.accounts (for example update account), it crashes with: OperationalError at /accounts/update/ no such column: profiles_roadmapprofile.fullname What I have done wrong?", "output": "Sounds like your initial migration was faked because the table already existed (probably with an outdated schema): https://docs.djangoproject.com/en/1.8/topics/migrations/#adding-migrations-to-apps \"This will make a new initial migration for your app. Now, when you run migrate, Django will detect that you have an initial migration and that the tables it wants to create already exist, and will mark the migration as already applied.\" Otherwise you would get an no-such-table error :) Did you clean up the applied-migrations table? That's also a common cause for non-applied migrations."} +{"question_id": 8693024, "score": 102, "creation_date": 1325419866, "tags": ["python", "string", "path", "prefix"], "instruction": "How to remove a path prefix in python?\n\nI wanted to know what is the pythonic function for this : I want to remove everything before the wa path. 
p = path.split('/') counter = 0 while True: if p[counter] == 'wa': break counter += 1 path = '/'+'/'.join(p[counter:]) For instance, I want '/book/html/wa/foo/bar/' to become '/wa/foo/bar/'.", "output": "A better answer would be to use os.path.relpath: http://docs.python.org/3/library/os.path.html#os.path.relpath >>> import os >>> full_path = '/book/html/wa/foo/bar/' >>> relative_path = '/book/html' >>> print(os.path.relpath(full_path, relative_path)) 'wa/foo/bar'"} +{"question_id": 19584029, "score": 102, "creation_date": 1382687095, "tags": ["python", "pandas", "histogram"], "instruction": "Plotting histograms from grouped data in a pandas DataFrame\n\nHow do I plot a block of histograms from a group of data in a dataframe? For example, given: from pandas import DataFrame import numpy as np x = ['A']*300 + ['B']*400 + ['C']*300 y = np.random.randn(1000) df = DataFrame({'Letter': x, 'N': y}) I tried: df.groupby('Letter').hist() ...which failed with the error message: TypeError: cannot concatenate 'str' and 'float' objects", "output": "I'm on a roll, just found an even simpler way to do it using the by keyword in the hist method: df.hist('N', by='Letter') That's a very handy little shortcut for quickly scanning your grouped data! For future visitors, the product of this call is the following chart: In answer to questions below, here's an example of specific tailoring of the histogram plots: # import libraries import pandas as pd import numpy as np # Create test dataframe x = ['A']*300 + ['B']*400 + ['C']*300 y = np.random.randn(1000) z = np.random.randn(1000) df = pd.DataFrame({'Letter':x, 'N1':y, 'N2':z}) # Plot histograms axes = df.hist(['N1','N2'], by='Letter',bins=10, layout=(2,2), legend=True, yrot=90,sharex=True,sharey=True, log=True, figsize=(6,6)) for ax in axes.flatten(): ax.set_xlabel('N') ax.set_ylabel('Count') ax.set_ylim(bottom=1,top=100)"} +{"question_id": 9168340, "score": 102, "creation_date": 1328567618, "tags": ["python", "function", "dictionary", "python-inspect"], "instruction": "Using a dictionary to select function to execute\n\nI am trying to use functional programming to create a dictionary containing a key and a function to execute: myDict={} myItems=(\"P1\",\"P2\",\"P3\",....\"Pn\") def myMain(key): def ExecP1(): pass def ExecP2(): pass def ExecP3(): pass ... def ExecPn(): pass Now, I have seen a code used to find the defined functions in a module, and I need to do something like this: for myitem in myItems: myDict[myitem] = ??? #to dynamically find the corresponding function So my question is, How do I make a list of all the Exec functions and then assign them to the desired item using the a dictionary? so at the end I will have myDict[\"P1\"]() #this will call ExecP1() My real problem is that I have tons of those items and I making a library that will handle them so the final user only needs to call myMain(\"P1\") I think using the inspect module, but I am not so sure how to do it. My reason to avoid: def ExecPn(): pass myDict[\"Pn\"]=ExecPn is that I have to protect code as I am using it to provide a scripting feature within my application.", "output": "Not proud of it, but: def myMain(key): def ExecP1(): pass def ExecP2(): pass def ExecP3(): pass def ExecPn(): pass locals()['Exec' + key]() I do however recommend that you put those in a module/class whatever, this is truly horrible. 
If you are willing to add a decorator for each function, you can define a decorator which adds each function to a dictionary: def myMain(key): tasks = {} def task(task_fn): tasks[task_fn.__name__] = task_fn @task def ExecP1(): print(1) @task def ExecP2(): print(2) @task def ExecP3(): print(3) @task def ExecPn(): print(4) tasks['Exec' + key]() Another option is to place all the functions under a class (or in a different module) and use getattr: def myMain(key): class Tasks: def ExecP1(): print(1) def ExecP2(): print(2) def ExecP3(): print(3) def ExecPn(): print(4) task = getattr(Tasks, 'Exec' + key) task()"} +{"question_id": 16266720, "score": 102, "creation_date": 1367176522, "tags": ["python", "matrix", "numpy", "scipy"], "instruction": "Find out if a matrix is positive definite with NumPy\n\nHow can I find out if a matrix is positive definite? My matrix is a NumPy matrix. I was expecting to find any related method in the NumPy library, but I didn't have any success.", "output": "You can also check if all the eigenvalues of matrix are positive. If so, the matrix is positive definite: import numpy as np def is_pos_def(x): return np.all(np.linalg.eigvals(x) > 0)"} +{"question_id": 8595973, "score": 102, "creation_date": 1324499469, "tags": ["python"], "instruction": "Truncate to three decimals in Python\n\nHow do I get 1324343032.324? As you can see below, the following do not work: >>1324343032.324325235 * 1000 / 1000 1324343032.3243253 >>int(1324343032.324325235 * 1000) / 1000.0 1324343032.3239999 >>round(int(1324343032.324325235 * 1000) / 1000.0,3) 1324343032.3239999 >>str(1324343032.3239999) '1324343032.32'", "output": "You can use an additional float() around it if you want to preserve it as a float. val = '%.3f'%(1324343032.324325235)"} +{"question_id": 9050355, "score": 102, "creation_date": 1327803254, "tags": ["python", "syntax", "string-literals", "quoting"], "instruction": "Using quotation marks inside quotation marks\n\nWhen I want to do a print command in Python and I need to use quotation marks, I don't know how to do it without closing the string. For instance: print \" \"a word that needs quotation marks\" \" But when I try to do what I did above, I end up closing the string and I can't put the word I need between quotation marks. How can I do that?", "output": "You could do this in one of three ways: Use single and double quotes together: print('\"A word that needs quotation marks\"') \"A word that needs quotation marks\" Escape the double quotes within the string: print(\"\\\"A word that needs quotation marks\\\"\") \"A word that needs quotation marks\" Use triple-quoted strings: print(\"\"\" \"A word that needs quotation marks\" \"\"\") \"A word that needs quotation marks\""} +{"question_id": 56658553, "score": 102, "creation_date": 1560903359, "tags": ["python", "visual-studio-code", "modulenotfounderror"], "instruction": "Why do I get a \"ModuleNotFoundError\" in VS Code despite the fact that I already installed the module?\n\nI'm trying to debug some Python code using VS Code. I'm getting the following error about a module that I am sure is already installed. Exception has occurred: ModuleNotFoundError No module named 'SimpleITK' File \"C:\\Users\\Mido\\Desktop\\ProstateX-project\\src\\01-preprocessing\\03_resample_nifti.py\", line 8, in import SimpleITK as sitk I installed the module using sudo pip install SimpleITK I know that it was installed because I was getting a similar error when I ran the code through the command line, and it was fixed by doing the above. 
I don't understand why VS Code does not recognize that it's already installed.", "output": "sudo pip install is most likely installing globally into a Python interpreter that is different than the one that you have selected in VS Code. Please select the Python interpreter you want to use and then install explicitly using that interpreter (if you're not using a virtual environment then use something like /path/to/python -m pip install SimpleITK, although I strongly recommend using a virtual environment and to not install packages globally)."} +{"question_id": 2158097, "score": 102, "creation_date": 1264714064, "tags": ["python"], "instruction": "drop into python interpreter while executing function\n\ni have a python module with a function: def do_stuff(param1 = 'a'): if type(param1) == int: # enter python interpreter here do_something() else: do_something_else() is there a way to drop into the command line interpreter where i have the comment? so that if i run the following in python: >>> import my_module >>> do_stuff(1) i get my next prompt in the scope and context of where i have the comment in do_stuff()?", "output": "Update for 2023: breakpoint() is included since py3.7 and will drop you into the debugger (pdb by default). If you have other debuggers installed (ipdb, pdbpp, ... to name a few) then breakpoint() will instead drop you into those. Please see these docs for more info: https://docs.python.org/3/library/pdb.html"} +{"question_id": 23361057, "score": 102, "creation_date": 1398764599, "tags": ["python", "django", "django-signals"], "instruction": "django - comparing old and new field value before saving\n\nI have a django model, and I need to compare old and new values of field BEFORE saving. I've tried the save() inheritance, and pre_save signal. It was triggered correctly, but I can't find the list of actually changed fields and can't compare old and new values. Is there a way? I need it for optimization of pre-save actions. Thank you!", "output": "There is very simple django way for doing it. \"Memorise\" the values in model init like this: def __init__(self, *args, **kwargs): super(MyClass, self).__init__(*args, **kwargs) self.initial_parametername = self.parametername --- self.initial_parameternameX = self.parameternameX Real life example: At class: def __init__(self, *args, **kwargs): super(MyClass, self).__init__(*args, **kwargs) self.__important_fields = ['target_type', 'target_id', 'target_object', 'number', 'chain', 'expiration_date'] for field in self.__important_fields: setattr(self, '__original_%s' % field, getattr(self, field)) def has_changed(self): for field in self.__important_fields: orig = '__original_%s' % field if getattr(self, orig) != getattr(self, field): return True return False And then in modelform save method: def save(self, force_insert=False, force_update=False, commit=True): # Prep the data obj = super(MyClassForm, self).save(commit=False) if obj.has_changed(): # If we're down with commitment, save this shit if commit: obj.save(force_insert=True) return obj Edit ( 06.05.2024 ): Please read through all the comments that have been added since I added this post 10 years ago. Things have changed, and though even I still use this approach myself and recommend using it, there are cases, where you need to tweak the approach or forgo using it. 
Again, read through the comments others have added."} +{"question_id": 39409866, "score": 102, "creation_date": 1473418099, "tags": ["python", "pandas", "seaborn", "heatmap", "correlation"], "instruction": "Correlation heatmap\n\nI want to represent correlation matrix using a heatmap. There is something called correlogram in R, but I don't think there's such a thing in Python. How can I do this? The values go from -1 to 1, for example: [[ 1. 0.00279981 0.95173379 0.02486161 -0.00324926 -0.00432099] [ 0.00279981 1. 0.17728303 0.64425774 0.30735071 0.37379443] [ 0.95173379 0.17728303 1. 0.27072266 0.02549031 0.03324756] [ 0.02486161 0.64425774 0.27072266 1. 0.18336236 0.18913512] [-0.00324926 0.30735071 0.02549031 0.18336236 1. 0.77678274] [-0.00432099 0.37379443 0.03324756 0.18913512 0.77678274 1. ]] I was able to produce the following heatmap based on another question, but the problem is that my values get 'cut' at 0, so I would like to have a map which goes from blue(-1) to red(1), or something like that, but here values below 0 are not presented in an adequate way. Here's the code for that: plt.imshow(correlation_matrix,cmap='hot',interpolation='nearest')", "output": "Another alternative is to use the heatmap function in seaborn to plot the covariance. This example uses the 'mpg' data set from seaborn. import seaborn as sns %matplotlib inline # load the Auto dataset auto_df = sns.load_dataset('mpg') # calculate the correlation matrix on the numeric columns corr = auto_df.select_dtypes('number').corr() # plot the heatmap sns.heatmap(corr) If you wanted to be even more fancy, you can use Pandas Style, for example: cmap = sns.diverging_palette(5, 250, as_cmap=True) def magnify(): return [dict(selector=\"th\", props=[(\"font-size\", \"7pt\")]), dict(selector=\"td\", props=[('padding', \"0em 0em\")]), dict(selector=\"th:hover\", props=[(\"font-size\", \"12pt\")]), dict(selector=\"tr:hover td:hover\", props=[('max-width', '200px'), ('font-size', '12pt')]) ] corr.style.background_gradient(cmap, axis=1)\\ .format(precision=3)\\ .set_properties(**{'max-width': '80px', 'font-size': '10pt'})\\ .set_caption(\"Hover to magify\")\\ .set_table_styles(magnify())"} +{"question_id": 30280856, "score": 102, "creation_date": 1431811048, "tags": ["python", "dictionary"], "instruction": "Populating a dictionary using two for loops in Python\n\nI'm trying to create a dictionary using two for loops. Here is my code: dicts = {} keys = range(4) values = [\"Hi\", \"I\", \"am\", \"John\"] for i in keys: for x in values: dicts[i] = x print(dicts) This outputs: {0: 'John', 1: 'John', 2: 'John', 3: 'John'} Why? I was planning on making it output: {0: 'Hi', 1: 'I', 2: 'am', 3: 'John'} Why doesn't it output that way and how do we make it output correctly?", "output": "dicts = {} keys = range(4) values = [\"Hi\", \"I\", \"am\", \"John\"] for i in keys: dicts[i] = values[i] print(dicts) alternatively In [7]: dict(list(enumerate(values))) Out[7]: {0: 'Hi', 1: 'I', 2: 'am', 3: 'John'}"} +{"question_id": 3193060, "score": 102, "creation_date": 1278491156, "tags": ["python", "http", "urllib2", "urllib"], "instruction": "How do I catch a specific HTTP error in Python?\n\nI have import urllib2 try: urllib2.urlopen(\"some url\") except urllib2.HTTPError: but what I end up is catching any kind of HTTP error. 
I want to catch only if the specified webpage doesn't exist (404?).", "output": "Python 3 from urllib.error import HTTPError Python 2 from urllib2 import HTTPError Just catch HTTPError, handle it, and if it's not Error 404, simply use raise to re-raise the exception. See the Python tutorial. Here is a complete example for Python 2: import urllib2 from urllib2 import HTTPError try: urllib2.urlopen(\"some url\") except HTTPError as err: if err.code == 404: else: raise"} +{"question_id": 36419342, "score": 102, "creation_date": 1459839809, "tags": ["python", "jupyter-notebook"], "instruction": "How to wrap code/text in Jupyter notebooks\n\nI am using jupyter-notebooks for python coding. Is there a way to wrap text/code in a jupyter notebook code cell? Picture provided below. By wrap text means \"how text is wrapped in MS-word\"", "output": "When using jupyter-lab (rather than jupyter notebook) the solution is much simpler. You can do: Settings > Advanced setting Editor > TextEditor > checkbox enable Line Wrap."} +{"question_id": 41635547, "score": 102, "creation_date": 1484313494, "tags": ["python", "datetime", "unix-timestamp"], "instruction": "Convert python datetime to timestamp in milliseconds\n\nHow do I convert a human-readable time such as 20.12.2016 09:38:42,76 to a Unix timestamp in milliseconds?", "output": "In Python 3 this can be done in 2 steps: Convert timestring to datetime object Multiply the timestamp of the datetime object by 1000 to convert it to milliseconds. For example like this: from datetime import datetime dt_obj = datetime.strptime('20.12.2016 09:38:42,76', '%d.%m.%Y %H:%M:%S,%f') millisec = dt_obj.timestamp() * 1000 print(millisec) Output: 1482223122760.0 strptime accepts your timestring and a format string as input. The timestring (first argument) specifies what you actually want to convert to a datetime object. The format string (second argument) specifies the actual format of the string that you have passed. Here is the explanation of the format specifiers from the official documentation: %d - Day of the month as a zero-padded decimal number. %m - Month as a zero-padded decimal number. %Y - Year with century as a decimal number %H - Hour (24-hour clock) as a zero-padded decimal number. %M - Minute as a zero-padded decimal number. %S - Second as a zero-padded decimal number. %f - Microsecond as a decimal number, zero-padded to 6 digits."} +{"question_id": 3085696, "score": 102, "creation_date": 1277131814, "tags": ["python", "tkinter", "tkinter.scrollbar"], "instruction": "Adding a scrollbar to a group of widgets in Tkinter\n\nI am using Python to parse entries from a log file, and display the entry contents using Tkinter and so far it's been excellent. The output is a grid of label widgets, but sometimes there are more rows than can be displayed on the screen. I'd like to add a scrollbar, which looks like it should be very easy, but I can't figure it out. The documentation implies that only the List, Textbox, Canvas and Entry widgets support the scrollbar interface. None of these appear to be suitable for displaying a grid of widgets. It's possible to put arbitrary widgets in a Canvas widget, but you appear to have to use absolute co-ordinates, so I wouldn't be able to use the grid layout manager? I've tried putting the widget grid into a Frame, but that doesn't seem to support the scrollbar interface, so this doesn't work: mainframe = Frame(root, yscrollcommand=scrollbar.set) Can anyone suggest a way round this limitation? 
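A quick follow-up to the HTTP-error answer above: on Python 3 the same pattern uses urllib.request and urllib.error; a small sketch with a placeholder URL:

import urllib.request
import urllib.error

try:
    urllib.request.urlopen('http://example.com/some-url')   # placeholder URL
except urllib.error.HTTPError as err:
    if err.code == 404:
        print('page does not exist')
    else:
        raise                                                # re-raise anything that is not a 404

Back to the Tkinter scrollbar question: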
I'd hate to have to rewrite in PyQt and increase my executable image size by so much, just to add a scrollbar!", "output": "Overview You can only associate scrollbars with a few widgets, and the root widget and Frame aren't part of that group of widgets. There are at least a couple of ways to do this. If you need a simple vertical or horizontal group of widgets, you can use a text widget and the window_create method to add widgets. This method is simple, but doesn't allow for a complex layout of the widgets. A more common general-purpose solution is to create a canvas widget and associate the scrollbars with that widget. Then, into that canvas embed the frame that contains your label widgets. Determine the width/height of the frame and feed that into the canvas scrollregion option so that the scrollregion exactly matches the size of the frame. Why put the widgets in a frame rather than directly in the canvas? A scrollbar attached to a canvas can only scroll items created with one of the create_ methods. You cannot scroll items added to a canvas with pack, place, or grid. By using a frame, you can use those methods inside the frame, and then call create_window once for the frame. Drawing the text items directly on the canvas isn't very hard, so you might want to reconsider that approach if the frame-embedded-in-a-canvas solution seems too complex. Since you're creating a grid, the coordinates of each text item is going to be very easy to compute, especially if each row is the same height (which it probably is if you're using a single font). For drawing directly on the canvas, just figure out the line height of the font you're using (and there are commands for that). Then, each y coordinate is row*(lineheight+spacing). The x coordinate will be a fixed number based on the widest item in each column. If you give everything a tag for the column it is in, you can adjust the x coordinate and width of all items in a column with a single command. 
Object-oriented solution Here's an example of the frame-embedded-in-canvas solution, using an object-oriented approach: import tkinter as tk class Example(tk.Frame): def __init__(self, parent): tk.Frame.__init__(self, parent) self.canvas = tk.Canvas(self, borderwidth=0, background=\"#ffffff\") self.frame = tk.Frame(self.canvas, background=\"#ffffff\") self.vsb = tk.Scrollbar(self, orient=\"vertical\", command=self.canvas.yview) self.canvas.configure(yscrollcommand=self.vsb.set) self.vsb.pack(side=\"right\", fill=\"y\") self.canvas.pack(side=\"left\", fill=\"both\", expand=True) self.canvas.create_window((4,4), window=self.frame, anchor=\"nw\", tags=\"self.frame\") self.frame.bind(\"<Configure>\", self.onFrameConfigure) self.populate() def populate(self): '''Put in some fake data''' for row in range(100): tk.Label(self.frame, text=\"%s\" % row, width=3, borderwidth=\"1\", relief=\"solid\").grid(row=row, column=0) t=\"this is the second column for row %s\" %row tk.Label(self.frame, text=t).grid(row=row, column=1) def onFrameConfigure(self, event): '''Reset the scroll region to encompass the inner frame''' self.canvas.configure(scrollregion=self.canvas.bbox(\"all\")) if __name__ == \"__main__\": root=tk.Tk() example = Example(root) example.pack(side=\"top\", fill=\"both\", expand=True) root.mainloop() Procedural solution Here is a solution that doesn't use a class: import tkinter as tk def populate(frame): '''Put in some fake data''' for row in range(100): tk.Label(frame, text=\"%s\" % row, width=3, borderwidth=\"1\", relief=\"solid\").grid(row=row, column=0) t=\"this is the second column for row %s\" %row tk.Label(frame, text=t).grid(row=row, column=1) def onFrameConfigure(canvas): '''Reset the scroll region to encompass the inner frame''' canvas.configure(scrollregion=canvas.bbox(\"all\")) root = tk.Tk() canvas = tk.Canvas(root, borderwidth=0, background=\"#ffffff\") frame = tk.Frame(canvas, background=\"#ffffff\") vsb = tk.Scrollbar(root, orient=\"vertical\", command=canvas.yview) canvas.configure(yscrollcommand=vsb.set) vsb.pack(side=\"right\", fill=\"y\") canvas.pack(side=\"left\", fill=\"both\", expand=True) canvas.create_window((4,4), window=frame, anchor=\"nw\") frame.bind(\"<Configure>\", lambda event, canvas=canvas: onFrameConfigure(canvas)) populate(frame) root.mainloop()"} +{"question_id": 70422866, "score": 102, "creation_date": 1640008358, "tags": ["python", "python-venv"], "instruction": "How to create a venv with a different Python version\n\nI have different venvs in my machine in which I have Python 3.10. Now for a specific project, I realised that Python 3.10 is not suitable as some libraries are still not compatible. Therefore, when creating a new venv for a new project, I would like to downgrade Python, say to 3.8, only for this specific venv. How can I do that? What should I type onto the terminal to do this? PS: I use Visual Studio Code and its terminal to create venv.", "output": "You can have multiple Python interpreter versions installed at the same time and you can create virtual environments with the needed version.
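For example (a sketch: the interpreter paths and environment names below are placeholders, and they assume the target Python version is already installed):

# standard-library venv: run the interpreter you want the environment to use
python3.8 -m venv venv38

# on Windows, the py launcher can pick the version
py -3.8 -m venv venv38

# the virtualenv equivalent takes the interpreter path via -p
virtualenv -p /path/to/python3.8 venv38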
Make sure you have installed the Python version you need and then specify its location when you create the virtual environment: virtualenv -p Example: virtualenv -p C:\\Users\\ssharma\\AppData\\Local\\Programs\\Python\\Python38\\python.exe venv38 This will create a virtual environment called venv38 with Python 3.8."} +{"question_id": 2656322, "score": 102, "creation_date": 1271455075, "tags": ["python", "windows", "file-permissions", "shutil"], "instruction": "shutil.rmtree fails on Windows with 'Access is denied'\n\nIn Python, when running shutil.rmtree over a folder that contains a read-only file, the following exception is printed: File \"C:\\Python26\\lib\\shutil.py\", line 216, in rmtree rmtree(fullname, ignore_errors, onerror) File \"C:\\Python26\\lib\\shutil.py\", line 216, in rmtree rmtree(fullname, ignore_errors, onerror) File \"C:\\Python26\\lib\\shutil.py\", line 216, in rmtree rmtree(fullname, ignore_errors, onerror) File \"C:\\Python26\\lib\\shutil.py\", line 216, in rmtree rmtree(fullname, ignore_errors, onerror) File \"C:\\Python26\\lib\\shutil.py\", line 216, in rmtree rmtree(fullname, ignore_errors, onerror) File \"C:\\Python26\\lib\\shutil.py\", line 216, in rmtree rmtree(fullname, ignore_errors, onerror) File \"C:\\Python26\\lib\\shutil.py\", line 216, in rmtree rmtree(fullname, ignore_errors, onerror) File \"C:\\Python26\\lib\\shutil.py\", line 221, in rmtree onerror(os.remove, fullname, sys.exc_info()) File \"C:\\Python26\\lib\\shutil.py\", line 219, in rmtree os.remove(fullname) WindowsError: [Error 5] Access is denied: 'build\\\\tcl\\\\tcl8.5\\\\msgs\\\\af.msg' Looking in File Properties dialog I noticed that af.msg file is set to be read-only. So the question is: what is the simplest workaround/fix to get around this problem - given that my intention is to do an equivalent of rm -rf build/ but on Windows? (without having to use third-party tools like unxutils or cygwin - as this code is targeted to be run on a bare Windows install with Python 2.6 w/ PyWin32 installed)", "output": "Check this question out: What user do python scripts run as in windows? Apparently the answer is to change the file/folder to not be read-only and then remove it. Here's onerror() handler from pathutils.py mentioned by @Sridhar Ratnakumar in comments: def onerror(func, path, exc_info): \"\"\" Error handler for ``shutil.rmtree``. If the error is due to an access error (read only file) it attempts to add write permission and then retries. If the error is for another reason it re-raises the error. Usage : ``shutil.rmtree(path, onerror=onerror)`` \"\"\" import stat # Is the error an access error? if not os.access(path, os.W_OK): os.chmod(path, stat.S_IWUSR) func(path) else: raise"} +{"question_id": 74981558, "score": 102, "creation_date": 1672657056, "tags": ["python", "ubuntu", "pip", "windows-subsystem-for-linux"], "instruction": "Error Updating Python3 pip AttributeError: module 'lib' has no attribute 'OpenSSL_add_all_algorithms'\n\nI'm having an error when installing/updating any pip module in python3. Purging and reinstalling pip and every package I can thing of hasn't helped. 
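A short aside on the shutil.rmtree answer above: the handler is passed via the onerror keyword, and Python 3.12 renamed that hook to onexc (onerror is deprecated there). A sketch, with 'build' as an example path:

import os
import shutil
import stat

def remove_readonly(func, path, _excinfo):
    # clear the read-only flag, then retry the operation that failed
    os.chmod(path, stat.S_IWRITE)
    func(path)

shutil.rmtree('build', onerror=remove_readonly)    # Python 3.11 and earlier
# shutil.rmtree('build', onexc=remove_readonly)    # Python 3.12+: onerror is deprecated

Back to the pip/OpenSSL question: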
Here's the error that I get in response to running python -m pip install --upgrade pip specifically (but the error is the same for attempting to install or update any pip module): Traceback (most recent call last): File \"/usr/lib/python3.8/runpy.py\", line 194, in _run_module_as_main return _run_code(code, main_globals, None, File \"/usr/lib/python3.8/runpy.py\", line 87, in _run_code exec(code, run_globals) File \"/usr/lib/python3/dist-packages/pip/__main__.py\", line 16, in from pip._internal.cli.main import main as _main # isort:skip # noqa File \"/usr/lib/python3/dist-packages/pip/_internal/cli/main.py\", line 10, in from pip._internal.cli.autocompletion import autocomplete File \"/usr/lib/python3/dist-packages/pip/_internal/cli/autocompletion.py\", line 9, in from pip._internal.cli.main_parser import create_main_parser File \"/usr/lib/python3/dist-packages/pip/_internal/cli/main_parser.py\", line 7, in from pip._internal.cli import cmdoptions File \"/usr/lib/python3/dist-packages/pip/_internal/cli/cmdoptions.py\", line 24, in from pip._internal.exceptions import CommandError File \"/usr/lib/python3/dist-packages/pip/_internal/exceptions.py\", line 10, in from pip._vendor.six import iteritems File \"/usr/lib/python3/dist-packages/pip/_vendor/__init__.py\", line 65, in vendored(\"cachecontrol\") File \"/usr/lib/python3/dist-packages/pip/_vendor/__init__.py\", line 36, in vendored __import__(modulename, globals(), locals(), level=0) File \"\", line 991, in _find_and_load File \"\", line 975, in _find_and_load_unlocked File \"\", line 655, in _load_unlocked File \"\", line 618, in _load_backward_compatible File \"\", line 259, in load_module File \"/usr/share/python-wheels/CacheControl-0.12.6-py2.py3-none-any.whl/cachecontrol/__init__.py\", line 9, in File \"\", line 991, in _find_and_load File \"\", line 975, in _find_and_load_unlocked File \"\", line 655, in _load_unlocked File \"\", line 618, in _load_backward_compatible File \"\", line 259, in load_module File \"/usr/share/python-wheels/CacheControl-0.12.6-py2.py3-none-any.whl/cachecontrol/wrapper.py\", line 1, in File \"\", line 991, in _find_and_load File \"\", line 975, in _find_and_load_unlocked File \"\", line 655, in _load_unlocked File \"\", line 618, in _load_backward_compatible File \"\", line 259, in load_module File \"/usr/share/python-wheels/CacheControl-0.12.6-py2.py3-none-any.whl/cachecontrol/adapter.py\", line 5, in File \"\", line 991, in _find_and_load File \"\", line 975, in _find_and_load_unlocked File \"\", line 655, in _load_unlocked File \"\", line 618, in _load_backward_compatible File \"\", line 259, in load_module File \"/usr/share/python-wheels/requests-2.22.0-py2.py3-none-any.whl/requests/__init__.py\", line 95, in File \"\", line 991, in _find_and_load File \"\", line 975, in _find_and_load_unlocked File \"\", line 655, in _load_unlocked File \"\", line 618, in _load_backward_compatible File \"\", line 259, in load_module File \"/usr/share/python-wheels/urllib3-1.25.8-py2.py3-none-any.whl/urllib3/contrib/pyopenssl.py\", line 46, in File \"/home/patrick/.local/lib/python3.8/site-packages/OpenSSL/__init__.py\", line 8, in from OpenSSL import crypto, SSL File \"/home/patrick/.local/lib/python3.8/site-packages/OpenSSL/crypto.py\", line 3268, in _lib.OpenSSL_add_all_algorithms() AttributeError: module 'lib' has no attribute 'OpenSSL_add_all_algorithms' Error in sys.excepthook: Traceback (most recent call last): File \"/usr/lib/python3/dist-packages/apport_python_hook.py\", line 72, in apport_excepthook from 
apport.fileutils import likely_packaged, get_recent_crashes File \"/usr/lib/python3/dist-packages/apport/__init__.py\", line 5, in from apport.report import Report File \"/usr/lib/python3/dist-packages/apport/report.py\", line 32, in import apport.fileutils File \"/usr/lib/python3/dist-packages/apport/fileutils.py\", line 12, in import os, glob, subprocess, os.path, time, pwd, sys, requests_unixsocket File \"/usr/lib/python3/dist-packages/requests_unixsocket/__init__.py\", line 1, in import requests File \"\", line 991, in _find_and_load File \"\", line 975, in _find_and_load_unlocked File \"\", line 655, in _load_unlocked File \"\", line 618, in _load_backward_compatible File \"\", line 259, in load_module File \"/usr/share/python-wheels/requests-2.22.0-py2.py3-none-any.whl/requests/__init__.py\", line 95, in File \"\", line 991, in _find_and_load File \"\", line 975, in _find_and_load_unlocked File \"\", line 655, in _load_unlocked File \"\", line 618, in _load_backward_compatible File \"\", line 259, in load_module File \"/usr/share/python-wheels/urllib3-1.25.8-py2.py3-none-any.whl/urllib3/contrib/pyopenssl.py\", line 46, in File \"/home/patrick/.local/lib/python3.8/site-packages/OpenSSL/__init__.py\", line 8, in from OpenSSL import crypto, SSL File \"/home/patrick/.local/lib/python3.8/site-packages/OpenSSL/crypto.py\", line 3268, in _lib.OpenSSL_add_all_algorithms() AttributeError: module 'lib' has no attribute 'OpenSSL_add_all_algorithms' Original exception was: Traceback (most recent call last): File \"/usr/lib/python3.8/runpy.py\", line 194, in _run_module_as_main return _run_code(code, main_globals, None, File \"/usr/lib/python3.8/runpy.py\", line 87, in _run_code exec(code, run_globals) File \"/usr/lib/python3/dist-packages/pip/__main__.py\", line 16, in from pip._internal.cli.main import main as _main # isort:skip # noqa File \"/usr/lib/python3/dist-packages/pip/_internal/cli/main.py\", line 10, in from pip._internal.cli.autocompletion import autocomplete File \"/usr/lib/python3/dist-packages/pip/_internal/cli/autocompletion.py\", line 9, in from pip._internal.cli.main_parser import create_main_parser File \"/usr/lib/python3/dist-packages/pip/_internal/cli/main_parser.py\", line 7, in from pip._internal.cli import cmdoptions File \"/usr/lib/python3/dist-packages/pip/_internal/cli/cmdoptions.py\", line 24, in from pip._internal.exceptions import CommandError File \"/usr/lib/python3/dist-packages/pip/_internal/exceptions.py\", line 10, in from pip._vendor.six import iteritems File \"/usr/lib/python3/dist-packages/pip/_vendor/__init__.py\", line 65, in vendored(\"cachecontrol\") File \"/usr/lib/python3/dist-packages/pip/_vendor/__init__.py\", line 36, in vendored __import__(modulename, globals(), locals(), level=0) File \"\", line 991, in _find_and_load File \"\", line 975, in _find_and_load_unlocked File \"\", line 655, in _load_unlocked File \"\", line 618, in _load_backward_compatible File \"\", line 259, in load_module File \"/usr/share/python-wheels/CacheControl-0.12.6-py2.py3-none-any.whl/cachecontrol/__init__.py\", line 9, in File \"\", line 991, in _find_and_load File \"\", line 975, in _find_and_load_unlocked File \"\", line 655, in _load_unlocked File \"\", line 618, in _load_backward_compatible File \"\", line 259, in load_module File \"/usr/share/python-wheels/CacheControl-0.12.6-py2.py3-none-any.whl/cachecontrol/wrapper.py\", line 1, in File \"\", line 991, in _find_and_load File \"\", line 975, in _find_and_load_unlocked File \"\", line 655, in _load_unlocked File 
\"\", line 618, in _load_backward_compatible File \"\", line 259, in load_module File \"/usr/share/python-wheels/CacheControl-0.12.6-py2.py3-none-any.whl/cachecontrol/adapter.py\", line 5, in File \"\", line 991, in _find_and_load File \"\", line 975, in _find_and_load_unlocked File \"\", line 655, in _load_unlocked File \"\", line 618, in _load_backward_compatible File \"\", line 259, in load_module File \"/usr/share/python-wheels/requests-2.22.0-py2.py3-none-any.whl/requests/__init__.py\", line 95, in File \"\", line 991, in _find_and_load File \"\", line 975, in _find_and_load_unlocked File \"\", line 655, in _load_unlocked File \"\", line 618, in _load_backward_compatible File \"\", line 259, in load_module File \"/usr/share/python-wheels/urllib3-1.25.8-py2.py3-none-any.whl/urllib3/contrib/pyopenssl.py\", line 46, in File \"/home/patrick/.local/lib/python3.8/site-packages/OpenSSL/__init__.py\", line 8, in from OpenSSL import crypto, SSL File \"/home/patrick/.local/lib/python3.8/site-packages/OpenSSL/crypto.py\", line 3268, in _lib.OpenSSL_add_all_algorithms() AttributeError: module 'lib' has no attribute 'OpenSSL_add_all_algorithms' I'm running Ubuntu 20.04 in WSL. Python openssl is already installed. sudo apt install python3-openssl Reading package lists... Done Building dependency tree Reading state information... Done python3-openssl is already the newest version (19.0.0-1build1). 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. My assumption is that I need to re-install some stuff, but I'm not sure what. I've tried the obvious stuff like python3-openssl, libssl-dev, libffi-dev, and python3-pip itself and python3 alltogether.", "output": "As version 39.0.0 presented this bug, downgrading the cryptography package solves this, without purging or touching your OS. pip install cryptography==38.0.4 to downgrade from 39.0.0 which presented this error EDIT per @thomas, The error is a result of incompatibility between cryptography and pyopenssl, so if possible, also upgrading to openssl>22.1.0 should work: pip install -U pyopenssl cryptography"} +{"question_id": 6307761, "score": 102, "creation_date": 1307716122, "tags": ["python", "class", "decorator", "wrapper"], "instruction": "How to decorate all functions of a class without typing it over and over for each method?\n\nLets say my class has many methods, and I want to apply my decorator on each one of them, later when I add new methods, I want the same decorator to be applied, but I don't want to write @mydecorator above the method declaration all the time. If I look into __call__ is that the right way to go? I'd like to show this way, which is a similar solution to my problem for anybody finding this question later, using a mixin as mentioned in the comments. class WrapinMixin(object): def __call__(self, hey, you, *args): print 'entering', hey, you, repr(args) try: ret = getattr(self, hey)(you, *args) return ret except: ret = str(e) raise finally: print 'leaving', hey, repr(ret) Then you can in another class Wrapmymethodsaround(WrapinMixin): def __call__(self, hey, you, *args): return super(Wrapmymethodsaround, self).__call__(hey, you, *args) Editor's note: this example appears to be solving a different problem than what is asked about.", "output": "Decorate the class with a function that walks through the class's attributes and decorates callables. 
This may be the wrong thing to do if you have class variables that may happen to be callable, and will also decorate nested classes (credits to Sven Marnach for pointing this out) but generally it's a rather clean and simple solution. Example implementation (note that this will not exclude special methods (__init__ etc.), which may or may not be desired): def for_all_methods(decorator): def decorate(cls): for attr in cls.__dict__: # there's probably a better way to do this if callable(getattr(cls, attr)): setattr(cls, attr, decorator(getattr(cls, attr))) return cls return decorate Use like this: @for_all_methods(mydecorator) class C(object): def m1(self): pass def m2(self, x): pass ..."} +{"question_id": 16780002, "score": 102, "creation_date": 1369685797, "tags": ["python", "generator", "python-3.3"], "instruction": "Return in generator together with yield\n\nIn Python 2 it used to cause an error when return occurred together with yield inside a function definition. But for this code in Python 3.3: def f(): return 3 yield 2 x = f() print(x.__next__()) there is no error that return is used in function with yield. However, when __next__ is called, a StopIteration exception is raised instead. Why is the value 3 not simply returned? Is this return somehow ignored?", "output": "This is a new feature in Python 3.3. Much like a bare return in a generator has long been equivalent to raise StopIteration(), return value in a generator is now equivalent to raise StopIteration(value). For that reason, the exception you're seeing should be printed as StopIteration: 3, and the value is accessible through the attribute value on the exception object. If the generator is delegated to using the (also new) yield from syntax, it is the result of the yield from expression. See PEP 380 for details. def f(): return 1 yield 2 def g(): x = yield from f() print(x) # g is still a generator so we need to iterate to run it: for _ in g(): pass This prints 1, but not 2."} +{"question_id": 69437526, "score": 102, "creation_date": 1633356862, "tags": ["python", "algorithm", "sorting"], "instruction": "What is this odd sorting algorithm?\n\nSome answer originally had this sorting algorithm: for i from 0 to n-1: for j from 0 to n-1: if A[j] > A[i]: swap A[i] and A[j] Note that both i and j go the full range and thus j can be both larger and smaller than i, so it can make pairs both correct and wrong order (and it actually does do both!). I thought that's a mistake (and the author later called it that) and that this would jumble the array, but it does appear to sort correctly. It's not obvious why, though. But the code simplicity (going full ranges, and no +1 as in bubble sort) makes it interesting. Is it correct? If so, why does it work? And does it have a name?
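To make the StopIteration value from the generator answer above concrete, the returned value can be read directly off the exception; a small sketch using the question's own function:

def f():
    return 3
    yield 2

gen = f()
try:
    next(gen)
except StopIteration as exc:
    print(exc.value)   # 3

Returning to the sorting-algorithm question: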
Python implementation with testing: from random import shuffle for _ in range(3): n = 20 A = list(range(n)) shuffle(A) print('before:', A) for i in range(n): for j in range(n): if A[j] > A[i]: A[i], A[j] = A[j], A[i] print('after: ', A, '\\n') Sample output (Try it online!): before: [9, 14, 8, 12, 16, 19, 2, 1, 10, 11, 18, 4, 15, 3, 6, 17, 7, 0, 5, 13] after: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19] before: [5, 1, 18, 10, 19, 14, 17, 7, 12, 16, 2, 0, 6, 8, 9, 11, 4, 3, 15, 13] after: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19] before: [11, 15, 7, 14, 0, 2, 9, 4, 13, 17, 8, 10, 1, 12, 6, 16, 18, 3, 5, 19] after: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19] Edit: Someone pointed out a very nice brand new paper about this algorithm. Just to clarify: We're unrelated, it's a coincidence. As far as I can tell it was submitted to arXiv before that answer that sparked my question and published by arXiv after my question.", "output": "To prove that it's correct, you have to find some sort of invariant. Something that's true during every pass of the loop. Looking at it, after the very first pass of the inner loop, the largest element of the list will actually be in the first position. Now in the second pass of the inner loop, i = 1, and the very first comparison is between i = 1 and j = 0. So, the largest element was in position 0, and after this comparison, it will be swapped to position 1. In general, then it's not hard to see that after each step of the outer loop, the largest element will have moved one to the right. So after the full steps, we know at least the largest element will be in the correct position. What about all the rest? Let's say the second-largest element sits at position i of the current loop. We know that the largest element sits at position i-1 as per the previous discussion. Counter j starts at 0. So now we're looking for the first A[j] such that it's A[j] > A[i]. Well, the A[i] is the second largest element, so the first time that happens is when j = i-1, at the first largest element. Thus, they're adjacent and get swapped, and are now in the \"right\" order. Now A[i] again points to the largest element, and hence for the rest of the inner loop no more swaps are performed. So we can say: Once the outer loop index has moved past the location of the second largest element, the second and first largest elements will be in the right order. They will now slide up together, in every iteration of the outer loop, so we know that at the end of the algorithm both the first and second-largest elements will be in the right position. What about the third-largest element? Well, we can use the same logic again: Once the outer loop counter i is at the position of the third-largest element, it'll be swapped such that it'll be just below the second largest element (if we have found that one already!) or otherwise just below the first largest element. Ah. And here we now have our invariant: After k iterations of the outer loop, the k-length sequence of elements, ending at position k-1, will be in sorted order: After the 1st iteration, the 1-length sequence, at position 0, will be in the correct order. That's trivial. After the 2nd iteration, we know the largest element is at position 1, so obviously the sequence A[0], A[1] is in the correct order. Now let's assume we're at step k, so all the elements up to position k-1 will be in order. Now i = k and we iterate over j. 
What this does is basically find the position at which the new element needs to be slotted into the existing sorted sequence so that it'll be properly sorted. Once that happens, the rest of the elements \"bubble one up\" until now the largest element sits at position i = k and no further swaps happen. Thus finally at the end of step N, all the elements up to position N-1 are in the correct order, QED."} +{"question_id": 46743068, "score": 102, "creation_date": 1507973974, "tags": ["python", "visual-studio-code", "vscode-extensions", "python-module", "auto-import"], "instruction": "Python auto-import extension for VSCode\n\nIs there a Python auto-import extension/plugin available for VSCode? By auto-import I mean automatically importing of Python modules, so if you type sys.argv then it should automatically import the sys module. Eclipse and IntelliJ have this feature with Java.", "output": "VSCode team recently released Pylance Features Docstrings Signature help, with type information Parameter suggestions Code completion Auto-imports (as well as add and remove import code actions) As-you-type reporting of code errors and warnings (diagnostics) Code outline Code navigation Code lens (references/implementations) Type checking mode Native multi-root workspace support IntelliCode compatibility Jupyter Notebooks compatibility"} +{"question_id": 25858497, "score": 102, "creation_date": 1410823981, "tags": ["python", "enums"], "instruction": "Should enum instances be compared by identity or equality?\n\nPEP 8 Programming Recommendations says: Comparisons to singletons like None should always be done with is or is not, never the equality operators. According to the docs, enum members are singletons. Does that mean they should also be compared by identity? class Color(Enum): RED = 1 GREEN = 2 BLUE = 3 # like this? if color is Color.RED: ... # or like this if color == Color.RED: ... When using equality operators, I haven't noticed any issues with this to warrant such strong wording as PEP 8. What's the drawback of using equality, if any? Doesn't it just fall back to an identity-based comparison anyway? Is this just a micro-optimisation?", "output": "From https://docs.python.org/3/howto/enum.html#comparisons : Enumeration members are compared by identity: >>> Color.RED is Color.RED True >>> Color.RED is Color.BLUE False >>> Color.RED is not Color.BLUE True"} +{"question_id": 18727347, "score": 101, "creation_date": 1378841543, "tags": ["python", "urllib", "urlparse"], "instruction": "How to extract a filename from a URL and append a word to it?\n\nI have the following URL: url = http://photographs.500px.com/kyle/09-09-201315-47-571378756077.jpg I would like to extract the file name in this URL: 09-09-201315-47-571378756077.jpg Once I get this file name, I'm going to save it with this name to the Desktop. filename = **extracted file name from the url** download_photo = urllib.urlretrieve(url, \"/home/ubuntu/Desktop/%s.jpg\" % (filename)) After this, I'm going to resize the photo, once that is done, I've going to save the resized version and append the word \"_small\" to the end of the filename. downloadedphoto = Image.open(\"/home/ubuntu/Desktop/%s.jpg\" % (filename)) resize_downloadedphoto = downloadedphoto.resize.((300, 300), Image.ANTIALIAS) resize_downloadedphoto.save(\"/home/ubuntu/Desktop/%s.jpg\" % (filename + _small)) From this, what I am trying to achieve is to get two files, the original photo with the original name, then the resized photo with the modified name. 
Like so: 09-09-201315-47-571378756077.jpg rename to: 09-09-201315-47-571378756077_small.jpg How can I go about doing this?", "output": "You can use urllib.parse.urlparse with os.path.basename: import os from urllib.parse import urlparse url = \"http://photographs.500px.com/kyle/09-09-201315-47-571378756077.jpg\" a = urlparse(url) print(a.path) # Output: /kyle/09-09-201315-47-571378756077.jpg print(os.path.basename(a.path)) # Output: 09-09-201315-47-571378756077.jpg Your URL might contain percent-encoded characters like %20 for space or %E7%89%B9%E8%89%B2 for \"\u7279\u8272\". If that's the case, you'll need to unquote (or unquote_plus) them. You can also use pathlib.Path().name instead of os.path.basename, which could help to add a suffix in the name (like asked in the original question): from pathlib import Path, PurePosixPath from urllib.parse import urlparse, unquote url = \"http://photographs.500px.com/kyle/09-09-2013%20-%2015-47-571378756077.jpg\" urlparse(url).path url_parsed = urlparse(url) print(unquote(url_parsed.path)) # Output: /kyle/09-09-2013 - 15-47-571378756077.jpg file_path = Path(\"/home/ubuntu/Desktop/\") / unquote(PurePosixPath(url_parsed.path).name) print(file_path) # Output: /home/ubuntu/Desktop/09-09-2013 - 15-47-571378756077.jpg new_file = file_path.with_stem(file_path.stem + \"_small\") print(new_file) # Output: /home/ubuntu/Desktop/09-09-2013 - 15-47-571378756077_small.jpg For parsing URL paths, note that pathlib.PurePosixPath is more specific and more applicable than pathlib.Path. Also, a less-robust alternative is to use unquote(urlparse(url).path.split(\"/\")[-1])."} +{"question_id": 9647202, "score": 101, "creation_date": 1331389669, "tags": ["python", "nlp", "nltk", "ordinals"], "instruction": "Ordinal numbers replacement\n\nI am currently looking for the way to replace words like first, second, third,...with appropriate ordinal number representation (1st, 2nd, 3rd). I have been googling for the last week and I didn't find any useful standard tool or any function from NLTK. So is there any or should I write some regular expressions manually? Thanks for any advice", "output": "The package number-parser can parse ordinal words (\"first\", \"second\", etc) to integers. from number_parser import parse_ordinal n = parse_ordinal(\"first\") To convert an integer to \"1st\", \"2nd\", etc, you can use the following: def ordinal(n: int): if 11 <= (n % 100) <= 13: suffix = 'th' else: suffix = ['th', 'st', 'nd', 'rd', 'th'][min(n % 10, 4)] return str(n) + suffix Here is a more terse but less readable version (taken from Gareth on codegolf): ordinal = lambda n: \"%d%s\" % (n,\"tsnrhtdd\"[(n//10%10!=1)*(n%10<4)*n%10::4]) This works on any number: print([ordinal(n) for n in range(1,32)]) ['1st', '2nd', '3rd', '4th', '5th', '6th', '7th', '8th', '9th', '10th', '11th', '12th', '13th', '14th', '15th', '16th', '17th', '18th', '19th', '20th', '21st', '22nd', '23rd', '24th', '25th', '26th', '27th', '28th', '29th', '30th', '31st']"} +{"question_id": 4166447, "score": 101, "creation_date": 1289577413, "tags": ["python", "compression", "python-zipfile"], "instruction": "python zipfile module doesn't seem to be compressing my files\n\nI made a little helper function: import zipfile def main(archive_list=[],zfilename='default.zip'): print zfilename zout = zipfile.ZipFile(zfilename, \"w\") for fname in archive_list: print \"writing: \", fname zout.write(fname) zout.close() if __name__ == '__main__': main() The problem is that all my files are NOT being COMPRESSED! 
The files are the same size and, effectively, just the extension is being change to \".zip\" (from \".xls\" in this case). I'm running python 2.5 on winXP sp2.", "output": "This is because ZipFile requires you to specify the compression method. If you don't specify it, it assumes the compression method to be zipfile.ZIP_STORED, which only stores the files without compressing them. You need to specify the method to be zipfile.ZIP_DEFLATED. You will need to have the zlib module installed for this (it is usually installed by default). import zipfile def main(archive_list=[],zfilename='default.zip'): print zfilename zout = zipfile.ZipFile(zfilename, \"w\", zipfile.ZIP_DEFLATED) # <--- this is the change you need to make for fname in archive_list: print \"writing: \", fname zout.write(fname) zout.close() if __name__ == '__main__': main() Update: As per the documentation (python 3.7), value for 'compression' argument should be specified to override the default, which is ZIP_STORED. The available options are ZIP_DEFLATED, ZIP_BZIP2 or ZIP_LZMA and the corresponding libraries zlib, bz2 or lzma should be available."} +{"question_id": 59183863, "score": 101, "creation_date": 1575489690, "tags": ["python", "code-formatting"], "instruction": "In Python, how to tweak Black formatter, if possible?\n\nI know that Black is an opinionated formatter, but I love everything it does except one major thing. When I have a function with multiple arguments, instead of displaying it like this: def example_function(arg_1: str, arg_2: bool, arg_3: int = 0, arg_4: int = 1, arg_5: float = 0.0): pass I'd rather display it as follows for readability: def example_function( arg_1: str, arg_2: bool, arg_3: int = 0, arg_4: int = 1, arg_5: float = 0.0 ): Is this achievable with Black or some other formatter? I have this problem several times and it makes me consider not to use Black, either something else or nothing at all. Any ideas or comments?", "output": "This is due to the default line length for black being longer than you'd like \u2013 88 characters per line. To decrease the line length, you can use the --line-length flag as documented here: https://black.readthedocs.io/en/stable/usage_and_configuration/the_basics.html For example: $ black --line-length 80 example.py Black explains the --line-length setting in more detail here: https://black.readthedocs.io/en/stable/the_black_code_style/current_style.html#line-length Line length You probably noticed the peculiar default line length. Black defaults to 88 characters per line, which happens to be 10% over 80. This number was found to produce significantly shorter files than sticking with 80 (the most popular), or even 79 (used by the standard library). In general, 90-ish seems like the wise choice. If you're paid by the line of code you write, you can pass --line-length with a lower number. Black will try to respect that. However, sometimes it won't be able to without breaking other rules. In those rare cases, auto-formatted code will exceed your allotted limit. You can also increase it, but remember that people with sight disabilities find it harder to work with line lengths exceeding 100 characters. It also adversely affects side-by-side diff review on typical screen resolutions. Long lines also make it harder to present code neatly in documentation or talk slides. Emphasis on the final paragraph. I'd recommend just keeping the default settings. 
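If the lower limit should apply project-wide rather than per invocation, Black also reads its settings from pyproject.toml; a minimal sketch:

[tool.black]
line-length = 80

With that in place, a plain black . picks up the limit automatically.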
The beauty of Black is that it chooses for you, and therefor preempts any arguments about which way is \"best\"."} +{"question_id": 5944708, "score": 101, "creation_date": 1304996687, "tags": ["python", "list"], "instruction": "How can I automatically limit the length of a list as new elements are added?\n\nI want to create a list that will contain the last 5 values entered into it. Here is an example: >>> l = [] >>> l.append('apple') >>> l.append('orange') >>> l.append('grape') >>> l.append('banana') >>> l.append('mango') >>> print(l) ['apple', 'orange', 'grape', 'banana', 'mango'] >>> l.append('kiwi') >>> print(l) # only 5 items in list ['orange', 'grape', 'banana', 'mango', 'kiwi'] So, in Python, is there any way to achieve what is demonstrated above? The variable does not need to be a list, I just used it as an example.", "output": "You might want to use a collections.deque object with the maxlen constructor argument instead: >>> l = collections.deque(maxlen=5) >>> l.append('apple') >>> l.append('orange') >>> l.append('grape') >>> l.append('banana') >>> l.append('mango') >>> print(l) deque(['apple', 'orange', 'grape', 'banana', 'mango'], maxlen=5) >>> l.append('kiwi') >>> print(l) # only 5 items in list deque(['orange', 'grape', 'banana', 'mango', 'kiwi'], maxlen=5)"} +{"question_id": 70882092, "score": 101, "creation_date": 1643300597, "tags": ["python", "cpython", "python-internals"], "instruction": "Can we make 1 == 2 true?\n\nPython ints are objects that encapsulate the actual number value. Can we mess with that value, for example setting the value of the object 1 to 2? So that 1 == 2 becomes True?", "output": "Yes, we can. But don't do this at home. Seriously, the 1 object is used in many places and I have no clue what this might break and what that might do to your computer. I reject all responsibility. But I found it interesting to learn about these things. The id function gives us the memory address and the ctypes module lets us mess with memory: import ctypes ctypes.memmove(id(1) + 24, id(2) + 24, 4) print(1 == 2) x = 40 print(x + 1) Output: True 42 Try it online!. I tried it there because such sites have got to be protected from our hacking anyway. More explanation / analysis: The memmove copied the value from the 2 object into the 1 object. Their size is 28 bytes each, but I skipped the first 24 bytes, because that's the object's reference count, type address, and value size, as we can view/verify as well: import ctypes, struct, sys x = 1 data = ctypes.string_at(id(x), 28) ref_count, type_address, number_of_digits, lowest_digit = \\ struct.unpack('qqqi', data) print('reference count: ', ref_count, sys.getrefcount(x)) print('type address: ', type_address, id(type(x))) print('number of digits:', number_of_digits, -(-x.bit_length() // 30)) print('lowest digit: ', lowest_digit, x % 2**30) Output (Try it online!): reference count: 135 138 type address: 140259718753696 140259718753696 number of digits: 1 1 lowest digit: 1 1 The reference count gets increased by the getrefcount call, but I don't know why by 3. Anyway, ~134 things other than us reference the 1 object, and we're potentially messing all of them up, so... really don't try this at home. The \"digits\" refer to how CPython stores ints as digits in base 230. For example, x = 2 ** 3000 has 101 such digits. 
Output for x = 123 ** 456 for a better test: reference count: 1 2 type address: 140078560107936 140078560107936 number of digits: 106 106 lowest digit: 970169057 970169057"} +{"question_id": 53266350, "score": 101, "creation_date": 1542040199, "tags": ["python", "machine-learning", "deep-learning", "pytorch", "gpu"], "instruction": "How to tell PyTorch to not use the GPU?\n\nI want to do some timing comparisons between CPU & GPU as well as some profiling and would like to know if there's a way to tell pytorch to not use the GPU and instead use the CPU only? I realize I could install another CPU-only pytorch, but hoping there's an easier way.", "output": "Before running your code, run this shell command to tell torch that there are no GPUs: export CUDA_VISIBLE_DEVICES=\"\" This will tell it to use only one GPU (the one with id 0) and so on: export CUDA_VISIBLE_DEVICES=\"0\""} +{"question_id": 14849293, "score": 101, "creation_date": 1360744022, "tags": ["python", "list", "substring", "enumerate"], "instruction": "Find all index position in list based on partial string inside item in list\n\nmylist = [\"aa123\", \"bb2322\", \"aa354\", \"cc332\", \"ab334\", \"333aa\"] I need the index position of all items that contain 'aa'. I'm having trouble combining enumerate() with partial string matching. I'm not even sure if I should be using enumerate. I just need to return the index positions: 0,2,5", "output": "You can use enumerate inside a list-comprehension: indices = [i for i, s in enumerate(mylist) if 'aa' in s]"} +{"question_id": 26305704, "score": 101, "creation_date": 1412965328, "tags": ["python", "linear-programming", "glpk", "integer-programming"], "instruction": "Python Mixed Integer Linear Programming\n\nAre there any Mixed Integer Linear Programming(MILP) solver for Python? Can GLPK python solve MILP problem? I read that it can solve Mixed integer problem. I am very new to linear programming problem. So i am rather confused and cant really differentiate if Mixed Integer Programming is different from Mixed Integer Linear programming(MILP).", "output": "Pulp is a python modeling interface that hooks up to solvers like CBC(open source), CPLEX (commercial), Gurobi(commercial), XPRESS-MP(commercial) and YALMIP(open source). You can also use Pyomo to model the optimization problem and then call an external solver, namely CPLEX, Gurobi GLPK and the AMPL solver library. You can also call GLPK from GLPK/Python, PyGLPK or PyMathProg. Yet another modelling language is CMPL, which has a python interface for MIP solvers (for linear programs only). All the above solvers solve Mixed Integer Linear Programs, while some of them (CPLEX, GUROBI and XRESS-MP for sure) can solve Mixed Integer Quadratic Programs and Quadratically constrained quadratic programs (and also conic programs but this probably goes beyond the scope of this question). MIP refers to Mixed integer programs, but it is commonly used to refer to linear programs only. To make the terminology more precise, one should always refer to MILP or MINLP (Mixed integer non-linear programming). Note that CPLEX and GUROBI have their own python APIs as well, but they (and also) XPRESS-MP are commercial products, but free for academic research. CyLP is similar to Pulp above but interfaces with the COIN-OR solvers CBC and CGL and CLP. Note that there is a big difference in the performance of commercial and free solvers: the latter are falling behind the former by a large margin. SCIP is perhaps the best non-commercial solver (see below for an update). 
Its python interface, PySCIPOpt, is here. Also, have a look at this SO question. Finally, if you are interested at a simple constraint solver (not optimization) then have a look at python-constraint. UPDATES More solvers and python interfaces that fell into my radar: Update: MIPCL links appear to be broken. MIPCL, which appears to be the fastest non-commercial MIP solver, has a python interface that has quite good documentation. Note, however, that the Python API does not include the advanced functionality that comes together with the native MIPCLShell. I particularly like the MIPCL-PY manual, which demonstrates an array of models used in Operations Management, on top of some small-scale implementations. It is a very interesting introductory manual in its own right, regardless of which solver/API one may want to make use of. Google Optimization Tools, which include a multitude of functionalities, such as A constraint programming solver and a linear programming (not MIP) solver An interface for MIP solvers (supports CBC, CLP, GLOP, GLPK, Gurobi, CPLEX, and SCIP) Specialized algorithms for graphs, for the Travelling Salesman Problem, the Vehicle Routing problem and for Bin packing & Knapsack problems It has extensive documentation of several traditional OR problems and simple implementations. I could not find a complete Python API documentation, although there exist some examples here. It is somewhat unclear to me how other solvers hook up on the interface and whether methods of these solvers are available. CVXOPT, an open-source package for convex optimization, which interfaces to GLPK (open source) and MOSEK (commercial). It is versatile, as it can tackle many problem classes (notably linear, second-order, semidefinite, convex nonlinear). The only disadvantage is that it modeling complex problems may be cumbersome, as the user needs to pass the data in a \"Matlab-y\" fashion (i.e., to specify the matrix, rhs vectors, etc). However, it can be called from the modeling interfaces PICOS and... CVXPY, a python-embedded optimization language for convex optimization problems, which contains CVXOPT as a default solver, but it can hook up to the usual MIP solvers. Thanks to RedPanda for pointing out that CVXOPT/CVXPY support MIP solvers as well. For a very comprehensive article on optimization modeling capabilities of packages and object-oriented languages (not restricted to Python), check this article."} +{"question_id": 10494312, "score": 101, "creation_date": 1336461576, "tags": ["python", "datetime", "datetime-parsing"], "instruction": "Parsing time string in Python\n\nI have a date time string that I don't know how to parse it in Python. The string is like this: Tue May 08 15:14:45 +0800 2012 I tried datetime.strptime(\"Tue May 08 15:14:45 +0800 2012\",\"%a %b %d %H:%M:%S %z %Y\") but Python raises 'z' is a bad directive in format '%a %b %d %H:%M:%S %z %Y' According to Python doc: %z UTC offset in the form +HHMM or -HHMM (empty string if the the object is naive). What is the right format to parse this time string?", "output": "datetime.datetime.strptime has problems with timezone parsing. 
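(A side note before the dateutil suggestion: on modern Python 3, strptime's %z does accept numeric offsets such as +0800, so the original format string works there; a quick sketch:

from datetime import datetime

dt = datetime.strptime('Tue May 08 15:14:45 +0800 2012', '%a %b %d %H:%M:%S %z %Y')
print(dt.utcoffset())   # 8:00:00

dateutil remains handy when the format is not known in advance.)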
Have a look at the dateutil package: >>> from dateutil import parser >>> parser.parse(\"Tue May 08 15:14:45 +0800 2012\") datetime.datetime(2012, 5, 8, 15, 14, 45, tzinfo=tzoffset(None, 28800))"} +{"question_id": 2755201, "score": 101, "creation_date": 1272837991, "tags": ["python", "syntax", "string-formatting", "delimiter", "missing-data"], "instruction": "str.format() raises KeyError\n\nThe following code raises a KeyError exception: addr_list_formatted = [] addr_list_idx = 0 for addr in addr_list: # addr_list is a list addr_list_idx = addr_list_idx + 1 addr_list_formatted.append(\"\"\" \"{0}\" { \"gamedir\" \"str\" \"address\" \"{1}\" } \"\"\".format(addr_list_idx, addr)) Why? I am using Python 3.1.", "output": "The problem is that those { and } characters you have there don't specify a key for formatting. You need to double them up, so change your code to: addr_list_formatted.append(\"\"\" \"{0}\" {{ \"gamedir\" \"str\" \"address\" \"{1}\" }} \"\"\".format(addr_list_idx, addr))"} +{"question_id": 19618912, "score": 101, "creation_date": 1382882598, "tags": ["python", "pandas", "dataframe", "intersect"], "instruction": "Finding common rows (intersection) in two Pandas dataframes\n\nAssume I have two dataframes of this format (call them df1 and df2): +------------------------+------------------------+--------+ | user_id | business_id | rating | +------------------------+------------------------+--------+ | rLtl8ZkDX5vH5nAx9C3q5Q | eIxSLxzIlfExI6vgAbn2JA | 4 | | C6IOtaaYdLIT5fWd7ZYIuA | eIxSLxzIlfExI6vgAbn2JA | 5 | | mlBC3pN9GXlUUfQi1qBBZA | KoIRdcIfh3XWxiCeV1BDmA | 3 | +------------------------+------------------------+--------+ I'm looking to get a dataframe of all the rows that have a common user_id in df1 and df2. (ie. if a user_id is in both df1 and df2, include the two rows in the output dataframe) I can think of many ways to approach this, but they all strike me as clunky. For example, we could find all the unique user_ids in each dataframe, create a set of each, find their intersection, filter the two dataframes with the resulting set and concatenate the two filtered dataframes. Maybe that's the best approach, but I know Pandas is clever. Is there a simpler way to do this? I've looked at merge but I don't think that's what I need.", "output": "My understanding is that this question is better answered over in this post. But briefly, the answer to the OP with this method is simply: s1 = pd.merge(df1, df2, how='inner', on=['user_id']) Which gives s1 with 5 columns: user_id and the other two columns from each of df1 and df2."} +{"question_id": 4806911, "score": 101, "creation_date": 1296058661, "tags": ["python", "string", "comparison"], "instruction": "How are strings compared?\n\nI'm wondering how Python does string comparison, more specifically how it determines the outcome when a less than < or greater than > operator is used. For instance if I put print('abc' < 'bac') I get True. I understand that it compares corresponding characters in the string, however its unclear as to why there is more, for lack of a better term, \"weight\" placed on the fact that a is less thanb (first position) in first string rather than the fact that a is less than b in the second string (second position). Many people ask this question when the strings contain representations of numbers, and want to compare the numbers by numeric value. The straightforward solution is to convert the values first. See How do I parse a string to a float or int? . 
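For the common-rows question just above, a filtering alternative that keeps only df1's columns is isin; a small sketch with made-up toy frames:

import pandas as pd

df1 = pd.DataFrame({'user_id': ['a', 'b', 'c'], 'rating': [4, 5, 3]})
df2 = pd.DataFrame({'user_id': ['b', 'c', 'd'], 'rating': [2, 1, 5]})

common = df1[df1['user_id'].isin(df2['user_id'])]   # rows of df1 whose user_id also appears in df2
print(common)

Back to the string-comparison question: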
If there are multiple numbers in a list or other collection, see How can I collect the results of a repeated calculation in a list, dictionary etc. (or make a copy of a list with each element modified)? for batch conversion. If you are trying to compare strings that contain digit sequences, treating the digits as if they were numeric (sometimes called \"natural sort\"), see Is there a built in function for string natural sort? .", "output": "From the docs: The comparison uses lexicographical ordering: first the first two items are compared, and if they differ this determines the outcome of the comparison; if they are equal, the next two items are compared, and so on, until either sequence is exhausted. Also: Lexicographical ordering for strings uses the Unicode code point number to order individual characters. or on Python 2: Lexicographical ordering for strings uses the ASCII ordering for individual characters. As an example: >>> 'abc' > 'bac' False >>> ord('a'), ord('b') (97, 98) The result False is returned as soon as a is found to be less than b. The further items are not compared (as you can see for the second items: b > a is True). Be aware of lower and uppercase: >>> [(x, ord(x)) for x in abc] [('a', 97), ('b', 98), ('c', 99), ('d', 100), ('e', 101), ('f', 102), ('g', 103), ('h', 104), ('i', 105), ('j', 106), ('k', 107), ('l', 108), ('m', 109), ('n', 110), ('o', 111), ('p', 112), ('q', 113), ('r', 114), ('s', 115), ('t', 116), ('u', 117), ('v', 118), ('w', 119), ('x', 120), ('y', 121), ('z', 122)] >>> [(x, ord(x)) for x in abc.upper()] [('A', 65), ('B', 66), ('C', 67), ('D', 68), ('E', 69), ('F', 70), ('G', 71), ('H', 72), ('I', 73), ('J', 74), ('K', 75), ('L', 76), ('M', 77), ('N', 78), ('O', 79), ('P', 80), ('Q', 81), ('R', 82), ('S', 83), ('T', 84), ('U', 85), ('V', 86), ('W', 87), ('X', 88), ('Y', 89), ('Z', 90)] Specifically, this has the consequence of 'a' > 'A', 'b' > 'B', etc. including 'a' > 'Z' all evaluate to True as all lowercase characters from a to z have a higher code point number than all uppercase characters."} +{"question_id": 4995116, "score": 101, "creation_date": 1297704079, "tags": ["python", "parsing", "text", "beautifulsoup"], "instruction": "Only extracting text from this element, not its children\n\nI want to extract only the text from the top-most element of my soup; however soup.text gives the text of all the child elements as well: I have import BeautifulSoup soup=BeautifulSoup.BeautifulSoup('yesno') print soup.text The output to this is yesno. I want simply 'yes'. What's the best way of achieving this? Edit: I also want yes to be output when parsing 'noyes'.", "output": "In modern (as of 2023-06-17) BeautifulSoup4, given: from bs4 import BeautifulSoup node = BeautifulSoup(\"\"\"
<div><p>A</p>B<p>C</p>D</div>
\"\"\").find('div') Use the following to get direct children text elements (BD): s = \"\".join(node.find_all(string=True, recursive=False)) And the following to get all descendants text elements (ABCD): s = \"\".join(node.find_all(string=True, recursive=True))"} +{"question_id": 47034888, "score": 101, "creation_date": 1509451189, "tags": ["python", "tensorflow", "neural-network", "logistic-regression", "cross-entropy"], "instruction": "How to choose cross-entropy loss in TensorFlow?\n\nClassification problems, such as logistic regression or multinomial logistic regression, optimize a cross-entropy loss. Normally, the cross-entropy layer follows the softmax layer, which produces probability distribution. In tensorflow, there are at least a dozen of different cross-entropy loss functions: tf.losses.softmax_cross_entropy tf.losses.sparse_softmax_cross_entropy tf.losses.sigmoid_cross_entropy tf.contrib.losses.softmax_cross_entropy tf.contrib.losses.sigmoid_cross_entropy tf.nn.softmax_cross_entropy_with_logits tf.nn.sigmoid_cross_entropy_with_logits ... Which one works only for binary classification and which are suitable for multi-class problems? When should you use sigmoid instead of softmax? How are sparse functions different from others and why is it only softmax? Related (more math-oriented) discussion: What are the differences between all these cross-entropy losses in Keras and TensorFlow?.", "output": "Preliminary facts In functional sense, the sigmoid is a partial case of the softmax function, when the number of classes equals 2. Both of them do the same operation: transform the logits (see below) to probabilities. In simple binary classification, there's no big difference between the two, however in case of multinomial classification, sigmoid allows to deal with non-exclusive labels (a.k.a. multi-labels), while softmax deals with exclusive classes (see below). A logit (also called a score) is a raw unscaled value associated with a class, before computing the probability. In terms of neural network architecture, this means that a logit is an output of a dense (fully-connected) layer. Tensorflow naming is a bit strange: all of the functions below accept logits, not probabilities, and apply the transformation themselves (which is simply more efficient). Sigmoid functions family tf.nn.sigmoid_cross_entropy_with_logits tf.nn.weighted_cross_entropy_with_logits tf.losses.sigmoid_cross_entropy tf.contrib.losses.sigmoid_cross_entropy (DEPRECATED) As stated earlier, sigmoid loss function is for binary classification. But tensorflow functions are more general and allow to do multi-label classification, when the classes are independent. In other words, tf.nn.sigmoid_cross_entropy_with_logits solves N binary classifications at once. The labels must be one-hot encoded or can contain soft class probabilities. tf.losses.sigmoid_cross_entropy in addition allows to set the in-batch weights, i.e. make some examples more important than others. tf.nn.weighted_cross_entropy_with_logits allows to set class weights (remember, the classification is binary), i.e. make positive errors larger than negative errors. This is useful when the training data is unbalanced. Softmax functions family tf.nn.softmax_cross_entropy_with_logits (DEPRECATED IN 1.5) tf.nn.softmax_cross_entropy_with_logits_v2 tf.losses.softmax_cross_entropy tf.contrib.losses.softmax_cross_entropy (DEPRECATED) These loss functions should be used for multinomial mutually exclusive classification, i.e. pick one out of N classes. 
Also applicable when N = 2. The labels must be one-hot encoded or can contain soft class probabilities: a particular example can belong to class A with 50% probability and class B with 50% probability. Note that strictly speaking it doesn't mean that it belongs to both classes, but one can interpret the probabilities this way. Just like in sigmoid family, tf.losses.softmax_cross_entropy allows to set the in-batch weights, i.e. make some examples more important than others. As far as I know, as of tensorflow 1.3, there's no built-in way to set class weights. [UPD] In tensorflow 1.5, v2 version was introduced and the original softmax_cross_entropy_with_logits loss got deprecated. The only difference between them is that in a newer version, backpropagation happens into both logits and labels (here's a discussion why this may be useful). Sparse functions family tf.nn.sparse_softmax_cross_entropy_with_logits tf.losses.sparse_softmax_cross_entropy tf.contrib.losses.sparse_softmax_cross_entropy (DEPRECATED) Like ordinary softmax above, these loss functions should be used for multinomial mutually exclusive classification, i.e. pick one out of N classes. The difference is in labels encoding: the classes are specified as integers (class index), not one-hot vectors. Obviously, this doesn't allow soft classes, but it can save some memory when there are thousands or millions of classes. However, note that logits argument must still contain logits per each class, thus it consumes at least [batch_size, classes] memory. Like above, tf.losses version has a weights argument which allows to set the in-batch weights. Sampled softmax functions family tf.nn.sampled_softmax_loss tf.contrib.nn.rank_sampled_softmax_loss tf.nn.nce_loss These functions provide another alternative for dealing with huge number of classes. Instead of computing and comparing an exact probability distribution, they compute a loss estimate from a random sample. The arguments weights and biases specify a separate fully-connected layer that is used to compute the logits for a chosen sample. Like above, labels are not one-hot encoded, but have the shape [batch_size, num_true]. Sampled functions are only suitable for training. In test time, it's recommended to use a standard softmax loss (either sparse or one-hot) to get an actual distribution. Another alternative loss is tf.nn.nce_loss, which performs noise-contrastive estimation (if you're interested, see this very detailed discussion). I've included this function to the softmax family, because NCE guarantees approximation to softmax in the limit."} +{"question_id": 3521715, "score": 101, "creation_date": 1282220788, "tags": ["python"], "instruction": "Call a Python method by name\n\nIf I have an object and a method name in a string, how can I call the method? class Foo: def bar1(self): print 1 def bar2(self): print 2 def callMethod(o, name): ??? f = Foo() callMethod(f, \"bar1\")", "output": "Use the built-in getattr() function: class Foo: def bar1(self): print(1) def bar2(self): print(2) def call_method(o, name): return getattr(o, name)() f = Foo() call_method(f, \"bar1\") # prints 1 You can also use setattr() for setting class attributes by names."} +{"question_id": 2400643, "score": 101, "creation_date": 1268044455, "tags": ["python", "json", "large-files"], "instruction": "Is there a memory efficient and fast way to load big JSON files?\n\nI have some json files with 500MB. If I use the \"trivial\" json.load() to load its content all at once, it will consume a lot of memory. 
Is there a way to read partially the file? If it was a text, line delimited file, I would be able to iterate over the lines. I am looking for analogy to it.", "output": "Update See the other answers for advice. Original answer from 2010, now outdated Short answer: no. Properly dividing a json file would take intimate knowledge of the json object graph to get right. However, if you have this knowledge, then you could implement a file-like object that wraps the json file and spits out proper chunks. For instance, if you know that your json file is a single array of objects, you could create a generator that wraps the json file and returns chunks of the array. You would have to do some string content parsing to get the chunking of the json file right. I don't know what generates your json content. If possible, I would consider generating a number of managable files, instead of one huge file."} +{"question_id": 73600082, "score": 101, "creation_date": 1662300788, "tags": ["python", "pip", "setuptools", "pyproject.toml"], "instruction": "How to reference a requirements.txt in the pyproject.toml of a setuptools project?\n\nI'm trying to migrate a setuptools-based project from the legacy setup.py towards modern pyproject.toml configuration. At the same time I want to keep well established workflows based on pip-compile, i.e., a requirements.in that gets compiled to a requirements.txt (for end-user / non-library projects of course). This has important benefits as a result of the full transparency: 100% reproducible installs due to pinning the full transitive closure of dependencies. better understanding of dependency conflicts in the transitive closure of dependencies. For this reason I don't want to maintain the dependencies directly inside the pyproject.toml via a dependencies = [] list, but rather externally in the pip-compiled managed requirements.txt. This makes me wonder: Is there a way to reference a requirements.txt file in the pyproject.toml configuration, without having to fallback to a setup.py script?", "output": "In setuptools 62.6 the file directive was made available for dependencies and optional-dependencies. Use dynamic metadata: [project] dynamic = [\"dependencies\"] [tool.setuptools.dynamic] dependencies = {file = [\"requirements.txt\"]} Note that the referenced file will use a requirements.txt-like syntax; each line must conform to PEP 508, so flags like -r, -c, and -e are not supported inside this requirements.txt. Also note that this capability is still technically in beta. Additionally: If you are using an old version of setuptools, you might need to ensure that all files referenced by the file directive are included in the sdist (you can do that via MANIFEST.in or using plugins such as setuptools-scm, please have a look on [sic] Controlling files in the distribution for more information). Changed in version 66.1.0: Newer versions of setuptools will automatically add these files to the sdist. 
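For illustration, a hypothetical requirements.txt referenced this way would contain nothing but plain PEP 508 requirement lines, one per line (the package names and pins here are purely illustrative):
requests==2.31.0
click>=8.1,<9
packaging>=23.0
Pinned or ranged specifiers and environment markers are fine; pip-specific flags such as -r, -c or -e are exactly what cannot appear in a file consumed through the file directive, as noted above.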
If you want to use optional-dependencies, say, with a requirements-dev.txt, you will need to put an extra group, as follows (credit to Billbottom): [project] dynamic = [\"dependencies\", \"optional-dependencies\"] [tool.setuptools.dynamic] dependencies = {file = [\"requirements.txt\"]} optional-dependencies = {dev = { file = [\"requirements-dev.txt\"] }} However: Currently, when specifying optional-dependencies dynamically, all of the groups must be specified dynamically; one can not specify some of them statically and some of them dynamically."} +{"question_id": 7110604, "score": 101, "creation_date": 1313683836, "tags": ["python", "debian", "packaging", "distutils", "debhelper"], "instruction": "Is there a standard way to create Debian packages for distributing Python programs?\n\nThere is a ton of information on how to do this, but since \"there is more than one way to skin a cat\", and all the tutorials/manuals that cover a bit of the process seem to make certain assumptions which are different from other tutorials, I still didn't manage to grasp it. So far this is what I think I understood. My final goal should be that of creating a \"binary\" .deb package. Such package will be platform-independent (32/64 bit) as all Python programs are such. To create a \"binary\" package I need first to create a source package. To create the source package I can use either CDBS or debhelper. Debhelper is the recommended way for beginners. The core of creating a source package is populating the DEBIAN directory in the source directory with a number of files clarifying where files need to be copied, what copyright and licensing scheme they are subject to, what dependencies they have, etc... Step #4 can be largely automated the dh_makecommand if the Python source also comes with a distutils' setup.py script. Now my questions: Is my understanding of the process correct? Is there anything I am missing, or anything that I got wrong? Step #5 is really the more confusing to me: specifically the two points that remains most obscure to me are: How do I write a setup.py script that install a stand-alone programme? EDIT: By standalone programme I mean a program intended to be used by a desktop user (as opposed to a module which I understand like a collection of functionality to be used by other software after having been imported). In my specific case I would actually need two such \"programs\": the main software and a separate utility (in effect a second \"program\" that should be in the same package with the other one). What are the specificities of such a script for DEB packages? The official documentation only seems to deal with RPM and Windows stuff... BTW: These are the best sources of information that I could find myself so far. If you have anything better than this, please share! :) Ubuntu's Python packaging guide Creating a .deb package from a python setup.py (it shows the steps, but it doesn't explain them enough for me to follow along) ShowMeDo video on \"creating a .deb package out of a python program\" (it doesn't seem up-to-date and - if I got it right - will produce packages for personal use, without dependencies and without a signed changelog and other key data that will make it incompatible with the Debian policy).", "output": "It looks like stdeb will do what you want. 
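As a rough sketch of the classic stdeb workflow (this assumes the older setup.py-based interface; check the stdeb documentation for your version, since the interface has changed over the years):
pip install stdeb
python setup.py --command-packages=stdeb.command bdist_deb
# the Debianized source tree and the resulting .deb are written under deb_dist/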
Also, for installing scripts, I strongly recommend distribute's console_scripts entry point support."} +{"question_id": 1714027, "score": 100, "creation_date": 1257931319, "tags": ["python", "string-comparison"], "instruction": "Version number comparison in Python\n\nI want to write a cmp-like function that compares two version numbers and returns -1, 0, or 1 based on their compared values. Return -1 if version A is older than version B Return 0 if versions A and B are equivalent Return 1 if version A is newer than version B Each subsection is supposed to be interpreted as a number, therefore 1.10 > 1.1. Desired function outputs are mycmp('1.0', '1') == 0 mycmp('1.0.0', '1') == 0 mycmp('1', '1.0.0.1') == -1 mycmp('12.10', '11.0.0.0.0') == 1 ... And here is my implementation, open for improvement: def mycmp(version1, version2): parts1 = [int(x) for x in version1.split('.')] parts2 = [int(x) for x in version2.split('.')] # fill up the shorter version with zeros ... lendiff = len(parts1) - len(parts2) if lendiff > 0: parts2.extend([0] * lendiff) elif lendiff < 0: parts1.extend([0] * (-lendiff)) for i, p in enumerate(parts1): ret = cmp(p, parts2[i]) if ret: return ret return 0 I'm using Python 2.4.5 btw. (installed at my working place ...). Here's a small 'test suite' you can use assert mycmp('1', '2') == -1 assert mycmp('2', '1') == 1 assert mycmp('1', '1') == 0 assert mycmp('1.0', '1') == 0 assert mycmp('1', '1.000') == 0 assert mycmp('12.01', '12.1') == 0 assert mycmp('13.0.1', '13.00.02') == -1 assert mycmp('1.1.1.1', '1.1.1.1') == 0 assert mycmp('1.1.1.2', '1.1.1.1') == 1 assert mycmp('1.1.3', '1.1.3.000') == 0 assert mycmp('3.1.1.0', '3.1.2.10') == -1 assert mycmp('1.1', '1.10') == -1", "output": "Remove the uninteresting part of the string (trailing zeroes and dots), and then compare the lists of numbers. import re def mycmp(version1, version2): def normalize(v): return [int(x) for x in re.sub(r'(\\.0+)*$','', v).split(\".\")] return cmp(normalize(version1), normalize(version2)) This is the same approach as P\u00e4r Wieslander, but a bit more compact: Here are some tests, thanks to \"How to compare two strings in dot separated version format in Bash?\": assert mycmp(\"1\", \"1\") == 0 assert mycmp(\"2.1\", \"2.2\") < 0 assert mycmp(\"3.0.4.10\", \"3.0.4.2\") > 0 assert mycmp(\"4.08\", \"4.08.01\") < 0 assert mycmp(\"3.2.1.9.8144\", \"3.2\") > 0 assert mycmp(\"3.2\", \"3.2.1.9.8144\") < 0 assert mycmp(\"1.2\", \"2.1\") < 0 assert mycmp(\"2.1\", \"1.2\") > 0 assert mycmp(\"5.6.7\", \"5.6.7\") == 0 assert mycmp(\"1.01.1\", \"1.1.1\") == 0 assert mycmp(\"1.1.1\", \"1.01.1\") == 0 assert mycmp(\"1\", \"1.0\") == 0 assert mycmp(\"1.0\", \"1\") == 0 assert mycmp(\"1.0\", \"1.0.1\") < 0 assert mycmp(\"1.0.1\", \"1.0\") > 0 assert mycmp(\"1.0.2.0\", \"1.0.2\") == 0"} +{"question_id": 46580253, "score": 100, "creation_date": 1507188843, "tags": ["python", "apache-spark", "pyspark"], "instruction": "collect_list by preserving order based on another variable\n\nI am trying to create a new column of lists in Pyspark using a groupby aggregation on existing set of columns. An example input data frame is provided below: ------------------------ id | date | value ------------------------ 1 |2014-01-03 | 10 1 |2014-01-04 | 5 1 |2014-01-05 | 15 1 |2014-01-06 | 20 2 |2014-02-10 | 100 2 |2014-03-11 | 500 2 |2014-04-15 | 1500 The expected output is: id | value_list ------------------------ 1 | [10, 5, 15, 20] 2 | [100, 500, 1500] The values within a list are sorted by the date. 
I tried using collect_list as follows: from pyspark.sql import functions as F ordered_df = input_df.orderBy(['id','date'],ascending = True) grouped_df = ordered_df.groupby(\"id\").agg(F.collect_list(\"value\")) But collect_list doesn't guarantee order even if I sort the input data frame by date before aggregation. Could someone help on how to do aggregation by preserving the order based on a second (date) variable?", "output": "EDIT: pyspark.sql.functions.array_sort was added in PySpark 2.4, which operates exactly the same as the sorter UDF defined below and will generally be more performant. Leaving the old answer for posterity. For PySpark <2.4 only: If you collect both dates and values as a list, you can sort the resulting column according to date using and udf, and then keep only the values in the result. import operator import pyspark.sql.functions as F # create list column grouped_df = input_df.groupby(\"id\") \\ .agg(F.collect_list(F.struct(\"date\", \"value\")) \\ .alias(\"list_col\")) # define udf def sorter(l): res = sorted(l, key=operator.itemgetter(0)) return [item[1] for item in res] sort_udf = F.udf(sorter) # test grouped_df.select(\"id\", sort_udf(\"list_col\") \\ .alias(\"sorted_list\")) \\ .show(truncate = False) +---+----------------+ |id |sorted_list | +---+----------------+ |1 |[10, 5, 15, 20] | |2 |[100, 500, 1500]| +---+----------------+"} +{"question_id": 2512386, "score": 100, "creation_date": 1269476669, "tags": ["python", "csv", "merge", "concatenation"], "instruction": "How can I merge 200 CSV files in Python?\n\nI here have 200 separate CSV files named from SH (1) to SH (200). I want to merge them into a single CSV file. How can I do it?", "output": "As ghostdog74 said, but this time with headers: with open(\"out.csv\", \"ab\") as fout: # First file: with open(\"sh1.csv\", \"rb\") as f: fout.writelines(f) # Now the rest: for num in range(2, 201): with open(\"sh\" + str(num) + \".csv\", \"rb\") as f: next(f) # Skip the header, portably fout.writelines(f)"} +{"question_id": 802578, "score": 100, "creation_date": 1241014867, "tags": ["python", "attributes", "immutability"], "instruction": "In Python, how can I make unassignable attributes (like ones marked with `final` in Java)?\n\nIs there anything in Python that works like the final keyword in Java - i.e., to disallow assigning to a specific attribute of the instances of a class, after those instances have been created? I couldn't find anything like this in the documentation. I'm creating a snapshot of an object (used for restoration if anything fails); once this backup variable is assigned, it should not be modified -- a final-like feature in Python would be nice for this.", "output": "There is no final equivalent in Python. 
To create read-only fields of class instances, you can use the property function, or you could do something like this: class WriteOnceReadWhenever: def __setattr__(self, attr, value): if hasattr(self, attr): raise Exception(\"Attempting to alter read-only value\") self.__dict__[attr] = value Also note that while there's @typing.final as of Python 3.8 (as Cerno mentions), that will not actually make values final at runtime."} +{"question_id": 1166118, "score": 100, "creation_date": 1248276581, "tags": ["python", "decorator"], "instruction": "How to strip decorators from a function in Python\n\nLet's say I have the following: def with_connection(f): def decorated(*args, **kwargs): f(get_connection(...), *args, **kwargs) return decorated @with_connection def spam(connection): # Do something I want to test the spam function without going through the hassle of setting up a connection (or whatever the decorator is doing). Given spam, how do I strip the decorator from it and get the underlying \"undecorated\" function?", "output": "In Python 2, in the general case, you can't, because @with_connection def spam(connection): # Do something is equivalent to def spam(connection): # Do something spam = with_connection(spam) which means that the \"original\" spam might not even exist anymore. A (not too pretty) hack would be this: def with_connection(f): def decorated(*args, **kwargs): f(get_connection(...), *args, **kwargs) decorated._original = f return decorated @with_connection def spam(connection): # Do something spam._original(testcon) # calls the undecorated function"} +{"question_id": 25250998, "score": 100, "creation_date": 1407785722, "tags": ["python", "multiple-instances", "spyder"], "instruction": "How do I run two separate instances of Spyder\n\nI want to be able to have two instances which are completely independent in the sense that I can be working on two separate unrelated projects in different folders without any interference.", "output": "(Spyder maintainer here) This is easy. You need to go to: Tools > Preferences > Application in Spyder 5, or Tools > Preferences > General in Spyder 4, click the \"Advanced Settings\" tab, and deactivate the option called [ ] Use a single instance Then every time you start Spyder a new window will be opened. If you want the old behavior back, just activate that option again."} +{"question_id": 4518641, "score": 100, "creation_date": 1293106405, "tags": ["python", "floating-point"], "instruction": "How to round a floating point number up to a certain decimal place?\n\nSuppose I have 8.8333333333333339, and I want to convert it to 8.84. How can I accomplish this in Python? round(8.8333333333333339, 2) gives 8.83 and not 8.84. I am new to Python or programming in general. I don't want to print it as a string, and the result will be further used. For more information on the problem, please check Tim Wilson's Python Programming Tips: Loan and payment calculator.", "output": "8.833333333339 (or 8.833333333333334, the result of 106.00/12) properly rounded to two decimal places is 8.83. Mathematically it sounds like what you want is a ceiling function. 
The one in Python's math module is named ceil: import math v = 8.8333333333333339 print(math.ceil(v*100)/100) # -> 8.84 Respectively, the floor and ceiling functions generally map a real number to the largest previous or smallest following integer which has zero decimal places \u2014 so to use them for 2 decimal places the number is first multiplied by 102 (or 100) to shift the decimal point and is then divided by it afterwards to compensate. If you don't want to use the math module for some reason, you can use this (minimally tested) implementation I just wrote: def ceiling(x): n = int(x) return n if n-1 < x <= n else n+1 How all this relates to the linked Loan and payment calculator problem: From the sample output it appears that they rounded up the monthly payment, which is what many call the effect of the ceiling function. This means that each month a little more than 1\u204412 of the total amount is being paid. That made the final payment a little smaller than usual \u2014 leaving a remaining unpaid balance of only 8.76. It would have been equally valid to use normal rounding producing a monthly payment of 8.83 and a slightly higher final payment of 8.87. However, in the real world people generally don't like to have their payments go up, so rounding up each payment is the common practice \u2014 it also returns the money to the lender more quickly."} +{"question_id": 35104097, "score": 100, "creation_date": 1454171102, "tags": ["python", "postgresql", "psycopg2"], "instruction": "How to install psycopg2 with pg_config error?\n\nI've tried to install psycopg2 (PostgreSQL Database adapater) from this site, but when I try to install after I cd into the package and write python setup.py install I get the following error: Please add the directory containing pg_config to the PATH or specify the full executable path with the option: python setup.py build_ext --pg-config /path/to/pg_config build ... or with the pg_config option in 'setup.cfg'. I've also tried 'sudo pip install psycopg2' and I got the same message. After reading through the docs, it asks to look at the setup.cfg file (which is below): [build_ext] define= # PSYCOPG_DISPLAY_SIZE enable display size calculation (a little slower) # HAVE_PQFREEMEM should be defined on PostgreSQL >= 7.4 # PSYCOPG_DEBUG can be added to enable verbose debug information # \"pg_config\" is required to locate PostgreSQL headers and libraries needed to # build psycopg2. If pg_config is not in the path or is installed under a # different name uncomment the following option and set it to the pg_config # full path. #pg_config= # Set to 1 to use Python datetime objects for default date/time representation. use_pydatetime=1 # If the build system does not find the mx.DateTime headers, try # uncommenting the following line and setting its value to the right path. #mx_include_dir= # For Windows only: # Set to 1 if the PostgreSQL library was built with OpenSSL. # Required to link in OpenSSL libraries and dependencies. have_ssl=0 # Statically link against the postgresql client library. #static_libpq=1 # Add here eventual extra libraries required to link the module. #libraries= However, I'm not sure if I'm suppose to edit this file, since the documentation states the following: then take a look at the setup.cfg file. Some of the options available in setup.cfg are also available as command line arguments of the build_ext sub-command. 
For instance you can specify an alternate pg_config version using: $ python setup.py build_ext --pg-config /path/to/pg_config build Use python setup.py build_ext --help to get a list of the options supported. I've gotten the list of options supported but I'm not sure where to go from there", "output": "If you are on Ubuntu or any other debian-based distro, try sudo apt-get install python3-psycopg2 Otherwise, you need to find and install the Postgresql client packages for your distribution. psycopg2 installation from source"} +{"question_id": 55319949, "score": 100, "creation_date": 1553390778, "tags": ["python", "python-imaging-library"], "instruction": "PIL TypeError: Cannot handle this data type\n\nI have an image stored in a numpy array that I want to convert to PIL.Image in order to perform an interpolation only available with PIL. When trying to convert it through Image.fromarray() it raises the following error: TypeError: Cannot handle this data type I have read the answers here and here but they do not seem to help in my situation. What I'm trying to run: from PIL import Image x # a numpy array representing an image, shape: (256, 256, 3) Image.fromarray(x)", "output": "tl;dr Does x contain uint values in [0, 255]? If not and especially if x ranges from 0 to 1, that is the reason for the error. Explanation Most image libraries (e.g. matplotlib, opencv, scikit-image) have two ways of representing images: as uint with values ranging from 0 to 255. as float with values ranging from 0 to 1. The latter is more convenient when performing operations between images and thus is more popular in the field of Computer Vision. However PIL seems to not support it for RGB images. If you take a look here it seems that when you try to read an image from an array, if the array has a shape of (height, width, 3) it automatically assumes it's an RGB image and expects it to have a dtype of uint8! In your case, however, you have an RGB image with float values from 0 to 1. Solution You can fix it by converting your image to the format expected by PIL: im = Image.fromarray((x * 255).astype(np.uint8))"} +{"question_id": 3852780, "score": 100, "creation_date": 1286165539, "tags": ["python", "list", "set", "intersection"], "instruction": "Python -Intersection of multiple lists?\n\nI am playing with python and am able to get the intersection of two lists: result = set(a).intersection(b) Now if d is a list containing a and b and a third element c, is there an built-in function for finding the intersection of all the three lists inside d? So for instance, d = [[1,2,3,4], [2,3,4], [3,4,5,6,7]] then the result should be [3,4]", "output": "for 2.4, you can just define an intersection function. def intersect(*d): sets = iter(map(set, d)) result = sets.next() for s in sets: result = result.intersection(s) return result for newer versions of python: the intersection method takes an arbitrary amount of arguments result = set(d[0]).intersection(*d[1:]) alternatively, you can intersect the first set with itself to avoid slicing the list and making a copy: result = set(d[0]).intersection(*d) I'm not really sure which would be more efficient and have a feeling that it would depend on the size of the d[0] and the size of the list unless python has an inbuilt check for it like if s1 is s2: return s1 in the intersection method. 
>>> d = [[1,2,3,4], [2,3,4], [3,4,5,6,7]] >>> set(d[0]).intersection(*d) set([3, 4]) >>> set(d[0]).intersection(*d[1:]) set([3, 4]) >>>"} +{"question_id": 28512710, "score": 100, "creation_date": 1423891241, "tags": ["python", "django", "django-admin"], "instruction": "How to add custom search box in Django-admin?\n\nI know this is gonna be a very basic question. In Django, I have successfully created an admin panel. Now I want to add a custom search box in one of my field namely Photo field. But I don't know how to add custom search box in a django-admin panel. If I get some proper hints than I believe that I can do it. Admin.py: from django.contrib import admin from photo.models import Photo class PhotoAdmin(admin.ModelAdmin): list_display = ('name', 'approved', 'approved_time', 'uploaded_time', 'user') models.py: class Photo(models.Model): name = models.CharField(max_length=100) photo = models.ImageField(upload_to='photos', blank=False, null=True) approved = models.BooleanField(default=False) approved_time = models.DateTimeField(auto_now=True,null=True, blank=True) uploaded_time = models.DateTimeField() description = models.CharField(max_length=500, blank=False , null=True) keyword = models.CharField(max_length=500, blank=False, null=True) image_id = models.CharField(max_length=300, blank=True, null=True) Certified = models.BooleanField(default=False) approved_by = models.CharField(max_length=100) user = models.ForeignKey(User) total_download = models.IntegerField(default=0) watermarked_image = models.ImageField(upload_to='temp', blank=True, null=True) I want to add a custom search box in this Photo field where image can be searched by it's ID. Now how can I add this search box in my above given model.", "output": "Use the search_fields attribute of the ModelAdmin: class PhotoAdmin(admin.ModelAdmin): ... search_fields = ['name', 'description', 'user__related_fieldname', 'keyword']"} +{"question_id": 24719368, "score": 100, "creation_date": 1405223431, "tags": ["python"], "instruction": "SyntaxError: non-default argument follows default argument\n\nfrom os import system def a(len1,hgt=len1,til,col=0): system('mode con cols='+len1,'lines='+hgt) system('title',til) system('color',col) a(64,25,\"hi\",\"0b\") input() When I run this, it rejects \"def a(...\" and highlights \"(\" in red. I have no clue why.", "output": "Let me clarify two points here: Firstly non-default argument should not follow the default argument, it means you can't define (a='b', c) in function. The correct order of defining parameter in function are: positional parameter or non-default parameter i.e (a, b, c) keyword parameter or default parameter i.e (a='b', r='j') keyword-only parameter i.e (*args) var-keyword parameter i.e (**kwargs) def example(a, b, c=None, r=\"w\", d=[], *ae, **ab): (a,b) are positional parameter (c=none) is optional parameter (r=\"w\") is keyword parameter (d=[]) is list parameter (*ae) is keyword-only (*ab) is var-keyword parameter so first re-arrange your parameters now the second thing is you have to define len1 when you are doing hgt=len1 the len1 argument is not defined when default values are saved, Python computes and saves default values when you define the function len1 is not defined, does not exist when this happens (it exists only when the function is executed) so second remove this \"len1 = hgt\" it's not allowed in python. 
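Putting the two fixes together, a minimal sketch of how the function from the question could be declared (the body is reduced to a print purely for illustration, and using None as a sentinel is just one common idiom for a default that depends on another argument):
def a(len1, til, hgt=None, col=0):
    # non-default parameters come first; hgt cannot default to len1
    # at definition time, so fall back to it inside the body instead
    if hgt is None:
        hgt = len1
    print(len1, til, hgt, col)

a(64, 'hi')            # hgt falls back to len1 (64)
a(64, 'hi', 25, '0b')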
keep in mind the difference between argument and parameters."} +{"question_id": 2988211, "score": 100, "creation_date": 1275901891, "tags": ["python", "file-io", "character"], "instruction": "How can I read a single character at a time from a file in Python?\n\nIn Python, given the name of a file, how can I write a loop that reads one character each time through the loop?", "output": "with open(filename) as f: while True: c = f.read(1) if not c: print(\"End of file\") break print(\"Read a character:\", c)"} +{"question_id": 30706799, "score": 100, "creation_date": 1433759430, "tags": ["python", "django", "geodjango"], "instruction": "Which Model Field to use in Django to store longitude and latitude values?\n\nI want to store my users location using longitude and latitude, at the moment this comes from Google Maps, but I will be using GeoDango and some point to work out distances between to points also. However, my first confusion is which field in Django I should be using to store the longitude and latitude values? The information I'm getting is conflicting. The official documentation uses a FloatField https://docs.djangoproject.com/en/dev/ref/contrib/gis/tutorial/#geographic-models lon = models.FloatField() lat = models.FloatField() Where almost every answer on stackoverflow shows a DecimalField long = models.DecimalField(max_digits=8, decimal_places=3) lat = models.DecimalField(max_digits=8, decimal_places=3) So what should I be using?", "output": "Float is generally an approximation, see here for some simple examples. You could get very nice results modifying your model to something like DecimalField(max_digits=9, decimal_places=6), since decimals are very important in coordinates but using more than 6 is basically meaningless."} +{"question_id": 53209127, "score": 100, "creation_date": 1541685115, "tags": ["python", "subprocess"], "instruction": "subprocess: unexpected keyword argument capture_output\n\nWhen executing subprocess.run() as given in the Python docs, I get a TypeError: >>> import subprocess >>> subprocess.run([\"ls\", \"-l\", \"/dev/null\"], capture_output=True) Traceback (most recent call last): File \"\", line 1, in File \"/usr/lib/python3.6/subprocess.py\", line 403, in run with Popen(*popenargs, **kwargs) as process: TypeError: __init__() got an unexpected keyword argument 'capture_output' I am running Python 3.6.6: $ python3 --version Python 3.6.6", "output": "You inspected the wrong documentation, for python-3.6 this parameter does not exist, as can be found in the documentation (you select the version at the top left): subprocess.run(args, *, stdin=None, input=None, stdout=None, stderr=None, shell=False, cwd=None, timeout=None, check=False, encoding=None, errors=None, env=None) You can however easily \"emulate\" this by setting both stdout and stderr to PIPE: from subprocess import PIPE subprocess.run([\"ls\", \"-l\", \"/dev/null\"], stdout=PIPE, stderr=PIPE) In fact, if we look at the source code of the python-3.7 version, where the feature was introduced, we see in the source code [GitHub]: if capture_output: if ('stdout' in kwargs) or ('stderr' in kwargs): raise ValueError('stdout and stderr arguments may not be used ' 'with capture_output.') kwargs['stdout'] = PIPE kwargs['stderr'] = PIPE"} +{"question_id": 6431973, "score": 100, "creation_date": 1308690401, "tags": ["python", "numpy"], "instruction": "How to copy data from a numpy array to another\n\nWhat is the fastest way to copy data from array b to array a, without modifying the address of array a. 
I need this because an external library (PyFFTW) uses a pointer to my array that cannot change. For example: a = numpy.empty(n, dtype=complex) for i in xrange(a.size): a[i] = b[i] Is it possible to do it without a loop?", "output": "I believe a = numpy.empty_like(b) a[:] = b will copy the values quickly. As Funsi mentions, recent versions of numpy also have the copyto function."} +{"question_id": 4587915, "score": 100, "creation_date": 1294085365, "tags": ["python"], "instruction": "Return list of items in list greater than some value\n\nj=[4,5,6,7,1,7,5] What's the simplest way to return [5,5,6,7,7] being the elements in j >=5?", "output": "You can use a list comprehension to filter it: j2 = [i for i in j if i >= 5] If you actually want it sorted like your example was, you can use sorted: j2 = sorted(i for i in j if i >= 5) Or call sort on the final list: j2 = [i for i in j if i >= 5] j2.sort()"} +{"question_id": 6494508, "score": 100, "creation_date": 1309185174, "tags": ["python", "list", "random"], "instruction": "How do you pick \"x\" number of unique numbers from a list in Python?\n\nI need to pick out \"x\" number of non-repeating, random numbers out of a list. For example: all_data = [1, 2, 2, 3, 4, 5, 6, 7, 8, 8, 9, 10, 11, 11, 12, 13, 14, 15, 15] How do I pick out a list like [2, 11, 15] and not [3, 8, 8]?", "output": "That's exactly what random.sample() does. >>> random.sample(range(1, 16), 3) [11, 10, 2] Edit: I'm almost certain this is not what you asked, but I was pushed to include this comment: If the population you want to take samples from contains duplicates, you have to remove them first: population = [1, 2, 3, 4, 5, 6, 5, 4, 3, 2, 1] population = list(set(population)) samples = random.sample(population, 3)"} +{"question_id": 14788181, "score": 100, "creation_date": 1360413184, "tags": ["python", "python-3.x", "django", "django-views", "python-class"], "instruction": "What are the advantages of class-based views compared to function-based views?\n\nI always use FBVs (Function Based Views) when creating a django app because it's very easy to handle. But most developers said that it's better to use CBVs (Class Based Views) and to use FBVs only for complicated views that would be a pain to implement with CBVs. Why? What are the advantages of using CBVs?", "output": "The single most significant advantage is inheritance. On a large project it's likely that you will have lots of similar views. Rather than write the same code again and again, you can simply have your views inherit from a base view. Also django ships with a collection of generic view classes that can be used to do some of the most common tasks. For example the DetailView class is used to pass a single object from one of your models, render it with a template and return the http response. You can plug it straight into your url conf: url(r'^author/(?P<pk>\\d+)/$', DetailView.as_view(model=Author)), Or you could extend it with custom functionality: class SpecialDetailView(DetailView): model = Author def get_context_data(self, *args, **kwargs): context = super(SpecialDetailView, self).get_context_data(*args, **kwargs) context['books'] = Book.objects.filter(popular=True) return context Now your template will be passed a collection of book objects for rendering. A nice place to start with this is having a good read of the docs (Django 4.0+). 
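For completeness, a small sketch (the app and module names are made up) of wiring the subclass above into a modern urls.py with path() instead of the old url() form:
from django.urls import path
from myapp.views import SpecialDetailView

urlpatterns = [
    # DetailView subclasses look up the object via the 'pk' URL keyword by default
    path('author/<int:pk>/', SpecialDetailView.as_view(), name='author-detail'),
]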
Update ccbv.co.uk has comprehensive and easy to use information about the class based views you already have available to you."} +{"question_id": 39280638, "score": 100, "creation_date": 1472762963, "tags": ["python", "cross-platform", "conda"], "instruction": "How to share conda environments across platforms\n\nThe conda docs at http://conda.pydata.org/docs/using/envs.html explain how to share environments with other people. However, the docs tell us this is not cross platform: NOTE: These explicit spec files are not usually cross platform, and therefore have a comment at the top such as # platform: osx-64 showing the platform where they were created. This platform is the one where this spec file is known to work. On other platforms, the packages specified might not be available or dependencies might be missing for some of the key packages already in the spec. NOTE: Conda does not check architecture or dependencies when installing from an explicit specification file. To ensure the packages work correctly, be sure that the file was created from a working environment and that it is used on the same architecture, operating system and platform, such as linux- 64 or osx-64. Is there a good method to share and recreate a conda environment in one platform (e.g. CentOS) in another platform (e.g. Windows)?", "output": "This answer is given with the assumption that you would like to make sure that the same versions of the packages that you generally care about are on different platforms and that you don't care about the exact same versions of all packages in the entire dependency tree. If you are trying to install the exact same version of all packages in your entire dependency tree that has a high likelihood of failure since some conda packages have different dependencies for osx/win/linux. For example, the recipe for otrobopt will install different packages on Win vs. osx/linux, so the environment list would be different. Recommendation: manually create an environment.yaml file and specify or pin only the dependencies that you care about. Let the conda solver do the rest. Probably worth noting is that conda-env (the tool that you use to manage conda environments) explicitly recommends that you \"Always create your environment.yml file by hand.\" Then you would just do conda env create --file environment.yml Have a look at the readme for conda-env. They can be quite simple: name: basic_analysis dependencies: - numpy - pandas Or more complex where you pin dependencies and specify anaconda.org channels to install from: name: stats-web channels: - javascript dependencies: - python=3.4 # or 2.7 if you are feeling nostalgic - bokeh=0.9.2 - numpy=1.9 - nodejs=0.10 - flask - pip: - Flask-Testing In either case, you can create an environment with conda env create --file environment.yaml. NOTE: You may need to use .* as a version suffix if you're using an older version of conda."} +{"question_id": 66751657, "score": 100, "creation_date": 1616436781, "tags": ["python", "requirements.txt", "pip-tools", "pip-compile"], "instruction": "What does pip-compile do? What is its use? (how do I maintain the contents of my requirements.txt file?)\n\nI read pip-compiles definition in pip-tools documentation but I don't understand how it works. Q1: What is pip-compile's use? Q2: What does compiling requirements.in to produce requirements.txt mean? 
Q3: How do I maintain the contents of the requirements.txt file?", "output": "You want to be able to lock down the versions of all of the packages that your Python code depends on in your requirements.txt file. You want this file to include versions for not just the direct dependencies that your code imports directly, but also versions for all of the transitive dependencies as well, that is, the versions of modules that your directly dependent modules themselves depend on. How do you maintain the contents of requirements.txt? You can use pip freeze > requirements.txt, but this is messy. It depends not on a clear list of what the direct and indirect dependencies of your app are, but rather on what happens to be in your environment at the time of creation. What you really want is to have a file in which you list the direct dependencies of your app, optionally specifying version restrictions for any of them, and then somehow produce the appropriate requirements.txt file from that list such that it contains specific versions for your app's direct dependencies as well as versions for the transitive dependencies needed by those direct dependencies. The requirements.in file and pip-compile together give you this desired behavior. In requirements.in, you list just the direct dependencies of your app. Then you run pip-compile on that file to produce requirements.txt. The compile process will produce what you want -- a file that locks down the versions of both the modules listed in requirements.in and also versions of the transitive dependencies of those modules. UPDATE: Someone asked why you should go through this exercise to lock down all of the versions of the packages upon which your application relies. The reason for this is that if you don't do this, then whenever you rebuild your application, you will get a build that uses the latest (ie: different) versions of some or all of the packages that it uses. So what if a change is made to one of those packages that causes your app's behavior to change? Maybe it causes an exception to be thrown, killing your app. Or worse, it might cause a subtle change in behavior that is quite difficult to track down. You want to prevent both of these possibilities. Going through the process discussed by this question/answer locks down all of the versions of the packages that your application uses, preventing changes made to later versions of those packages from affecting the behavior of your application."} +{"question_id": 31291608, "score": 100, "creation_date": 1436355739, "tags": ["python", "pythonpath"], "instruction": "Effect of using sys.path.insert(0, path) and sys.path.append(path) when loading modules\n\nI was recently having a problem with a python ImportError, where the module was found when running on my local computer but not found on the CI server. I solved this problem by swapping sys.path.append(path) in my script with sys.path.insert(0, path) where path is the string module location. Since this is my module and not an installed package (related question), why does the order of paths fix this problem?", "output": "Because Python checks in the directories in sequential order, starting at the first directory in the sys.path list, until it finds the .py file it was looking for. Normally, the current directory or the directory of the script is the first element in the list, unless you modify it (like you did). 
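A quick sketch of the difference (the directory name is made up):
import sys

sys.path.append('/opt/mylibs')     # searched only after everything already on the path
sys.path.insert(0, '/opt/mylibs')  # searched first, even before the script's own directory
print(sys.path[0])                 # -> '/opt/mylibs'
So if a module with the same name exists both in '/opt/mylibs' and somewhere later on the path, insert(0, ...) makes the '/opt/mylibs' copy win, while append() only makes it a fallback.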
From documentation: As initialized upon program startup, the first item of this list, path[0], is the directory containing the script that was used to invoke the Python interpreter. If the script directory is not available (e.g. if the interpreter is invoked interactively or if the script is read from standard input), path[0] is the empty string, which directs Python to search modules in the current directory first. Notice that the script directory is inserted before the entries inserted as a result of PYTHONPATH. So, most probably, you had a .py file with the same name as the module you were trying to import from, in the current directory (where the script was being run from). Also, something to note about ImportErrors - Let's say the import error says: ImportError: No module named main. This doesn't mean main.py is overwritten - if it was overwritten it would still exist and we would not have issues trying to read it. It's some module above this that got overwritten with a .py or some other file. For example, consider a directory structure that looks like this: - test - shared - __init__.py - phtest.py - testmain.py Now from testmain.py, I call from shared import phtest, which works fine. Now let's say I introduce a shared.py in test directory: - test - shared - __init__.py - phtest.py - testmain.py - shared.py Now when I try to do from shared import phtest from testmain.py, I get the error: ImportError: cannot import name 'phtest' As you can see above, the file that introduced the issue is shared.py, not phtest.py."} +{"question_id": 7590682, "score": 100, "creation_date": 1317251095, "tags": ["python", "unit-testing", "scope"], "instruction": "Access self from decorator\n\nIn setUp() method of unittest I've setup some self variables, which are later referenced in actual tests. I've also created a decorator to do some logging. Is there a way in which I can access those self variables from decorator? For the sake of simplicity, I'm posting this code: def decorator(func): def _decorator(*args, **kwargs): # access a from TestSample func(*args, **kwargs) return _decorator class TestSample(unittest.TestCase): def setUp(self): self.a = 10 def tearDown(self): # tear down code @decorator def test_a(self): # testing code goes here What would be the best way of accessing a (set in setUp()) from decorator?", "output": "Since you're decorating a method, and self is a method argument, your decorator has access to self at runtime. Obviously not at parsetime, because there are no objects yet, just a class. So you change your decorator to: def decorator(func): def _decorator(self, *args, **kwargs): # access a from TestSample print 'self is %s' % self return func(self, *args, **kwargs) return _decorator"} +{"question_id": 22272081, "score": 100, "creation_date": 1394296879, "tags": ["python", "matplotlib", "plot", "label", "annotate"], "instruction": "Label data points on plot\n\nIf you want to label your plot points using python matplotlib, I used the following code. from matplotlib import pyplot as plt fig = plt.figure() ax = fig.add_subplot(111) A = anyarray B = anyotherarray plt.plot(A,B) for i,j in zip(A,B): ax.annotate('%s)' %j, xy=(i,j), xytext=(30,0), textcoords='offset points') ax.annotate('(%s,' %i, xy=(i,j)) plt.grid() plt.show() I know that xytext=(30,0) goes along with the textcoords and you use those 30,0 values to position the data label point, so it's on the y=0 and x=30 on its own little area. You need both the lines plotting i and j otherwise you only plot x or y data label. 
You get something like this out (note the labels only): It's not ideal, there is still some overlap.", "output": "How about print (x, y) at once. from matplotlib import pyplot as plt fig = plt.figure() ax = fig.add_subplot(111) A = -0.75, -0.25, 0, 0.25, 0.5, 0.75, 1.0 B = 0.73, 0.97, 1.0, 0.97, 0.88, 0.73, 0.54 ax.plot(A,B) for xy in zip(A, B): # <-- ax.annotate('(%s, %s)' % xy, xy=xy, textcoords='data') # <-- ax.grid() plt.show()"} +{"question_id": 62011741, "score": 100, "creation_date": 1590445718, "tags": ["python", "pydantic"], "instruction": "Pydantic: dataclass vs BaseModel\n\nWhat are the advantages and disadvantages of using Pydantic's dataclass vs BaseModel? Are there any performance issues or is it easier to Pydantic's dataclass in the other python module?", "output": "Your question is answered in Pydantic's documentation, specifically: Keep in mind that pydantic.dataclasses.dataclass is a drop-in replacement for dataclasses.dataclass with validation, not a replacement for pydantic.BaseModel (with a small difference in how initialization hooks work). There are cases where subclassing pydantic.BaseModel is the better choice. For more information and discussion see samuelcolvin/pydantic#710. The discussion link will give you some of the context you are looking for. In general, Pydantic's BaseModel implementation is not bound to behave the same as Python's dataclass implementation. The example cited in the issue above is one good example: from pydantic import BaseModel from pydantic.dataclasses import dataclass from typing import List @dataclass class A: x: List[int] = [] # Above definition with a default of `[]` will result in: # ValueError: mutable default for field x is not allowed: use default_factory # If you resolve this, the output will read as in the comments below. class B(BaseModel): x: List[int] = [] print(A(x=[1, 2]), A(x=[3, 4])) # Output: A(x=[1, 2]) A(x=[3, 4]) print(B(x=[1, 2]), B(x=[3, 4])) # Output: x=[1, 2] x=[3, 4] If what you want first and foremost is dataclass behavior and then to simply augment it with some Pydantic validation features, the pydantic.dataclasses.dataclass approach may be what you want. Otherwise, BaseModel is probably what you want."} +{"question_id": 2104305, "score": 99, "creation_date": 1264016113, "tags": ["python", "list"], "instruction": "Finding elements not in a list\n\nSo heres my code: item = [0,1,2,3,4,5,6,7,8,9] z = [] # list of integers for item in z: if item not in z: print item z contains a list of integers. I want to compare item to z and print out the numbers that are not in z when compared to item. I can print the elements that are in z when compared not item, but when I try and do the opposite using the code above nothing prints. Any help?", "output": "Your code is not doing what I think you think it is doing. The line for item in z: will iterate through z, each time making item equal to one single element of z. The original item list is therefore overwritten before you've done anything with it. 
I think you want something like this: item = [0,1,2,3,4,5,6,7,8,9] for element in item: if element not in z: print(element) But you could easily do this like: [x for x in item if x not in z] or (if you don't mind losing duplicates of non-unique elements): set(item) - set(z)"} +{"question_id": 37354105, "score": 99, "creation_date": 1463770951, "tags": ["python", "date", "datetime", "pandas"], "instruction": "Find the end of the month of a Pandas DataFrame Series\n\nI have a series within a DataFrame that I read in initially as an object, and then need to convert it to a date in the form of yyyy-mm-dd where dd is the end of the month. As an example, I have DataFrame df with a column Date as an object: ... Date ... ... 200104 ... ... 200508 ... What I want when this is all said and done is a date object: ... Date ... ... 2001-04-30 ... ... 2005-08-31 ... such that df['Date'].item() returns datetime.date(2001, 04, 30) I've used the following code to get almost there, but all my dates are at the beginning of the month, not the end. Please advise. df['Date'] = pd.to_datetime(df['Date'], format=\"%Y%m\").dt.date Note: I've already imported Pandas (pd), and datetime (dt)", "output": "You can use pandas.tseries.offsets.MonthEnd: from pandas.tseries.offsets import MonthEnd df['Date'] = pd.to_datetime(df['Date'], format=\"%Y%m\") + MonthEnd(0) The 0 in MonthEnd just specifies to roll forward to the end of the given month. Note that if we'd used MonthEnd(1), then we'd have got the next date which is at the end of the month. If you wanted the last day of the next month, you'd then add an extra MonthEnd(1), etc. This should work for any month, so you don't need to know the number of days in the month, or anything like that. More offset information can be found in the documentation. Example usage and output: df = pd.DataFrame({'Date': [200104, 200508, 201002, 201602, 199912, 200611]}) df['EndOfMonth'] = pd.to_datetime(df['Date'], format=\"%Y%m\") + MonthEnd(1) Date EndOfMonth 0 200104 2001-04-30 1 200508 2005-08-31 2 201002 2010-02-28 3 201602 2016-02-29 4 199912 1999-12-31 5 200611 2006-11-30"} +{"question_id": 11017466, "score": 99, "creation_date": 1339598829, "tags": ["python", "binary", "flask"], "instruction": "Flask to return image stored in database\n\nMy images are stored in a MongoDB, and I'd like to return them to the client, here is what the code looks like: @app.route(\"/images/<pid>.jpg\") def getImage(pid): # get image binary from MongoDB, which is bson.Binary type return image_binary However, it seems that I can't return binary directly in Flask? My idea so far: Return the base64 of the image binary. The problem is that IE<8 doesn't support this. Create a temporary file then return it with send_file. Are there better solutions?", "output": "Create a response object with the data and then set the content type header. Set the content disposition header to attachment if you want the browser to save the file instead of displaying it. @app.route('/images/<pid>.jpg') def get_image(pid): image_binary = read_image(pid) response = make_response(image_binary) response.headers.set('Content-Type', 'image/jpeg') response.headers.set( 'Content-Disposition', 'attachment', filename='%s.jpg' % pid) return response Relevant: werkzeug.Headers and flask.Response You can pass a file-like object and the header arguments to send_file to let it set up the complete response. 
Use io.BytesIO for binary data: return send_file( io.BytesIO(image_binary), mimetype='image/jpeg', as_attachment=True, download_name='%s.jpg' % pid) Prior to Flask 2.0, download_name was called attachment_filename."} +{"question_id": 7484454, "score": 99, "creation_date": 1316518625, "tags": ["python", "logging", "python-logging"], "instruction": "Removing handlers from python's logging loggers\n\nI am playing with Python's logging system. I have noticed a strange behavior while removing handlers from a Logger object in a loop. Namely, my for loop removes all but one handler. Additional call to .removeHandler removes the last handler smoothly. No error messages are issued during the calls. This is the test code: import logging import sys logging.basicConfig() dbg = logging.getLogger('dbg') dbg.setLevel(logging.DEBUG) testLogger = logging.getLogger('mylogger') sh = logging.StreamHandler(sys.stdout) fh = logging.FileHandler('mylogfile.log') dbg.debug('before adding handlers: %d handlers'%len(testLogger.handlers)) testLogger.addHandler(fh) testLogger.addHandler(sh) dbg.debug('before removing. %d handlers: %s'%(len(testLogger.handlers), str(testLogger.handlers))) for h in testLogger.handlers: dbg.debug('removing handler %s'%str(h)) testLogger.removeHandler(h) dbg.debug('%d more to go'%len(testLogger.handlers)) #HERE I EXPECT THAT NO HANDLER WILL REMAIN dbg.debug('after removing: %d handlers: %s'%(len(testLogger.handlers), str(testLogger.handlers))) if len(testLogger.handlers) > 0: #Why is this happening? testLogger.removeHandler(testLogger.handlers[0]) dbg.debug('after manually removing the last handler: %d handlers'%len(testLogger.handlers)) I expect that at the end of the loop no handlers will remain in the testLogger object, however the last call to .removeHandler apparently fails, as can be seen from the output below. Nevertheless additional call to this function removes the handler as expected. Here is the output: DEBUG:dbg:before adding handlers: 0 handlers DEBUG:dbg:before removing. 2 handlers: [, ] DEBUG:dbg:removing handler DEBUG:dbg:1 more to go DEBUG:dbg:after removing: 1 handlers: [] DEBUG:dbg:after manually removing the last handler: 0 handlers More interestingly, if I replace the original loop with the following one, the loop works as expected and no handlers remain in the testLogger object at the end of the loop. Here is the modified loop: while len(testLogger.handlers) > 0: h = testLogger.handlers[0] dbg.debug('removing handler %s'%str(h)) testLogger.removeHandler(h) dbg.debug('%d more to go'%len(testLogger.handlers)) What explains this behaviour? Is this a bug or am I missing something?", "output": "This isn't logger-specific behaviour. Never mutate (insert/remove elements) the list you're currently iterating on. If you need, make a copy. In this case testLogger.handlers.clear() should do the trick."} +{"question_id": 19073683, "score": 99, "creation_date": 1380420283, "tags": ["python", "matplotlib", "annotate"], "instruction": "How to fix overlapping annotations / text\n\nI'm trying to stop annotation text overlapping in my graphs. The method suggested in the accepted answer to Matplotlib overlapping annotations looks extremely promising, however is for bar graphs. I'm having trouble converting the \"axis\" methods over to what I want to do, and I don't understand how the text lines up. 
import sys import matplotlib.pyplot as plt # start new plot plt.clf() plt.xlabel(\"Proportional Euclidean Distance\") plt.ylabel(\"Percentage Timewindows Attended\") plt.title(\"Test plot\") together = [(0, 1.0, 0.4), (25, 1.0127692669427917, 0.41), (50, 1.016404709797609, 0.41), (75, 1.1043426359673716, 0.42), (100, 1.1610446924342996, 0.44), (125, 1.1685687930691457, 0.43), (150, 1.3486407784550272, 0.45), (250, 1.4013999168008104, 0.45)] together.sort() for x,y,z in together: plt.annotate(str(x), xy=(y, z), size=8) eucs = [y for (x,y,z) in together] covers = [z for (x,y,z) in together] p1 = plt.plot(eucs,covers,color=\"black\", alpha=0.5) plt.savefig(\"test.png\") Images (if this works) can be found here (this code): and here (more complicated):", "output": "I just wanted to post here another solution, a small library I wrote to implement this kind of things: https://github.com/Phlya/adjustText An example of the process can be seen here: Here is the example image: import matplotlib.pyplot as plt from adjustText import adjust_text import numpy as np together = [(0, 1.0, 0.4), (25, 1.0127692669427917, 0.41), (50, 1.016404709797609, 0.41), (75, 1.1043426359673716, 0.42), (100, 1.1610446924342996, 0.44), (125, 1.1685687930691457, 0.43), (150, 1.3486407784550272, 0.45), (250, 1.4013999168008104, 0.45)] together.sort() text = [x for (x,y,z) in together] eucs = [y for (x,y,z) in together] covers = [z for (x,y,z) in together] p1 = plt.plot(eucs,covers,color=\"black\", alpha=0.5) texts = [] for x, y, s in zip(eucs, covers, text): texts.append(plt.text(x, y, s)) plt.xlabel(\"Proportional Euclidean Distance\") plt.ylabel(\"Percentage Timewindows Attended\") plt.title(\"Test plot\") adjust_text(texts, only_move={'points':'y', 'texts':'y'}, arrowprops=dict(arrowstyle=\"->\", color='r', lw=0.5)) plt.show() If you want a perfect figure, you can fiddle around a little. First, let's also make text repel the lines - for that we just create lots of virtual points along them using scipy.interpolate.interp1d. We want to avoid moving the labels along the x-axis, because, well, why not do it for illustrative purposes. For that we use the parameter only_move={'points':'y', 'text':'y'}. If we want to move them along x axis only in the case that they are overlapping with text, use move_only={'points':'y', 'text':'xy'}. Also in the beginning the function chooses optimal alignment of texts relative to their original points, so we only want that to happen along the y axis too, hence autoalign='y'. We also reduce the repelling force from points to avoid text flying too far away due to our artificial avoidance of lines. 
All together: from scipy import interpolate p1 = plt.plot(eucs,covers,color=\"black\", alpha=0.5) texts = [] for x, y, s in zip(eucs, covers, text): texts.append(plt.text(x, y, s)) f = interpolate.interp1d(eucs, covers) x = np.arange(min(eucs), max(eucs), 0.0005) y = f(x) plt.xlabel(\"Proportional Euclidean Distance\") plt.ylabel(\"Percentage Timewindows Attended\") plt.title(\"Test plot\") adjust_text(texts, x=x, y=y, autoalign='y', only_move={'points':'y', 'text':'y'}, force_points=0.15, arrowprops=dict(arrowstyle=\"->\", color='r', lw=0.5)) plt.show()"} +{"question_id": 45312377, "score": 99, "creation_date": 1501012436, "tags": ["python", "pandas", "numpy", "scikit-learn", "sklearn-pandas"], "instruction": "How to one-hot-encode from a pandas column containing a list?\n\nI would like to break down a pandas column consisting of a list of elements into as many columns as there are unique elements i.e. one-hot-encode them (with value 1 representing a given element existing in a row and 0 in the case of absence). For example, taking dataframe df Col1 Col2 Col3 C 33 [Apple, Orange, Banana] A 2.5 [Apple, Grape] B 42 [Banana] I would like to convert this to: df Col1 Col2 Apple Orange Banana Grape C 33 1 1 1 0 A 2.5 1 0 0 1 B 42 0 0 1 0 How can I use pandas/sklearn to achieve this?", "output": "We can also use sklearn.preprocessing.MultiLabelBinarizer: Often we want to use sparse DataFrame for the real world data in order to save a lot of RAM. Sparse solution (for Pandas v0.25.0+) from sklearn.preprocessing import MultiLabelBinarizer mlb = MultiLabelBinarizer(sparse_output=True) df = df.join( pd.DataFrame.sparse.from_spmatrix( mlb.fit_transform(df.pop('Col3')), index=df.index, columns=mlb.classes_)) result: In [38]: df Out[38]: Col1 Col2 Apple Banana Grape Orange 0 C 33.0 1 1 0 1 1 A 2.5 1 0 1 0 2 B 42.0 0 1 0 0 In [39]: df.dtypes Out[39]: Col1 object Col2 float64 Apple Sparse[int32, 0] Banana Sparse[int32, 0] Grape Sparse[int32, 0] Orange Sparse[int32, 0] dtype: object In [40]: df.memory_usage() Out[40]: Index 128 Col1 24 Col2 24 Apple 16 # <--- NOTE! Banana 16 # <--- NOTE! Grape 8 # <--- NOTE! Orange 8 # <--- NOTE! dtype: int64 Dense solution mlb = MultiLabelBinarizer() df = df.join(pd.DataFrame(mlb.fit_transform(df.pop('Col3')), columns=mlb.classes_, index=df.index)) Result: In [77]: df Out[77]: Col1 Col2 Apple Banana Grape Orange 0 C 33.0 1 1 0 1 1 A 2.5 1 0 1 0 2 B 42.0 0 1 0 0"} +{"question_id": 30356892, "score": 99, "creation_date": 1432144110, "tags": ["python", "defaultdict"], "instruction": "defaultdict with default value 1?\n\nI read an implementation of counting sort: from collections import defaultdict def sort_colors(A): ht = {} # a hash map ht = defaultdict(lambda:0, ht) # with default value 1 for i in A: ht[i] += 1 ret = [] for k in [0, 1, 2]: ret.extend([k]*ht[k]) return ret In the first two lines of the func: ht = {} ht = defaultdict(lambda:0, ht) I am not clear about this initialization. Shall we just replace these two lines with following? ht = defaultdict(int) # default value 0", "output": "Short answer (as per Montaro's answer below) defaultdict(lambda:1) Long answer on how defaultdicts work ht = {} ht = defaultdict(lambda:0, ht) defaultdicts are different from dict in that when you try to access a regular dict with a key that does not exists, it raises a KeyError. defaultdict, however, doesn't raise an error: it creates the key for you instead. With which value? With the return of the callable you passed as an argument. 
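As a quick side-by-side sketch of that callable-default behaviour (this example is not part of the original answer; the keys and counts are made up), compare a default of 0 with the default of 1 that the question's title asks about:

from collections import defaultdict

counts_zero = defaultdict(int)        # missing keys start at 0
counts_one = defaultdict(lambda: 1)   # missing keys start at 1

for item in ['a', 'b', 'a']:
    counts_zero[item] += 1
    counts_one[item] += 1

print(dict(counts_zero))  # {'a': 2, 'b': 1}
print(dict(counts_one))   # {'a': 3, 'b': 2}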
In this case, every new key will be created with value 0 (which is the return of the simple lambda function lambda:0), which also happens to be the same return of int(), so in this case, there would be no difference in changing the default function to int(). Breaking down this line in more detail: ht = defaultdict(lambda:0, ht) The first argument is a function, which is a callable object. This is the function that will be called to create a new value for a nonexistent key. The second argument, ht, is optional and refers to the base dictionary that the new defaultdict will be built on. Therefore, if ht had some keys and values, the defaultdict would also have these keys with the corresponding values. If you tried to access these keys, you would get the old values. However, if you did not pass the base dictionary, a brand new defaultdict would be created, and thus, all new keys accessed would get the default value returned from the callable. (In this case, as ht is initially an empty dict, there would be no difference at all in doing ht = defaultdict(lambda:0), ht = defaultdict(int) or ht = defaultdict(lambda:0, ht): they would all build the same defaultdict.)"} +{"question_id": 31147660, "score": 99, "creation_date": 1435695236, "tags": ["python", "macos", "selenium", "module", "webdriver"], "instruction": "ImportError: No module named 'selenium'\n\nI'm trying to write a script to check a website. It's the first time I'm using selenium. I'm trying to run the script on an OSX system. Although I checked in /Library/Python/2.7/site-packages and selenium-2.46.0-py2.7.egg is present, when I run the script it keeps telling me that there is no selenium module to import. This is the log that I get when I run my code: Traceback (most recent call last): File \"/Users/GiulioColleluori/Desktop/Class_Checker.py\", line 10, in from selenium import webdriver ImportError: No module named 'selenium'", "output": "If you have pip installed you can install selenium like so. pip install selenium or depending on your permissions: sudo pip install selenium For python3: sudo pip3 install selenium As you can see from this question pip vs easy_install, pip is a more reliable package installer as it was built to improve easy_install. I would also suggest that when creating new projects you do so in virtual environments, even a simple selenium project. You can read more about virtual environments here. In fact pip is included out of the box with virtualenv!"} +{"question_id": 43258461, "score": 99, "creation_date": 1491489840, "tags": ["python", "python-3.x", "image", "python-imaging-library"], "instruction": "Convert png to jpeg using Pillow\n\nI am trying to convert png to jpeg using pillow. I've tried several scripts without success. These 2 seemed to work on small png images like this one.
First code: from PIL import Image import os, sys im = Image.open(\"Ba_b_do8mag_c6_big.png\") bg = Image.new(\"RGB\", im.size, (255,255,255)) bg.paste(im,im) bg.save(\"colors.jpg\") Second code: image = Image.open('Ba_b_do8mag_c6_big.png') bg = Image.new('RGBA',image.size,(255,255,255)) bg.paste(image,(0,0),image) bg.save(\"test.jpg\", quality=95) But if I try to convert a bigger image like this one I'm getting Traceback (most recent call last): File \"png_converter.py\", line 14, in bg.paste(image,(0,0),image) File \"/usr/lib/python2.7/dist-packages/PIL/Image.py\", line 1328, in paste self.im.paste(im, box, mask.im) ValueError: bad transparency mask What am i doing wrong?", "output": "You should use convert() method: from PIL import Image im = Image.open(\"Ba_b_do8mag_c6_big.png\") rgb_im = im.convert('RGB') rgb_im.save('colors.jpg') more info: http://pillow.readthedocs.io/en/latest/reference/Image.html#PIL.Image.Image.convert"} +{"question_id": 7741878, "score": 99, "creation_date": 1318430503, "tags": ["python", "numpy"], "instruction": "How to apply numpy.linalg.norm to each row of a matrix?\n\nI have a 2D matrix and I want to take norm of each row. But when I use numpy.linalg.norm(X) directly, it takes the norm of the whole matrix. I can take norm of each row by using a for loop and then taking norm of each X[i], but it takes a huge time since I have 30k rows. Any suggestions to find a quicker way? Or is it possible to apply np.linalg.norm to each row of a matrix?", "output": "For numpy 1.9+ Note that, as perimosocordiae shows, as of NumPy version 1.9, np.linalg.norm(x, axis=1) is the fastest way to compute the L2-norm. For numpy < 1.9 If you are computing an L2-norm, you could compute it directly (using the axis=-1 argument to sum along rows): np.sum(np.abs(x)**2,axis=-1)**(1./2) Lp-norms can be computed similarly of course. 
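A minimal sketch of both routes described above (not code from the original answer; the array shape is invented for illustration):

import numpy as np

x = np.random.rand(30000, 3)   # e.g. 30k rows, as in the question

l2_new = np.linalg.norm(x, axis=1)                 # NumPy 1.9+ row-wise L2-norm
l2_manual = np.sum(np.abs(x)**2, axis=-1)**(1./2)  # direct computation, also works on older NumPy

p = 3  # any other Lp-norm follows the same pattern
lp = np.sum(np.abs(x)**p, axis=-1)**(1./p)

assert np.allclose(l2_new, l2_manual)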
It is considerably faster than np.apply_along_axis, though perhaps not as convenient: In [48]: %timeit np.apply_along_axis(np.linalg.norm, 1, x) 1000 loops, best of 3: 208 us per loop In [49]: %timeit np.sum(np.abs(x)**2,axis=-1)**(1./2) 100000 loops, best of 3: 18.3 us per loop Other ord forms of norm can be computed directly too (with similar speedups): In [55]: %timeit np.apply_along_axis(lambda row:np.linalg.norm(row,ord=1), 1, x) 1000 loops, best of 3: 203 us per loop In [54]: %timeit np.sum(abs(x), axis=-1) 100000 loops, best of 3: 10.9 us per loop"} +{"question_id": 48190959, "score": 99, "creation_date": 1515598623, "tags": ["python", "python-3.x", "pathlib"], "instruction": "How do I append a string to a Path?\n\nThe following code: from pathlib import Path Desktop = Path('Desktop') SubDeskTop = Desktop + \"/subdir\" gets the following error: --------------------------------------------------------------------------- TypeError Traceback (most recent call last) in () 1 from pathlib import Path 2 Desktop = Path('Desktop') ----> 3 SubDeskTop = Desktop+\"/subdir\" TypeError: unsupported operand type(s) for +: 'PosixPath' and 'str' I'm clearly doing something shady here, but it raises the question: How do I access a subdirectory of a Path object?", "output": "The correct operator to extend a pathlib object is / from pathlib import Path Desktop = Path('Desktop') print(Desktop) # WindowsPath('Desktop') # extend the path to include subdir SubDeskTop = Desktop / \"subdir\" print(SubDeskTop) # WindowsPath('Desktop/subdir') # passing an absolute path has different behavior SubDeskTop = Path('Desktop') / '/subdir' print(SubDeskTop) # WindowsPath('/subdir') When several absolute paths are given, the last is taken as an anchor (mimicking os.path.join()\u2019s behavior): >>> PurePath('/etc', '/usr', 'lib64') PurePosixPath('/usr/lib64') >>> PureWindowsPath('c:/Windows', 'd:bar') PureWindowsPath('d:bar') In a Windows path, changing the local root doesn\u2019t discard the previous drive setting: >>> PureWindowsPath('c:/Windows', '/Program Files') PureWindowsPath('c:/Program Files') Refer to the documentation for addition details pertaining to giving an absolute path, such as Path('/subdir'). Resources: pathlib Basic use Python 3's pathlib Module: Taming the File System"} +{"question_id": 31444036, "score": 99, "creation_date": 1437010868, "tags": ["python", "mysql", "flask", "werkzeug", "flask-restful"], "instruction": "RuntimeError: working outside of application context\n\napp.py from flask import Flask, render_template, request,jsonify,json,g import mysql.connector app = Flask(__name__) class TestMySQL(): @app.before_request def before_request(): try: g.db = mysql.connector.connect(user='root', password='root', database='mysql') except mysql.connector.errors.Error as err: resp = jsonify({'status': 500, 'error': \"Error:{}\".format(err)}) resp.status_code = 500 return resp @app.route('/') def input_info(self): try: cursor = g.db.cursor() cursor.execute ('CREATE TABLE IF NOT EXISTS testmysql (id INT NOT NULL AUTO_INCREMENT PRIMARY KEY, name VARCHAR(40) NOT NULL, \\ email VARCHAR(40) NOT NULL UNIQUE)') cursor.close() test.py from app import * class Test(unittest.TestCase): def test_connection1(self): with patch('__main__.mysql.connector.connect') as mock_mysql_connector_connect: object = TestMySQL() object.before_request() # Runtime error on calling this I am importing app into test.py for unit testing. 
On calling 'before_request' function into test.py, it is throwing a RuntimeError: working outside of application context, the same is happening on calling 'input_info()'", "output": "Flask has an Application Context, and it seems like you'll need to do something like: def test_connection(self): with app.app_context(): #test code You can probably also shove the app.app_context() call into a test setup method as well."} +{"question_id": 8739227, "score": 99, "creation_date": 1325749788, "tags": ["python", "numpy", "scipy", "sympy"], "instruction": "How can I solve a pair of nonlinear equations using Python?\n\nWhat's the (best) way to solve a pair of nonlinear equations using Python? (NumPy, SciPy, or SymPy) For example: x+y^2 = 4 e^x+ xy = 3 A code snippet which solves the above pair will be great. (These are not the equations that I want to solve, but the method will be applicable to my case.)", "output": "For a numerical solution, you can use fsolve: from scipy.optimize import fsolve import math def equations(p): x, y = p return (x+y**2-4, math.exp(x) + x*y - 3) x, y = fsolve(equations, (1, 1)) print equations((x, y))"} +{"question_id": 15821465, "score": 99, "creation_date": 1365108707, "tags": ["python", "unit-testing", "mocking"], "instruction": "How to properly use mock in python with unittest setUp\n\nIn my attempt to learn TDD, trying to learn unit testing and using mock with python. Slowly getting the hang of it, but unsure if I'm doing this correctly. Forewarned: I'm stucking using python 2.4 because the vendor API's come as pre-compiled 2.4 pyc files, so I'm using mock 0.8.0 and unittest ( not unittest2 ) Given this example code in 'mymodule.py' import ldap class MyCustomException(Exception): pass class MyClass: def __init__(self, server, user, passwd): self.ldap = ldap.initialize(server) self.user = user self.passwd = passwd def connect(self): try: self.ldap.simple_bind_s(self.user, self.passwd) except ldap.INVALID_CREDENTIALS: # do some stuff raise MyCustomException Now in my test case file 'test_myclass.py', I want to mock the ldap object out. ldap.initialize returns the ldap.ldapobject.SimpleLDAPObject, so I figured that'd be the method I'd have to mock out. import unittest from ldap import INVALID_CREDENTIALS from mock import patch, MagicMock from mymodule import MyClass class LDAPConnTests(unittest.TestCase): @patch('ldap.initialize') def setUp(self, mock_obj): self.ldapserver = MyClass('myserver','myuser','mypass') self.mocked_inst = mock_obj.return_value def testRaisesMyCustomException(self): self.mocked_inst.simple_bind_s = MagicMock() # set our side effect to the ldap exception to raise self.mocked_inst.simple_bind_s.side_effect = INVALID_CREDENTIALS self.assertRaises(mymodule.MyCustomException, self.ldapserver.connect) def testMyNextTestCase(self): # blah blah Leads me to a couple of questions: Does that look right? :) Is that the proper way to try and mock an object that gets instantiated within the class I'm testing? Is it ok to be calling the @patch decorator on setUp or is this going to cause weird side effects? Is there anyway to get mock to raise the ldap.INVALID_CREDENTIALS exception without having to import the exception into my testcase file? Should I be using patch.object() instead and if so, how? Thanks.", "output": "You can use patch() as a class decorator, not just as a function decorator. 
You can then pass in the mocked function as before: @patch('mymodule.SomeClass') class MyTest(TestCase): def test_one(self, MockSomeClass): self.assertIs(mymodule.SomeClass, MockSomeClass) See: Applying the same patch to every test method (which also lists alternatives) It makes more sense to set up the patcher this way on setUp if you want the patching to be done for all the test methods."} +{"question_id": 33837918, "score": 99, "creation_date": 1448063167, "tags": ["python", "python-typing"], "instruction": "Type hints: solve circular dependency\n\nThe following produces NameError: name 'Client' is not defined. How can I solve it? class Server: def register_client(self, client: Client) pass class Client: def __init__(self, server: Server): server.register_client(self)", "output": "You can use a forward reference by using a string name for the not-yet-defined Client class: class Server: def register_client(self, client: 'Client') pass As of Python 3.7, you can also postpone all runtime parsing of annotations by adding the following __future__ import at the top of your module: from __future__ import annotations at which point the annotations are stored as string representations of the abstract syntax tree for the expression; you can use typing.get_type_hints() to resolve those (and resolve forward references as used above). See PEP 563 -- Postponed Evaluation of Annotations for details."} +{"question_id": 27431390, "score": 99, "creation_date": 1418328083, "tags": ["python", "python-3.x"], "instruction": "In Python 3, why do I get \"TypeError: 'zip' object is not subscriptable\" (or see a strange result instead of a list)?\n\nI have a tagged file in the format token/tag and I try a function that returns a tuple with words from a (word,tag) list. def text_from_tagged_ngram(ngram): if type(ngram) == tuple: return ngram[0] return \" \".join(zip(*ngram)[0]) # zip(*ngram)[0] returns a tuple with words from a (word,tag) list In python 2.7 it worked well, but in python 3.4 it gives an error on the last line which says TypeError: 'zip' object is not subscriptable. Why did it stop working? How can I fix this? Several other things changed from Python 2 to Python 3 in a parallel manner; see How to use filter, map, and reduce in Python 3.", "output": "In Python 2, zip returned a list. In Python 3, zip returns an iterable object. But you can make it into a list just by calling list, as in: list(zip(...)) In this case, that would be: list(zip(*ngram)) With a list, you can use indexing: items = list(zip(*ngram)) ... items[0] etc. But if you only need the first element, then you don't strictly need a list. You could just use next. In this case, that would be: next(zip(*ngram))"} +{"question_id": 29324037, "score": 99, "creation_date": 1427586961, "tags": ["python"], "instruction": "Convert BytesIO into File\n\nI have a BytesIO object containing the data of an excel document. The library I want to use doesn't support BytesIO and expects a File object instead. How can I take my BytesIO object and convert it into a File object?", "output": "It would be helpful if you supplied the library you were using to work on excel files, but here's a buckshot of solutions, based on some assumptions I'm making: Based on the first paragraph in the io module's documentation, it sounds like all the concrete classes- including BytesIO- are file-like objects. Without knowing what code you've tried so far, I don't know if you have tried passing the BytesIO to the module you're using. 
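For instance, many Excel readers already accept file-like objects, so the BytesIO may work as-is; the snippet below assumes openpyxl purely for illustration, since the question never names the library being used:

import io
from openpyxl import load_workbook  # assumed library; load_workbook accepts file-like objects

with open('test.xlsx', 'rb') as f:
    data = io.BytesIO(f.read())   # the in-memory BytesIO object

wb = load_workbook(data)          # passed straight in, no temporary file needed
print(wb.sheetnames)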
On the off chance that doesn't work, you can simply convert BytesIO to a another io Writer/Reader/Wrapper by passing it to the constructor. Example: . import io b = io.BytesIO(b\"Hello World\") ## Some random BytesIO Object print(type(b)) ## For sanity's sake with open(\"test.xlsx\") as f: ## Excel File print(type(f)) ## Open file is TextIOWrapper bw=io.TextIOWrapper(b) ## Conversion to TextIOWrapper print(type(bw)) ## Just to confirm You may need to check which kind of Reader/Writer/Wrapper is expected by the module you're using to convert the BytesIO to the correct one I believe I have heard that (for memory reasons, due to extremely large excel files) excel modules do not load the entire file. If this ends up meaning that what you need is a physical file on the disk, then you can easily write the Excel file temporarily and just delete it when you're done. Example: . import io import os with open(\"test.xlsx\",'rb') as f: g=io.BytesIO(f.read()) ## Getting an Excel File represented as a BytesIO Object temporarylocation=\"testout.xlsx\" with open(temporarylocation,'wb') as out: ## Open temporary file as bytes out.write(g.read()) ## Read bytes into file ## Do stuff with module/file os.remove(temporarylocation) ## Delete file when done I'll hope that one of these points will solve your problem."} +{"question_id": 25675943, "score": 99, "creation_date": 1409869709, "tags": ["python", "string", "operator-overloading", "string-concatenation", "string-conversion"], "instruction": "How can I concatenate str and int objects?\n\nIf I try to do the following: things = 5 print(\"You have \" + things + \" things.\") I get the following error in Python 3.x: Traceback (most recent call last): File \"\", line 1, in TypeError: can only concatenate str (not \"int\") to str ... and a similar error in Python 2.x: Traceback (most recent call last): File \"\", line 1, in TypeError: cannot concatenate 'str' and 'int' objects How can I get around this problem?", "output": "The problem here is that the + operator has (at least) two different meanings in Python: for numeric types, it means \"add the numbers together\": >>> 1 + 2 3 >>> 3.4 + 5.6 9.0 ... and for sequence types, it means \"concatenate the sequences\": >>> [1, 2, 3] + [4, 5, 6] [1, 2, 3, 4, 5, 6] >>> 'abc' + 'def' 'abcdef' As a rule, Python doesn't implicitly convert objects from one type to another1 in order to make operations \"make sense\", because that would be confusing: for instance, you might think that '3' + 5 should mean '35', but someone else might think it should mean 8 or even '8'. Similarly, Python won't let you concatenate two different types of sequence: >>> [7, 8, 9] + 'ghi' Traceback (most recent call last): File \"\", line 1, in TypeError: can only concatenate list (not \"str\") to list Because of this, you need to do the conversion explicitly, whether what you want is concatenation or addition: >>> 'Total: ' + str(123) 'Total: 123' >>> int('456') + 789 1245 However, there is a better way. Depending on which version of Python you use, there are three different kinds of string formatting available2, which not only allow you to avoid multiple + operations: >>> things = 5 >>> 'You have %d things.' % things # % interpolation 'You have 5 things.' >>> 'You have {} things.'.format(things) # str.format() 'You have 5 things.' >>> f'You have {things} things.' # f-string (since Python 3.6) 'You have 5 things.' ... 
but also allow you to control how values are displayed: >>> value = 5 >>> sq_root = value ** 0.5 >>> sq_root 2.23606797749979 >>> 'The square root of %d is %.2f (roughly).' % (value, sq_root) 'The square root of 5 is 2.24 (roughly).' >>> 'The square root of {v} is {sr:.2f} (roughly).'.format(v=value, sr=sq_root) 'The square root of 5 is 2.24 (roughly).' >>> f'The square root of {value} is {sq_root:.2f} (roughly).' 'The square root of 5 is 2.24 (roughly).' Whether you use % interpolation, str.format(), or f-strings is up to you: % interpolation has been around the longest (and is familiar to people with a background in C), str.format() is often more powerful, and f-strings are more powerful still (but available only in Python 3.6 and later). Another alternative is to use the fact that if you give print multiple positional arguments, it will join their string representations together using the sep keyword argument (which defaults to ' '): >>> things = 5 >>> print('you have', things, 'things.') you have 5 things. >>> print('you have', things, 'things.', sep=' ... ') you have ... 5 ... things. ... but that's usually not as flexible as using Python's built-in string formatting abilities. 1 Although it makes an exception for numeric types, where most people would agree on the 'right' thing to do: >>> 1 + 2.3 3.3 >>> 4.5 + (5.6+7j) (10.1+7j) 2 Actually four, but template strings are rarely used, and are somewhat awkward. Other Resources: Real Python: Splitting, Concatenating, and Joining Strings in Python Python.org: string - Common string operations python string concatenation with int site:stackoverflow.com"} +{"question_id": 55052434, "score": 99, "creation_date": 1551991120, "tags": ["python", "requirements.txt"], "instruction": "Does Python requirements file have to specify version?\n\nI have a requirements.txt file for a Python code base. The file has everything specified: pytz==2017.2 requests==2.18.4 six==1.11.0 I am adding a new package. Should I list its version? If yes, how do I pick a version to specify?", "output": "Check out the pip docs for more info, but basically you do not need to specify a version. Doing so can avoid headaches though, as specifying a version allows you to guarantee you do not end up in dependency hell. Note that if you are creating a package to be deployed and pip-installed, you should use the install-requires metadata instead of relying on requirements.txt. Also, it's a good idea to get into the habit of using virtual environments to avoid dependency issues, especially when developing your own stuff. Anaconda offers a simple solution with the conda create command, and virtualenv works great with virtualenvwrapper for a lighter-weight solution. Another solution, pipenv, is quite popular."} +{"question_id": 43214978, "score": 98, "creation_date": 1491330660, "tags": ["python", "pandas", "matplotlib", "seaborn", "bar-chart"], "instruction": "How to display custom values on a bar plot\n\nI'm looking to see how to do two things in Seaborn with using a bar chart to display values that are in the dataframe, but not in the graph. I'm looking to display the values of one field in a dataframe while graphing another. For example, below, I'm graphing 'tip', but I would like to place the value of 'total_bill' centered above each of the bars (i.e.325.88 above Friday, 1778.40 above Saturday, etc.) 
Is there a way to scale the colors of the bars, with the lowest value of 'total_bill' having the lightest color (in this case Friday) and the highest value of 'total_bill' having the darkest? Obviously, I'd stick with one color (i.e., blue) when I do the scaling. While I see that others think that this is a duplicate of another problem (or two), I am missing the part of how I use a value that is not in the graph as the basis for the label or the shading. How do I say, use total_bill as the basis. I'm sorry, but I just can't figure it out based on those answers. Starting with the following code, import pandas as pd import seaborn as sns %matplotlib inline df = pd.read_csv(\"https://raw.githubusercontent.com/wesm/pydata-book/1st-edition/ch08/tips.csv\", sep=',') groupedvalues = df.groupby('day').sum().reset_index() g = sns.barplot(x='day', y='tip', data=groupedvalues) I get the following result: Interim Solution: for index, row in groupedvalues.iterrows(): g.text(row.name, row.tip, round(row.total_bill, 2), color='black', ha=\"center\") On the shading, using the example below, I tried the following: import pandas as pd import seaborn as sns %matplotlib inline df = pd.read_csv(\"https://raw.githubusercontent.com/wesm/pydata-book/1st-edition/ch08/tips.csv\", sep=',') groupedvalues = df.groupby('day').sum().reset_index() pal = sns.color_palette(\"Greens_d\", len(data)) rank = groupedvalues.argsort().argsort() g = sns.barplot(x='day', y='tip', data=groupedvalues) for index, row in groupedvalues.iterrows(): g.text(row.name, row.tip, round(row.total_bill, 2), color='black', ha=\"center\") But that gave me the following error: AttributeError: 'DataFrame' object has no attribute 'argsort' So I tried a modification: import pandas as pd import seaborn as sns %matplotlib inline df = pd.read_csv(\"https://raw.githubusercontent.com/wesm/pydata-book/1st-edition/ch08/tips.csv\", sep=',') groupedvalues = df.groupby('day').sum().reset_index() pal = sns.color_palette(\"Greens_d\", len(data)) rank = groupedvalues['total_bill'].rank(ascending=True) g = sns.barplot(x='day', y='tip', data=groupedvalues, palette=np.array(pal[::-1])[rank]) and that leaves me with IndexError: index 4 is out of bounds for axis 0 with size 4", "output": "Stick to the solution from Changing color scale in seaborn bar plot, which uses argsort to determine the order of the bar colors. In the linked question, argsort is applied to a Series object, while here you have a DataFrame. Select one column of the DataFrame to apply argsort on. import seaborn as sns import matplotlib.pyplot as plt import numpy as np df = sns.load_dataset('tips') groupedvalues = df.groupby('day').sum().reset_index() pal = sns.color_palette('Greens_d', len(groupedvalues)) rank = groupedvalues['total_bill'].argsort().argsort() g = sns.barplot(x='day', y='tip', data=groupedvalues, palette=np.array(pal[::-1])[rank]) for index, row in groupedvalues.iterrows(): g.text(row.name, row.tip, round(row.total_bill, 2), color='black', ha='center') plt.show() The second attempt works fine as well, the only issue is the rank, as returned by rank(), starts at 1 instead of 0. So one has to subtract 1 from the array. For indexing, we need integer values, so cast it to int. rank = groupedvalues['total_bill'].rank(ascending=True).values rank = (rank-1).astype(int) From matplotlib 3.4.0, there is .bar_label, which has a label parameter for custom labels. Other answers using .bar_label didn't customize the labels with labels=. 
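A minimal sketch of that labels= parameter in isolation (the bar heights and label strings here are placeholders; the fuller version using the tips data follows below):

import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.bar(['Thur', 'Fri', 'Sat', 'Sun'], [3.0, 2.7, 3.2, 3.3])            # plotted values
ax.bar_label(ax.containers[0], labels=['100', '200', '300', '400'])    # custom text shown above the bars
plt.show()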
See this answer from May 16, 2021, for a thorough explanation of .bar_label with links to documentation and examples. The day column downloads as a category Dtype, which keeps the days of the week in order. This also ensures the plot order of the bars on the x-axis and the values in tb. .bar_label adds labels from left to right, so the values in tb are in the same order as the bars. If working with a column that isn't categorical, pd.Categorical can be used on the column to set the order. In sns.barplot, estimator=sum is specified to sum tip. The default is mean. df = sns.load_dataset(\"tips\") # sum total_bill by day tb = df.groupby('day').total_bill.sum() # get the colors in blues as requested pal = sns.color_palette(\"Blues_r\", len(tb)) # rank the total_bill sums rank = tb.argsort() # plot fig, ax = plt.subplots(figsize=(8, 6)) sns.barplot(x='day', y='tip', data=df, palette=np.array(pal[::-1])[rank], estimator=sum, ci=False, ax=ax) # 1. add labels using bar_label with custom labels from tb ax.bar_label(ax.containers[0], labels=tb, padding=3) # pad the spacing between the number and the edge of the figure ax.margins(y=0.1) plt.show()"} +{"question_id": 22938679, "score": 98, "creation_date": 1396964029, "tags": ["python", "django", "heroku", "psycopg2"], "instruction": "Error trying to install Postgres for python (psycopg2)\n\nI tried to install psycopg2 to my environment, but I get the following error: (venv)avlahop@apostolos-laptop:~/development/django/rhombus-dental$ sudo pip install psycopg2 Downloading/unpacking psycopg2, Downloading psycopg2-2.5.2.tar.gz (685kB): 685kB downloaded Running setup.py egg_info for package psycopg2 Installing collected packages: psycopg2 Running setup.py install for psycopg2 building 'psycopg2._psycopg' extension x86_64-linux-gnu-gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION=\"2.5.2 (dt dec pq3 ext)\" -DPG_VERSION_HEX=0x09010D -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. -I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/psycopgmodule.c -o build/temp.linux-x86_64-2.7/psycopg/psycopgmodule.o -Wdeclaration-after-statement In file included from psycopg/psycopgmodule.c:27:0: ./psycopg/psycopg.h:30:20: fatal error: Python.h: \u0394\u03b5\u03bd \u03c5\u03c0\u03ac\u03c1\u03c7\u03b5\u03b9 \u03c4\u03ad\u03c4\u03bf\u03b9\u03bf \u03b1\u03c1\u03c7\u03b5\u03af\u03bf \u03ae \u03ba\u03b1\u03c4\u03ac\u03bb\u03bf\u03b3\u03bf\u03c2 #include ^ compilation terminated. 
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1 Complete output from command /usr/bin/python -c \"import setuptools;__file__='/tmp/pip_build_root/psycopg2/setup.py';exec(compile(open(__file__).read().replace('\\r\\n', '\\n'), __file__, 'exec'))\" install --record /tmp/pip-SgfQCA-record/install-record.txt --single-version-externally-managed: running install running build running build_py creating build creating build/lib.linux-x86_64-2.7 creating build/lib.linux-x86_64-2.7/psycopg2 copying lib/pool.py -> build/lib.linux-x86_64-2.7/psycopg2 copying lib/errorcodes.py -> build/lib.linux-x86_64-2.7/psycopg2 copying lib/__init__.py -> build/lib.linux-x86_64-2.7/psycopg2 copying lib/_json.py -> build/lib.linux-x86_64-2.7/psycopg2 copying lib/_range.py -> build/lib.linux-x86_64-2.7/psycopg2 copying lib/extensions.py -> build/lib.linux-x86_64-2.7/psycopg2 copying lib/psycopg1.py -> build/lib.linux-x86_64-2.7/psycopg2 copying lib/tz.py -> build/lib.linux-x86_64-2.7/psycopg2 copying lib/extras.py -> build/lib.linux-x86_64-2.7/psycopg2 creating build/lib.linux-x86_64-2.7/psycopg2/tests copying tests/testconfig.py -> build/lib.linux-x86_64-2.7/psycopg2/tests copyng tests/test_bug_gc.py -> build/lib.linux-x86_64-2.7/psycopg2/tests copying tests/test_dates.py -> build/lib.linux-x86_64-2.7/psycopg2/tests copying tests/test_copy.py -> build/lib.linux-x86_64-2.7/psycopg2/tests copying tests/test_cancel.py -> build/lib.linux-x86_64-2.7/psycopg2/tests copying tests/test_bugX000.py -> build/lib.linux-x86_64-2.7/psycopg2/tests copying tests/test_extras_dictcursor.py -> build/lib.linux-x86_64-2.7/psycopg2/tests copying tests/test_psycopg2_dbapi20.py -> build/lib.linux-x86_64-2.7/psycopg2/tests copying tests/test_types_basic.py -> build/lib.linux-x86_64-2.7/psycopg2/tests copying tests/test_async.py -> build/lib.linux-x86_64-2.7/psycopg2/tests copying tests/test_lobject.py -> build/lib.linux-x86_64-2.7/psycopg2/tests copying tests/test_cursor.py -> build/lib.linux-x86_64-2.7/psycopg2/tests copying tests/test_with.py -> build/lib.linux-x86_64-2.7/psycopg2/tests copying tests/__init__.py -> build/lib.linux-x86_64-2.7/psycopg2/tests copying tests/test_types_extras.py -> build/lib.linux-x86_64-2.7/psycopg2/tests copying tests/testutils.py -> build/lib.linux-x86_64-2.7/psycopg2/tests copying tests/test_notify.py -> build/lib.linux-x86_64-2.7/psycopg2/tests copying tests/test_green.py -> build/lib.linux-x86_64-2.7/psycopg2/tests copying tests/test_quote.py -> build/lib.linux-x86_64-2.7/psycopg2/tests copying tests/test_connection.py -> build/lib.linux-x86_64-2.7/psycopg2/tests copying tests/test_transaction.py -> build/lib.linux-x86_64-2.7/psycopg2/tests copying tests/dbapi20.py -> build/lib.linux-x86_64-2.7/psycopg2/tests copying tests/test_module.py -> build/lib.linux-x86_64-2.7/psycopg2/tests copying tests/dbapi20_tpc.py -> build/lib.linux-x86_64-2.7/psycopg2/tests running build_ext building 'psycopg2._psycopg' extension creating build/temp.linux-x86_64-2.7 creating build/temp.linux-x86_64-2.7/psycopg x86_64-linux-gnu-gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION=\"2.5.2 (dt dec pq3 ext)\" -DPG_VERSION_HEX=0x09010D -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. 
-I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/psycopgmodule.c -o build/temp.linux-x86_64-2.7/psycopg/psycopgmodule.o -Wdeclaration-after-statement In file included from psycopg/psycopgmodule.c:27:0: ./psycopg/psycopg.h:30:20: fatal error: Python.h: No such file or directory #include ^ compilation terminated. error: command 'x86_64-linux-gnu-gcc' failed with exit status 1 ---------------------------------------- Cleaning up... Command /usr/bin/python -c \"import setuptools;__file__='/tmp/pip_build_root/psycopg2/setup.py';exec(compile(open(__file__).read().replace('\\r\\n', '\\n'), __file__, 'exec'))\" install --record /tmp/pip-SgfQCA-record/install-record.txt --single-version-externally-managed failed with error code 1 in /tmp/pip_build_root/psycopg2 Traceback (most recent call last): File \"/usr/bin/pip\", line 9, in load_entry_point('pip==1.4.1', 'console_scripts', 'pip')() File \"/usr/lib/python2.7/dist-packages/pip/__init__.py\", line 148, in main return command.main(args[1:], options) File \"/usr/lib/python2.7/dist-packages/pip/basecommand.py\", line 169, in main text = '\\n'.join(complete_log) UnicodeDecodeError: 'ascii' codec can't decode byte 0xce in position 54: ordinal not in range(128) I also get errors trying to install gunicorn. I followed this example here but nothing. My virtual environment was created using the --no-site-packagesflag. I use ubuntu 13.10 and greek environment. Maybe it's that?", "output": "The python-dev package is required for compilation of Python extensions written in C or C++, like psycopg2. If you're running a Debian-based distribution (e.g. Ubuntu), you can install python-dev by running sudo apt install python-dev or sudo apt install python3-dev depending on your python version (for example, sudo apt install python3.8-dev). After that, proceed to install psycopg2 in your virtualenv environment as usual."} +{"question_id": 24736146, "score": 98, "creation_date": 1405339369, "tags": ["python", "makefile", "virtualenv"], "instruction": "How to use virtualenv in makefile\n\nI want to perform several operations while working on a specified virtualenv. For example command make install would be equivalent to source path/to/virtualenv/bin/activate pip install -r requirements.txt Is it possible?", "output": "In make you can run a shell as command. In this shell you can do everything you can do in a shell you started from comandline. Example: install: ( \\ source path/to/virtualenv/bin/activate; \\ pip install -r requirements.txt; \\ ) Attention must be paid to the ;and the \\. Everything between the open and close brace will be done in a single instance of a shell."} +{"question_id": 7941226, "score": 98, "creation_date": 1319917153, "tags": ["python", "matplotlib"], "instruction": "How to add line based on slope and intercept\n\nIn R, there is a function called abline in which a line can be drawn on a plot based on the specification of the intercept (first argument) and the slope (second argument). For instance, plot(1:10, 1:10) abline(0, 1) where the line with an intercept of 0 and the slope of 1 spans the entire range of the plot. Is there such a function in Matplotlib?", "output": "A lot of these solutions are focusing on adding a line to the plot that fits the data. Here's a simple solution for adding an arbitrary line to the plot based on a slope and intercept. 
import matplotlib.pyplot as plt import numpy as np def abline(slope, intercept): \"\"\"Plot a line from slope and intercept\"\"\" axes = plt.gca() x_vals = np.array(axes.get_xlim()) y_vals = intercept + slope * x_vals plt.plot(x_vals, y_vals, '--')"} +{"question_id": 33745964, "score": 98, "creation_date": 1447713477, "tags": ["python", "apache-spark", "join", "pyspark", "apache-spark-sql"], "instruction": "How to join on multiple columns in Pyspark?\n\nI am using Spark 1.3 and would like to join on multiple columns using python interface (SparkSQL) The following works: I first register them as temp tables. numeric.registerTempTable(\"numeric\") Ref.registerTempTable(\"Ref\") test = numeric.join(Ref, numeric.ID == Ref.ID, joinType='inner') I would now like to join them based on multiple columns. I get SyntaxError: invalid syntax with this: test = numeric.join(Ref, numeric.ID == Ref.ID AND numeric.TYPE == Ref.TYPE AND numeric.STATUS == Ref.STATUS , joinType='inner')", "output": "You should use & / | operators and be careful about operator precedence (== has lower precedence than bitwise AND and OR): df1 = sqlContext.createDataFrame( [(1, \"a\", 2.0), (2, \"b\", 3.0), (3, \"c\", 3.0)], (\"x1\", \"x2\", \"x3\")) df2 = sqlContext.createDataFrame( [(1, \"f\", -1.0), (2, \"b\", 0.0)], (\"x1\", \"x2\", \"x3\")) df = df1.join(df2, (df1.x1 == df2.x1) & (df1.x2 == df2.x2)) df.show() ## +---+---+---+---+---+---+ ## | x1| x2| x3| x1| x2| x3| ## +---+---+---+---+---+---+ ## | 2| b|3.0| 2| b|0.0| ## +---+---+---+---+---+---+"} +{"question_id": 52540121, "score": 98, "creation_date": 1538060992, "tags": ["python", "pipenv"], "instruction": "Make Pipenv create the virtualenv in the same folder\n\nI want Pipenv to make virtual environment in the same folder with my project (Django). I searched and found the PIPENV_VENV_IN_PROJECT option but I don't know where and how to use this.", "output": "PIPENV_VENV_IN_PROJECT is an environment variable, just set it (the value doesn't matter, but must not be empty). Make sure to export it so child processes of the shell can see it: export PIPENV_VENV_IN_PROJECT=1 This causes the virtualenv to be created in the .venv directory next to the Pipfile file. Use unset PIPENV_VENV_IN_PROJECT to remove the option again. You may want to see if the direnv project can be useful here. It'll set environment variables for you, automatically, when you enter your project directory, provided you created a .envrc file in the project directory and enabled the directory with direnv allow. You then can add any such export commands to that file."} +{"question_id": 64010263, "score": 98, "creation_date": 1600779549, "tags": ["python", "gcloud", "fedora", "python-3.9"], "instruction": "AttributeError: module 'importlib' has no attribute 'util'\n\nI've just upgraded from Fedora 32 to Fedora 33 (which comes with Python 3.9). 
Since then gcloud command stopped working: [guy@Gandalf32 ~]$ gcloud Error processing line 3 of /home/guy/.local/lib/python3.9/site-packages/XStatic-1.0.2-py3.9-nspkg.pth: Traceback (most recent call last): File \"/usr/lib64/python3.9/site.py\", line 169, in addpackage exec(line) File \"\", line 1, in File \"\", line 562, in module_from_spec AttributeError: 'NoneType' object has no attribute 'loader' Remainder of file ignored Traceback (most recent call last): File \"/usr/lib64/google-cloud-sdk/lib/gcloud.py\", line 104, in main() File \"/usr/lib64/google-cloud-sdk/lib/gcloud.py\", line 62, in main from googlecloudsdk.core.util import encoding File \"/usr/lib64/google-cloud-sdk/lib/googlecloudsdk/__init__.py\", line 23, in from googlecloudsdk.core.util import importing File \"/usr/lib64/google-cloud-sdk/lib/googlecloudsdk/core/util/importing.py\", line 23, in import imp File \"/usr/lib64/python3.9/imp.py\", line 23, in from importlib import util File \"/usr/lib64/python3.9/importlib/util.py\", line 2, in from . import abc File \"/usr/lib64/python3.9/importlib/abc.py\", line 17, in from typing import Protocol, runtime_checkable File \"/usr/lib64/python3.9/typing.py\", line 26, in import re as stdlib_re # Avoid confusion with the re we export. File \"/usr/lib64/python3.9/re.py\", line 124, in import enum File \"/usr/lib64/google-cloud-sdk/lib/third_party/enum/__init__.py\", line 26, in spec = importlib.util.find_spec('enum') AttributeError: module 'importlib' has no attribute 'util'", "output": "Update from GCP support GCP support mentioned that the new version 318.0.0 released on 2020.11.10 should support python 3.9 I updated my gcloud sdk to 318.0.0 and now looks like python 3.9.0 is supported. To fix this issue run gcloud components update Fedora 33 includes python 2.7 and to force GCloud SDK to use it please set this environment variable export CLOUDSDK_PYTHON=python2 You can add this export command to your ~/.bash_profile Python 3.9 is very new and is expected that Gcloud SDK does not support 3.9, it is written to be compatible with 2.7.x & 3.6 - 3.8 (3.8 can cause some compat issues I recommend to use 3.7) As a workaround, configure Python 3.8 or 3.7 (these versions work well for Gcloud and most of linux distros) as system wide interpreter and try to use gcloud commands."} +{"question_id": 68938628, "score": 98, "creation_date": 1629981354, "tags": ["python", "performance"], "instruction": "Why is any(True for ... if cond) much faster than any(cond for ...)?\n\nTwo similar ways to check whether a list contains an odd number: any(x % 2 for x in a) any(True for x in a if x % 2) Timing results with a = [0] * 10000000 (five attempts each, times in seconds): 0.60 0.60 0.60 0.61 0.63 any(x % 2 for x in a) 0.36 0.36 0.36 0.37 0.37 any(True for x in a if x % 2) Why is the second way almost twice as fast? 
My testing code: from timeit import repeat setup = 'a = [0] * 10000000' expressions = [ 'any(x % 2 for x in a)', 'any(True for x in a if x % 2)', ] for expression in expressions: times = sorted(repeat(expression, setup, number=1)) print(*('%.2f ' % t for t in times), expression) Try it online!", "output": "The first method sends everything to any() whilst the second only sends to any() when there's an odd number, so any() has fewer elements to go through."} +{"question_id": 41624241, "score": 98, "creation_date": 1484260423, "tags": ["python", "pandas", "numpy", "dataframe"], "instruction": "Pandas Split Dataframe into two Dataframes at a specific column\n\nI have pandas DataFrame which I have composed from concat. One row consists of 96 values, I would like to split the DataFrame from the value 72. So that the first 72 values of a row are stored in Dataframe1, and the next 24 values of a row in Dataframe2. I create my DF as follows: temps = DataFrame(myData) datasX = concat( [temps.shift(72), temps.shift(71), temps.shift(70), temps.shift(69), temps.shift(68), temps.shift(67), temps.shift(66), temps.shift(65), temps.shift(64), temps.shift(63), temps.shift(62), temps.shift(61), temps.shift(60), temps.shift(59), temps.shift(58), temps.shift(57), temps.shift(56), temps.shift(55), temps.shift(54), temps.shift(53), temps.shift(52), temps.shift(51), temps.shift(50), temps.shift(49), temps.shift(48), temps.shift(47), temps.shift(46), temps.shift(45), temps.shift(44), temps.shift(43), temps.shift(42), temps.shift(41), temps.shift(40), temps.shift(39), temps.shift(38), temps.shift(37), temps.shift(36), temps.shift(35), temps.shift(34), temps.shift(33), temps.shift(32), temps.shift(31), temps.shift(30), temps.shift(29), temps.shift(28), temps.shift(27), temps.shift(26), temps.shift(25), temps.shift(24), temps.shift(23), temps.shift(22), temps.shift(21), temps.shift(20), temps.shift(19), temps.shift(18), temps.shift(17), temps.shift(16), temps.shift(15), temps.shift(14), temps.shift(13), temps.shift(12), temps.shift(11), temps.shift(10), temps.shift(9), temps.shift(8), temps.shift(7), temps.shift(6), temps.shift(5), temps.shift(4), temps.shift(3), temps.shift(2), temps.shift(1), temps, temps.shift(-1), temps.shift(-2), temps.shift(-3), temps.shift(-4), temps.shift(-5), temps.shift(-6), temps.shift(-7), temps.shift(-8), temps.shift(-9), temps.shift(-10), temps.shift(-11), temps.shift(-12), temps.shift(-13), temps.shift(-14), temps.shift(-15), temps.shift(-16), temps.shift(-17), temps.shift(-18), temps.shift(-19), temps.shift(-20), temps.shift(-21), temps.shift(-22), temps.shift(-23)], axis=1) Question is: How can split them? :)", "output": "iloc df1 = datasX.iloc[:, :72] df2 = datasX.iloc[:, 72:] (iloc docs)"} +{"question_id": 7498595, "score": 98, "creation_date": 1316602785, "tags": ["python", "argparse"], "instruction": "Python argparse - Add argument to multiple subparsers\n\nMy script defines one main parser and multiple subparsers. I want to apply the -p argument to some subparsers. 
So far the code looks like this: parser = argparse.ArgumentParser(prog=\"myProg\") subparsers = parser.add_subparsers(title=\"actions\") parser.add_argument(\"-v\", \"--verbose\", action=\"store_true\", dest=\"VERBOSE\", help=\"run in verbose mode\") parser_create = subparsers.add_parser (\"create\", help = \"create the orbix environment\") parser_create.add_argument (\"-p\", type = int, required = True, help = \"set db parameter\") # Update parser_update = subparsers.add_parser (\"update\", help = \"update the orbix environment\") parser_update.add_argument (\"-p\", type = int, required = True, help = \"set db parameter\") As you can see the add_arument (\"-p\") is repeated twice. I actually have a lot more subparsers. Is there a way to loop through the existing subparsers in order to avoid repetition? For the record, I am using Python 2.7", "output": "Update by @hpaulj Due to changes in handling subparsers since 2011, it is a bad idea to use the main parser as a parent. More generally, don't try to define the same argument (same dest) in both main and sub parsers. The subparser values will overwrite anything set by the main (even the subparser default does this). Create separate parser(s) to use as parents. And as shown in the documentation, parents should use add_help=False. Original answer This can be achieved by defining a parent parser containing the common option(s): import argparse parent_parser = argparse.ArgumentParser(description=\"The parent parser\") parent_parser.add_argument(\"-p\", type=int, required=True, help=\"set db parameter\") subparsers = parent_parser.add_subparsers(title=\"actions\") parser_create = subparsers.add_parser(\"create\", parents=[parent_parser], add_help=False, description=\"The create parser\", help=\"create the orbix environment\") parser_create.add_argument(\"--name\", help=\"name of the environment\") parser_update = subparsers.add_parser(\"update\", parents=[parent_parser], add_help=False, description=\"The update parser\", help=\"update the orbix environment\") This produces help messages of the format: parent_parser.print_help() Output: usage: main.py [-h] -p P {create,update} ... The parent parser optional arguments: -h, --help show this help message and exit -p P set db parameter actions: {create,update} create create the orbix environment update update the orbix environment parser_create.print_help() Output: usage: main.py create [-h] -p P [--name NAME] {create,update} ... The create parser optional arguments: -h, --help show this help message and exit -p P set db parameter --name NAME name of the environment actions: {create,update} create create the orbix environment update update the orbix environment However, if you run your program, you will not encounter an error if you do not specify an action (i.e. create or update). If you desire this behavior, modify your code as follows. <...> subparsers = parent_parser.add_subparsers(title=\"actions\") subparsers.required = True subparsers.dest = 'command' <...> This fix was brought up in this SO question which refers to an issue tracking a pull request."} +{"question_id": 37730243, "score": 98, "creation_date": 1465486120, "tags": ["python", "mysql", "sql", "pandas", "numpy"], "instruction": "Importing data from a MySQL database into a Pandas data frame including column names\n\nI am importing data from a MySQL database into a Pandas data frame. 
The following excerpt is the code that I am using: import mysql.connector as sql import pandas as pd db_connection = sql.connect(host='hostname', database='db_name', user='username', password='password') db_cursor = db_connection.cursor() db_cursor.execute('SELECT * FROM table_name') table_rows = db_cursor.fetchall() df = pd.DataFrame(table_rows) When I print the data frame it does properly represent the data but my question is, is it possible to also keep the column names? Here is an example output: 0 1 2 3 4 5 6 7 8 0 :ID[giA0CqQcx+(9kbuSKV== NaN NaN None None None None None None 1 lXB+jIS)DN!CXmj>0(P8^]== NaN NaN None None None None None None 2 lXB+jIS)DN!CXmj>0(P8^]== NaN NaN None None None None None None 3 lXB+jIS)DN!CXmj>0(P8^]== NaN NaN None None None None None None 4 lXB+jIS)DN!CXmj>0(P8^]== NaN NaN None None None None None None What I would like to do is keep the column name, which would replace the pandas column indexes. For example, instead of having 0, the column name would be: \"First_column\" as in the MySQL table. Is there a good way to go about this? or is there a more efficient approach of importing data from MySQL into a Pandas data frame than mine?", "output": "IMO it would be much more efficient to use pandas for reading data from your MySQL server: from sqlalchemy import create_engine import pandas as pd db_connection_str = 'mysql+pymysql://mysql_user:mysql_password@mysql_host/mysql_db' db_connection = create_engine(db_connection_str) df = pd.read_sql('SELECT * FROM table_name', con=db_connection) this should also take care of column names..."} +{"question_id": 2676763, "score": 98, "creation_date": 1271780221, "tags": ["python", "windows", "visual-studio", "visual-c++"], "instruction": "What version of Visual Studio is Python on my computer compiled with?\n\nI am trying to find out the version of Visual Studio that is used to compile the Python on my computer It says Python 2.6.2 (r262:71605, Apr 14 2009, 22:40:02) [MSC v.1500 32 bit (Intel)] on win32 What I do not understand is this MSC V.1500 designation. Does it mean it is compiled with Visual Studio 2005? I cannot find this information on http://python.org.", "output": "Visual C++ version _MSC_VER Visual C++ 4.x 1000 Visual C++ 5 1100 Visual C++ 6 1200 Visual C++ .NET 1300 Visual C++ .NET 2003 1310 Visual C++ 2005 (8.0) 1400 Visual C++ 2008 (9.0) 1500 Visual C++ 2010 (10.0) 1600 Visual C++ 2012 (11.0) 1700 Visual C++ 2013 (12.0) 1800 Visual C++ 2015 (14.0) 1900 Visual C++ 2017 (15.0) 1910 Visual C++ 2017 (15.3) 1911 Visual C++ 2017 (15.5) 1912 Visual C++ 2017 (15.6) 1913 Visual C++ 2017 (15.7) 1914 Visual C++ 2017 (15.8) 1915 Visual C++ 2017 (15.9) 1916 Visual C++ 2019 RTW (16.0) 1920 Visual C++ 2019 (16.1) 1921 Visual C++ 2019 (16.2) 1922 Visual C++ 2019 (16.3) 1923 Visual C++ 2019 (16.4) 1924 Visual C++ 2019 (16.5) 1925 Visual C++ 2019 (16.6) 1926 Visual C++ 2019 (16.7) 1927 Visual C++ 2019 (16.8) 1928 Visual C++ 2019 (16.9) 1928 Visual C++ 2019 (16.10) 1929 Visual C++ 2019 (16.11) 1929 Visual Studio 2022 RTW (17.0) 1930 Source: the documentation for the _MSC_VER predefined macro"} +{"question_id": 52796630, "score": 98, "creation_date": 1539459508, "tags": ["python", "python-3.6", "python-asyncio"], "instruction": "Python3.6 AttributeError: module 'asyncio' has no attribute 'run'\n\nI tried to read https://hackernoon.com/asynchronous-python-45df84b82434. It's about asynchronous python and I tried the code from this, but I'm getting a weird Error. 
The code is: ` import asyncio import aiohttp urls = ['http://www.google.com', 'http://www.yandex.ru', 'http://www.python.org'] async def call_url(url): print('Starting {}'.format(url)) response = await aiohttp.ClientSession().get(url) data = await response.text() print('{}: {} bytes: {}'.format(url, len(data), data)) return data futures = [call_url(url) for url in urls] asyncio.run(asyncio.wait(futures)) When I try to run it says: Traceback (most recent call last): File \"test.py\", line 15, in asyncio.run(asyncio.wait(futures)) AttributeError: module 'asyncio' has no attribute 'run' sys:1: RuntimeWarning: coroutine 'call_url' was never awaited I dont have any files named ayncio and I have proof: >>> asyncio ", "output": "asyncio.run is a Python 3.7 addition. In 3.5-3.6, your example is roughly equivalent to: import asyncio futures = [...] loop = asyncio.get_event_loop() loop.run_until_complete(asyncio.wait(futures))"} +{"question_id": 17742789, "score": 98, "creation_date": 1374226573, "tags": ["python", "bash", "subprocess"], "instruction": "running multiple bash commands with subprocess\n\nIf I run echo a; echo b in bash the result will be that both commands are run. However if I use subprocess then the first command is run, printing out the whole of the rest of the line. The code below echos a; echo b instead of a b, how do I get it to run both commands? import subprocess, shlex def subprocess_cmd(command): process = subprocess.Popen(shlex.split(command), stdout=subprocess.PIPE) proc_stdout = process.communicate()[0].strip() print proc_stdout subprocess_cmd(\"echo a; echo b\")", "output": "You have to use shell=True in subprocess and no shlex.split: import subprocess command = \"echo a; echo b\" ret = subprocess.run(command, capture_output=True, shell=True) # before Python 3.7: # ret = subprocess.run(command, stdout=subprocess.PIPE, shell=True) print(ret.stdout.decode()) returns: a b"} +{"question_id": 21186327, "score": 98, "creation_date": 1389962594, "tags": ["python", "selenium"], "instruction": "Fill username and password using selenium in python\n\nHow can I auto fill the username and password over the link below: from selenium import webdriver from selenium.webdriver.common.keys import Keys chromedriver = 'C:\\\\chromedriver.exe' browser = webdriver.Chrome(chromedriver) browser.get('http://www.example.com') After that I really do not know: username = Select(browser.find_element_by_name('Username')) password = Select(browser.find_element_by_name('Password')) username.select_by_visible_text(\"text\") password.select_by_visible_text(\"text\")", "output": "Docs: https://selenium-python.readthedocs.io/navigating.html For versions 4.3.0 (released in June 2022) and later, calls to find_element_by_* and find_elements_by_* were removed from Selenium. You need to use the new API: from selenium.webdriver.common.by import By driver = webdriver.Firefox(...) # Or Chrome(), or Ie(), or Opera() # To catch password = driver.find_element(By.ID, \"passwd\") # To catch password = driver.find_element(By.NAME, \"passwd\") password.send_keys(\"Pa55worD\") driver.find_element(By.NAME, \"submit\").click() The original response, for API versions 4.2.0 or previous: driver = webdriver.Firefox(...) 
# Or Chrome(), or Ie(), or Opera() username = driver.find_element_by_id(\"username\") password = driver.find_element_by_id(\"password\") username.send_keys(\"YourUsername\") password.send_keys(\"Pa55worD\") driver.find_element_by_name(\"submit\").click() A note to your code: Select() is used to act on a Select Element (https://developer.mozilla.org/en-US/docs/Web/HTML/Element/select)."} +{"question_id": 36870953, "score": 98, "creation_date": 1461689068, "tags": ["python", "yaml", "jinja2", "newline"], "instruction": "jinja2 how to remove trailing newline\n\nI'm using jinja 2 to output a yaml file but can't seem to get rid of a trailing newline and the end of a for loop. Eg the below - request: path: {{ path }} headers: origin: 'somedomain.com' user-agent: 'agent' referer: 'some.domain.com' authority: 'somedomain.com' querystring: {% for key, value in querystring.items() -%} {{ key }}: '{{ value }}' {% endfor %} response: content: file: {{ content }} gives me the output: - request: path: /some/path headers: origin: 'somedomain.com' user-agent: 'agent' referer: 'somedomain.com' authority: 'somedomain.com' querystring: postcode: 'xxxxxx' houseNo: '55' response: content: file: address.json With an additional unwanted blank line after houseNo. How do I get rid of this line?", "output": "Change your loop to strip whitespace from the top AND bottom of the output (notice extra - at the for loop close): {% for key, value in querystring.items() -%} {{ key }}: '{{ value }}' {%- endfor %} In my tests (using https://github.com/abourguignon/jinja2-live-parser), the - must come after the first {%, not before the last to achieve what you're asking for. Docs: https://jinja.palletsprojects.com/en/latest/templates/#whitespace-control"} +{"question_id": 37107223, "score": 98, "creation_date": 1462763096, "tags": ["python", "neural-network", "tensorflow", "deep-learning"], "instruction": "How to add regularizations in TensorFlow?\n\nI found in many available neural network code implemented using TensorFlow that regularization terms are often implemented by manually adding an additional term to loss value. My questions are: Is there a more elegant or recommended way of regularization than doing it manually? I also find that get_variable has an argument regularizer. How should it be used? According to my observation, if we pass a regularizer to it (such as tf.contrib.layers.l2_regularizer, a tensor representing regularized term will be computed and added to a graph collection named tf.GraphKeys.REGULARIZATOIN_LOSSES. Will that collection be automatically used by TensorFlow (e.g. used by optimizers when training)? Or is it expected that I should use that collection by myself?", "output": "As you say in the second point, using the regularizer argument is the recommended way. You can use it in get_variable, or set it once in your variable_scope and have all your variables regularized. The losses are collected in the graph, and you need to manually add them to your cost function like this. reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES) reg_constant = 0.01 # Choose an appropriate one. loss = my_normal_loss + reg_constant * sum(reg_losses)"} +{"question_id": 25929319, "score": 98, "creation_date": 1411114506, "tags": ["python", "pandas"], "instruction": "How to iterate over pandas multiindex dataframe using index\n\nI have a data frame df which looks like this. 
Date and Time are 2 multilevel index observation1 observation2 date Time 2012-11-02 9:15:00 79.373668 224 9:16:00 130.841316 477 2012-11-03 9:15:00 45.312814 835 9:16:00 123.776946 623 9:17:00 153.76646 624 9:18:00 463.276946 626 9:19:00 663.176934 622 9:20:00 763.77333 621 2012-11-04 9:15:00 115.449437 122 9:16:00 123.776946 555 9:17:00 153.76646 344 9:18:00 463.276946 212 I want to run some complex process over daily data block. Pseudo code would look like for count in df(level 0 index) : new_df = get only chunk for count complex_process(new_df) So, first of all, I could not find a way to access only blocks for a date 2012-11-03 9:15:00 45.312814 835 9:16:00 123.776946 623 9:17:00 153.76646 624 9:18:00 463.276946 626 9:19:00 663.176934 622 9:20:00 763.77333 621 and then send it for processing. I am doing this in for loop as I am not sure if there is any way to do it without mentioning exact value of level 0 column. I did some basic search and found df.index.get_level_values(0), but it returns all the values and that causes loop to run multiple times for a given day. I want to create a Dataframe per day and send it for processing.", "output": "One easy way would be to groupby the first level of the index - iterating over the groupby object will return the group keys and a subframe containing each group. In [136]: for date, new_df in df.groupby(level=0): ...: print(new_df) ...: observation1 observation2 date Time 2012-11-02 9:15:00 79.373668 224 9:16:00 130.841316 477 observation1 observation2 date Time 2012-11-03 9:15:00 45.312814 835 9:16:00 123.776946 623 9:17:00 153.766460 624 9:18:00 463.276946 626 9:19:00 663.176934 622 9:20:00 763.773330 621 observation1 observation2 date Time 2012-11-04 9:15:00 115.449437 122 9:16:00 123.776946 555 9:17:00 153.766460 344 9:18:00 463.276946 212 You can also use droplevel to remove the first index (the useless date index): In [136]: for date, new_df in df.groupby(level=0): ...: print(new_df.droplevel(0)) ...: observation1 observation2 Time 9:15:00 79.373668 224 9:16:00 130.841316 477 ..."} +{"question_id": 41217310, "score": 98, "creation_date": 1482130802, "tags": ["python", "pandas", "numpy"], "instruction": "Get index of a row of a pandas dataframe as an integer\n\nAssume an easy dataframe, for example A B 0 1 0.810743 1 2 0.595866 2 3 0.154888 3 4 0.472721 4 5 0.894525 5 6 0.978174 6 7 0.859449 7 8 0.541247 8 9 0.232302 9 10 0.276566 How can I retrieve an index value of a row, given a condition? For example: dfb = df[df['A']==5].index.values.astype(int) returns [4], but what I would like to get is just 4. This is causing me troubles later in the code. Based on some conditions, I want to have a record of the indexes where that condition is fulfilled, and then select rows between. I tried dfb = df[df['A']==5].index.values.astype(int) dfbb = df[df['A']==8].index.values.astype(int) df.loc[dfb:dfbb,'B'] for a desired output A B 4 5 0.894525 5 6 0.978174 6 7 0.859449 but I get TypeError: '[4]' is an invalid key", "output": "The easier is add [0] - select first value of list with one element: dfb = df[df['A']==5].index.values.astype(int)[0] dfbb = df[df['A']==8].index.values.astype(int)[0] dfb = int(df[df['A']==5].index[0]) dfbb = int(df[df['A']==8].index[0]) But if possible some values not match, error is raised, because first value not exist. 
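To make that failure mode concrete, here is a minimal sketch (the tiny frame is made up purely for illustration): when no row satisfies the condition, the index array is empty, so taking [0] raises an IndexError.

import pandas as pd
df = pd.DataFrame({'A': [1, 2, 3], 'B': [0.1, 0.2, 0.3]})
matches = df[df['A'] == 99].index.values.astype(int)
print(matches)   # empty array -> []
# matches[0]     # would raise IndexError: index 0 is out of bounds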
Solution is use next with iter for get default parameetr if values not matched: dfb = next(iter(df[df['A']==5].index), 'no match') print (dfb) 4 dfb = next(iter(df[df['A']==50].index), 'no match') print (dfb) no match Then it seems need substract 1: print (df.loc[dfb:dfbb-1,'B']) 4 0.894525 5 0.978174 6 0.859449 Name: B, dtype: float64 Another solution with boolean indexing or query: print (df[(df['A'] >= 5) & (df['A'] < 8)]) A B 4 5 0.894525 5 6 0.978174 6 7 0.859449 print (df.loc[(df['A'] >= 5) & (df['A'] < 8), 'B']) 4 0.894525 5 0.978174 6 0.859449 Name: B, dtype: float64 print (df.query('A >= 5 and A < 8')) A B 4 5 0.894525 5 6 0.978174 6 7 0.859449"} +{"question_id": 29317262, "score": 98, "creation_date": 1427546437, "tags": ["python", "opencv"], "instruction": "openCV video saving in python\n\nI am trying to save the video but it's not working. I followed the instructions from the openCV documentation. import numpy as np import cv2 cap = cv2.VideoCapture(0) fourcc = cv2.VideoWriter_fourcc(*'XVID') out = cv2.VideoWriter('output.avi', fourcc, 20.0, (640,480)) while(cap.isOpened()): ret, frame = cap.read() if ret==True: frame = cv2.flip(frame,0) out.write(frame) cv2.imshow('frame',frame) if cv2.waitKey(1) & 0xFF == ord('q'): break else: break cap.release() out.release() cv2.destroyAllWindows() What is wrong?", "output": "Try this. It's working for me (Windows 10). import numpy as np import cv2 cap = cv2.VideoCapture(0) # Define the codec and create VideoWriter object #fourcc = cv2.cv.CV_FOURCC(*'DIVX') #out = cv2.VideoWriter('output.avi',fourcc, 20.0, (640,480)) out = cv2.VideoWriter('output.avi', -1, 20.0, (640,480)) while(cap.isOpened()): ret, frame = cap.read() if ret==True: frame = cv2.flip(frame,0) # write the flipped frame out.write(frame) cv2.imshow('frame',frame) if cv2.waitKey(1) & 0xFF == ord('q'): break else: break # Release everything if job is finished cap.release() out.release() cv2.destroyAllWindows()"} +{"question_id": 25386870, "score": 98, "creation_date": 1408460748, "tags": ["python", "pandas", "matplotlib", "seaborn", "bar-chart"], "instruction": "Pandas Plotting with Multi-Index\n\nAfter performing a groupby.sum() on a DataFrame I'm having some trouble trying to create my intended plot. import pandas as pd import numpy as np np.random.seed(365) rows = 100 data = {'Month': np.random.choice(['2014-01', '2014-02', '2014-03', '2014-04'], size=rows), 'Code': np.random.choice(['A', 'B', 'C'], size=rows), 'ColA': np.random.randint(5, 125, size=rows), 'ColB': np.random.randint(0, 51, size=rows),} df = pd.DataFrame(data) Month Code ColA ColB 0 2014-03 C 59 47 1 2014-01 A 24 9 2 2014-02 C 77 50 dfg = df.groupby(['Code', 'Month']).sum() ColA ColB Code Month A 2014-01 124 102 2014-02 398 282 2014-03 474 198 2014-04 830 237 B 2014-01 477 300 2014-02 591 167 2014-03 522 192 2014-04 367 169 C 2014-01 412 180 2014-02 275 205 2014-03 795 291 2014-04 901 309 How can I create a subplot (kind='bar') for each Code, where the x-axis is the Month and the bars are ColA and ColB?", "output": "I found the unstack(level) method to work perfectly, which has the added benefit of not needing a priori knowledge about how many Codes there are. 
ax = dfg.unstack(level=0).plot(kind='bar', subplots=True, rot=0, figsize=(9, 7), layout=(2, 3)) plt.tight_layout()"} +{"question_id": 980249, "score": 98, "creation_date": 1244713611, "tags": ["python"], "instruction": "Difference between dir(\u2026) and vars(\u2026).keys() in Python?\n\nIs there a difference between dir(\u2026) and vars(\u2026).keys() in Python? (I hope there is a difference, because otherwise this would break the \"one way to do it\" principle... :)", "output": "Python objects usually store their instance variables in a dictionary that belongs to the object (except for slots). vars(x) returns this dictionary (as does x.__dict__). dir(x), on the other hand, returns a dictionary of x's \"attributes, its class's attributes, and recursively the attributes of its class's base classes.\" When you access an object's attribute using the dot operator, Python does a lot more than just look up the attribute in that objects dictionary. A common case is when x is an instance of class C and you call its method m: class C: def m(self): print(\"m\") x = C() x.m() The method m is not stored in x.__dict__. It is an attribute of the class C. When you call x.m(), Python will begin by looking for m in x.__dict__, but it won't find it. However, it knows that x is an instance of C, so it will next look in C.__dict__, find it there, and call m with x as the first argument. So the difference between vars(x) and dir(x) is that dir(x) does the extra work of looking in x's class (and its bases) for attributes that are accessible from it, not just those attributes that are stored in x's own symbol table. In the above example, vars(x) returns an empty dictionary, because x has no instance variables. However, dir(x) returns ['__class__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', 'm']"} +{"question_id": 50242968, "score": 98, "creation_date": 1525817487, "tags": ["python", "pandas", "dataframe", "duplicates"], "instruction": "Check for duplicate values in Pandas dataframe column\n\nIs there a way in pandas to check if a dataframe column has duplicate values, without actually dropping rows? I have a function that will remove duplicate rows, however, I only want it to run if there are actually duplicates in a specific column. Currently I compare the number of unique values in the column to the number of rows: if there are less unique values than rows then there are duplicates and the code runs. if len(df['Student'].unique()) < len(df.index): # Code to remove duplicates based on Date column runs Is there an easier or more efficient way to check if duplicate values exist in a specific column, using pandas? Some of the sample data I am working with (only two columns shown). If duplicates are found then another function identifies which row to keep (row with oldest date): Student Date 0 Joe December 2017 1 James January 2018 2 Bob April 2018 3 Joe December 2017 4 Jack February 2018 5 Jack March 2018", "output": "Main question Is there a duplicate value in a column, True/False? 
\u2554\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2566\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2557 \u2551 Student \u2551 Date \u2551 \u2560\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256c\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2563 \u2551 Joe \u2551 December 2017 \u2551 \u2560\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256c\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2563 \u2551 Bob \u2551 April 2018 \u2551 \u2560\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256c\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2563 \u2551 Joe \u2551 December 2018 \u2551 \u255a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2569\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u255d Assuming above dataframe (df), we could do a quick check if duplicated in the Student col by: boolean = not df[\"Student\"].is_unique # True (credit to @Carsten) boolean = df['Student'].duplicated().any() # True Further reading and references Above we are using one of the Pandas Series methods. The pandas DataFrame has several useful methods, two of which are: drop_duplicates(self[, subset, keep, inplace]) - Return DataFrame with duplicate rows removed, optionally only considering certain columns. duplicated(self[, subset, keep]) - Return boolean Series denoting duplicate rows, optionally only considering certain columns. These methods can be applied on the DataFrame as a whole, and not just a Serie (column) as above. The equivalent would be: boolean = df.duplicated(subset=['Student']).any() # True # We were expecting True, as Joe can be seen twice. However, if we are interested in the whole frame we could go ahead and do: boolean = df.duplicated().any() # False boolean = df.duplicated(subset=['Student','Date']).any() # False # We were expecting False here - no duplicates row-wise # ie. Joe Dec 2017, Joe Dec 2018 And a final useful tip. By using the keep paramater we can normally skip a few rows directly accessing what we need: keep : {\u2018first\u2019, \u2018last\u2019, False}, default \u2018first\u2019 first : Drop duplicates except for the first occurrence. last : Drop duplicates except for the last occurrence. False : Drop all duplicates. 
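As a quick illustration of keep (the miniature frame below is invented just for this demo):

import pandas as pd
df = pd.DataFrame({'Student': ['Joe', 'Bob', 'Joe'],
                   'Date': ['December 2017', 'April 2018', 'December 2018']})
print(df.duplicated(subset=['Student'], keep='first').tolist())  # [False, False, True]
print(df.duplicated(subset=['Student'], keep='last').tolist())   # [True, False, False]
print(df.duplicated(subset=['Student'], keep=False).tolist())    # [True, False, True]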
Example to play around with import pandas as pd import io data = '''\\ Student,Date Joe,December 2017 Bob,April 2018 Joe,December 2018''' df = pd.read_csv(io.StringIO(data), sep=',') # Approach 1: Simple True/False boolean = df.duplicated(subset=['Student']).any() print(boolean, end='\\n\\n') # True # Approach 2: First store boolean array, check then remove duplicate_in_student = df.duplicated(subset=['Student']) if duplicate_in_student.any(): print(df.loc[~duplicate_in_student], end='\\n\\n') # Approach 3: Use drop_duplicates method df.drop_duplicates(subset=['Student'], inplace=True) print(df) Returns True Student Date 0 Joe December 2017 1 Bob April 2018 Student Date 0 Joe December 2017 1 Bob April 2018"} +{"question_id": 41186818, "score": 98, "creation_date": 1481899419, "tags": ["python", "random", "uuid"], "instruction": "How to generate a random UUID which is reproducible (with a seed) in Python\n\nThe uuid4() function of Python's module uuid generates a random UUID, and seems to generate a different one every time: In [1]: import uuid In [2]: uuid.uuid4() Out[2]: UUID('f6c9ad6c-eea0-4049-a7c5-56253bc3e9c0') In [3]: uuid.uuid4() Out[3]: UUID('2fc1b6f9-9052-4564-9be0-777e790af58f') I would like to be able to generate the same random UUID every time I run a script - that is, I'd like to seed the random generator in uuid4(). Is there a way to do this? (Or achieve this by some other means)? What I've tried so far I've to generate a UUID using the uuid.UUID() method with a random 128-bit integer (from a seeded instance of random.Random()) as input: import uuid import random rd = random.Random() rd.seed(0) uuid.UUID(rd.getrandbits(128)) However, UUID() seems not to accept this as input: Traceback (most recent call last): File \"uuid_gen_seed.py\", line 6, in uuid.UUID(rd.getrandbits(128)) File \"/usr/lib/python2.7/uuid.py\", line 133, in __init__ hex = hex.replace('urn:', '').replace('uuid:', '') AttributeError: 'long' object has no attribute 'replace' Any other suggestions?", "output": "Almost there: uuid.UUID(int=rd.getrandbits(128), version=4) This was determined with the help of help: >>> help(uuid.UUID.__init__) Help on method __init__ in module uuid: __init__(self, hex=None, bytes=None, bytes_le=None, fields=None, int=None, version=None) unbound uuid.UUID method Create a UUID from either a string of 32 hexadecimal digits, a string of 16 bytes as the 'bytes' argument, a string of 16 bytes in little-endian order as the 'bytes_le' argument, a tuple of six integers (32-bit time_low, 16-bit time_mid, 16-bit time_hi_version, 8-bit clock_seq_hi_variant, 8-bit clock_seq_low, 48-bit node) as the 'fields' argument, or a single 128-bit integer as the 'int' argument. When a string of hex digits is given, curly braces, hyphens, and a URN prefix are all optional. For example, these expressions all yield the same UUID: UUID('{12345678-1234-5678-1234-567812345678}') UUID('12345678123456781234567812345678') UUID('urn:uuid:12345678-1234-5678-1234-567812345678') UUID(bytes='\\x12\\x34\\x56\\x78'*4) UUID(bytes_le='\\x78\\x56\\x34\\x12\\x34\\x12\\x78\\x56' + '\\x12\\x34\\x56\\x78\\x12\\x34\\x56\\x78') UUID(fields=(0x12345678, 0x1234, 0x5678, 0x12, 0x34, 0x567812345678)) UUID(int=0x12345678123456781234567812345678) Exactly one of 'hex', 'bytes', 'bytes_le', 'fields', or 'int' must be given. 
The 'version' argument is optional; if given, the resulting UUID will have its variant and version set according to RFC 4122, overriding the given 'hex', 'bytes', 'bytes_le', 'fields', or 'int'."} +{"question_id": 12382719, "score": 98, "creation_date": 1347433101, "tags": ["python", "printing", "format"], "instruction": "Python way of printing: with 'format' or percent form?\n\nIn Python there seem to be two different ways of generating formatted output: user = \"Alex\" number = 38746 print(\"%s asked %d questions on stackoverflow.com\" % (user, number)) print(\"{0} asked {1} questions on stackoverflow.com\".format(user, number)) Is there one way to be preferred over the other? Are they equivalent, what is the difference? What form should be used, especially for Python3?", "output": "Use the format method, especially if you're concerned about Python 3 and the future. From the documentation: The formatting operations described here are modelled on C's printf() syntax. They only support formatting of certain builtin types. The use of a binary operator means that care may be needed in order to format tuples and dictionaries correctly. As the new :ref:string-formatting syntax is more flexible and handles tuples and dictionaries naturally, it is recommended for new code. However, there are no current plans to deprecate printf-style formatting."} +{"question_id": 52582685, "score": 98, "creation_date": 1538346598, "tags": ["python", "python-3.x", "async-await", "python-asyncio"], "instruction": "Using asyncio.Queue for producer-consumer flow\n\nI'm confused about how to use asyncio.Queue for a particular producer-consumer pattern in which both the producer and consumer operate concurrently and independently. First, consider this example, which closely follows that from the docs for asyncio.Queue: import asyncio import random import time async def worker(name, queue): while True: sleep_for = await queue.get() await asyncio.sleep(sleep_for) queue.task_done() print(f'{name} has slept for {sleep_for:0.2f} seconds') async def main(n): queue = asyncio.Queue() total_sleep_time = 0 for _ in range(20): sleep_for = random.uniform(0.05, 1.0) total_sleep_time += sleep_for queue.put_nowait(sleep_for) tasks = [] for i in range(n): task = asyncio.create_task(worker(f'worker-{i}', queue)) tasks.append(task) started_at = time.monotonic() await queue.join() total_slept_for = time.monotonic() - started_at for task in tasks: task.cancel() # Wait until all worker tasks are cancelled. await asyncio.gather(*tasks, return_exceptions=True) print('====') print(f'3 workers slept in parallel for {total_slept_for:.2f} seconds') print(f'total expected sleep time: {total_sleep_time:.2f} seconds') if __name__ == '__main__': import sys n = 3 if len(sys.argv) == 1 else sys.argv[1] asyncio.run(main()) There is one finer detail about this script: the items are put into the queue synchronously, with queue.put_nowait(sleep_for) over a conventional for-loop. My goal is to create a script that uses async def worker() (or consumer()) and async def producer(). Both should be scheduled to run concurrently. No one consumer coroutine is explicitly tied to or chained from a producer. How can I modify the program above so that the producer(s) is its own coroutine that can be scheduled concurrently with the consumers/workers? There is a second example from PYMOTW. 
It requires the producer to know the number of consumers ahead of time, and uses None as a signal to the consumer that production is done.", "output": "How can I modify the program above so that the producer(s) is its own coroutine that can be scheduled concurrently with the consumers/workers? The example can be generalized without changing its essential logic: Move the insertion loop to a separate producer coroutine. Start the consumers in the background, letting them process the items as they are produced. With the consumers running, start the producers and wait for them to finish producing items, as with await producer() or await gather(*producers), etc. Once all producers are done, wait for consumers to process the remaining items with await queue.join(). Cancel the consumers, all of which are now idly waiting for the queue to deliver the next item, which will never arrive as we know the producers are done. Here is an example implementing the above: import asyncio, random async def rnd_sleep(t): # sleep for T seconds on average await asyncio.sleep(t * random.random() * 2) async def producer(queue): while True: # produce a token and send it to a consumer token = random.random() if token < .05: break print(f'produced {token}') await queue.put(token) await rnd_sleep(.1) async def consumer(queue): while True: token = await queue.get() # process the token received from a producer await rnd_sleep(.3) queue.task_done() print(f'consumed {token}') async def main(): queue = asyncio.Queue() # fire up the both producers and consumers producers = [asyncio.create_task(producer(queue)) for _ in range(3)] consumers = [asyncio.create_task(consumer(queue)) for _ in range(10)] # with both producers and consumers running, wait for # the producers to finish await asyncio.gather(*producers) print('---- done producing') # wait for the remaining tasks to be processed await queue.join() # cancel the consumers, which are now idle for c in consumers: c.cancel() asyncio.run(main()) Note that in real-life producers and consumers, especially those that involve network access, you probably want to catch IO-related exceptions that occur during processing. If the exception is recoverable, as most network-related exceptions are, you can simply catch the exception and log the error. You should still invoke task_done() because otherwise queue.join() will hang due to an unprocessed item. If it makes sense to re-try processing the item, you can return it into the queue prior to calling task_done(). For example: # like the above, but handling exceptions during processing: async def consumer(queue): while True: token = await queue.get() try: # this uses aiohttp or whatever await process(token) except aiohttp.ClientError as e: print(f\"Error processing token {token}: {e}\") # If it makes sense, return the token to the queue to be # processed again. (You can use a counter to avoid # processing a faulty token infinitely.) #await queue.put(token) queue.task_done() print(f'consumed {token}')"} +{"question_id": 2554185, "score": 98, "creation_date": 1270048866, "tags": ["python", "regex"], "instruction": "Match groups in Python\n\nIs there a way in Python to access match groups without explicitly creating a match object (or another way to beautify the example below)? 
Here is an example to clarify my motivation for the question: Following Perl code if ($statement =~ /I love (\\w+)/) { print \"He loves $1\\n\"; } elsif ($statement =~ /Ich liebe (\\w+)/) { print \"Er liebt $1\\n\"; } elsif ($statement =~ /Je t\\'aime (\\w+)/) { print \"Il aime $1\\n\"; } translated into Python m = re.search(\"I love (\\w+)\", statement) if m: print \"He loves\",m.group(1) else: m = re.search(\"Ich liebe (\\w+)\", statement) if m: print \"Er liebt\",m.group(1) else: m = re.search(\"Je t'aime (\\w+)\", statement) if m: print \"Il aime\",m.group(1) looks very awkward (if-else-cascade, match object creation).", "output": "You could create a little class that returns the boolean result of calling match, and retains the matched groups for subsequent retrieval: import re class REMatcher(object): def __init__(self, matchstring): self.matchstring = matchstring def match(self,regexp): self.rematch = re.match(regexp, self.matchstring) return bool(self.rematch) def group(self,i): return self.rematch.group(i) for statement in (\"I love Mary\", \"Ich liebe Margot\", \"Je t'aime Marie\", \"Te amo Maria\"): m = REMatcher(statement) if m.match(r\"I love (\\w+)\"): print \"He loves\",m.group(1) elif m.match(r\"Ich liebe (\\w+)\"): print \"Er liebt\",m.group(1) elif m.match(r\"Je t'aime (\\w+)\"): print \"Il aime\",m.group(1) else: print \"???\" Update for Python 3 print as a function, and Python 3.8 assignment expressions - no need for a REMatcher class now: import re for statement in (\"I love Mary\", \"Ich liebe Margot\", \"Je t'aime Marie\", \"Te amo Maria\"): if m := re.match(r\"I love (\\w+)\", statement): print(\"He loves\", m.group(1)) elif m := re.match(r\"Ich liebe (\\w+)\", statement): print(\"Er liebt\", m.group(1)) elif m := re.match(r\"Je t'aime (\\w+)\", statement): print(\"Il aime\", m.group(1)) else: print()"} +{"question_id": 5669878, "score": 98, "creation_date": 1302816204, "tags": ["python", "mysql", "mysql-python"], "instruction": "When to close cursors using MySQLdb\n\nI'm building a WSGI web app and I have a MySQL database. I'm using MySQLdb, which provides cursors for executing statements and getting results. What is the standard practice for getting and closing cursors? In particular, how long should my cursors last? Should I get a new cursor for each transaction? I believe you need to close the cursor before committing the connection. Is there any significant advantage to finding sets of transactions that don't require intermediate commits so that you don't have to get new cursors for each transaction? Is there a lot of overhead for getting new cursors, or is it just not a big deal?", "output": "Instead of asking what is standard practice, since that's often unclear and subjective, you might try looking to the module itself for guidance. In general, using the with keyword as another user suggested is a great idea, but in this specific circumstance it may not give you quite the functionality you expect. As of version 1.2.5 of the module, MySQLdb.Connection implements the context manager protocol with the following code (github): def __enter__(self): if self.get_autocommit(): self.query(\"BEGIN\") return self.cursor() def __exit__(self, exc, value, tb): if exc: self.rollback() else: self.commit() There are several existing Q&A about with already, or you can read Understanding Python's \"with\" statement, but essentially what happens is that __enter__ executes at the start of the with block, and __exit__ executes upon leaving the with block. 
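As a minimal sketch of the protocol itself (a toy class, not part of MySQLdb): any object with __enter__ and __exit__ can drive a with block, and the prints show exactly when each hook fires.

class Demo(object):
    def __enter__(self):
        print('entering')    # runs when the with block starts
        return self          # this value is what an 'as VAR' clause would bind
    def __exit__(self, exc_type, exc_value, tb):
        print('exiting')     # runs when the block is left, even on error
        return False         # False means exceptions are not suppressed

with Demo():
    print('inside the block')
# output: entering / inside the block / exiting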
You can use the optional syntax with EXPR as VAR to bind the object returned by __enter__ to a name if you intend to reference that object later. So, given the above implementation, here's a simple way to query your database: connection = MySQLdb.connect(...) with connection as cursor: # connection.__enter__ executes at this line cursor.execute('select 1;') result = cursor.fetchall() # connection.__exit__ executes after this line print result # prints \"((1L,),)\" The question now is, what are the states of the connection and the cursor after exiting the with block? The __exit__ method shown above calls only self.rollback() or self.commit(), and neither of those methods go on to call the close() method. The cursor itself has no __exit__ method defined \u2013 and wouldn't matter if it did, because with is only managing the connection. Therefore, both the connection and the cursor remain open after exiting the with block. This is easily confirmed by adding the following code to the above example: try: cursor.execute('select 1;') print 'cursor is open;', except MySQLdb.ProgrammingError: print 'cursor is closed;', if connection.open: print 'connection is open' else: print 'connection is closed' You should see the output \"cursor is open; connection is open\" printed to stdout. I believe you need to close the cursor before committing the connection. Why? The MySQL C API, which is the basis for MySQLdb, does not implement any cursor object, as implied in the module documentation: \"MySQL does not support cursors; however, cursors are easily emulated.\" Indeed, the MySQLdb.cursors.BaseCursor class inherits directly from object and imposes no such restriction on cursors with regard to commit/rollback. An Oracle developer had this to say: cnx.commit() before cur.close() sounds most logical to me. Maybe you can go by the rule: \"Close the cursor if you do not need it anymore.\" Thus commit() before closing the cursor. In the end, for Connector/Python, it does not make much difference, but or other databases it might. I expect that's as close as you're going to get to \"standard practice\" on this subject. Is there any significant advantage to finding sets of transactions that don't require intermediate commits so that you don't have to get new cursors for each transaction? I very much doubt it, and in trying to do so, you may introduce additional human error. Better to decide on a convention and stick with it. Is there a lot of overhead for getting new cursors, or is it just not a big deal? The overhead is negligible, and doesn't touch the database server at all; it's entirely within the implementation of MySQLdb. You can look at BaseCursor.__init__ on github if you're really curious to know what's happening when you create a new cursor. Going back to earlier when we were discussing with, perhaps now you can understand why the MySQLdb.Connection class __enter__ and __exit__ methods give you a brand new cursor object in every with block and don't bother keeping track of it or closing it at the end of the block. It's fairly lightweight and exists purely for your convenience. If it's really that important to you to micromanage the cursor object, you can use contextlib.closing to make up for the fact that the cursor object has no defined __exit__ method. For that matter, you can also use it to force the connection object to close itself upon exiting a with block. 
This should output \"my_curs is closed; my_conn is closed\": from contextlib import closing import MySQLdb with closing(MySQLdb.connect(...)) as my_conn: with closing(my_conn.cursor()) as my_curs: my_curs.execute('select 1;') result = my_curs.fetchall() try: my_curs.execute('select 1;') print 'my_curs is open;', except MySQLdb.ProgrammingError: print 'my_curs is closed;', if my_conn.open: print 'my_conn is open' else: print 'my_conn is closed' Note that with closing(arg_obj) will not call the argument object's __enter__ and __exit__ methods; it will only call the argument object's close method at the end of the with block. (To see this in action, simply define a class Foo with __enter__, __exit__, and close methods containing simple print statements, and compare what happens when you do with Foo(): pass to what happens when you do with closing(Foo()): pass.) This has two significant implications: First, if autocommit mode is enabled, MySQLdb will BEGIN an explicit transaction on the server when you use with connection and commit or rollback the transaction at the end of the block. These are default behaviors of MySQLdb, intended to protect you from MySQL's default behavior of immediately committing any and all DML statements. MySQLdb assumes that when you use a context manager, you want a transaction, and uses the explicit BEGIN to bypass the autocommit setting on the server. If you're used to using with connection, you might think autocommit is disabled when actually it was only being bypassed. You might get an unpleasant surprise if you add closing to your code and lose transactional integrity; you won't be able to rollback changes, you may start seeing concurrency bugs and it may not be immediately obvious why. Second, with closing(MySQLdb.connect(user, pass)) as VAR binds the connection object to VAR, in contrast to with MySQLdb.connect(user, pass) as VAR, which binds a new cursor object to VAR. In the latter case you would have no direct access to the connection object! Instead, you would have to use the cursor's connection attribute, which provides proxy access to the original connection. When the cursor is closed, its connection attribute is set to None. This results in an abandoned connection that will stick around until one of the following happens: All references to the cursor are removed The cursor goes out of scope The connection times out The connection is closed manually via server administration tools You can test this by monitoring open connections (in Workbench or by using SHOW PROCESSLIST) while executing the following lines one by one: with MySQLdb.connect(...) as my_curs: pass my_curs.close() my_curs.connection # None my_curs.connection.close() # throws AttributeError, but connection still open del my_curs # connection will close here"} +{"question_id": 16188420, "score": 98, "creation_date": 1366795925, "tags": ["python", "tkinter", "scrollbar", "frame"], "instruction": "Tkinter scrollbar for frame\n\nMy objective is to add a vertical scroll bar to a frame which has several labels in it. The scroll bar should automatically enabled as soon as the labels inside the frame exceed the height of the frame. After searching through, I found this useful post. Based on that post I understand that in order to achieve what i want, (correct me if I am wrong, I am a beginner) I have to create a Frame first, then create a Canvas inside that frame and stick the scroll bar to that frame as well. After that, create another frame and put it inside the canvas as a window object. 
So, I finally come up with this: from Tkinter import * def data(): for i in range(50): Label(frame,text=i).grid(row=i,column=0) Label(frame,text=\"my text\"+str(i)).grid(row=i,column=1) Label(frame,text=\"..........\").grid(row=i,column=2) def myfunction(event): canvas.configure(scrollregion=canvas.bbox(\"all\"),width=200,height=200) root=Tk() sizex = 800 sizey = 600 posx = 100 posy = 100 root.wm_geometry(\"%dx%d+%d+%d\" % (sizex, sizey, posx, posy)) myframe=Frame(root,relief=GROOVE,width=50,height=100,bd=1) myframe.place(x=10,y=10) canvas=Canvas(myframe) frame=Frame(canvas) myscrollbar=Scrollbar(myframe,orient=\"vertical\",command=canvas.yview) canvas.configure(yscrollcommand=myscrollbar.set) myscrollbar.pack(side=\"right\",fill=\"y\") canvas.pack(side=\"left\") canvas.create_window((0,0),window=frame,anchor='nw') frame.bind(\"\",myfunction) data() root.mainloop() Am I doing it right? Is there better/smarter way to achieve the output this code gave me? Why must I use grid method? (I tried place method, but none of the labels appear on the canvas.) What so special about using anchor='nw' when creating window on canvas? Please keep your answer simple, as I am a beginner.", "output": "\"Am i doing it right?Is there better/smarter way to achieve the output this code gave me?\" Generally speaking, yes, you're doing it right. Tkinter has no native scrollable container other than the canvas. As you can see, it's really not that difficult to set up. As your example shows, it only takes 5 or 6 lines of code to make it work -- depending on how you count lines. \"Why must i use grid method?(i tried place method, but none of the labels appear on the canvas?)\" You ask about why you must use grid. There is no requirement to use grid. Place, grid and pack can all be used. It's simply that some are more naturally suited to particular types of problems. In this case it looks like you're creating an actual grid -- rows and columns of labels -- so grid is the natural choice. \"What so special about using anchor='nw' when creating window on canvas?\" The anchor tells you what part of the window is positioned at the coordinates you give. By default, the center of the window will be placed at the coordinate. In the case of your code above, you want the upper left (\"northwest\") corner to be at the coordinate."} +{"question_id": 26136894, "score": 98, "creation_date": 1412150969, "tags": ["python", "hash", "set", "cpython", "python-internals"], "instruction": "Why does tuple(set([1,\"a\",\"b\",\"c\",\"z\",\"f\"])) == tuple(set([\"a\",\"b\",\"c\",\"z\",\"f\",1])) 85% of the time with hash randomization enabled?\n\nGiven Zero Piraeus' answer to another question, we have that x = tuple(set([1, \"a\", \"b\", \"c\", \"z\", \"f\"])) y = tuple(set([\"a\", \"b\", \"c\", \"z\", \"f\", 1])) print(x == y) Prints True about 85% of the time with hash randomization enabled. Why 85%?", "output": "I'm going to assume any readers of this question to have read both: Zero Piraeus' answer and My explanation of CPython's sets. The first thing to note is that hash randomization is decided on interpreter start-up. The hash of each letter will be the same for both sets, so the only thing that can matter is if there is a collision (where order will be affected). By the deductions of that second link we know the backing array for these sets starts at length 8: _ _ _ _ _ _ _ _ In the first case, we insert 1: _ 1 _ _ _ _ _ _ and then insert the rest: \u03b1 1 ? ? ? ? ? ? 
Then it is rehashed to size 32: 1 can't collide with \u03b1 as \u03b1 is an even hash \u2193 so 1 is inserted at slot 1 first ? 1 ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? In the second case, we insert the rest: ? \u03b2 ? ? ? ? ? ? And then try to insert 1: Try to insert 1 here, but will \u2193 be rehashed if \u03b2 exists ? \u03b2 ? ? ? ? ? ? And then it will be rehashed: Try to insert 1 here, but will be rehashed if \u03b2 exists and has \u2193 not rehashed somewhere else ? \u03b2 ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? So whether the iteration orders are different depends solely on whether \u03b2 exists. The chance of a \u03b2 is the chance that any of the 5 letters will hash to 1 modulo 8 and hash to 1 modulo 32. Since anything that hashes to 1 modulo 32 also hashes to 1 modulo 8, we want to find the chance that of the 32 slots, one of the five is in slot 1: 5 (number of letters) / 32 (number of slots) 5/32 is 0.15625, so there is a 15.625% chance\u00b9 of the orders being different between the two set constructions. Not very strangely at all, this is exactly what Zero Piraeus measured. \u00b9Technically even this isn't obvious. We can pretend every one of the 5 hashes uniquely because of rehashing, but because of linear probing it's actually more likely for \"bunched\" structures to occur... but because we're only looking at whether a single slot is occupied, this doesn't actually affect us."} +{"question_id": 52827463, "score": 98, "creation_date": 1539659677, "tags": ["python", "python-typing", "isinstance"], "instruction": "collections.Iterable vs typing.Iterable in type annotation and checking for Iterable\n\nI found that in Python both collections.Iterable and typing.Iterable can be used in type annotation and checking for whether an object is iterable, i.e., both isinstance(obj, collections.Iterable) and isinstance(obj, typing.Iterable) works. My question is, what are the differences among them? And which one is preferred in which situations?", "output": "Due to PEP 585 - Type Hinting Generics In Standard Collections, Python's standard library container types are also able to accept a generic argument for type annotations. This includes the collections.abc.Iterable class. When supporting only Python 3.9 or later, there is no longer any reason to use the typing.Iterable at all and importing any of these container types from typing is deprecated. For older Python versions: The typing.Iterable is generic, so you can say what it's an iterable of in your type annotations, e.g. Iterable[int] for an iterable of ints. The collections iterable is an abstract base class. These can include extra mixin methods to make the interface easier to implement when you create your own subclasses. Now it so happens that Iterable doesn't include any of these mixins, but it is part of the interface of other abstract base classes that do. Theoretically, the typing iterable works for either, but it uses some weird metaclass magic to do it, so they don't behave exactly the same way in all cases. You really don't need generics at runtime, so there's no need to ever use it outside of type annotations and such. The collections iterable is less likely to cause problems as a superclass. 
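A small sketch of the two roles (Python 3, with illustrative names):

from collections.abc import Iterable as ABCIterable
from typing import Iterable

def total(numbers: Iterable[int]) -> int:    # generic: documents the element type
    return sum(numbers)

print(isinstance([1, 2, 3], ABCIterable))    # True - runtime check against the ABC

class Countdown(ABCIterable):                # ABC used as a superclass
    def __iter__(self):
        return iter([3, 2, 1])

print(list(Countdown()))                     # [3, 2, 1]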
So in short, you should use the typing iterable in type annotations, but the collections iterable as a superclass."} +{"question_id": 14159482, "score": 97, "creation_date": 1357311402, "tags": ["python", "git", "pip"], "instruction": "pip install a local git repository\n\nI can't find the correct way to install a local directory as a python package using pip. (venv) C:\\(...)>pip install . --no-index Ignoring indexes: http://pypi.python.org/simple/ Unpacking c:\\users\\fsantos\\desktop\\biskates.com\\biskates\\forks\\django-pipeline Running setup.py egg_info for package from file:///(...)%5Cforks%5Cdjango-pipeline Installing collected packages: django-pipeline Running setup.py install for django-pipeline Successfully installed django-pipeline Cleaning up... (venv) C:\\(...)>cd .. (venv) C:\\(...)>python Python 2.7.2 (default, Jun 12 2011, 15:08:59) [MSC v.1500 32 bit (Intel)] on win32 Type \"help\", \"copyright\", \"credits\" or \"license\" for more information. >>> import pipeline >>> pipeline.__file__ 'C:\\\\(...)site-packages\\\\pipeline\\\\__init__.py' >>> As you can see pip just copied over the package to site-packages. How can I avoid this, and use the package directly from its source folder? I'm trying to integrate django-pipeline into my Django project, but I want to add support for Django 1.4 first, so I forked and cloned my fork.", "output": "I can also just use: cd your-local-repo pip install -e ."} +{"question_id": 11977279, "score": 97, "creation_date": 1345064711, "tags": ["python", "design-patterns", "builder-pattern"], "instruction": "Builder pattern equivalent in Python\n\nIn Java, you can use the builder pattern to provide a more readable means to instantiating a class with many parameters. In the builder pattern, one constructs a configuration object with methods to set named attributes, and then uses it to construct another object. What is the equivalent in Python? Is the best way to mimic the same implementation?", "output": "Design patterns can often be replaced with built-in language features. Your use case You say \"I wanted to have a more readable \"means\" to instantiating a class with many parameters.\". In Java's case: [A] use case for the builder pattern is when the constructor of the object to be built must take very many parameters. In such cases, it is often more convenient to lump such configuration parameters in a builder object (setMaxTemperature(int t), setMinTemperature(int t), set.. , etc. ) than to burden the caller with a long list of arguments to pass in the class's constructor.. Builder pattern not needed But Python supports named parameters, so this is not necessary. You can just define a class's constructor: class SomeClass(object): def __init__(self, foo=\"default foo\", bar=\"default bar\", baz=\"default baz\"): # do something and call it using named parameters: s = SomeClass(bar=1, foo=0) Note that you can freely reorder and omit arguments, just as with a builder in Java you can omit or reorder calls to the set methods on the builder object. Also worth stating is that Python's dynamic nature gives you more freedom over construction of objects (using __new__ etc.), which can replace other uses of the builder pattern. But if you really want to use it you can use collections.namedtuple as your config object. namedtuple() returns a new type representing a tuple, each of whose parameters has a given name, without having to write a boilerplate class. You can use objects of the resulting type in a similar way to Java builders. 
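A rough sketch of that idea (field names invented for illustration):

from collections import namedtuple

# a lightweight, immutable 'config' object standing in for a builder
OvenConfig = namedtuple('OvenConfig', ['min_temp', 'max_temp', 'mode'])

config = OvenConfig(min_temp=150, max_temp=220, mode='fan')
print(config.max_temp)                    # 220
tweaked = config._replace(mode='grill')   # returns a new, modified copy
print(tweaked)                            # OvenConfig(min_temp=150, max_temp=220, mode='grill')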
(Thanks to Paul McGuire for suggesting this.) StringBuilder A related pattern is Java's StringBuilder, which is used to efficiently construct an (immutable) String in stages. In Python, this can be replaced with str.join. For example: final StringBuilder sb = new StringBuilder(); for(int i = 0; i < 100; i++) sb.append(\"Hello(\" + i + \")\"); return sb.toString(); can be replaced with return \"\".join(f\"Hello({i})\" for i in range(100))"} +{"question_id": 23240969, "score": 97, "creation_date": 1398247257, "tags": ["python", "python-2.7"], "instruction": "Python: count repeated elements in the list\n\nI am new to Python. I am trying to find a simple way of getting a count of the number of elements repeated in a list e.g. MyList = [\"a\", \"b\", \"a\", \"c\", \"c\", \"a\", \"c\"] Output: a: 3 b: 1 c: 3", "output": "You can do that using count: my_dict = {i:MyList.count(i) for i in MyList} >>> print my_dict #or print(my_dict) in python-3.x {'a': 3, 'c': 3, 'b': 1} Or using collections.Counter: from collections import Counter a = dict(Counter(MyList)) >>> print a #or print(a) in python-3.x {'a': 3, 'c': 3, 'b': 1}"} +{"question_id": 8955448, "score": 97, "creation_date": 1327170042, "tags": ["python", "numpy", "scipy"], "instruction": "Save / load scipy sparse csr_matrix in portable data format\n\nHow do you save/load a scipy sparse csr_matrix in a portable format? The scipy sparse matrix is created on Python 3 (Windows 64-bit) to run on Python 2 (Linux 64-bit). Initially, I used pickle (with protocol=2 and fix_imports=True) but this didn't work going from Python 3.2.2 (Windows 64-bit) to Python 2.7.2 (Windows 32-bit) and got the error: TypeError: ('data type not understood', , (, (0,), '[98]')). Next, tried numpy.save and numpy.load as well as scipy.io.mmwrite() and scipy.io.mmread() and none of these methods worked either.", "output": "edit: scipy 0.19 now has scipy.sparse.save_npz and scipy.sparse.load_npz. from scipy import sparse sparse.save_npz(\"yourmatrix.npz\", your_matrix) your_matrix_back = sparse.load_npz(\"yourmatrix.npz\") For both functions, the file argument may also be a file-like object (i.e. the result of open) instead of a filename. Got an answer from the Scipy user group: A csr_matrix has 3 data attributes that matter: .data, .indices, and .indptr. All are simple ndarrays, so numpy.save will work on them. 
Save the three arrays with numpy.save or numpy.savez, load them back with numpy.load, and then recreate the sparse matrix object with: new_csr = csr_matrix((data, indices, indptr), shape=(M, N)) So for example: def save_sparse_csr(filename, array): np.savez(filename, data=array.data, indices=array.indices, indptr=array.indptr, shape=array.shape) def load_sparse_csr(filename): loader = np.load(filename) return csr_matrix((loader['data'], loader['indices'], loader['indptr']), shape=loader['shape'])"} +{"question_id": 51691563, "score": 97, "creation_date": 1533445827, "tags": ["python", "pytorch"], "instruction": "CUDA runtime error (59) : device-side assert triggered\n\nTHCudaCheck FAIL file=/opt/conda/conda-bld/pytorch_1524584710464/work/aten/src/THC/generated/../generic/THCTensorMathPointwise.cu line=265 error=59 : device-side assert triggered Traceback (most recent call last): File \"main.py\", line 109, in train(loader_train, model, criterion, optimizer) File \"main.py\", line 54, in train optimizer.step() File \"/usr/local/anaconda35/lib/python3.6/site-packages/torch/optim/sgd.py\", line 93, in step d_p.add_(weight_decay, p.data) RuntimeError: cuda runtime error (59) : device-side assert triggered at /opt/conda/conda-bld/pytorch_1524584710464/work/aten/src/THC/generated/../generic/THCTensorMathPointwise.cu:265 How do I resolve this error?", "output": "In general, when encountering cuda runtine errors, it is advisable to run your program again using the CUDA_LAUNCH_BLOCKING=1 flag to obtain an accurate stack trace. In your specific case, the targets of your data were too high (or low) for the specified number of classes."} +{"question_id": 1938048, "score": 97, "creation_date": 1261366914, "tags": ["python", "time"], "instruction": "High-precision clock in Python\n\nIs there a way to measure time with high-precision in Python --- more precise than one second? I doubt that there is a cross-platform way of doing that; I'm interesting in high precision time on Unix, particularly Solaris running on a Sun SPARC machine. timeit seems to be capable of high-precision time measurement, but rather than measure how long a code snippet takes, I'd like to directly access the time values.", "output": "The standard time.time() function provides sub-second precision, though that precision varies by platform. For Linux and Mac precision is +- 1 microsecond or 0.001 milliseconds. Python on Windows with Python < 3.7 uses +- 16 milliseconds precision due to clock implementation problems due to process interrupts. The timeit module can provide higher resolution if you're measuring execution time. >>> import time >>> time.time() #return seconds from epoch 1261367718.971009 Python 3.7 introduces new functions to the time module that provide higher resolution for longer time periods: >>> import time >>> time.time_ns() 1530228533161016309 >>> time.time_ns() / (10 ** 9) # convert to floating-point seconds 1530228544.0792289"} +{"question_id": 1949318, "score": 97, "creation_date": 1261517501, "tags": ["python", "http", "scripting", "httprequest", "http-head"], "instruction": "Checking if a website is up via Python\n\nBy using python, how can I check if a website is up? From what I read, I need to check the \"HTTP HEAD\" and see status code \"200 OK\", but how to do so ? 
Cheers Related How do you send a HEAD HTTP request in Python?", "output": "You could try to do this with getcode() from urllib import urllib.request print(urllib.request.urlopen(\"https://www.stackoverflow.com\").getcode()) 200 For Python 2, use print urllib.urlopen(\"http://www.stackoverflow.com\").getcode() 200"} +{"question_id": 31700691, "score": 97, "creation_date": 1438173631, "tags": ["python", "pandas", "csv", "delimiter", "separator"], "instruction": "Convert commas decimal separators to dots within a Dataframe\n\nI am importing a CSV file like the one below, using pandas.read_csv: df = pd.read_csv(Input, delimiter=\";\") Example of CSV file: 10;01.02.2015 16:58;01.02.2015 16:58;-0.59;0.1;-4.39;NotApplicable;0.79;0.2 11;01.02.2015 16:58;01.02.2015 16:58;-0.57;0.2;-2.87;NotApplicable;0.79;0.21 The problem is that when I later on in my code try to use these values I get this error: TypeError: can't multiply sequence by non-int of type 'float' The error is because the number I'm trying to use is not written with a dot (.) as a decimal separator but a comma(,). After manually changing the commas to a dots my program works. I can't change the format of my input, and thus have to replace the commas in my DataFrame in order for my code to work, and I want python to do this without the need of doing it manually. Do you have any suggestions?", "output": "pandas.read_csv has a decimal parameter for this. I.e. try with: df = pd.read_csv(Input, delimiter=\";\", decimal=\",\")"} +{"question_id": 31521027, "score": 97, "creation_date": 1437407739, "tags": ["python", "r", "pandas"], "instruction": "groupby weighted average and sum in pandas dataframe\n\nI have a dataframe: Out[78]: contract month year buys adjusted_lots price 0 W Z 5 Sell -5 554.85 1 C Z 5 Sell -3 424.50 2 C Z 5 Sell -2 424.00 3 C Z 5 Sell -2 423.75 4 C Z 5 Sell -3 423.50 5 C Z 5 Sell -2 425.50 6 C Z 5 Sell -3 425.25 7 C Z 5 Sell -2 426.00 8 C Z 5 Sell -2 426.75 9 CC U 5 Buy 5 3328.00 10 SB V 5 Buy 5 11.65 11 SB V 5 Buy 5 11.64 12 SB V 5 Buy 2 11.60 I need a sum of adjusted_lots , price which is weighted average , of price and adjusted_lots , grouped by all the other columns , ie. grouped by (contract, month , year and buys) Similar solution on R was achieved by following code, using dplyr, however unable to do the same in pandas. 
> newdf = df %>% select ( contract , month , year , buys , adjusted_lots , price ) %>% group_by( contract , month , year , buys) %>% summarise(qty = sum( adjusted_lots) , avgpx = weighted.mean(x = price , w = adjusted_lots) , comdty = \"Comdty\" ) > newdf Source: local data frame [4 x 6] contract month year comdty qty avgpx 1 C Z 5 Comdty -19 424.8289 2 CC U 5 Comdty 5 3328.0000 3 SB V 5 Comdty 12 11.6375 4 W Z 5 Comdty -5 554.8500 is the same possible by groupby or any other solution ?", "output": "EDIT: update aggregation so it works with recent version of pandas To pass multiple functions to a groupby object, you need to pass a tuples with the aggregation functions and the column to which the function applies: # Define a lambda function to compute the weighted mean: wm = lambda x: np.average(x, weights=df.loc[x.index, \"adjusted_lots\"]) # Define a dictionary with the functions to apply for a given column: # the following is deprecated since pandas 0.20: # f = {'adjusted_lots': ['sum'], 'price': {'weighted_mean' : wm} } # df.groupby([\"contract\", \"month\", \"year\", \"buys\"]).agg(f) # Groupby and aggregate with namedAgg [1]: df.groupby([\"contract\", \"month\", \"year\", \"buys\"]).agg(adjusted_lots=(\"adjusted_lots\", \"sum\"), price_weighted_mean=(\"price\", wm)) adjusted_lots price_weighted_mean contract month year buys C Z 5 Sell -19 424.828947 CC U 5 Buy 5 3328.000000 SB V 5 Buy 12 11.637500 W Z 5 Sell -5 554.850000 You can see more here: http://pandas.pydata.org/pandas-docs/stable/groupby.html#applying-multiple-functions-at-once and in a similar question here: Apply multiple functions to multiple groupby columns [1] : https://pandas.pydata.org/pandas-docs/stable/whatsnew/v0.25.0.html#groupby-aggregation-with-relabeling"} +{"question_id": 9805451, "score": 97, "creation_date": 1332335976, "tags": ["python", "pymongo"], "instruction": "How to find names of all collections using PyMongo?\n\nHow to find names of all collections using PyMongo and find all fields in chosen collection ? I have name of database and name of chosen collection. (Scenario : user input name of database, need to find all collections and show in dropdown list, when user click on one item need to find all fields in that collection)", "output": "To find the collections, you can use collection_names() - https://pymongo.readthedocs.io/en/stable/api/pymongo/database.html#pymongo.database.Database.collection_names Update: The collection_names is deprecated from 3.7 onwards and been replaced by list_collection_names() - https://pymongo.readthedocs.io/en/stable/api/pymongo/database.html#pymongo.database.Database.list_collection_names"} +{"question_id": 13648774, "score": 97, "creation_date": 1354291636, "tags": ["python", "datetime", "numpy"], "instruction": "Get year, month or day from numpy datetime64\n\nI have an array of datetime64 type: dates = np.datetime64(['2010-10-17', '2011-05-13', \"2012-01-15\"]) Is there a better way than looping through each element just to get np.array of years: years = f(dates) #output: array([2010, 2011, 2012], dtype=int8) #or dtype = string I'm using stable numpy version 1.6.2.", "output": "I find the following tricks give between 2x and 4x speed increase versus the pandas method described in this answer (i.e. pd.DatetimeIndex(dates).year etc.). The speed of [dt.year for dt in dates.astype(object)] I find to be similar to the pandas method. Also these tricks can be applied directly to ndarrays of any shape (2D, 3D etc.) 
dates = np.arange(np.datetime64('2000-01-01'), np.datetime64('2010-01-01')) years = dates.astype('datetime64[Y]').astype(int) + 1970 months = dates.astype('datetime64[M]').astype(int) % 12 + 1 days = dates - dates.astype('datetime64[M]') + 1"} +{"question_id": 4372797, "score": 97, "creation_date": 1291688037, "tags": ["python", "mongodb", "pymongo", "database"], "instruction": "How do I update a Mongo document after inserting it?\n\nLet's say I insert the document. post = { some dictionary } mongo_id = mycollection.insert(post) Now, let's say I want to add a field and update it. How do I do that? This doesn't seem to work..... post = mycollection.find_one({\"_id\":mongo_id}) post['newfield'] = \"abc\" mycollection.save(post)", "output": "In pymongo you can update with: mycollection.update({'_id':mongo_id}, {\"$set\": post}, upsert=False) Upsert parameter will insert instead of updating if the post is not found in the database. Documentation is available at mongodb site. UPDATE For version > 3 use update_one instead of update: mycollection.update_one({'_id':mongo_id}, {\"$set\": post}, upsert=False)"} +{"question_id": 33149428, "score": 97, "creation_date": 1444914465, "tags": ["python", "pandas", "matplotlib", "plot"], "instruction": "Modify the legend of pandas bar plot\n\nI am always bothered when I make a bar plot with pandas and I want to change the names of the labels in the legend. Consider for instance the output of this code: import pandas as pd from matplotlib.pyplot import * df = pd.DataFrame({'A':26, 'B':20}, index=['N']) df.plot(kind='bar') Now, if I want to change the name in the legend, I would usually try to do: legend(['AAA', 'BBB']) But I end up with this: In fact, the first dashed line seems to correspond to an additional patch. So I wonder if there is a simple trick here to change the labels, or do I need to plot each of the columns independently with matplotlib and set the labels myself. Thanks.", "output": "To change the labels for Pandas df.plot() use ax.legend([...]): import pandas as pd import matplotlib.pyplot as plt fig, ax = plt.subplots() df = pd.DataFrame({'A':26, 'B':20}, index=['N']) df.plot(kind='bar', ax=ax) #ax = df.plot(kind='bar') # \"same\" as above ax.legend([\"AAA\", \"BBB\"]); Another approach is to do the same by plt.legend([...]): import matplotlib.pyplot as plt df.plot(kind='bar') plt.legend([\"AAA\", \"BBB\"]);"} +{"question_id": 51827134, "score": 97, "creation_date": 1534179158, "tags": ["python", "pandas", "datetime", "datetime64"], "instruction": "Comparison between datetime and datetime64[ns] in pandas\n\nI'm writing a program that checks an excel file and if today's date is in the excel file's date column, I parse it I'm using: cur_date = datetime.today() for today's date. I'm checking if today is in the column with: bool_val = cur_date in df['date'] #evaluates to false I do know for a fact that today's date is in the file in question. The dtype of the series is datetime64[ns] Also, I am only checking the date itself and not the timestamp afterwards, if that matters. I'm doing this to make the timestamp 00:00:00: cur_date = datetime.strptime(cur_date.strftime('%Y_%m_%d'), '%Y_%m_%d') And the type of that object after printing is datetime as well", "output": "You can use pd.Timestamp('today') or pd.to_datetime('today') But both of those give the date and time for 'now'. 
Try this instead: pd.Timestamp('today').floor('D') or pd.to_datetime('today').floor('D') You could have also passed the datetime object to pandas.to_datetime but I like the other option more. pd.to_datetime(datetime.datetime.today()).floor('D') Pandas also has a Timedelta object pd.Timestamp('now').floor('D') + pd.Timedelta(-3, unit='D') Or you can use the offsets module pd.Timestamp('now').floor('D') + pd.offsets.Day(-3) To check for membership, try one of these cur_date in df['date'].tolist() Or df['date'].eq(cur_date).any()"} +{"question_id": 12871066, "score": 97, "creation_date": 1350114171, "tags": ["python", "regex", "python-module", "rawstring"], "instruction": "What exactly is a \"raw string regex\" and how can you use it?\n\nFrom the python documentation on regex, regarding the '\\' character: The solution is to use Python\u2019s raw string notation for regular expression patterns; backslashes are not handled in any special way in a string literal prefixed with 'r'. So r\"\\n\" is a two-character string containing '\\' and 'n', while \"\\n\" is a one-character string containing a newline. Usually patterns will be expressed in Python code using this raw string notation. What is this raw string notation? If you use a raw string format, does that mean \"*\" is taken as a a literal character rather than a zero-or-more indicator? That obviously can't be right, or else regex would completely lose its power. But then if it's a raw string, how does it recognize newline characters if \"\\n\" is literally a backslash and an \"n\"? I don't follow. Edit for bounty: I'm trying to understand how a raw string regex matches newlines, tabs, and character sets, e.g. \\w for words or \\d for digits or all whatnot, if raw string patterns don't recognize backslashes as anything more than ordinary characters. I could really use some good examples.", "output": "Zarkonnen's response does answer your question, but not directly. Let me try to be more direct, and see if I can grab the bounty from Zarkonnen. You will perhaps find this easier to understand if you stop using the terms \"raw string regex\" and \"raw string patterns\". These terms conflate two separate concepts: the representations of a particular string in Python source code, and what regular expression that string represents. In fact, it's helpful to think of these as two different programming languages, each with their own syntax. The Python language has source code that, among other things, builds strings with certain contents, and calls the regular expression system. The regular expression system has source code that resides in string objects, and matches strings. Both languages use backslash as an escape character. First, understand that a string is a sequence of characters (i.e. bytes or Unicode code points; the distinction doesn't much matter here). There are many ways to represent a string in Python source code. A raw string is simply one of these representations. If two representations result in the same sequence of characters, they produce equivalent behaviour. Imagine a 2-character string, consisting of the backslash character followed by the n character. If you know that the character value for backslash is 92, and for n is 110, then this expression generates our string: s = chr(92)+chr(110) print len(s), s 2 \\n The conventional Python string notation \"\\n\" does not generate this string. Instead it generates a one-character string with a newline character. The Python docs 2.4.1. 
String literals say, \"The backslash (\\) character is used to escape characters that otherwise have a special meaning, such as newline, backslash itself, or the quote character.\" s = \"\\n\" print len(s), s 1 (Note that the newline isn't visible in this example, but if you look carefully, you'll see a blank line after the \"1\".) To get our two-character string, we have to use another backslash character to escape the special meaning of the original backslash character: s = \"\\\\n\" print len(s), s 2 \\n What if you want to represent strings that have many backslash characters in them? Python docs 2.4.1. String literals continue, \"String literals may optionally be prefixed with a letter 'r' or 'R'; such strings are called raw strings and use different rules for interpreting backslash escape sequences.\" Here is our two-character string, using raw string representation: s = r\"\\n\" print len(s), s 2 \\n So we have three different string representations, all giving the same string, or sequence of characters: print chr(92)+chr(110) == \"\\\\n\" == r\"\\n\" True Now, let's turn to regular expressions. The Python docs, 7.2. re \u2014 Regular expression operations says, \"Regular expressions use the backslash character ('\\') to indicate special forms or to allow special characters to be used without invoking their special meaning. This collides with Python\u2019s usage of the same character for the same purpose in string literals...\" If you want a Python regular expression object which matches a newline character, then you need a 2-character string, consisting of the backslash character followed by the n character. The following lines of code all set prog to a regular expression object which recognises a newline character: prog = re.compile(chr(92)+chr(110)) prog = re.compile(\"\\\\n\") prog = re.compile(r\"\\n\") So why is it that \"Usually patterns will be expressed in Python code using this raw string notation.\"? Because regular expressions are frequently static strings, which are conveniently represented as string literals. And from the different string literal notations available, raw strings are a convenient choice, when the regular expression includes a backslash character. Questions Q: what about the expression re.compile(r\"\\s\\tWord\")? A: It's easier to understand by separating the string from the regular expression compilation, and understanding them separately. s = r\"\\s\\tWord\" prog = re.compile(s) The string s contains eight characters: a backslash, an s, a backslash, a t, and then four characters Word. Q: What happens to the tab and space characters? A: At the Python language level, string s doesn't have tab and space character. It starts with four characters: backslash, s, backslash, t . The regular expression system, meanwhile, treats that string as source code in the regular expression language, where it means \"match a string consisting of a whitespace character, a tab character, and the four characters Word. Q: How do you match those if that's being treated as backlash-s and backslash-t? A: Maybe the question is clearer if the words 'you' and 'that' are made more specific: how does the regular expression system match the expressions backlash-s and backslash-t? As 'any whitespace character' and as 'tab character'. Q: Or what if you have the 3-character string backslash-n-newline? 
A: In the Python language, the 3-character string backslash-n-newline can be represented as conventional string \"\\\\n\\n\", or raw plus conventional string r\"\\n\" \"\\n\", or in other ways. The regular expression system matches the 3-character string backslash-n-newline when it finds any two consecutive newline characters. N.B. All examples and document references are to Python 2.7. Update: Incorporated clarifications from answers of @Vladislav Zorov and @m.buettner, and from follow-up question of @Aerovistae."} +{"question_id": 22024661, "score": 97, "creation_date": 1393357845, "tags": ["python", "css", "if-statement", "jinja2"], "instruction": "Jinja2 template not rendering if-elif-else statement properly\n\nI am trying to set the text color using css in a jinja2 template. In the following code I want to set the output string to print in a specific font color if the variable contains a string. Everytime the template is generated though it prints in red due to the else statement, it never see the first two conditions even though the output should be matched, I can tell what the output from the variable is when the table generates and it is as expected. I know my css is correct due to the printing of the string in red by default. My first thought was to enclose the string I was checking for in quotes but that didn't work. Next was that jinja was not expanding RepoOutput[RepoName.index(repo)] but the for loop above it works, RepoName is expanded upon properly. I know if I add the braces it will print the variable which I am fairly certain will either break the template or just not work. I tried looking at these sites and went through the list of global expressions as well but couldn't find any examples similar to mine or a direction in which to look further. http://jinja.pocoo.org/docs/templates/#if http://wsgiarea.pocoo.org/jinja/docs/conditions.html {% for repo in RepoName %} {{ repo }} {% if error in RepoOutput[RepoName.index(repo)] %} {{ RepoOutput[RepoName.index(repo)] }} {% elif Already in RepoOutput[RepoName.index(repo)] %} {{ RepoOutput[RepoName.index(repo)] }} {% else %} {{ RepoOutput[RepoName.index(repo)] }} {% endif %} {% endfor %} Thanks", "output": "You are testing if the values of the variables error and Already are present in RepoOutput[RepoName.index(repo)]. If these variables don't exist then an undefined object is used. Both of your if and elif tests therefore are false; there is no undefined object in the value of RepoOutput[RepoName.index(repo)]. I think you wanted to test if certain strings are in the value instead: {% if \"error\" in RepoOutput[RepoName.index(repo)] %} {{ RepoOutput[RepoName.index(repo)] }} {% elif \"Already\" in RepoOutput[RepoName.index(repo)] %} {{ RepoOutput[RepoName.index(repo)] }} {% else %} {{ RepoOutput[RepoName.index(repo)] }} {% endif %} Other corrections I made: Used {% elif ... %} instead of {$ elif ... %}. moved the tag out of the if conditional structure, it needs to be there always. put quotes around the id attribute Note that most likely you want to use a class attribute instead here, not an id, the latter must have a value that must be unique across your HTML document. 
Personally, I'd set the class value here and reduce the duplication a little: {% if \"Already\" in RepoOutput[RepoName.index(repo)] %} {% set row_class = \"good\" %} {% else %} {% set row_class = \"error\" %} {% endif %} {{ RepoOutput[RepoName.index(repo)] }} "} +{"question_id": 39359245, "score": 97, "creation_date": 1473204909, "tags": ["python", "python-3.x", "datetime", "stat", "pathlib"], "instruction": "From stat().st_mtime to datetime?\n\nWhat is the most idiomatic/efficient way to convert from a modification time retrieved from stat() call to a datetime object? I came up with the following (python3): from datetime import datetime, timedelta, timezone from pathlib import Path path = Path('foo') path.touch() statResult = path.stat() epoch = datetime(1970, 1, 1, tzinfo=timezone.utc) modified = epoch + timedelta(seconds=statResult.st_mtime) print('modified', modified) Seems round a bout, and a bit surprising that I have to hard code the Unix epoch in there. Is there a more direct way?", "output": "You can use datetime.datetime.fromtimestamp, i.e. from datetime import datetime, timezone ... stat_result = path.stat() modified = datetime.fromtimestamp(stat_result.st_mtime, tz=timezone.utc) print('modified', modified)"} +{"question_id": 61937520, "score": 97, "creation_date": 1590073514, "tags": ["python", "python-3.x", "python-dataclasses"], "instruction": "Proper way to create class variable in Data Class\n\nI've just begun playing around with Python's Data Classes, and I would like confirm that I am declaring Class Variables in the proper way. Using regular python classes class Employee: raise_amount = .05 def __init__(self, fname, lname, pay): self.fname = fname self.lname = lname self.pay = pay Using python Data Class @dataclass class Employee: fname: str lname: str pay: int raise_amount = .05 The class variable I am referring to is raise_amount. Is this a properly declared class variable using Data Classes? Or is there a better way of doing so? I have tested the data class implementation already and it provides the expected functionality, but I am mainly wondering if my implementation is following best practices.", "output": "To create a class variable, annotate the field as a typing.ClassVar or not at all. from typing import ClassVar from dataclasses import dataclass @dataclass class Foo: ivar: float = 0.5 cvar: ClassVar[float] = 0.5 nvar = 0.5 foo = Foo() Foo.ivar, Foo.cvar, Foo.nvar = 1, 1, 1 print(Foo().ivar, Foo().cvar, Foo().nvar) # 0.5 1 1 print(foo.ivar, foo.cvar, foo.nvar) # 0.5 1 1 print(Foo(), Foo(12)) # Foo(ivar=0.5) Foo(ivar=12) There is a subtle difference in that the unannotated field is completely ignored by @dataclass, whereas the ClassVar field is stored but not converted to an attribute. dataclasses \u2014 Data Classes The member variables [...] are defined using PEP 526 type annotations. Class variables One of two places where dataclass() actually inspects the type of a field is to determine if a field is a class variable as defined in PEP 526. It does this by checking if the type of the field is typing.ClassVar. If a field is a ClassVar, it is excluded from consideration as a field and is ignored by the dataclass mechanisms. 
Such ClassVar pseudo-fields are not returned by the module-level fields() function."} +{"question_id": 60532678, "score": 97, "creation_date": 1583347091, "tags": ["python", "conda", "miniconda", "conda-forge"], "instruction": "What is the difference between miniconda and miniforge?\n\nThe miniforge installer is a relatively new, community-led, minimal conda installer that (as it says in its readme) \"can be directly compared to Miniconda, with the added feature that conda-forge is the default channel\". It is unclear what is different between miniforge and Miniconda, or what the miniforge use case is. If miniforge is the same as Miniconda except it just uses the conda-forge channel by default, why create a whole different installer - why not just use miniconda and add conda-forge as the first channel to use in ~/.condarc? If miniforge is different from Miniconda, what is different about the two?", "output": "miniforge is the community (conda-forge) driven minimalistic conda installer. Subsequent package installations come thus from conda-forge channel. miniconda is the Anaconda (company) driven minimalistic conda installer. Subsequent package installations come from the anaconda channels (default or otherwise). miniforge started a few months ago because miniconda doens't support aarch64, very quickly the 'PyPy' people jumped on board, and in the mean time there are also miniforge versions for all Linux architectures, as well as MacOS. Soon there will also be a windows variant (hopefully also for both CPython and PyPy) I guess that an ARMv7 (32Bit ARM) variant is also on the horizon (Raspbian)"} +{"question_id": 16641119, "score": 97, "creation_date": 1369011151, "tags": ["python", "list", "append", "nonetype"], "instruction": "Why does append() always return None in Python?\n\nlist = [1, 2, 3] print(list.append(4)) ## WRONG, print does not work, append() returns None ## RIGHT: list.append(4) print(list) ## [1, 2, 3, 4] I'm learning Python and I'm not sure if this problem is specific to the language and how append is implemented in Python.", "output": "append is a mutating (destructive) operation (it modifies the list in place instead of of returning a new list). The idiomatic way to do the non-destructive equivalent of append would be >>> l = [1,2,3] >>> l + [4] [1,2,3,4] >>> l [1,2,3] to answer your question, my guess is that if append returned the newly modified list, users might think that it was non-destructive, ie they might write code like m = l.append(\"a\") n = l.append(\"b\") and expect n to be [1,2,3,\"b\"]"} +{"question_id": 8131942, "score": 97, "creation_date": 1321335270, "tags": ["python", "instance-variables", "default-arguments"], "instruction": "How can I use an attribute of the instance as a default argument for a method?\n\nI want to pass a default argument to an instance method using the value of an attribute of the instance: class C: def __init__(self, format): self.format = format def process(self, formatting=self.format): print(formatting) When trying that, I get the following error message: NameError: name 'self' is not defined I want the method to behave like this: C(\"abc\").process() # prints \"abc\" C(\"abc\").process(\"xyz\") # prints \"xyz\" What is the problem here, why does this not work? And how could I make this work?", "output": "You can't really define this as the default value, since the default value is evaluated when the method is defined which is before any instances exist. 
The usual pattern is to do something like this instead: class C: def __init__(self, format): self.format = format def process(self, formatting=None): if formatting is None: formatting = self.format print(formatting) self.format will only be used if formatting is None. To demonstrate the point of how default values work, see this example: def mk_default(): print(\"mk_default has been called!\") def myfun(foo=mk_default()): print(\"myfun has been called.\") print(\"about to test functions\") myfun(\"testing\") myfun(\"testing again\") And the output here: mk_default has been called! about to test functions myfun has been called. myfun has been called. Notice how mk_default was called only once, and that happened before the function was ever called!"} +{"question_id": 28243832, "score": 97, "creation_date": 1422648644, "tags": ["python", "inspection", "python-interactive"], "instruction": "What is the meaning of a forward slash \"/\" in a Python method signature, as shown by help(foo)?\n\nIn the signature returned interactively by help(foo), what is the meaning of a /? In [37]: help(object.__eq__) Help on wrapper_descriptor: __eq__(self, value, /) Return self==value. In [55]: help(object.__init__) Help on wrapper_descriptor: __init__(self, /, *args, **kwargs) Initialize self. See help(type(self)) for accurate signature. I thought it might be related to keyword-only arguments, but it's not. When I create my own function with keyword-only arguments, positional and keyword-only arguments are separated by * (as expected), not by /. What does the / mean?", "output": "As explained here, the / as an argument marks the end of arguments that are positional only (see here), i.e. arguments you can't use as keyword parameters. In the case of __eq__(self, value, /) the slash is at the end, which means that all arguments are marked as positional only while in the case of your __init__ only self, i.e. nothing, is positional only. Edit: This was previously only used for built-in functions but since Python 3.8, you can use this in your own functions. The natural companion of / is * which allows to mark the beginning of keyword-only arguments. Example using both: # a, b are positional-only # c, d are positional or keyword # e, f are keyword-only def f(a, b, /, c, d, *, e, f): print(a, b, c, d, e, f) # valid call f(10, 20, 30, d=40, e=50, f=60) # invalid calls: f(10, b=20, c=30, d=40, e=50, f=60) # b cannot be a keyword argument f(10, 20, 30, 40, 50, f=60) # e must be a keyword argument"} +{"question_id": 51499950, "score": 97, "creation_date": 1532439076, "tags": ["python", "virtualenv"], "instruction": "Where do I put my python files in the venv folder?\n\nI created a new Python project with PyCharm which yielded the following folder structure myproject \u2514\u2500\u2500 venv \u251c\u2500\u2500 bin \u2502 \u251c\u2500\u2500 activate \u2502 \u251c\u2500\u2500 activate.csh \u2502 \u251c\u2500\u2500 activate.fish \u2502 \u251c\u2500\u2500 easy_install \u2502 \u251c\u2500\u2500 easy_install-3.5 \u2502 \u251c\u2500\u2500 pip \u2502 \u251c\u2500\u2500 pip3 \u2502 \u251c\u2500\u2500 pip3.5 \u2502 \u251c\u2500\u2500 python \u2502 \u251c\u2500\u2500 python3 \u2502 \u2514\u2500\u2500 python3.5 \u251c\u2500\u2500 include \u251c\u2500\u2500 lib \u2502 \u2514\u2500\u2500 python3.5 \u251c\u2500\u2500 lib64 -> lib \u2514\u2500\u2500 pyvenv.cfg Where do I put myproject.py or the myproject folder now? Inside or outside of venv? In the venv/binfolder? Just inside venv, i.e. 
myproject/venv/myproject.py?", "output": "The virtual environment manages files which aren't yours. It doesn't care how you manage your own files. Put them wherever makes sense to you, just not anywhere inside the venv directory tree. Common solutions include directly in myproject, or in myproject/src. For what it's worth, one of the important use cases for virtual environments is the ability to delete one and start over. You obviously can't do that if you put stuff there which isn't part of the virtual environment. Regard it as ephemeral infrastructure. Another use case is the ability to have multiple virtual environments for the same project, so that you can test that your code works with different versions of the libraries you depend on, or even different Python versions. A common convention is to collect the libraries you need in requirements.txt so that you can create a new virtual environment, activate it, and pip install -r requirements.txt whenever you need to."} +{"question_id": 9262278, "score": 96, "creation_date": 1329143779, "tags": ["python", "json", "django", "django-views"], "instruction": "How do I return JSON without using a template in Django?\n\nThis is related to this question: Django return json and html depending on client python I have a command line Python API for a Django app. When I access the app through the API it should return JSON and with a browser it should return HTML. I can use different URLs to access the different versions but how do I render the HTML template and JSON in the views.py with just one template? To render the HTML I would use: return render_to_response('sample/sample.html....') But how would I do the same for JSON without putting a JSON template? (the content-type should be application/json instead of text/html) What would determine the JSON and HTML outputs? So in my views.py: if something: return render_to_response('html_template',.....) else: return HttpReponse(jsondata,mimetype='application/json')", "output": "I think the issue has gotten confused regarding what you want. I imagine you're not actually trying to put the HTML in the JSON response, but rather want to alternatively return either HTML or JSON. First, you need to understand the core difference between the two. HTML is a presentational format. It deals more with how to display data than the data itself. JSON is the opposite. It's pure data -- basically a JavaScript representation of some Python (in this case) dataset you have. It serves as merely an interchange layer, allowing you to move data from one area of your app (the view) to another area of your app (your JavaScript) which normally don't have access to each other. With that in mind, you don't \"render\" JSON, and there's no templates involved. You merely convert whatever data is in play (most likely pretty much what you're passing as the context to your template) to JSON. Which can be done via either Django's JSON library (simplejson), if it's freeform data, or its serialization framework, if it's a queryset. 
simplejson from django.utils import simplejson some_data_to_dump = { 'some_var_1': 'foo', 'some_var_2': 'bar', } data = simplejson.dumps(some_data_to_dump) Serialization from django.core import serializers foos = Foo.objects.all() data = serializers.serialize('json', foos) Either way, you then pass that data into the response: return HttpResponse(data, content_type='application/json') [Edit] In Django 1.6 and earlier, the code to return response was return HttpResponse(data, mimetype='application/json') [EDIT]: simplejson was remove from django, you can use: import json json.dumps({\"foo\": \"bar\"}) Or you can use the django.core.serializers as described above."} +{"question_id": 28521535, "score": 96, "creation_date": 1423957370, "tags": ["python", "python-requests"], "instruction": "requests: how to disable / bypass proxy\n\nI am getting an url with: r = requests.get(\"http://myserver.com\") As I can see in the 'access.log' of \"myserver.com\", the client's system proxy is used. But I want to disable using proxies at all with requests.", "output": "The only way I'm currently aware of for disabling proxies entirely is the following: Create a session Set session.trust_env to False Create your request using that session import requests session = requests.Session() session.trust_env = False response = session.get('http://www.stackoverflow.com') This is based on this comment by Lukasa and the (limited) documentation for requests.Session.trust_env. Note: Setting trust_env to False also ignores the following: Authentication information from .netrc (code) CA bundles defined in REQUESTS_CA_BUNDLE or CURL_CA_BUNDLE (code) If however you only want to disable proxies for a particular domain (like localhost), you can use the NO_PROXY environment variable: import os import requests os.environ['NO_PROXY'] = 'stackoverflow.com' response = requests.get('http://www.stackoverflow.com')"} +{"question_id": 70003829, "score": 96, "creation_date": 1637149518, "tags": ["python", "windows", "python-poetry"], "instruction": "Poetry installed but `poetry: command not found`\n\nI've had a million and one issues with Poetry recently. I got it fully installed and working yesterday, but after a restart of my machine I'm back to having issues with it ;( Is there anyway to have Poetry consistently recognised in my Terminal, even after reboot? System Specs: Windows 10, Visual Studio Code, Bash - WSL Ubuntu CLI, Python 3.8. Terminal: me@PF2DCSXD:/mnt/c/Users/me/Documents/GitHub/workers-python/workers/data_simulator/src$ poetry run python3 cli.py poetry: command not found me@PF2DCSXD:/mnt/c/Users/me/Documents/GitHub/workers-python/workers/data_simulator/src$ curl -sSL https://raw.githubusercontent.com/python-poetry/poetry/master/get-poetry.py | python3 Retrieving Poetry metadata This installer is deprecated. Poetry versions installed using this script will not be able to use 'self update' command to upgrade to 1.2.0a1 or later. Latest version already installed. me@PF2DCSXD:/mnt/c/Users/me/Documents/GitHub/workers-python/workers/data_simulator/src$ poetry run python3 cli.py poetry: command not found me@PF2DCSXD:/mnt/c/Users/me/Documents/GitHub/workers-python/workers/data_simulator/src$ Please let me know if there is anything else I can add to post to help further clarify.", "output": "When I run this, after shutdown of bash Terminal: export PATH=\"$HOME/.poetry/bin:$PATH\" poetry command is then recognised. However, this isn't enough alone; as every time I shutdown the terminal I need to run the export. 
Possibly needs to be saved in a file."} +{"question_id": 18937058, "score": 96, "creation_date": 1379793949, "tags": ["python", "shell"], "instruction": "Clear screen in shell\n\nHow do you clear the screen in shell? I've seen ways like: import os os.system('cls') This just opens the Windows cmd, clears the screen and closes but I want the shell window to be cleared. I'm using version 3.3.2 of Python.", "output": "For macOS/OS X, you can use the subprocess module and call 'cls' from the shell: import subprocess as sp sp.call('cls', shell=True) To prevent '0' from showing on top of the window, replace the 2nd line with: tmp = sp.call('cls', shell=True) For Linux, you must replace cls command with clear tmp = sp.call('clear', shell=True)"} +{"question_id": 44160666, "score": 96, "creation_date": 1495634460, "tags": ["python", "django", "static", "django-views", "django-staticfiles"], "instruction": "ValueError: Missing staticfiles manifest entry for 'favicon.ico'\n\nI'm getting a ValueError when running python manage.py test. My project is named fellow_go, and I'm currently working on an App called pickup. Please note that this error is added in a relatively recent commit to Django: Fixed #24452 -- Fixed HashedFilesMixin correctness with nested paths.. ====================================================================== ERROR: test_view_url_exists_at_desired_location (pickup.tests.test_view.HomePageViewTest) ---------------------------------------------------------------------- Traceback (most recent call last): File \"/Users/sunqingyao/PycharmProjects/fellow_go/pickup/tests/test_view.py\", line 10, in test_view_url_exists_at_desired_location resp = self.client.get('/', follow=True) File \"/Users/sunqingyao/Envs/django_tutorial/lib/python3.6/site-packages/django/test/client.py\", line 536, in get **extra) File \"/Users/sunqingyao/Envs/django_tutorial/lib/python3.6/site-packages/django/test/client.py\", line 340, in get return self.generic('GET', path, secure=secure, **r) File \"/Users/sunqingyao/Envs/django_tutorial/lib/python3.6/site-packages/django/test/client.py\", line 416, in generic return self.request(**r) File \"/Users/sunqingyao/Envs/django_tutorial/lib/python3.6/site-packages/django/test/client.py\", line 501, in request six.reraise(*exc_info) File \"/Users/sunqingyao/Envs/django_tutorial/lib/python3.6/site-packages/django/utils/six.py\", line 686, in reraise raise value File \"/Users/sunqingyao/Envs/django_tutorial/lib/python3.6/site-packages/django/core/handlers/exception.py\", line 41, in inner response = get_response(request) File \"/Users/sunqingyao/Envs/django_tutorial/lib/python3.6/site-packages/django/core/handlers/base.py\", line 217, in _get_response response = self.process_exception_by_middleware(e, request) File \"/Users/sunqingyao/Envs/django_tutorial/lib/python3.6/site-packages/django/core/handlers/base.py\", line 215, in _get_response response = response.render() File \"/Users/sunqingyao/Envs/django_tutorial/lib/python3.6/site-packages/django/template/response.py\", line 107, in render self.content = self.rendered_content File \"/Users/sunqingyao/Envs/django_tutorial/lib/python3.6/site-packages/django/template/response.py\", line 84, in rendered_content content = template.render(context, self._request) File \"/Users/sunqingyao/Envs/django_tutorial/lib/python3.6/site-packages/django/template/backends/django.py\", line 66, in render return self.template.render(context) File 
\"/Users/sunqingyao/Envs/django_tutorial/lib/python3.6/site-packages/django/template/base.py\", line 207, in render return self._render(context) File \"/Users/sunqingyao/Envs/django_tutorial/lib/python3.6/site-packages/django/test/utils.py\", line 107, in instrumented_test_render return self.nodelist.render(context) File \"/Users/sunqingyao/Envs/django_tutorial/lib/python3.6/site-packages/django/template/base.py\", line 990, in render bit = node.render_annotated(context) File \"/Users/sunqingyao/Envs/django_tutorial/lib/python3.6/site-packages/django/template/base.py\", line 957, in render_annotated return self.render(context) File \"/Users/sunqingyao/Envs/django_tutorial/lib/python3.6/site-packages/django/template/loader_tags.py\", line 177, in render return compiled_parent._render(context) File \"/Users/sunqingyao/Envs/django_tutorial/lib/python3.6/site-packages/django/test/utils.py\", line 107, in instrumented_test_render return self.nodelist.render(context) File \"/Users/sunqingyao/Envs/django_tutorial/lib/python3.6/site-packages/django/template/base.py\", line 990, in render bit = node.render_annotated(context) File \"/Users/sunqingyao/Envs/django_tutorial/lib/python3.6/site-packages/django/template/base.py\", line 957, in render_annotated return self.render(context) File \"/Users/sunqingyao/Envs/django_tutorial/lib/python3.6/site-packages/django/templatetags/static.py\", line 105, in render url = self.url(context) File \"/Users/sunqingyao/Envs/django_tutorial/lib/python3.6/site-packages/django/templatetags/static.py\", line 102, in url return self.handle_simple(path) File \"/Users/sunqingyao/Envs/django_tutorial/lib/python3.6/site-packages/django/templatetags/static.py\", line 117, in handle_simple return staticfiles_storage.url(path) File \"/Users/sunqingyao/Envs/django_tutorial/lib/python3.6/site-packages/django/contrib/staticfiles/storage.py\", line 162, in url return self._url(self.stored_name, name, force) File \"/Users/sunqingyao/Envs/django_tutorial/lib/python3.6/site-packages/django/contrib/staticfiles/storage.py\", line 141, in _url hashed_name = hashed_name_func(*args) File \"/Users/sunqingyao/Envs/django_tutorial/lib/python3.6/site-packages/django/contrib/staticfiles/storage.py\", line 432, in stored_name raise ValueError(\"Missing staticfiles manifest entry for '%s'\" % clean_name) ValueError: Missing staticfiles manifest entry for 'favicon.ico' ---------------------------------------------------------------------- fellow_go/settings.py STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles') STATIC_URL = '/static/' STATICFILES_DIRS = [ os.path.join(BASE_DIR, \"static\"), ] # ...... # Simplified static file serving. 
# https://warehouse.python.org/project/whitenoise/ STATICFILES_STORAGE = 'whitenoise.django.GzipManifestStaticFilesStorage' fellow_go/urls.py urlpatterns = i18n_patterns( url(r'^$', HomePageView.as_view(), name='index'), url(r'^pickup/', include('pickup.urls')), url(r'^accounts/', include('django.contrib.auth.urls')), url(r'^admin/', admin.site.urls), prefix_default_language=False ) + static(settings.STATIC_URL, document_root=settings.STATIC_ROOT) fellow_go/pickup/views.py class HomePageView(TemplateView): template_name = 'index.html' fellow_go/templates/index.html fellow_go/pickup/tests/test_view.py class HomePageViewTest(TestCase): def test_view_url_exists_at_desired_location(self): resp = self.client.get('/', follow=True) self.assertEqual(resp.status_code, 200) Any I do have a favicon.ico file: Strangely, no errors occur with python manage.py runserver: /Users/sunqingyao/Envs/django_tutorial/bin/python3.6 /Users/sunqingyao/PycharmProjects/fellow_go/manage.py runserver 8000 Performing system checks... System check identified no issues (0 silenced). May 24, 2017 - 22:09:25 Django version 1.11.1, using settings 'fellow_go.settings' Starting development server at http://127.0.0.1:8000/ Quit the server with CONTROL-C. [24/May/2017 22:09:28] \"GET / HTTP/1.1\" 200 6276 [24/May/2017 22:09:28] \"GET /static/css/style.min.css HTTP/1.1\" 200 2474 [24/May/2017 22:09:28] \"GET /static/css/ie10-viewport-bug-workaround.css HTTP/1.1\" 200 430 [24/May/2017 22:09:28] \"GET /static/js/ie10-viewport-bug-workaround.js HTTP/1.1\" 200 685 [24/May/2017 22:09:28] \"GET /static/js/opt-in.js HTTP/1.1\" 200 511 [24/May/2017 22:09:28] \"GET /static/css/datetimepicker.css HTTP/1.1\" 200 12351 [24/May/2017 22:09:28] \"GET /static/js/bootstrap-datetimepicker.js HTTP/1.1\" 200 55741 [24/May/2017 22:09:35] \"GET /static/favicon.ico HTTP/1.1\" 200 766 Not Found: /apple-touch-icon-precomposed.png [24/May/2017 22:09:35] \"GET /apple-touch-icon-precomposed.png HTTP/1.1\" 404 2678 Not Found: /apple-touch-icon.png [24/May/2017 22:09:35] \"GET /apple-touch-icon.png HTTP/1.1\" 404 2642 Please tell me what's wrong with my code.", "output": "Try running: python manage.py collectstatic Does the test work now? If so, this might be the configuration causing a problem: STATICFILES_STORAGE = 'whitenoise.django.GzipManifestStaticFilesStorage' as of whitenoise v4 this will fail and you should use: STATICFILES_STORAGE = 'whitenoise.storage.CompressedManifestStaticFilesStorage' Related: https://stackoverflow.com/a/32347324/2596187 Check out the Django documentation: https://docs.djangoproject.com/en/1.11/ref/contrib/staticfiles/#django.contrib.staticfiles.storage.ManifestStaticFilesStorage.manifest_strict Note that support for STATICFILES_STORAGE was deprecated and then removed from Django 5.1."} +{"question_id": 13014411, "score": 96, "creation_date": 1350918851, "tags": ["python", "django", "django-models", "django-signals"], "instruction": "Django post_save() signal implementation\n\nI have a question about django. 
I have ManyToMany Models here class Product(models.Model): name = models.CharField(max_length=255) price = models.DecimalField(default=0.0, max_digits=9, decimal_places=2) stock = models.IntegerField(default=0) def __unicode__(self): return self.name class Cart(models.Model): customer = models.ForeignKey(Customer) products = models.ManyToManyField(Product, through='TransactionDetail') t_date = models.DateField(default=datetime.now()) t_sum = models.FloatField(default=0.0) def __unicode__(self): return str(self.id) class TransactionDetail(models.Model): product = models.ForeignKey(Product) cart = models.ForeignKey(Cart) amount = models.IntegerField(default=0) For 1 cart object created, I can insert as many as new TransactionDetail object (the product and amount). My question is. How can I implement the trigger? What I want is whenever a Transaction detail is created, I want the amount of the product's stock is substracted by the amount in the transactiondetail. I've read about post_save() but I'm not sure how to implement it. maybe something like this when: post_save(TransactionDetail, Cart) #Cart object where TransactionDetail.cart= Cart.id Cart.stock -= TransactionDetail.amount", "output": "If you really want to use signals to achieve this, here's briefly how, from django.db.models.signals import post_save from django.dispatch import receiver class TransactionDetail(models.Model): product = models.ForeignKey(Product) # method for updating @receiver(post_save, sender=TransactionDetail, dispatch_uid=\"update_stock_count\") def update_stock(sender, instance, **kwargs): instance.product.stock -= instance.amount instance.product.save()"} +{"question_id": 880188, "score": 96, "creation_date": 1242687227, "tags": ["python", "django"], "instruction": "How to strip html/javascript from text input in django\n\nWhat is the easiest way to strip all html/javascript from a string?", "output": "Django provides an utility function to remove HTML tags: from django.utils.html import strip_tags my_string = '
Hello, world
' my_string = strip_tags(my_string) print(my_string) # Result will be \"Hello, world\" without the
elements This function used to be unsafe on older Django version (before 1.7) but nowadays it is completely safe to use it. Here is an article that reviewed this issue when it was relevant."} +{"question_id": 43272664, "score": 96, "creation_date": 1491551595, "tags": ["python", "visual-studio-code", "pylint"], "instruction": "Error message \"Linter pylint is not installed\"\n\nI want to run Python code in Microsoft Visual Studio Code but it gives an error: Linter pylint is not installed I installed: The Visual Studio Code Python extension Python 3 Anaconda How can I install Pylint?", "output": "Open a terminal (ctrl+~) Run the command pip install pylint If that doesn't work: On the off chance you've configured a non-default Python path for your editor, you'll need to match that Python's install location with the pip executable you're calling from the terminal. This is an issue because the Python extension's settings enable Pylint by default. If you'd rather turn off linting, you can instead change this setting from true to false in your user or workspace settings: \"python.linting.pylintEnabled\": false"} +{"question_id": 31521170, "score": 96, "creation_date": 1437408199, "tags": ["python", "scipy", "scikit-learn", "classification"], "instruction": "Scikit-learn train_test_split with indices\n\nHow do I get the original indices of the data when using train_test_split()? What I have is the following from sklearn.cross_validation import train_test_split import numpy as np data = np.reshape(np.randn(20),(10,2)) # 10 training examples labels = np.random.randint(2, size=10) # 10 labels x1, x2, y1, y2 = train_test_split(data, labels, size=0.2) But this does not give the indices of the original data. One workaround is to add the indices to data (e.g. data = [(i, d) for i, d in enumerate(data)]) and then pass them inside train_test_split and then expand again. Are there any cleaner solutions?", "output": "Scikit learn plays really well with Pandas, so I suggest you use it. Here's an example: In [1]: import pandas as pd import numpy as np from sklearn.model_selection import train_test_split data = np.reshape(np.random.randn(20),(10,2)) # 10 training examples labels = np.random.randint(2, size=10) # 10 labels In [2]: # Giving columns in X a name X = pd.DataFrame(data, columns=['Column_1', 'Column_2']) y = pd.Series(labels) In [3]: X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0) In [4]: X_test Out[4]: Column_1 Column_2 2 -1.39 -1.86 8 0.48 -0.81 4 -0.10 -1.83 In [5]: y_test Out[5]: 2 1 8 1 4 1 dtype: int32 You can directly call any scikit functions on DataFrame/Series and it will work. Let's say you wanted to do a LogisticRegression, here's how you could retrieve the coefficients in a nice way: In [6]: from sklearn.linear_model import LogisticRegression model = LogisticRegression() model = model.fit(X_train, y_train) # Retrieve coefficients: index is the feature name (['Column_1', 'Column_2'] here) df_coefs = pd.DataFrame(model.coef_[0], index=X.columns, columns = ['Coefficient']) df_coefs Out[6]: Coefficient Column_1 0.076987 Column_2 -0.352463"} +{"question_id": 14726789, "score": 96, "creation_date": 1360146638, "tags": ["python", "mysql", "corpus"], "instruction": "How can I change the default Mysql connection timeout when connecting through python?\n\nI connected to a mysql database using python con = _mysql.connect('localhost', 'dell-pc', '', 'test') The program that I wrote takes a lot of time in full execution i.e. around 10 hours. 
Actually, I am trying to read distinct words from a corpus. After reading was finished there was a timeout error. I checked Mysql default timeouts which were: +----------------------------+----------+ | Variable_name | Value | +----------------------------+----------+ | connect_timeout | 10 | | delayed_insert_timeout | 300 | | innodb_lock_wait_timeout | 50 | | innodb_rollback_on_timeout | OFF | | interactive_timeout | 28800 | | lock_wait_timeout | 31536000 | | net_read_timeout | 30 | | net_write_timeout | 60 | | slave_net_timeout | 3600 | | wait_timeout | 28800 | +----------------------------+----------+ How can I change the default timeout ?", "output": "Do: con.query('SET GLOBAL connect_timeout=28800') con.query('SET GLOBAL interactive_timeout=28800') con.query('SET GLOBAL wait_timeout=28800') Parameter meaning (taken from MySQL Workbench in Navigator: Instance > Options File > Tab \"Networking\" > Section \"Timeout Settings\") connect_timeout: Number of seconds the mysqld server waits for a connect packet before responding with 'Bad handshake' interactive_timeout Number of seconds the server waits for activity on an interactive connection before closing it wait_timeout Number of seconds the server waits for activity on a connection before closing it BTW: 28800 seconds are 8 hours, so for a 10 hour execution time these values should be actually higher."} +{"question_id": 48324152, "score": 96, "creation_date": 1516287352, "tags": ["python", "optimization", "neural-network", "deep-learning", "pytorch"], "instruction": "How to change the learning rate of an optimizer at any given moment (no LR schedule)?\n\nIs it possible in PyTorch to change the learning rate of the optimizer in the middle of training dynamically (I don't want to define a learning rate schedule beforehand)? So let's say I have an optimizer: optim = torch.optim.SGD(model.parameters(), lr=0.01) Now due to some tests which I perform during training, I realize my learning rate is too high so I want to change it to say 0.001. There doesn't seem to be a method optim.set_lr(0.001) but is there some way to do this?", "output": "So the learning rate is stored in optim.param_groups[i]['lr']. optim.param_groups is a list of the different weight groups which can have different learning rates. Thus, simply doing: for g in optim.param_groups: g['lr'] = 0.001 will do the trick. Alternatively, as mentionned in the comments, if your learning rate only depends on the epoch number, you can use a learning rate scheduler. For example (modified example from the doc): from torch.optim.lr_scheduler import LambdaLR optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9) # Assuming the optimizer has two groups. lambda_group1 = lambda epoch: epoch // 30 lambda_group2 = lambda epoch: 0.95 ** epoch scheduler = LambdaLR(optimizer, lr_lambda=[lambda_group1, lambda_group2]) for epoch in range(100): train(...) validate(...) scheduler.step() Also, there is a prebuilt learning rate scheduler to reduce on plateaus."} +{"question_id": 57962873, "score": 96, "creation_date": 1568660167, "tags": ["python", "python-dataclasses"], "instruction": "Easiest way to copy all fields from one dataclass instance to another?\n\nLet's assume you have defined a Python dataclass: @dataclass class Marker: a: float b: float = 1.0 What's the easiest way to copy the values from an instance marker_a to another instance marker_b? 
Here's an example of what I try to achieve: marker_a = Marker(1.0, 2.0) marker_b = Marker(11.0, 12.0) # now some magic happens which you hopefully can fill in print(marker_b) # result: Marker(a=1.0, b=2.0) As a boundary condition, I do not want to create and assign a new instance to marker_b. OK, I could loop through all defined fields and copy the values one by one, but there has to be a simpler way, I guess.", "output": "I think that looping over the fields probably is the easiest way. All the other options I can think of involve creating a new object. from dataclasses import fields marker_a = Marker(5) marker_b = Marker(0, 99) for field in fields(Marker): setattr(marker_b, field.name, getattr(marker_a, field.name)) print(marker_b) # Marker(a=5, b=1.0)"} +{"question_id": 2817481, "score": 96, "creation_date": 1273656539, "tags": ["python", "json", "httprequest"], "instruction": "How do I request and process JSON with python?\n\nI am trying to send a GET request to a URL that I know returns data in the form of JSON using python. I would like to know how to send this request to http://someurl/path/to/json, and how to parse it - preferably to a python dict.", "output": "For anything with requests to URLs you might want to check out requests. For JSON in particular: >>> import requests >>> r = requests.get('https://github.com/timeline.json') >>> r.json() [{u'repository': {u'open_issues': 0, u'url': 'https://github.com/..."} +{"question_id": 28533111, "score": 96, "creation_date": 1424048347, "tags": ["python", "networkx"], "instruction": "Plotting networkx graph with node labels defaulting to node name\n\nNetworkX is powerful but I was trying to plot a graph which shows node labels by default and I was surprised how tedious this seemingly simple task could be for someone new to Networkx. There is an example which shows how to add labels to the plot. https://networkx.github.io/documentation/latest/examples/drawing/labels_and_colors.html The problem with this example is that it uses too many steps and methods when all I want to do is just show labels which are same as the node name while drawing the graph. # Add nodes and edges G.add_node(\"Node1\") G.add_node(\"Node2\") G.add_edge(\"Node1\", \"Node2\") nx.draw(G) # Doesn't draw labels. How to make it show labels Node1, Node2 along? Is there a way to make nx.draw(G) show the default labels (Node1, Node2 in this case) inline in the graph?", "output": "tl/dr: just add with_labels=True to the nx.draw call. The page you were looking at is somewhat complex because it shows how to set lots of different things as the labels, how to give different nodes different colors, and how to provide carefully control node positions. So there's a lot going on. However, it appears you just want each node to use its own name, and you're happy with the default color and default position. So import networkx as nx import pylab as plt G=nx.Graph() # Add nodes and edges G.add_edge(\"Node1\", \"Node2\") nx.draw(G, with_labels = True) plt.savefig('labels.png') If you wanted to do something so that the node labels were different you could send a dict as an argument. 
So for example, labeldict = {} labeldict[\"Node1\"] = \"shopkeeper\" labeldict[\"Node2\"] = \"angry man with parrot\" nx.draw(G, labels=labeldict, with_labels = True)"} +{"question_id": 29382903, "score": 96, "creation_date": 1427862765, "tags": ["python", "numpy", "scipy", "curve-fitting", "piecewise"], "instruction": "How to apply piecewise linear fit in Python?\n\nI am trying to fit piecewise linear fit as shown in fig.1 for a data set This figure was obtained by setting on the lines. I attempted to apply a piecewise linear fit using the code: from scipy import optimize import matplotlib.pyplot as plt import numpy as np x = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10 ,11, 12, 13, 14, 15]) y = np.array([5, 7, 9, 11, 13, 15, 28.92, 42.81, 56.7, 70.59, 84.47, 98.36, 112.25, 126.14, 140.03]) def linear_fit(x, a, b): return a * x + b fit_a, fit_b = optimize.curve_fit(linear_fit, x[0:5], y[0:5])[0] y_fit = fit_a * x[0:7] + fit_b fit_a, fit_b = optimize.curve_fit(linear_fit, x[6:14], y[6:14])[0] y_fit = np.append(y_fit, fit_a * x[6:14] + fit_b) figure = plt.figure(figsize=(5.15, 5.15)) figure.clf() plot = plt.subplot(111) ax1 = plt.gca() plot.plot(x, y, linestyle = '', linewidth = 0.25, markeredgecolor='none', marker = 'o', label = r'\\textit{y_a}') plot.plot(x, y_fit, linestyle = ':', linewidth = 0.25, markeredgecolor='none', marker = '', label = r'\\textit{y_b}') plot.set_ylabel('Y', labelpad = 6) plot.set_xlabel('X', labelpad = 6) figure.savefig('test.pdf', box_inches='tight') plt.close() But this gave me fitting of the form in fig. 2, I tried playing with the values but no change I can't get the fit of the upper line proper. The most important requirement for me is how can I get Python to get the gradient change point. I want the code to recognize and fit two linear fits in the appropriate range. How can this be done in Python?", "output": "You can use numpy.piecewise() to create the piecewise function and then use curve_fit(), Here is the code from scipy import optimize import matplotlib.pyplot as plt import numpy as np %matplotlib inline x = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10 ,11, 12, 13, 14, 15], dtype=float) y = np.array([5, 7, 9, 11, 13, 15, 28.92, 42.81, 56.7, 70.59, 84.47, 98.36, 112.25, 126.14, 140.03]) def piecewise_linear(x, x0, y0, k1, k2): return np.piecewise(x, [x < x0], [lambda x:k1*x + y0-k1*x0, lambda x:k2*x + y0-k2*x0]) p , e = optimize.curve_fit(piecewise_linear, x, y) xd = np.linspace(0, 15, 100) plt.plot(x, y, \"o\") plt.plot(xd, piecewise_linear(xd, *p)) the output: For an N parts fitting, please reference segments_fit.ipynb"} +{"question_id": 46540664, "score": 96, "creation_date": 1507020670, "tags": ["python", "flask", "flask-sqlalchemy"], "instruction": "'No application found. Either work inside a view function or push an application context.'\n\nI'm trying to separate my Flask-SQLAlchemy models into separate files. When I try to run db.create_all() I get No application found. Either work inside a view function or push an application context. 
shared/db.py: from flask_sqlalchemy import SQLAlchemy db = SQLAlchemy() app.py: from flask import Flask from flask_sqlalchemy import SQLAlchemy from shared.db import db app = Flask(__name__) app.config['SQLALCHEMY_DATABASE_URI'] = 'My connection string' db.init_app(app) user.py: from shared.db import db class User(db.Model): id = db.Column(db.Integer, primary_key=True) email_address = db.Column(db.String(300), unique=True, nullable=False) password = db.Column(db.Text, nullable=False)", "output": "Use with app.app_context() to push an application context when creating the tables. app = Flask(__name__) app.config['SQLALCHEMY_DATABASE_URI'] = 'My connection string' db.init_app(app) with app.app_context(): db.create_all()"} +{"question_id": 33175827, "score": 96, "creation_date": 1445014955, "tags": ["python", "macos"], "instruction": "What version of Python is on my Mac?\n\nI have a mac, when I do: python --version I got: Python 2.7.6 but when I got to: /System/Library/Frameworks/Python.framework/Versions/3.3 where is 2.7.6 located ? any advice what should I do? UPDATE: $ which -a python python3 /usr/bin/python /usr/local/bin/python3 $ brew list python python3 Error: No such keg: /usr/local/Cellar/python $ locate python WARNING: The locate database (/var/db/locate.database) does not exist. To create the database, run the following command: sudo launchctl load -w /System/Library/LaunchDaemons/com.apple.locate.plist Please be aware that the database can take some time to generate; once the database has been created, this message will no longer appear. in PyCharm editor, the default settings interpreter shows:", "output": "You could have multiple Python versions on your macOS. You may check that by command, type or which command, like: which -a python python2 python2.7 python3 python3.6 Or type python in Terminal and hit Tab few times for auto completion, which is equivalent to: compgen -c python By default python/pip commands points to the first binary found in PATH environment variable depending what's actually installed. So before installing Python packages with Homebrew, the default Python is installed in /usr/bin which is shipped with your macOS (e.g. Python 2.7.10 on High Sierra). Any versions found in /usr/local (such as /usr/local/bin) are provided by external packages. It is generally advised, that when working with multiple versions, for Python 2 you may use python2/pip2 command, respectively for Python 3 you can use python3/pip3, but it depends on your configuration which commands are available. It is also worth to mention, that since release of Homebrew 1.5.0+ (on 19 January 2018), the python formula has been upgraded to Python 3.x and a python@2 formula will be added for installing Python 2.7. Before, python formula was pointing to Python 2. For instance, if you've installed different version via Homebrew, try the following command: brew list python python3 or: brew list | grep ^python it'll show you all Python files installed with the package. Alternatively you may use apropos or locate python command to locate more Python related files. To check any environment variables related to Python, run: env | grep ^PYTHON To address your issues: Error: No such keg: /usr/local/Cellar/python Means you don't have Python installed via Homebrew. However double check by specifying only one package at a time (like brew list python python2 python3). The locate database (/var/db/locate.database) does not exist. 
Follow the advice and run: sudo launchctl load -w /System/Library/LaunchDaemons/com.apple.locate.plist After the database is rebuild, you can use locate command."} +{"question_id": 23916413, "score": 96, "creation_date": 1401292407, "tags": ["python", "django", "multithreading", "multiprocessing", "celery"], "instruction": "Celery parallel distributed task with multiprocessing\n\nI have a CPU intensive Celery task. I would like to use all the processing power (cores) across lots of EC2 instances to get this job done faster (a celery parallel distributed task with multiprocessing - I think). The terms, threading, multiprocessing, distributed computing, distributed parallel processing are all terms I'm trying to understand better. Example task: @app.task for item in list_of_millions_of_ids: id = item # do some long complicated equation here very CPU heavy!!!!!!! database.objects(newid=id).save() Using the code above (with an example if possible) how one would ago about distributed this task using Celery by allowing this one task to be split up utilising all the computing CPU power across all available machine in the cloud?", "output": "Your goals are: Distribute your work to many machines (distributed computing/distributed parallel processing) Distribute the work on a given machine across all CPUs (multiprocessing/threading) Celery can do both of these for you fairly easily. The first thing to understand is that each celery worker is configured by default to run as many tasks as there are CPU cores available on a system: Concurrency is the number of prefork worker process used to process your tasks concurrently, when all of these are busy doing work new tasks will have to wait for one of the tasks to finish before it can be processed. The default concurrency number is the number of CPU\u2019s on that machine (including cores), you can specify a custom number using -c option. There is no recommended value, as the optimal number depends on a number of factors, but if your tasks are mostly I/O-bound then you can try to increase it, experimentation has shown that adding more than twice the number of CPU\u2019s is rarely effective, and likely to degrade performance instead. This means each individual task doesn't need to worry about using multiprocessing/threading to make use of multiple CPUs/cores. Instead, celery will run enough tasks concurrently to use each available CPU. With that out of the way, the next step is to create a task that handles processing some subset of your list_of_millions_of_ids. You have a couple of options here - one is to have each task handle a single ID, so you run N tasks, where N == len(list_of_millions_of_ids). This will guarantee that work is evenly distributed amongst all your tasks since there will never be a case where one worker finishes early and is just waiting around; if it needs work, it can pull an id off the queue. You can do this (as mentioned by John Doe) using the celery group. tasks.py: @app.task def process_ids(item): id = item #long complicated equation here database.objects(newid=id).save() And to execute the tasks: from celery import group from tasks import process_id jobs = group(process_ids(item) for item in list_of_millions_of_ids) result = jobs.apply_async() Another option is to break the list into smaller pieces and distribute the pieces to your workers. This approach runs the risk of wasting some cycles, because you may end up with some workers waiting around while others are still doing work. 
However, the celery documentation notes that this concern is often unfounded: Some may worry that chunking your tasks results in a degradation of parallelism, but this is rarely true for a busy cluster and in practice since you are avoiding the overhead of messaging it may considerably increase performance. So, you may find that chunking the list and distributing the chunks to each task performs better, because of the reduced messaging overhead. You can probably also lighten the load on the database a bit this way, by calculating each id, storing it in a list, and then adding the whole list into the DB once you're done, rather than doing it one id at a time. The chunking approach would look something like this tasks.py: @app.task def process_ids(items): for item in items: id = item #long complicated equation here database.objects(newid=id).save() # Still adding one id at a time, but you don't have to. And to start the tasks: from tasks import process_ids jobs = process_ids.chunks(list_of_millions_of_ids, 30) # break the list into 30 chunks. Experiment with what number works best here. jobs.apply_async() You can experiment a bit with what chunking size gives you the best result. You want to find a sweet spot where you're cutting down messaging overhead while also keeping the size small enough that you don't end up with workers finishing their chunk much faster than another worker, and then just waiting around with nothing to do."} +{"question_id": 43326680, "score": 96, "creation_date": 1491836667, "tags": ["python", "matplotlib", "subplot", "figure", "axes"], "instruction": "What are the differences between add_axes and add_subplot?\n\nIn a previous answer it was recommended to me to use add_subplot instead of add_axes to show axes correctly, but searching the documentation I couldn't understand when and why I should use either one of these functions. Can anyone explain the differences?", "output": "Common grounds Both, add_axes and add_subplot add an axes to a figure. They both return a (subclass of a) matplotlib.axes.Axes object. However, the mechanism which is used to add the axes differs substantially. add_axes The calling signature of add_axes is add_axes(rect), where rect is a list [x0, y0, width, height] denoting the lower left point of the new axes in figure coodinates (x0,y0) and its width and height. So the axes is positionned in absolute coordinates on the canvas. E.g. fig = plt.figure() ax = fig.add_axes([0,0,1,1]) places a figure in the canvas that is exactly as large as the canvas itself. add_subplot The calling signature of add_subplot does not directly provide the option to place the axes at a predefined position. It rather allows to specify where the axes should be situated according to a subplot grid. The usual and easiest way to specify this position is the 3 integer notation, fig = plt.figure() ax = fig.add_subplot(231) In this example a new axes is created at the first position (1) on a grid of 2 rows and 3 columns. To produce only a single axes, add_subplot(111) would be used (First plot on a 1 by 1 subplot grid). (In newer matplotlib versions, add_subplot() without any arguments is possible as well.) The advantage of this method is that matplotlib takes care of the exact positioning. By default add_subplot(111) would produce an axes positioned at [0.125,0.11,0.775,0.77] or similar, which already leaves enough space around the axes for the title and the (tick)labels. However, this position may also change depending on other elements in the plot, titles set, etc. 
It can also be adjusted using pyplot.subplots_adjust(...) or pyplot.tight_layout(). In most cases, add_subplot would be the prefered method to create axes for plots on a canvas. Only in cases where exact positioning matters, add_axes might be useful. Example import matplotlib.pyplot as plt plt.rcParams[\"figure.figsize\"] = (5,3) fig = plt.figure() fig.add_subplot(241) fig.add_subplot(242) ax = fig.add_subplot(223) ax.set_title(\"subplots\") fig.add_axes([0.77,.3,.2,.6]) ax2 =fig.add_axes([0.67,.5,.2,.3]) fig.add_axes([0.6,.1,.35,.3]) ax2.set_title(\"random axes\") plt.tight_layout() plt.show() Alternative The easiest way to obtain one or more subplots together with their handles is plt.subplots(). For one axes, use fig, ax = plt.subplots() or, if more subplots are needed, fig, axes = plt.subplots(nrows=3, ncols=4) The initial question In the initial question an axes was placed using fig.add_axes([0,0,1,1]), such that it sits tight to the figure boundaries. The disadvantage of this is of course that ticks, ticklabels, axes labels and titles are cut off. Therefore I suggested in one of the comments to the answer to use fig.add_subplot as this will automatically allow for enough space for those elements, and, if this is not enough, can be adjusted using pyplot.subplots_adjust(...) or pyplot.tight_layout()."} +{"question_id": 11528078, "score": 96, "creation_date": 1342547520, "tags": ["python", "numpy", "duplicates", "unique"], "instruction": "Determining duplicate values in an array\n\nSuppose I have an array a = np.array([1, 2, 1, 3, 3, 3, 0]) How can I (efficiently, Pythonically) find which elements of a are duplicates (i.e., non-unique values)? In this case the result would be array([1, 3, 3]) or possibly array([1, 3]) if efficient. I've come up with a few methods that appear to work: Masking m = np.zeros_like(a, dtype=bool) m[np.unique(a, return_index=True)[1]] = True a[~m] Set operations a[~np.in1d(np.arange(len(a)), np.unique(a, return_index=True)[1], assume_unique=True)] This one is cute but probably illegal (as a isn't actually unique): np.setxor1d(a, np.unique(a), assume_unique=True) Histograms u, i = np.unique(a, return_inverse=True) u[np.bincount(i) > 1] Sorting s = np.sort(a, axis=None) s[:-1][s[1:] == s[:-1]] Pandas s = pd.Series(a) s[s.duplicated()] Is there anything I've missed? I'm not necessarily looking for a numpy-only solution, but it has to work with numpy data types and be efficient on medium-sized data sets (up to 10 million in size). Conclusions Testing with a 10 million size data set (on a 2.8GHz Xeon): a = np.random.randint(10**7, size=10**7) The fastest is sorting, at 1.1s. The dubious xor1d is second at 2.6s, followed by masking and Pandas Series.duplicated at 3.1s, bincount at 5.6s, and in1d and senderle's setdiff1d both at 7.3s. Steven's Counter is only a little slower, at 10.5s; trailing behind are Burhan's Counter.most_common at 110s and DSM's Counter subtraction at 360s. I'm going to use sorting for performance, but I'm accepting Steven's answer because the performance is acceptable and it feels clearer and more Pythonic. Edit: discovered the Pandas solution. If Pandas is available it's clear and performs well.", "output": "I think this is most clear done outside of numpy. You'll have to time it against your numpy solutions if you are concerned with speed. 
>>> import numpy as np >>> from collections import Counter >>> a = np.array([1, 2, 1, 3, 3, 3, 0]) >>> [item for item, count in Counter(a).items() if count > 1] [1, 3] note: This is similar to Burhan Khalid's answer, but the use of items without subscripting in the condition should be faster."} +{"question_id": 13698352, "score": 96, "creation_date": 1354607771, "tags": ["python", "attributes", "networkx"], "instruction": "Storing and Accessing node attributes python networkx\n\nI have a network of nodes created using python networkx. i want to store information in nodes such that i can access the information later based on the node label (the name of the node) and the field that in which the information has been stored (like node attributes). the information stored can be a string or a number I wish to do so in a manner such that if xyz is a node: then I want to save two or three fields having strings like the date of birth of xyz dob=1185, the place of birth of xyz pob=usa, and the day of birth of xyz dayob=monday. I know that i can use G.add_node has the attribute dictionary field in it...but I can't seem to access it for a particular field. if there is any other way i would appreciate it. i then want to compare xyz with other nodes in the networks having the same information in common. i.e. intersection of node xyz with node abc based on date of bith, place of birth and day of birth e.g for if nodes xyz and abc have an edge print their respective dobs, their pobs and their dayobs", "output": "As you say, it's just a matter of adding the attributes when adding the nodes to the graph G.add_node('abc', dob=1185, pob='usa', dayob='monday') or as a dictionary G.add_node('abc', {'dob': 1185, 'pob': 'usa', 'dayob': 'monday'}) To access the attributes, just access them as you would with any dictionary G.node['abc']['dob'] # 1185 G.node['abc']['pob'] # usa G.node['abc']['dayob'] # monday You say you want to look at attributes for connected nodes. Here's a small example on how that could be accomplished. for n1, n2 in G.edges_iter(): print G.node[n1]['dob'], G.node[n2]['dob'] print G.node[n1]['pob'], G.node[n2]['pob'] # Etc. As of networkx 2.0, G.edges_iter() has been replaced with G.edges(). This also applies to nodes. We set data=True to access attributes. The code is now: for n1, n2 in list(G.edges(data=True)): print G.node[n1]['dob'], G.node[n2]['dob'] print G.node[n1]['pob'], G.node[n2]['pob'] # Etc. NOTE: In networkx 2.4, G.node[] has been replaced with G.nodes[]."} +{"question_id": 49579684, "score": 96, "creation_date": 1522435259, "tags": ["python", "tensorflow", "tensorflow-datasets"], "instruction": "What is the difference between Dataset.from_tensors and Dataset.from_tensor_slices?\n\nI have a dataset represented as a NumPy matrix of shape (num_features, num_examples) and I wish to convert it to TensorFlow type tf.Dataset. I am struggling trying to understand the difference between these two methods: Dataset.from_tensors and Dataset.from_tensor_slices. What is the right one and why? 
TensorFlow documentation (link) says that both method accept a nested structure of tensor although when using from_tensor_slices the tensor should have same size in the 0-th dimension.", "output": "from_tensors combines the input and returns a dataset with a single element: >>> t = tf.constant([[1, 2], [3, 4]]) >>> ds = tf.data.Dataset.from_tensors(t) >>> [x for x in ds] [<tf.Tensor: shape=(2, 2), dtype=int32, numpy=array([[1, 2], [3, 4]], dtype=int32)>] from_tensor_slices creates a dataset with a separate element for each row of the input tensor: >>> t = tf.constant([[1, 2], [3, 4]]) >>> ds = tf.data.Dataset.from_tensor_slices(t) >>> [x for x in ds] [<tf.Tensor: shape=(2,), dtype=int32, numpy=array([1, 2], dtype=int32)>, <tf.Tensor: shape=(2,), dtype=int32, numpy=array([3, 4], dtype=int32)>]"} +{"question_id": 51712693, "score": 96, "creation_date": 1533575912, "tags": ["python", "python-3.x", "windows", "anaconda", "conda"], "instruction": "PackageNotInstalledError: Package is not installed in prefix\n\nconda update conda >> successful conda update anaconda >> gives me error saying package is not installed in prefix. I have single installation of Python distribution on my system. How do I solve this issue? (base) C:\\Users\\asukumari>conda info active environment : base active env location : C:\\Users\\asukumari\\AppData\\Local\\Continuum\\anaconda3 shell level : 1 user config file : C:\\Users\\asukumari\\.condarc populated config files : C:\\Users\\asukumari\\.condarc conda version : 4.5.9 conda-build version : 3.4.1 python version : 3.6.4.final.0 base environment : C:\\Users\\asukumari\\AppData\\Local\\Continuum\\anaconda3 (writable) channel URLs : https://repo.anaconda.com/pkgs/main/win-64 https://repo.anaconda.com/pkgs/main/noarch https://repo.anaconda.com/pkgs/free/win-64 https://repo.anaconda.com/pkgs/free/noarch https://repo.anaconda.com/pkgs/r/win-64 https://repo.anaconda.com/pkgs/r/noarch https://repo.anaconda.com/pkgs/pro/win-64 https://repo.anaconda.com/pkgs/pro/noarch https://repo.anaconda.com/pkgs/msys2/win-64 https://repo.anaconda.com/pkgs/msys2/noarch package cache : C:\\Users\\asukumari\\AppData\\Local\\Continuum\\anaconda3\\pkgs C:\\Users\\asukumari\\AppData\\Local\\conda\\conda\\pkgs envs directories : C:\\Users\\asukumari\\AppData\\Local\\Continuum\\anaconda3\\envs C:\\Users\\asukumari\\AppData\\Local\\conda\\conda\\envs C:\\Users\\asukumari\\.conda\\envs platform : win-64 user-agent : conda/4.5.9 requests/2.18.4 CPython/3.6.4 Windows/10 Windows/10.0.16299 administrator : False netrc file : None offline mode : False", "output": "Usually this error, \"PackageNotInstalledError: Package is not installed in prefix.\" is because your custom environment doesn't have the conda infrastructure. Instead, it is in your base only. To update the base environment: conda update --name base conda To see what version you have installed: conda list --name base conda example output of list; # packages in environment at /Users/me/miniconda2: # # Name Version Build Channel conda 4.6.14 py27_0 conda-env 2.6.0 h36134e3_0"} +{"question_id": 17668633, "score": 96, "creation_date": 1373952382, "tags": ["python", "logging", "python-logging"], "instruction": "What is the point of setLevel in a python logging handler?\n\nLet's say I have the following code: import logging import logging.handlers a = logging.getLogger('myapp') h = logging.handlers.RotatingFileHandler('foo.log') h.setLevel(logging.DEBUG) a.addHandler(h) # The effective log level is still logging.WARN print a.getEffectiveLevel() a.debug('foo message') a.warn('warning message') I expect that setting logging.DEBUG on the handler would cause debug-level messages to be written to the log file. 
However, this prints 30 for the effective level (equal to logging.WARNING, the default), and only logs the warn message to the log file, not the debug message. It appears that the handler's log level is being dropped on the floor, e.g. it's silently ignored. Which makes me wonder, why have setLevel on the handler at all?", "output": "It allows finer control. By default the root logger has WARNING level set; this means that it won't print messages with a lower level (no matter how the handlers' levels are set!). But, if you set the root logger's level to DEBUG, indeed the message gets sent to the log file: import logging import logging.handlers a = logging.getLogger('myapp') a.setLevel(logging.DEBUG) # set root's level h = logging.handlers.RotatingFileHandler('foo.log') h.setLevel(logging.DEBUG) a.addHandler(h) print a.getEffectiveLevel() a.debug('foo message') a.warn('warning message') Now, imagine that you want to add a new handler that doesn't record debug information. You can do this by simply setting the handler logging level: import logging import logging.handlers a = logging.getLogger('myapp') a.setLevel(logging.DEBUG) # set root's level h = logging.handlers.RotatingFileHandler('foo.log') h.setLevel(logging.DEBUG) a.addHandler(h) h2 = logging.handlers.RotatingFileHandler('foo2.log') h2.setLevel(logging.WARNING) a.addHandler(h2) print a.getEffectiveLevel() a.debug('foo message') a.warn('warning message') Now, the log file foo.log will contain both messages, while the file foo2.log will only contain the warning message. You could be interested in having a log file of only error-level messages; then, simply add a Handler and set its level to logging.ERROR, everything using the same Logger. You may think of the Logger logging level as a global restriction on which messages are \"interesting\" for a given logger and its handlers. The messages that are considered by the logger afterwards get sent to the handlers, which perform their own filtering and logging process."} +{"question_id": 11312525, "score": 96, "creation_date": 1341323808, "tags": ["python", "multiprocessing", "signals"], "instruction": "Catch Ctrl+C / SIGINT and exit multiprocesses gracefully in python\n\nHow do I catch a Ctrl+C in multiprocess python program and exit all processes gracefully, I need the solution to work both on unix and windows. I've tried the following: import multiprocessing import time import signal import sys jobs = [] def worker(): signal.signal(signal.SIGINT, signal_handler) while(True): time.sleep(1.1234) print \"Working...\" def signal_handler(signal, frame): print 'You pressed Ctrl+C!' # for p in jobs: # p.terminate() sys.exit(0) if __name__ == \"__main__\": for i in range(50): p = multiprocessing.Process(target=worker) jobs.append(p) p.start() And it's kind of working, but I don't think it's the right solution.", "output": "The previously accepted solution has race conditions and it does not work with map and async functions. The correct way to handle Ctrl+C/SIGINT with multiprocessing.Pool is to: Make the process ignore SIGINT before a process Pool is created. This way created child processes inherit SIGINT handler. Restore the original SIGINT handler in the parent process after a Pool has been created. Use map_async and apply_async instead of blocking map and apply. Wait on the results with timeout because the default blocking waits to ignore all signals. This is Python bug https://bugs.python.org/issue8296. 
Putting it together: #!/bin/env python from __future__ import print_function import multiprocessing import os import signal import time def run_worker(delay): print(\"In a worker process\", os.getpid()) time.sleep(delay) def main(): print(\"Initializng 2 workers\") original_sigint_handler = signal.signal(signal.SIGINT, signal.SIG_IGN) pool = multiprocessing.Pool(2) signal.signal(signal.SIGINT, original_sigint_handler) try: print(\"Starting 2 jobs of 5 seconds each\") res = pool.map_async(run_worker, [5, 5]) print(\"Waiting for results\") res.get(60) # Without the timeout this blocking call ignores all signals. except KeyboardInterrupt: print(\"Caught KeyboardInterrupt, terminating workers\") pool.terminate() else: print(\"Normal termination\") pool.close() pool.join() if __name__ == \"__main__\": main() As @YakovShklarov noted, there is a window of time between ignoring the signal and unignoring it in the parent process, during which the signal can be lost. Using pthread_sigmask instead to temporarily block the delivery of the signal in the parent process would prevent the signal from being lost, however, it is not available in Python-2."} +{"question_id": 44625422, "score": 96, "creation_date": 1497860177, "tags": ["python", "python-typing", "mypy"], "instruction": "How to use reveal_type in Mypy\n\nI have read that I can reveal the type of variables by using a function called reveal_type, but how can I use it, and where should I import it from?", "output": "I found out in the end how to use it: You should just put and use the reveal_type in the code, and run it with the Mypy program. Then, it will log a message that look like this: Revealed type is 'builtins.str*' From the Mypy documentation: reveal_type is only understood by Mypy and doesn\u2019t exist in Python. If you try to run your program, you\u2019ll have to remove any reveal_type calls before you can run your code. reveal_type is always available and you don\u2019t need to import it. For more reading: Displaying the type of an expression."} +{"question_id": 41852686, "score": 96, "creation_date": 1485350397, "tags": ["python", "pandas", "dataframe", "numpy", "unit-testing"], "instruction": "How do you Unit Test Python DataFrames\n\nHow do I unit test Python dataframes? I have functions that have an input and output as dataframes. Almost every function I have does this. Now if I want to unit test this what is the best method of doing it? It seems a bit of an effort to create a new dataframe (with values populated) for every function? Are there any materials you can refer me to? Should you write unit tests for these functions?", "output": "While Pandas' test functions are primarily used for internal testing, NumPy includes a very useful set of testing functions that are documented here: NumPy Test Support. These functions compare NumPy arrays, but you can get the array that underlies a Pandas DataFrame using the values property. You can define a simple DataFrame and compare what your function returns to what you expect. One technique you can use is to define one set of test data for a number of functions. That way, you can use Pytest Fixtures to define that DataFrame once, and use it in multiple tests. In terms of resources, I found this article on Testing with NumPy and Pandas to be very useful. 
I also did a short presentation about data analysis testing at PyCon Canada 2016: Automate Your Data Analysis Testing."} +{"question_id": 16567958, "score": 96, "creation_date": 1368628495, "tags": ["python", "python-2.7", "locking"], "instruction": "When and how to use Python's RLock\n\nReading through the Python docs I came across RLock. Can someone explain to me (with example) a scenario in which RLock would be preferred to Lock? With particular reference to: RLock's \u201crecursion level\u201d. How is this useful? A threads \"ownership\" of an RLock object Performance?", "output": "This is one example where I see the use: Useful when you want to have thread-safe access from outside the class and use the same methods from inside the class: class X: def __init__(self): self.a = 1 self.b = 2 self.lock = threading.RLock() def changeA(self): with self.lock: self.a = self.a + 1 def changeB(self): with self.lock: self.b = self.b + self.a def changeAandB(self): # you can use chanceA and changeB thread-safe! with self.lock: self.changeA() # a usual lock would block at here self.changeB() for recursion more obvious: lock = threading.RLock() def a(...): with lock: a(...) # somewhere inside other threads have to wait until the first call of a finishes = thread ownership. Performance Usually, I start programming with the Lock and when case 1 or 2 occur, I switch to an RLock. Until Python 3.2 the RLock should be a bit slower because of the additional code. It uses Lock: Lock = _allocate_lock # line 98 threading.py def RLock(*args, **kwargs): return _RLock(*args, **kwargs) class _RLock(_Verbose): def __init__(self, verbose=None): _Verbose.__init__(self, verbose) self.__block = _allocate_lock() Thread Ownership within the given thread you can acquire a RLock as often as you like. Other threads need to wait until this thread releases the resource again. This is different to the Lock which implies 'function-call ownership'(I would call it this way): Another function call has to wait until the resource is released by the last blocking function even if it is in the same thread = even if it is called by the other function. When to use Lock instead of RLock When you make a call to the outside of the resource which you can not control. The code below has two variables: a and b and the RLock shall be used to make sure a == b * 2 import threading a = 0 b = 0 lock = threading.RLock() def changeAandB(): # this function works with an RLock and Lock with lock: global a, b a += 1 b += 2 return a, b def changeAandB2(callback): # this function can return wrong results with RLock and can block with Lock with lock: global a, b a += 1 callback() # this callback gets a wrong value when calling changeAandB2 b += 2 return a, b In changeAandB2 the Lock would be the right choice although it does block. Or one can enhance it with errors using RLock._is_owned(). Functions like changeAandB2 may occur when you have implemented an Observer pattern or a Publisher-Subscriber and add locking afterward."} +{"question_id": 28259301, "score": 96, "creation_date": 1422763133, "tags": ["python", "xml", "pandas", "dataframe", "parsing"], "instruction": "How to convert an XML file to nice pandas dataframe?\n\nLet's assume that I have an XML like this: I would like to read this XML file and convert it to a pandas DataFrame: key type language feature web data e95324a9a6c790ecb95e46cf15bE232ee517651 XXX EN xx www.foo_bar_exmaple.com A large text with lots of strings and punctuations symbols [...] 
bc360cfbafc39970587547215162f0db XXX EN xx www.foo_bar_exmaple.com A large text with lots of strings and punctuations symbols [...] 19e71144c50a8b9160b3cvdf2324f0955e906fce XXX EN xx www.foo_bar_exmaple.com A large text with lots of strings and punctuations symbols [...] 21d4af9021a174f61b8erf284606c74d9e42 XXX EN xx www.foo_bar_exmaple.com A large text with lots of strings and punctuations symbols [...] This is what I already tried, but I am getting some errors and probably there is a more efficient way of doing this task: from lxml import objectify import pandas as pd path = 'file_path' xml = objectify.parse(open(path)) root = xml.getroot() root.getchildren()[0].getchildren() df = pd.DataFrame(columns=('key','type', 'language', 'feature', 'web', 'data')) for i in range(0,len(xml)): obj = root.getchildren()[i].getchildren() row = dict(zip(['key','type', 'language', 'feature', 'web', 'data'], [obj[0].text, obj[1].text])) row_s = pd.Series(row) row_s.name = i df = df.append(row_s) Could anybody provide me a better aproach for this problem?", "output": "You can easily use xml (from the Python standard library) to convert to a pandas.DataFrame. Here's what I would do (when reading from a file replace xml_data with the name of your file or file object): import pandas as pd import xml.etree.ElementTree as ET import io def iter_docs(author): author_attr = author.attrib for doc in author.iter('document'): doc_dict = author_attr.copy() doc_dict.update(doc.attrib) doc_dict['data'] = doc.text yield doc_dict xml_data = io.StringIO(u'''YOUR XML STRING HERE''') etree = ET.parse(xml_data) #create an ElementTree object doc_df = pd.DataFrame(list(iter_docs(etree.getroot()))) If there are multiple authors in your original document or the root of your XML is not an author, then I would add the following generator: def iter_author(etree): for author in etree.iter('author'): for row in iter_docs(author): yield row and change doc_df = pd.DataFrame(list(iter_docs(etree.getroot()))) to doc_df = pd.DataFrame(list(iter_author(etree))) Have a look at the ElementTree tutorial provided in the xml library documentation."} +{"question_id": 30730983, "score": 96, "creation_date": 1433850070, "tags": ["python", "caching", "python-decorators", "lru", "functools"], "instruction": "Make @lru_cache ignore some of the function arguments\n\nHow can I make @functools.lru_cache decorator ignore some of the function arguments with regard to caching key? For example, I have a function that looks like this: def find_object(db_handle, query): # (omitted code) return result If I apply lru_cache decorator just like that, db_handle will be included in the cache key. As a result, if I try to call the function with the same query, but different db_handle, it will be executed again, which I'd like to avoid. I want lru_cache to consider query argument only.", "output": "With cachetools you can write: from cachetools import cached from cachetools.keys import hashkey from random import randint @cached(cache={}, key=lambda db_handle, query: hashkey(query)) def find_object(db_handle, query): print(\"processing {0}\".format(query)) return query queries = list(range(5)) queries.extend(range(5)) for q in queries: print(\"result: {0}\".format(find_object(randint(0, 1000), q))) You will need to install cachetools (pip install cachetools). 
The syntax is: @cached( cache={}, key=lambda : hashkey() ) Here is another example that includes keyword args: @cached( cache={}, key=lambda a, b, c=1, d=2: hashkey(a, c) ) def my_func(a, b, c=1, d=2): return a + c In the example above note that the lambda function input args match the my_func args. You don't have to exactly match the argspec if you don't need to. For example, you can use kwargs to squash out things that aren't needed in the hashkey: @cached( cache={}, key=lambda a, b, c=1, **kwargs: hashkey(a, c) ) def my_func(a, b, c=1, d=2, e=3, f=4): return a + c In the above example we don't care about d=, e= and f= args when looking up a cache value, so we can squash them all out with **kwargs."} +{"question_id": 11371204, "score": 95, "creation_date": 1341620135, "tags": ["python", "unpack"], "instruction": "Unpack the first two elements in list/tuple\n\nIs there a way in Python to do like this: a, b, = 1, 3, 4, 5 and then: >>> a 1 >>> b 3 The above code doesn't work as it will throw ValueError: too many values to unpack", "output": "Just to add to Nolen's answer, in Python 3, you can also unpack the rest, like this: >>> a, b, *rest = 1, 2, 3, 4, 5, 6, 7 >>> a 1 >>> rest [3, 4, 5, 6, 7] (this does not work in Python 2)"} +{"question_id": 7352684, "score": 95, "creation_date": 1315506096, "tags": ["python", "numpy"], "instruction": "How to find the groups of consecutive elements in a NumPy array\n\nI have to cluster the consecutive elements from a NumPy array. Considering the following example a = [ 0, 47, 48, 49, 50, 97, 98, 99] The output should be a list of tuples as follows [(0), (47, 48, 49, 50), (97, 98, 99)] Here the difference is just one between the elements. It will be great if the difference can also be specified as a limit or a hardcoded number.", "output": "Here's a lil func that might help: def group_consecutives(vals, step=1): \"\"\"Return list of consecutive lists of numbers from vals (number list).\"\"\" run = [] result = [run] expect = None for v in vals: if (v == expect) or (expect is None): run.append(v) else: run = [v] result.append(run) expect = v + step return result >>> group_consecutives(a) [[0], [47, 48, 49, 50], [97, 98, 99]] >>> group_consecutives(a, step=47) [[0, 47], [48], [49], [50, 97], [98], [99]] P.S. This is pure Python. For a NumPy solution, see unutbu's answer."} +{"question_id": 48135736, "score": 95, "creation_date": 1515315619, "tags": ["python", "numpy", "indexing", "numpy-ndarray", "tensor"], "instruction": "What is an intuitive explanation of np.unravel_index?\n\nI have read the documentation for np.unravel_index and played around with the function, but I can't figure out what it is doing.", "output": "Computer memory is addressed linearly. Each memory cell corresponds to a number. A block of memory can be addressed in terms of a base, which is the memory address of its first element, and the item index. For example, assuming the base address is 10,000: item index 0 1 2 3 memory address 10,000 10,001 10,002 10,003 To store multi-dimensional blocks, their geometry must somehow be made to fit into linear memory. In C and NumPy, this is done row-by-row. A 2D example would be: | 0 1 2 3 --+------------------------ 0 | 0 1 2 3 1 | 4 5 6 7 2 | 8 9 10 11 So, for example, in this 3-by-4 block the 2D index (1, 2) would correspond to the linear index 6 which is 1 x 4 + 2. unravel_index does the inverse. Given a linear index, it computes the corresponding ND index. Since this depends on the block dimensions, these also have to be passed. 
So, in our example, we can get the original 2D index (1, 2) back from the linear index 6: >>> np.unravel_index(6, (3, 4)) (1, 2) Note: The above glosses over a few details. 1) Translating the item index to memory address also has to account for item size. For example, an integer typically has 4 or 8 bytes. So, in the latter case, the memory address for item i would be base + 8 x i. 2). NumPy is a bit more flexible than suggested. It can organize ND data column-by-column if desired. It can even handle data that are not contiguous in memory but for example leave gaps, etc. Bonus reading: internal memory layout of an ndarray"} +{"question_id": 24290297, "score": 95, "creation_date": 1403107595, "tags": ["python", "pandas", "multi-index"], "instruction": "Pandas dataframe with multiindex column - merge levels\n\nI have a dataframe, grouped, with multiindex columns as below: import pandas as pd import numpy as np import random codes = [\"one\",\"two\",\"three\"]; colours = [\"black\", \"white\"]; textures = [\"soft\", \"hard\"]; N= 100 # length of the dataframe df = pd.DataFrame({ 'id' : range(1,N+1), 'weeks_elapsed' : [random.choice(range(1,25)) for i in range(1,N+1)], 'code' : [random.choice(codes) for i in range(1,N+1)], 'colour': [random.choice(colours) for i in range(1,N+1)], 'texture': [random.choice(textures) for i in range(1,N+1)], 'size': [random.randint(1,100) for i in range(1,N+1)], 'scaled_size': [random.randint(100,1000) for i in range(1,N+1)] }, columns= ['id', 'weeks_elapsed', 'code','colour', 'texture', 'size', 'scaled_size']) grouped = df.groupby(['code', 'colour']).agg( {'size': [np.sum, np.average, np.size, pd.Series.idxmax],'scaled_size': [np.sum, np.average, np.size, pd.Series.idxmax]}).reset_index() >> grouped code colour size scaled_size sum average size idxmax sum average size idxmax 0 one black 1031 60.647059 17 81 185.153944 10.891408 17 47 1 one white 481 37.000000 13 53 204.139249 15.703019 13 53 2 three black 822 48.352941 17 6 123.269405 7.251141 17 31 3 three white 1614 57.642857 28 50 285.638337 10.201369 28 37 4 two black 523 58.111111 9 85 80.908912 8.989879 9 88 5 two white 669 41.812500 16 78 82.098870 5.131179 16 78 [6 rows x 10 columns] How can I flatten/merge the column index levels as: \"Level1|Level2\", e.g. size|sum, scaled_size|sum. etc? If this is not possible, is there a way to groupby() as I did above without creating multi-index columns?", "output": "There are varied (i.e., more pythonic) way to flatten a MultiIndex columns into single-level columns. Use map and join with string column headers: grouped.columns = grouped.columns.map('|'.join).str.strip('|') print(grouped) Output: code colour size|sum size|average size|size size|idxmax \\ 0 one black 862 53.875000 16 14 1 one white 554 46.166667 12 18 2 three black 842 49.529412 17 90 3 three white 740 56.923077 13 97 4 two black 1541 61.640000 25 50 scaled_size|sum scaled_size|average scaled_size|size scaled_size|idxmax 0 6980 436.250000 16 77 1 6101 508.416667 12 13 2 7889 464.058824 17 64 3 6329 486.846154 13 73 4 12809 512.360000 25 23 Use map with format for column headers that have numeric data types. 
grouped.columns = grouped.columns.map('{0[0]}|{0[1]}'.format) Output: code| colour| size|sum size|average size|size size|idxmax \\ 0 one black 734 52.428571 14 30 1 one white 1110 65.294118 17 88 2 three black 930 51.666667 18 3 3 three white 1140 51.818182 22 20 4 two black 656 38.588235 17 77 5 two white 704 58.666667 12 17 scaled_size|sum scaled_size|average scaled_size|size scaled_size|idxmax 0 8229 587.785714 14 57 1 8781 516.529412 17 73 2 10743 596.833333 18 21 3 10240 465.454545 22 26 4 9982 587.176471 17 16 5 6537 544.750000 12 49 Use list comprehension with f-string for Python 3.6+: grouped.columns = [f'{i}|{j}' if j != '' else f'{i}' for i,j in grouped.columns] Output: code colour size|sum size|average size|size size|idxmax \\ 0 one black 1003 43.608696 23 76 1 one white 1255 59.761905 21 66 2 three black 777 45.705882 17 39 3 three white 630 52.500000 12 23 4 two black 823 54.866667 15 33 5 two white 491 40.916667 12 64 scaled_size|sum scaled_size|average scaled_size|size scaled_size|idxmax 0 12532 544.869565 23 27 1 13223 629.666667 21 13 2 8615 506.764706 17 92 3 6101 508.416667 12 43 4 7661 510.733333 15 42 5 6143 511.916667 12 49"} +{"question_id": 33775011, "score": 95, "creation_date": 1447835321, "tags": ["python", "django", "django-queryset"], "instruction": "How to annotate Count with a condition in a Django queryset\n\nUsing Django ORM, can one do something like queryset.objects.annotate(Count('queryset_objects', gte=VALUE)). Catch my drift? Here's a quick example to use for illustrating a possible answer: In a Django website, content creators submit articles, and regular users view (i.e. read) the said articles. Articles can either be published (i.e. available for all to read), or in draft mode. The models depicting these requirements are: class Article(models.Model): author = models.ForeignKey(User) published = models.BooleanField(default=False) class Readership(models.Model): reader = models.ForeignKey(User) which_article = models.ForeignKey(Article) what_time = models.DateTimeField(auto_now_add=True) My question is: How can I get all published articles, sorted by unique readership from the last 30 mins? I.e. I want to count how many distinct (unique) views each published article got in the last half an hour, and then produce a list of articles sorted by these distinct views. I tried: date = datetime.now()-timedelta(minutes=30) articles = Article.objects.filter(published=True).extra(select = { \"views\" : \"\"\" SELECT COUNT(*) FROM myapp_readership JOIN myapp_article on myapp_readership.which_article_id = myapp_article.id WHERE myapp_readership.reader_id = myapp_user.id AND myapp_readership.what_time > %s \"\"\" % date, }).order_by(\"-views\") This sprang the error: syntax error at or near \"01\" (where \"01\" was the datetime object inside extra). It's not much to go on.", "output": "For django >= 1.8 Use Conditional Aggregation: from django.db.models import Count, Case, When, IntegerField Article.objects.annotate( numviews=Count(Case( When(readership__what_time__lt=treshold, then=1), output_field=IntegerField(), )) ) Explanation: normal query through your articles will be annotated with numviews field. That field will be constructed as a CASE/WHEN expression, wrapped by Count, that will return 1 for readership matching criteria and NULL for readership not matching criteria. Count will ignore nulls and count only values. You will get zeros on articles that haven't been viewed recently and you can use that numviews field for sorting and filtering. 
Query behind this for PostgreSQL will be: SELECT \"app_article\".\"id\", \"app_article\".\"author\", \"app_article\".\"published\", COUNT( CASE WHEN \"app_readership\".\"what_time\" < 2015-11-18 11:04:00.000000+01:00 THEN 1 ELSE NULL END ) as \"numviews\" FROM \"app_article\" LEFT OUTER JOIN \"app_readership\" ON (\"app_article\".\"id\" = \"app_readership\".\"which_article_id\") GROUP BY \"app_article\".\"id\", \"app_article\".\"author\", \"app_article\".\"published\" If we want to track only unique queries, we can add distinction into Count, and make our When clause to return value, we want to distinct on. from django.db.models import Count, Case, When, CharField, F Article.objects.annotate( numviews=Count(Case( When(readership__what_time__lt=treshold, then=F('readership__reader')), # it can be also `readership__reader_id`, it doesn't matter output_field=CharField(), ), distinct=True) ) That will produce: SELECT \"app_article\".\"id\", \"app_article\".\"author\", \"app_article\".\"published\", COUNT( DISTINCT CASE WHEN \"app_readership\".\"what_time\" < 2015-11-18 11:04:00.000000+01:00 THEN \"app_readership\".\"reader_id\" ELSE NULL END ) as \"numviews\" FROM \"app_article\" LEFT OUTER JOIN \"app_readership\" ON (\"app_article\".\"id\" = \"app_readership\".\"which_article_id\") GROUP BY \"app_article\".\"id\", \"app_article\".\"author\", \"app_article\".\"published\" For django < 1.8 and PostgreSQL You can just use raw for executing SQL statement created by newer versions of django. Apparently there is no simple and optimized method for querying that data without using raw (even with extra there are some problems with injecting required JOIN clause). Articles.objects.raw('SELECT' ' \"app_article\".\"id\",' ' \"app_article\".\"author\",' ' \"app_article\".\"published\",' ' COUNT(' ' DISTINCT CASE WHEN \"app_readership\".\"what_time\" < 2015-11-18 11:04:00.000000+01:00 THEN \"app_readership\".\"reader_id\"' ' ELSE NULL END' ' ) as \"numviews\"' 'FROM \"app_article\" LEFT OUTER JOIN \"app_readership\"' ' ON (\"app_article\".\"id\" = \"app_readership\".\"which_article_id\")' 'GROUP BY \"app_article\".\"id\", \"app_article\".\"author\", \"app_article\".\"published\"')"} +{"question_id": 21316968, "score": 95, "creation_date": 1390503264, "tags": ["python", "python-3.x", "python-2.7", "division"], "instruction": "Integer division in Python 2 and Python 3\n\nHow can I divide two numbers in Python 2.7 and get the result with decimals? I don't get it why there is difference: in Python 3: >>> 20/15 1.3333333333333333 in Python 2: >>> 20/15 1 Isn't this a modulo actually?", "output": "In Python 2.7, the / operator is integer division if inputs are integers. If you want float division (which is something I always prefer), just use this special import: from __future__ import division See it here: >>> 7 / 2 3 >>> from __future__ import division >>> 7 / 2 3.5 >>> Integer division is achieved by using //, and modulo by using %: >>> 7 % 2 1 >>> 7 // 2 3 >>> As commented by user2357112, this import has to be done before any other normal import."} +{"question_id": 39835770, "score": 95, "creation_date": 1475510464, "tags": ["python", "pandas", "pyodbc"], "instruction": "Read data from pyodbc to pandas\n\nI am querying a SQL database and I want to use pandas to process the data. However, I am not sure how to move the data. Below is my input and output. 
import pyodbc import pandas from pandas import DataFrame cnxn = pyodbc.connect(r'DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=C:\\users\\bartogre\\desktop\\CorpRentalPivot1.accdb;UID=\"\";PWD=\"\";') crsr = cnxn.cursor() for table_name in crsr.tables(tableType='TABLE'): print(table_name) cursor = cnxn.cursor() sql = \"Select sum(CYTM), sum(PYTM), BRAND From data Group By BRAND\" cursor.execute(sql) for data in cursor.fetchall(): print (data) ('C:\\\\users\\\\bartogre\\\\desktop\\\\CorpRentalPivot1.accdb', None, 'Data', 'TABLE', None) ('C:\\\\users\\\\bartogre\\\\desktop\\\\CorpRentalPivot1.accdb', None, 'SFDB', 'TABLE', None) (Decimal('78071898.71'), Decimal('82192672.29'), 'A') (Decimal('12120663.79'), Decimal('13278814.52'), 'B')", "output": "A shorter and more concise answer import pyodbc import pandas as pd cnxn = pyodbc.connect(r'DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};' r'DBQ=C:\\users\\bartogre\\desktop\\data.mdb;') sql = \"Select sum(CYTM), sum(PYTM), BRAND From data Group By BRAND\" data = pd.read_sql(sql,cnxn) # without parameters [non-prepared statement] # with a prepared statement, use list/tuple/dictionary of parameters depending on DB #data = pd.read_sql(sql=sql, con=cnxn, params=query_params)"} +{"question_id": 75495800, "score": 95, "creation_date": 1676747963, "tags": ["python", "ffmpeg", "discord", "youtube-dl"], "instruction": "Error: Unable to extract uploader id - Youtube, Discord.py\n\nI have a very powerful bot in discord (discord.py, PYTHON) and it can play music in voice channels. It gets the music from youtube (youtube_dl). It worked perfectly before but now it doesn't want to work with any video. I tried updating youtube_dl but it still doesn't work I searched everywhere but I still can't find a answer that might help me. This is the Error: Error: Unable to extract uploader id After and before the error log there is no more information. Can anyone help? I will leave some of the code that I use for my bot... 
The youtube setup settings: youtube_dl.utils.bug_reports_message = lambda: '' ytdl_format_options = { 'format': 'bestaudio/best', 'outtmpl': '%(extractor)s-%(id)s-%(title)s.%(ext)s', 'restrictfilenames': True, 'noplaylist': True, 'nocheckcertificate': True, 'ignoreerrors': False, 'logtostderr': False, 'quiet': True, 'no_warnings': True, 'default_search': 'auto', 'source_address': '0.0.0.0', # bind to ipv4 since ipv6 addresses cause issues sometimes } ffmpeg_options = { 'options': '-vn', } ytdl = youtube_dl.YoutubeDL(ytdl_format_options) class YTDLSource(discord.PCMVolumeTransformer): def __init__(self, source, *, data, volume=0.5): super().__init__(source, volume) self.data = data self.title = data.get('title') self.url = data.get('url') self.duration = data.get('duration') self.image = data.get(\"thumbnails\")[0][\"url\"] @classmethod async def from_url(cls, url, *, loop=None, stream=False): loop = loop or asyncio.get_event_loop() data = await loop.run_in_executor(None, lambda: ytdl.extract_info(url, download=not stream)) #print(data) if 'entries' in data: # take first item from a playlist data = data['entries'][0] #print(data[\"thumbnails\"][0][\"url\"]) #print(data[\"duration\"]) filename = data['url'] if stream else ytdl.prepare_filename(data) return cls(discord.FFmpegPCMAudio(filename, **ffmpeg_options), data=data) Approximately the command to run the audio (from my bot): sessionChanel = message.author.voice.channel await sessionChannel.connect() url = matched.group(1) player = await YTDLSource.from_url(url, loop=client.loop, stream=True) sessionChannel.guild.voice_client.play(player, after=lambda e: print( f'Player error: {e}') if e else None)", "output": "This is a known issue, fixed in Master. For a temporary fix, python3 -m pip install --force-reinstall https://github.com/yt-dlp/yt-dlp/archive/master.tar.gz This installs tha master version. Run it through the command-line yt-dlp URL where URL is the URL of the video you want. See yt-dlp --help for all options. It should just work without errors. If you're using it as a module, import yt_dlp as youtube_dl might fix your problems (though there could be API changes that break your code; I don't know which version of yt_dlp you were using etc)."} +{"question_id": 28256761, "score": 95, "creation_date": 1422739947, "tags": ["python", "pandas"], "instruction": "select pandas rows by excluding index number\n\nNot quite sure why I can't figure this out. I'm looking to slice a Pandas dataframe by using index numbers. I have a list/core index with the index numbers that i do NOT need, shown below pandas.core.index.Int64Index Int64Index([2340, 4840, 3163, 1597, 491 , 5010, 911 , 3085, 5486, 5475, 1417, 2663, 4204, 156 , 5058, 1990, 3200, 1218, 3280, 793 , 824 , 3625, 1726, 1971, 2845, 4668, 2973, 3039, 376 , 4394, 3749, 1610, 3892, 2527, 324 , 5245, 696 , 1239, 4601, 3219, 5138, 4832, 4762, 1256, 4437, 2475, 3732, 4063, 1193], dtype=int64) How can I create a new dataframe excluding these index numbers. I tried df.iloc[combined_index] and obviously this just shows the rows with those index number (the opposite of what I want). 
any help will be greatly appreciated", "output": "Not sure if that's what you are looking for, posting this as an answer, because it's too long for a comment: In [31]: d = {'a':[1,2,3,4,5,6], 'b':[1,2,3,4,5,6]} In [32]: df = pd.DataFrame(d) In [33]: bad_df = df.index.isin([3,5]) In [34]: df[~bad_df] Out[34]: a b 0 1 1 1 2 2 2 3 3 4 5 5"} +{"question_id": 4309684, "score": 95, "creation_date": 1291079497, "tags": ["python"], "instruction": "Split a string with unknown number of spaces as separator in Python\n\nI need a function similar to str.split(' ') but there might be more than one space, and different number of them between the meaningful characters. Something like this: s = ' 1234 Q-24 2010-11-29 563 abc a6G47er15 ' ss = s.magic_split() print(ss) # ['1234', 'Q-24', '2010-11-29', '563', 'abc', 'a6G47er15'] Can I somehow use regular expressions to catch those spaces in between?", "output": "If you don't pass any arguments to str.split(), it will treat runs of whitespace as a single separator: >>> ' 1234 Q-24 2010-11-29 563 abc a6G47er15'.split() ['1234', 'Q-24', '2010-11-29', '563', 'abc', 'a6G47er15']"} +{"question_id": 63329657, "score": 95, "creation_date": 1596996184, "tags": ["python", "pickle"], "instruction": "Python 3.7 Error: Unsupported Pickle Protocol 5\n\nI'm trying to restore a pickled config file from RLLib (json didn't work as shown in this post), and getting the following error: config = pickle.load(open(f\"{path}/params.pkl\", \"rb\")) --------------------------------------------------------------------------- ValueError Traceback (most recent call last) in ----> 1 config = pickle.load(open(f\"{path}/params.pkl\", \"rb\")) ValueError: unsupported pickle protocol: 5 Python Version = 3.7.0 How can I open this file in 3.7?", "output": "Use pickle5 or load it into python 3.8+ and then serialize it to a lower version of it using the protocol parameter."} +{"question_id": 7247298, "score": 95, "creation_date": 1314725341, "tags": ["python", "python-3.x", "list"], "instruction": "Size of a Python list in memory\n\nI just experimented with the size of python data structures in memory. I wrote the following snippet: import sys lst1=[] lst1.append(1) lst2=[1] print(sys.getsizeof(lst1), sys.getsizeof(lst2)) I got the following outputs on the following configurations: Windows 7 64bit, Python3.1: 52 40 (so lst1 has 52 bytes and lst2 has 40 bytes) Ubuntu 11.4 32bit with Python3.2: output is 48 32 Ubuntu 11.4 32bit Python2.7: 48 36 Can anyone explain to me why the two sizes differ although both are lists containing a 1? In the python documentation for the getsizeof function I found the following: ...adds an additional garbage collector overhead if the object is managed by the garbage collector. Could this be the case in my little example?", "output": "Here's a fuller interactive session that will help me explain what's going on (Python 2.6 on Windows XP 32-bit, but it doesn't matter really): >>> import sys >>> sys.getsizeof([]) 36 >>> sys.getsizeof([1]) 40 >>> lst = [] >>> lst.append(1) >>> sys.getsizeof(lst) 52 >>> Note that the empty list is a bit smaller than the one with [1] in it. When an element is appended, however, it grows much larger. The reason for this is the implementation details in Objects/listobject.c, in the source of CPython. Empty list When an empty list [] is created, no space for elements is allocated - this can be seen in PyList_New. 36 bytes is the amount of space required for the list data structure itself on a 32-bit machine. 
List with one element When a list with a single element [1] is created, space for one element is allocated in addition to the memory required by the list data structure itself. Again, this can be found in PyList_New. Given size as argument, it computes: nbytes = size * sizeof(PyObject *); And then has: if (size <= 0) op->ob_item = NULL; else { op->ob_item = (PyObject **) PyMem_MALLOC(nbytes); if (op->ob_item == NULL) { Py_DECREF(op); return PyErr_NoMemory(); } memset(op->ob_item, 0, nbytes); } Py_SIZE(op) = size; op->allocated = size; So we see that with size = 1, space for one pointer is allocated. 4 bytes (on my 32-bit box). Appending to an empty list When calling append on an empty list, here's what happens: PyList_Append calls app1 app1 asks for the list's size (and gets 0 as an answer) app1 then calls list_resize with size+1 (1 in our case) list_resize has an interesting allocation strategy, summarized in this comment from its source. Here it is: /* This over-allocates proportional to the list size, making room * for additional growth. The over-allocation is mild, but is * enough to give linear-time amortized behavior over a long * sequence of appends() in the presence of a poorly-performing * system realloc(). * The growth pattern is: 0, 4, 8, 16, 25, 35, 46, 58, 72, 88, ... */ new_allocated = (newsize >> 3) + (newsize < 9 ? 3 : 6); /* check for integer overflow */ if (new_allocated > PY_SIZE_MAX - newsize) { PyErr_NoMemory(); return -1; } else { new_allocated += newsize; } Let's do some math Let's see how the numbers I quoted in the session in the beginning of my article are reached. So 36 bytes is the size required by the list data structure itself on 32-bit. With a single element, space is allocated for one pointer, so that's 4 extra bytes - total 40 bytes. OK so far. When app1 is called on an empty list, it calls list_resize with size=1. According to the over-allocation algorithm of list_resize, the next largest available size after 1 is 4, so place for 4 pointers will be allocated. 4 * 4 = 16 bytes, and 36 + 16 = 52. Indeed, everything makes sense :-)"} +{"question_id": 30384995, "score": 95, "creation_date": 1432245075, "tags": ["python", "scikit-learn", "random-forest"], "instruction": "RandomForestClassfier.fit(): ValueError: could not convert string to float\n\nGiven is a simple CSV file: A,B,C Hello,Hi,0 Hola,Bueno,1 Obviously the real dataset is far more complex than this, but this one reproduces the error. I'm attempting to build a random forest classifier for it, like so: cols = ['A','B','C'] col_types = {'A': str, 'B': str, 'C': int} test = pd.read_csv('test.csv', dtype=col_types) train_y = test['C'] == 1 train_x = test[cols] clf_rf = RandomForestClassifier(n_estimators=50) clf_rf.fit(train_x, train_y) But I just get this traceback when invoking fit(): ValueError: could not convert string to float: 'Bueno' scikit-learn version is 0.16.1.", "output": "You have to do some encoding before using fit(). As it was told fit() does not accept strings, but you solve this. There are several classes that can be used : LabelEncoder : turn your string into incremental value OneHotEncoder : use One-of-K algorithm to transform your String into integer Personally, I have post almost the same question on Stack Overflow some time ago. I wanted to have a scalable solution, but didn't get any answer. I selected OneHotEncoder that binarize all the strings. 
It is quite effective, but if you have a lot of different strings the matrix will grow very quickly and memory will be required."} +{"question_id": 10822635, "score": 95, "creation_date": 1338404782, "tags": ["python", "sql", "sqlalchemy"], "instruction": "Get the number of rows in table using SQLAlchemy\n\nI am using SQLAlchemy in Python, and I want to know how to get the total number of rows in a column. I have variables defined: engine = sqlalchemy.create_engine(url, ehco=False) Session = sqlalchemy.orm.sessionmaker(bind=engine) Session = session() metadata = sqlalchemy.MetaData(engine) Base = declarative_base(metadata=metadata) # A class representing the shape_congress_districts_2012 table class Congress(Base): __tablename__ = 'shape_congress_districts_2012' id = geoalchemy.Column(sqlalchemy.Integer, primary_key=True) name = geoalchemy.Column(sqlalchemy.Unicode) geom = geoalchemy.GeometryColumn(geoalchemy.Polygon(2)) geom_simple = geoalchemy.GeometryColumn(geoalchemy.Polygon(2)) area = geoalchemy.Column(sqlalchemy.Float) state_id = geoalchemy.Column(sqlalchemy.Integer) census_year = geoalchemy.Column(sqlalchemy.Date) geoalchemy.GeometryDDL(Congress.__table__) I want to determine the total number of rows in the table without having to wait a whole bunch of time querying the database. Currently, I have a bit of code: rows = session.query(Congress).all() Then I can access them from the list, but this requires me to load everything into memory at once.", "output": "This should work rows = session.query(Congress).count() Another way related to my first try: from sqlalchemy import func rows = session.query(func.count(Congress.id)).scalar()"} +{"question_id": 68166721, "score": 95, "creation_date": 1624896766, "tags": ["python", "pytorch", "google-colaboratory", "tensor"], "instruction": "Pytorch fails with CUDA error: device-side assert triggered on Colab\n\nI am trying to initialize a tensor on Google Colab with GPU enabled. device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') t = torch.tensor([1,2], device=device) But I am getting this strange error. RuntimeError: CUDA error: device-side assert triggered CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1 Even by setting that environment variable to 1 seems not showing any further details. Anyone ever had this issue?", "output": "While I tried your code, and it did not give me an error, I can say that usually the best practice to debug CUDA Runtime Errors: device-side assert like yours is to turn collab to CPU and recreate the error. It will give you a more useful traceback error. Most of the time CUDA Runtime Errors can be the cause of some index mismatching so like you tried to train a network with 10 output nodes on a dataset with 15 labels. And the thing with this CUDA error is once you get this error once, you will recieve it for every operation you do with torch.tensors. This forces you to restart your notebook. I suggest you restart your notebook, get a more accuracate traceback by moving to CPU, and check the rest of your code especially if you train a model on set of targets somewhere. To gain a clearer insight into the typical utilization of GPUs in PyTorch applications, I recommend exploring deep learning projects on GitHub. Websites such as repo-rift.com can be particularly useful for this purpose. They allow you to perform text searches with queries like \"How does this paper use GPU\". 
This can help you pinpoint the exact usage of CUDA in specific lines of code within extensive repositories."} +{"question_id": 61463224, "score": 95, "creation_date": 1588003721, "tags": ["python", "python-3.x", "error-handling", "python-requests"], "instruction": "When to use `raise_for_status` vs `status_code` testing\n\nI have always used: r = requests.get(url) if r.status_code == 200: # my passing code else: # anything else, if this even exists Now I was working on another issue and decided to allow for other errors and am instead now using: try: r = requests.get(url) r.raise_for_status() except requests.exceptions.ConnectionError as err: # eg, no internet raise SystemExit(err) except requests.exceptions.HTTPError as err: # eg, url, server and other errors raise SystemExit(err) # the rest of my code is going here With the exception that various other errors could be tested for at this level, is one method any better than the other?", "output": "Response.raise_for_status() is just a built-in method for checking status codes and does essentially the same thing as your first example. There is no \"better\" here, just about personal preference with flow control. My preference is toward try/except blocks for catching errors in any call, as this informs the future programmer that these conditions are some sort of error. If/else doesn't necessarily indicate an error when scanning code. Edit: Here's my quick-and-dirty pattern. import time from http import HTTPStatus import requests from requests.exceptions import HTTPError url = \"https://theurl.com\" retries = 3 retry_codes = [ HTTPStatus.TOO_MANY_REQUESTS, HTTPStatus.INTERNAL_SERVER_ERROR, HTTPStatus.BAD_GATEWAY, HTTPStatus.SERVICE_UNAVAILABLE, HTTPStatus.GATEWAY_TIMEOUT, ] for n in range(retries): try: response = requests.get(url) response.raise_for_status() break except HTTPError as exc: code = exc.response.status_code if code in retry_codes: # retry after n seconds time.sleep(n) continue raise However, in most scenarios, I subclass requests.Session, make a custom HTTPAdapter that handles exponential backoffs, and the above lives in an overridden requests.Session.request method. An example of that can be seen here."} +{"question_id": 8746908, "score": 95, "creation_date": 1325784687, "tags": ["python", "csv", "dictionary"], "instruction": "Why does CSV file contain a blank line in between each data line when outputting with Dictwriter in Python\n\nI am using DictWriter to output data in a dictionary to a csv file. Why does the CSV file have a blank line in between each data line? It's not a huge deal, but my dataset is big and doesn't fit into one csv file because it has too many lines since the \"double-spacing\" doubles the number of lines in the file. My code for writing to the dictionary is: headers=['id', 'year', 'activity', 'lineitem', 'datum'] output = csv.DictWriter(open('file3.csv','w'), delimiter=',', fieldnames=headers) output.writerow(dict((fn,fn) for fn in headers)) for row in rows: output.writerow(row)", "output": "By default, the classes in the csv module use Windows-style line terminators (\\r\\n) rather than Unix-style (\\n). Could this be what\u2019s causing the apparent double line breaks? 
If so, in python 2 you can override it in the DictWriter constructor: output = csv.DictWriter(open('file3.csv','w'), delimiter=',', lineterminator='\\n', fieldnames=headers)"} +{"question_id": 30245397, "score": 95, "creation_date": 1431630564, "tags": ["python", "list", "python-2.7", "python-3.x", "list-comprehension"], "instruction": "Why is a list comprehension so much faster than appending to a list?\n\nI was wondering why list comprehension is so much faster than appending to a list. I thought the difference is just expressive, but it's not. >>> import timeit >>> timeit.timeit(stmt='''\\ t = [] for i in range(10000): t.append(i)''', number=10000) 9.467898777974142 >>> timeit.timeit(stmt='t= [i for i in range(10000)]', number=10000) 4.1138417314859 The list comprehension is 50% faster. Why?", "output": "List comprehension is basically just a \"syntactic sugar\" for the regular for loop. In this case the reason that it performs better is because it doesn't need to load the append attribute of the list and call it as a function at each iteration. In other words and in general, list comprehensions perform faster because suspending and resuming a function's frame, or multiple functions in other cases, is slower than creating a list on demand. Consider the following examples : In [1]: def f1(): ...: l = [] ...: for i in range(5): ...: l.append(i) ...: ...: ...: def f2(): ...: [i for i in range(5)] ...: In [3]: import dis In [4]: dis.dis(f1) 2 0 BUILD_LIST 0 2 STORE_FAST 0 (l) 3 4 LOAD_GLOBAL 0 (range) 6 LOAD_CONST 1 (5) 8 CALL_FUNCTION 1 10 GET_ITER >> 12 FOR_ITER 14 (to 28) 14 STORE_FAST 1 (i) 4 16 LOAD_FAST 0 (l) 18 LOAD_METHOD 1 (append) 20 LOAD_FAST 1 (i) 22 CALL_METHOD 1 24 POP_TOP 26 JUMP_ABSOLUTE 12 >> 28 LOAD_CONST 0 (None) 30 RETURN_VALUE In [5]: In [5]: dis.dis(f2) 8 0 LOAD_CONST 1 ( at 0x7f397abc0d40, file \"\", line 8>) 2 LOAD_CONST 2 ('f2..') 4 MAKE_FUNCTION 0 6 LOAD_GLOBAL 0 (range) 8 LOAD_CONST 3 (5) 10 CALL_FUNCTION 1 12 GET_ITER 14 CALL_FUNCTION 1 16 POP_TOP 18 LOAD_CONST 0 (None) 20 RETURN_VALUE Disassembly of at 0x7f397abc0d40, file \"\", line 8>: 8 0 BUILD_LIST 0 2 LOAD_FAST 0 (.0) >> 4 FOR_ITER 8 (to 14) 6 STORE_FAST 1 (i) 8 LOAD_FAST 1 (i) 10 LIST_APPEND 2 12 JUMP_ABSOLUTE 4 >> 14 RETURN_VALUE In [6]: You can see that on offset 18 in the first function we have an append attribute while there's no such thing in second function using list comprehension. All those extra bytecodes will make the appending approach slower and since in this case you'll have loading of the append attribute in each iteration, in the end it will make the code to take approximately twice as slower as the second function using only list comprehension."} +{"question_id": 48709104, "score": 95, "creation_date": 1518190191, "tags": ["python", "arguments", "python-typing"], "instruction": "How do I specify multiple types for a parameter using type-hints?\n\nI have a Python function which accepts XML data as an str. For convenience, the function also checks for xml.etree.ElementTree.Element and will automatically convert to str if necessary. import xml.etree.ElementTree as ET def post_xml(data: str): if type(data) is ET.Element: data = ET.tostring(data).decode() # ... Is it possible to specify with type-hints that a parameter can be given as one of two types? def post_xml(data: str or ET.Element): # ...", "output": "You want a type union. 
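(Keep in mind the union is only a hint for type checkers; at runtime you still branch yourself, e.g. if isinstance(data, ET.Element): data = ET.tostring(data).decode() as in the original function.)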
For Python 3.10+: def post_xml(data: str | ET.Element): For earlier versions: from typing import Union def post_xml(data: Union[str, ET.Element]): ..."} +{"question_id": 5656798, "score": 95, "creation_date": 1302736434, "tags": ["python", "matplotlib"], "instruction": "Is there a way to make a discontinuous axis in Matplotlib?\n\nI'm trying to create a plot using pyplot that has a discontinuous x-axis. The usual way this is drawn is that the axis will have something like this: (values)----//----(later values) where the // indicates that you're skipping everything between (values) and (later values). I haven't been able to find any examples of this, so I'm wondering if it's even possible. I know you can join data over a discontinuity for, eg, financial data, but I'd like to make the jump in the axis more explicit. At the moment I'm just using subplots but I'd really like to have everything end up on the same graph in the end.", "output": "Paul's answer is a perfectly fine method of doing this. However, if you don't want to make a custom transform, you can just use two subplots to create the same effect. Rather than put together an example from scratch, there's an excellent example of this written by Paul Ivanov in the matplotlib examples (It's only in the current git tip, as it was only committed a few months ago. It's not on the webpage yet.). This is just a simple modification of this example to have a discontinuous x-axis instead of the y-axis. (Which is why I'm making this post a CW) Basically, you just do something like this: import matplotlib.pylab as plt import numpy as np # If you're not familiar with np.r_, don't worry too much about this. It's just # a series with points from 0 to 1 spaced at 0.1, and 9 to 10 with the same spacing. x = np.r_[0:1:0.1, 9:10:0.1] y = np.sin(x) fig,(ax,ax2) = plt.subplots(1, 2, sharey=True) # plot the same data on both axes ax.plot(x, y, 'bo') ax2.plot(x, y, 'bo') # zoom-in / limit the view to different portions of the data ax.set_xlim(0,1) # most of the data ax2.set_xlim(9,10) # outliers only # hide the spines between ax and ax2 ax.spines['right'].set_visible(False) ax2.spines['left'].set_visible(False) ax.yaxis.tick_left() ax.tick_params(labeltop='off') # don't put tick labels at the top ax2.yaxis.tick_right() # Make the spacing between the two axes a bit smaller plt.subplots_adjust(wspace=0.15) plt.show() To add the broken axis lines // effect, we can do this (again, modified from Paul Ivanov's example): import matplotlib.pylab as plt import numpy as np # If you're not familiar with np.r_, don't worry too much about this. It's just # a series with points from 0 to 1 spaced at 0.1, and 9 to 10 with the same spacing. x = np.r_[0:1:0.1, 9:10:0.1] y = np.sin(x) fig,(ax,ax2) = plt.subplots(1, 2, sharey=True) # plot the same data on both axes ax.plot(x, y, 'bo') ax2.plot(x, y, 'bo') # zoom-in / limit the view to different portions of the data ax.set_xlim(0,1) # most of the data ax2.set_xlim(9,10) # outliers only # hide the spines between ax and ax2 ax.spines['right'].set_visible(False) ax2.spines['left'].set_visible(False) ax.yaxis.tick_left() ax.tick_params(labeltop='off') # don't put tick labels at the top ax2.yaxis.tick_right() # Make the spacing between the two axes a bit smaller plt.subplots_adjust(wspace=0.15) # This looks pretty good, and was fairly painless, but you can get that # cut-out diagonal lines look with just a bit more work. 
The important # thing to know here is that in axes coordinates, which are always # between 0-1, spine endpoints are at these locations (0,0), (0,1), # (1,0), and (1,1). Thus, we just need to put the diagonals in the # appropriate corners of each of our axes, and so long as we use the # right transform and disable clipping. d = .015 # how big to make the diagonal lines in axes coordinates # arguments to pass plot, just so we don't keep repeating them kwargs = dict(transform=ax.transAxes, color='k', clip_on=False) ax.plot((1-d,1+d),(-d,+d), **kwargs) # top-left diagonal ax.plot((1-d,1+d),(1-d,1+d), **kwargs) # bottom-left diagonal kwargs.update(transform=ax2.transAxes) # switch to the bottom axes ax2.plot((-d,d),(-d,+d), **kwargs) # top-right diagonal ax2.plot((-d,d),(1-d,1+d), **kwargs) # bottom-right diagonal # What's cool about this is that now if we vary the distance between # ax and ax2 via f.subplots_adjust(hspace=...) or plt.subplot_tool(), # the diagonal lines will move accordingly, and stay right at the tips # of the spines they are 'breaking' plt.show()"} +{"question_id": 24283171, "score": 95, "creation_date": 1403080506, "tags": ["r", "python"], "instruction": "Virtual environment in R?\n\nI've found several posts about best practice, reproducibility and workflow in R, for example: How to increase longer term reproducibility of research (particularly using R and Sweave) Complete substantive examples of reproducible research using R One of the major preoccupations is ensuring portability of code, in the sense that moving it to a new machine (possibly running a different OS) is relatively straightforward and gives the same results. Coming from a Python background, I'm used to the concept of a virtual environment. When coupled with a simple list of required packages, this goes some way to ensuring that the installed packages and libraries are available on any machine without too much fuss. Sure, it's no guarantee - different OSes have their own foibles and peculiarities - but it gets you 95% of the way there. Does such a thing exist within R? Even if it's not as sophisticated. For example simply maintaining a plain text list of required packages and a script that will install any that are missing? I'm about to start using R in earnest for the first time, probably in conjunction with Sweave, and would ideally like to start in the best way possible! Thanks for your thoughts.", "output": "It looks like there is another option from RStudio devs, renv. It's available on CRAN and supersedes Packrat. In short, you use renv::init() to initialize your project library, and use renv::snapshot() / renv::restore() to save and load the state of your library. I prefer this option to conda r-enviroments because here everything is stored in the file renv.lock, which can be committed to a Git repo and distributed to the team."} +{"question_id": 18862836, "score": 94, "creation_date": 1379470614, "tags": ["python", "argparse"], "instruction": "How to open file using argparse?\n\nI want to open file for reading using argparse. In cmd it must look like: my_program.py /filepath That's my try: parser = argparse.ArgumentParser() parser.add_argument('file', type = file) args = parser.parse_args() This gives me [edit: added later from comment by OP]: parser.add_argument('file', type = file) NameError: name 'file' is not defined", "output": "The type of the argument should be string (which is default anyway). 
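(As an aside - and only if you do want argparse to open the file for you - the module also ships argparse.FileType, e.g. parser.add_argument('file', type=argparse.FileType('r')), which hands your code an already-open file object. Keeping the argument as a plain string and opening it yourself, as below, is often the clearer option.)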
So make it like this: parser = argparse.ArgumentParser() parser.add_argument('filename') args = parser.parse_args() with open(args.filename) as file: # do stuff here"} +{"question_id": 37642045, "score": 94, "creation_date": 1465130600, "tags": ["python", "visual-studio-code", "virtualenv"], "instruction": "Use virtualenv with Python with Visual Studio Code\n\nI have a Python project and navigation/autocomplete work perfectly within files in the project. I want to specify a virtualenv so that navigation/autocomplete work with packages inside that virtualenv. I tried this in settings.py, but navigation/autocomplete does not work. Also by setting \"python.pythonPath\":\"~/dev/venvs/proj/bin/python killed the original navigation/autocomplete. { \"python.autoComplete.extraPaths\": [ \"~/dev/venvs/proj\", \"~/dev/venvs/proj/lib\" ] }", "output": "With the latest update to the extension all you need to do is just specify the \"python.pythonPath\" as follows. The values for \"python.autoComplete.extraPaths\" will be determined during runtime, but you are still free to specify custom paths in there. Please, remember to restart Visual Studio Code once the necessary changes have been made. { \"python.pythonPath\": \"~/dev/venvs/proj/bin/python\" }"} +{"question_id": 3933197, "score": 94, "creation_date": 1287060241, "tags": ["python", "variables"], "instruction": "How to declare variable type, C style in Python\n\nI'm a programming student and my teacher is starting with C to teach us the programming paradigms, he said it's ok if I deliver my homework in python (it's easier and faster for the homeworks). And I would like to have my code to be as close as possible as in plain C. Question is: How do I declare data types for variables in python like you do in C. ex: int X,Y,Z; I know I can do this in python: x = 0 y = 0 z = 0 But that seems a lot of work and it misses the point of python being easier/faster than C. So, whats the shortest way to do this? P.S. I know you don't have to declare the data type in python most of the time, but still I would like to do it so my code looks as much possible like classmates'.", "output": "Edit: Python 3.5 introduced type hints which introduced a way to specify the type of a variable. This answer was written before this feature became available. There is no way to declare variables in Python, since neither \"declaration\" nor \"variables\" in the C sense exist. This will bind the three names to the same object: x = y = z = 0"} +{"question_id": 12860421, "score": 94, "creation_date": 1350049427, "tags": ["python", "pandas", "pivot-table"], "instruction": "How to aggregate unique count with pandas pivot_table\n\nThis code: df2 = ( pd.DataFrame({ 'X' : ['X1', 'X1', 'X1', 'X1'], 'Y' : ['Y2', 'Y1', 'Y1', 'Y1'], 'Z' : ['Z3', 'Z1', 'Z1', 'Z2'] }) ) g = df2.groupby('X') pd.pivot_table(g, values='X', rows='Y', cols='Z', margins=False, aggfunc='count') returns the following error: Traceback (most recent call last): ... AttributeError: 'Index' object has no attribute 'index' How do I get a Pivot Table with counts of unique values of one DataFrame column for two other columns? Is there aggfunc for count unique? Should I be using np.bincount()? NB. I am aware of pandas.Series.values_counts() however I need a pivot table. EDIT: The output should be: Z Z1 Z2 Z3 Y Y1 1 1 NaN Y2 NaN NaN 1", "output": "Do you mean something like this? 
>>> df2.pivot_table(values='X', index='Y', columns='Z', aggfunc=lambda x: len(x.unique())) Z Z1 Z2 Z3 Y Y1 1 1 NaN Y2 NaN NaN 1 Note that using len assumes you don't have NAs in your DataFrame. You can do x.value_counts().count() or len(x.dropna().unique()) otherwise."} +{"question_id": 19591458, "score": 94, "creation_date": 1382708352, "tags": ["python", "python-2.7", "utf-8"], "instruction": "Python reading from a file and saving to utf-8\n\nI'm having problems reading from a file, processing its string and saving to an UTF-8 File. Here is the code: try: filehandle = open(filename,\"r\") except: print(\"Could not open file \" + filename) quit() text = filehandle.read() filehandle.close() I then do some processing on the variable text. And then try: writer = open(output,\"w\") except: print(\"Could not open file \" + output) quit() #data = text.decode(\"iso 8859-15\") #writer.write(data.encode(\"UTF-8\")) writer.write(text) writer.close() This output the file perfectly but it does so in iso 8859-15 according to my editor. Since the same editor recognizes the input file (in the variable filename) as UTF-8 I don't know why this happened. As far as my reasearch has shown the commented lines should solve the problem. However when I use those lines the resulting file has gibberish in special character mainly, words with tilde as the text is in spanish. I would really appreciate any help as I am stumped....", "output": "Process text to and from Unicode at the I/O boundaries of your program using open with the encoding parameter. Make sure to use the (hopefully documented) encoding of the file being read. The default encoding varies by OS (specifically, locale.getpreferredencoding(False) is the encoding used), so I recommend always explicitly using the encoding parameter for portability and clarity (Python 3 syntax below): with open(filename, 'r', encoding='utf8') as f: text = f.read() # process Unicode text with open(filename, 'w', encoding='utf8') as f: f.write(text) If still using Python 2 or for Python 2/3 compatibility, the io module implements open with the same semantics as Python 3's open and exists in both versions: import io with io.open(filename, 'r', encoding='utf8') as f: text = f.read() # process Unicode text with io.open(filename, 'w', encoding='utf8') as f: f.write(text)"} +{"question_id": 33365055, "score": 94, "creation_date": 1445940638, "tags": ["python", "pandas", "datetime"], "instruction": "AttributeError: Can only use .dt accessor with datetimelike values\n\nHi I am using pandas to convert a column to month. 
When I read my data they are objects: Date object dtype: object So I am first making them to date time and then try to make them as months: import pandas as pd file = '/pathtocsv.csv' df = pd.read_csv(file, sep = ',', encoding='utf-8-sig', usecols= ['Date', 'ids']) df['Date'] = pd.to_datetime(df['Date']) df['Month'] = df['Date'].dt.month Also if that helps: In [10]: df['Date'].dtype Out[10]: dtype('O') So, the error I get is like this: /Library/Frameworks/Python.framework/Versions/2.7/bin/User/lib/python2.7/site-packages/pandas/core/series.pyc in _make_dt_accessor(self) 2526 return maybe_to_datetimelike(self) 2527 except Exception: -> 2528 raise AttributeError(\"Can only use .dt accessor with datetimelike \" 2529 \"values\") 2530 AttributeError: Can only use .dt accessor with datetimelike values EDITED: Date columns are like this: 0 2014-01-01 1 2014-01-01 2 2014-01-01 3 2014-01-01 4 2014-01-03 5 2014-01-03 6 2014-01-03 7 2014-01-07 8 2014-01-08 9 2014-01-09 Do you have any ideas? Thank you very much!", "output": "Your problem here is that to_datetime silently failed so the dtype remained as str/object, if you set param errors='coerce' then if the conversion fails for any particular string then those rows are set to NaT. df['Date'] = pd.to_datetime(df['Date'], errors='coerce') So you need to find out what is wrong with those specific row values. See the docs"} +{"question_id": 12468022, "score": 94, "creation_date": 1347921330, "tags": ["python", "file-io", "filesystems", "queue", "cluster-computing"], "instruction": "\"OSError: [Errno 17] File exists\" when trying to use os.makedirs\n\nI have several threads running in parallel from Python on a cluster system. Each python thread outputs to a directory mydir. Each script, before outputting checks if mydir exists and if not creates it: if not os.path.isdir(mydir): os.makedirs(mydir) but this yields the error: os.makedirs(self.log_dir) File \"/usr/lib/python2.6/os.py\", line 157, in makedirs mkdir(name,mode) OSError: [Errno 17] File exists I suspect it might be due to a race condition, where one job creates the dir before the other gets to it. Is this possible? If so, how can this error be avoided? I'm not sure it's a race condition so was wondering if other issues in Python can cause this odd error.", "output": "Any time code can execute between when you check something and when you act on it, you will have a race condition. One way to avoid this (and the usual way in Python) is to just try and then handle the exception while True: mydir = next_dir_name() try: os.makedirs(mydir) break except OSError, e: if e.errno != errno.EEXIST: raise # time.sleep might help here pass If you have a lot of threads trying to make a predictable series of directories this will still raise a lot of exceptions, but you will get there in the end. Better to just have one thread creating the dirs in that case"} +{"question_id": 22402548, "score": 94, "creation_date": 1394793211, "tags": ["python", "user-input", "default-value"], "instruction": "How to define default value if empty user input in Python?\n\nHere I have to set the default value if the user will enter the value from the keyboard. Here is the code that user can enter value: input = int(raw_input(\"Enter the inputs : \")) Here the value will be assigned to a variable input after entering the value and hitting Enter. 
Is there any method that if we don't enter the value and directly hit the Enter key, the variable will be directly assigned to a default value, say as input = 0.025?", "output": "Python 3: inp = int(input('Enter the inputs: ').strip() or \"42\") Python 2: inp = int(raw_input('Enter the inputs: ').strip() or \"42\") How does it work? If nothing was entered then input/raw_input returns empty string. Empty string in Python is False, bool(\"\") -> False. Operator or returns first truthy value, which in this case is \"42\". This is not sophisticated input validation. It's slightly more robust with the addition of .strip(). That handles the possibility of the user entering spaces or other whitespace, which then would be True."} +{"question_id": 26266437, "score": 94, "creation_date": 1412802341, "tags": ["python", "linux", "django", "centos", "pip"], "instruction": "how to use python2.7 pip instead of default pip\n\nI just installed python 2.7 and also pip to the 2.7 site package. When I get the version with: pip -V It shows: pip 1.3.1 from /usr/lib/python2.6/site-packages (python 2.6) How do I use the 2.7 version of pip located at: /usr/local/lib/python2.7/site-packages", "output": "There should be a binary called \"pip2.7\" installed at some location included within your $PATH variable. You can find that out by typing which pip2.7 This should print something like '/usr/local/bin/pip2.7' to your stdout. If it does not print anything like this, it is not installed. In that case, install it by running $ wget https://bootstrap.pypa.io/pip/2.7/get-pip.py $ sudo python2.7 get-pip.py Now, you should be all set, and which pip2.7 should return the correct output."} +{"question_id": 11768214, "score": 94, "creation_date": 1343858380, "tags": ["python", "download", "ftp", "python-requests"], "instruction": "Python: download a file from an FTP server\n\nI'm trying to download some public data files. I screenscrape to get the links to the files, which all look something like this: ftp://ftp.cdc.gov/pub/Health_Statistics/NCHS/nhanes/2001-2002/L28POC_B.xpt I can't find any documentation on the Requests library website.", "output": "The requests library doesn't support ftp:// links. To download a file from an FTP server you could use urlretrieve: import urllib.request urllib.request.urlretrieve('ftp://server/path/to/file', 'file') # if you need to pass credentials: # urllib.request.urlretrieve('ftp://username:password@server/path/to/file', 'file') Or urlopen: import shutil import urllib.request from contextlib import closing with closing(urllib.request.urlopen('ftp://server/path/to/file')) as r: with open('file', 'wb') as f: shutil.copyfileobj(r, f) Python 2: import shutil import urllib2 from contextlib import closing with closing(urllib2.urlopen('ftp://server/path/to/file')) as r: with open('file', 'wb') as f: shutil.copyfileobj(r, f)"} +{"question_id": 47518874, "score": 94, "creation_date": 1511812737, "tags": ["python", "jupyter-notebook", "jupyter", "python-asyncio"], "instruction": "How do I run Python asyncio code in a Jupyter notebook?\n\nI have some asyncio code which runs fine in the Python interpreter (CPython 3.6.2). I would now like to run this inside a Jupyter notebook with an IPython kernel. I can run it with import asyncio asyncio.get_event_loop().run_forever() and while that seems to work it also seems to block the notebook and doesn't seem to play nice with the notebook. 
My understanding is that Jupyter uses Tornado under the hood so I tried to install a Tornado event loop as recommended in the Tornado docs: from tornado.platform.asyncio import AsyncIOMainLoop AsyncIOMainLoop().install() However that gives the following error: --------------------------------------------------------------------------- AssertionError Traceback (most recent call last) in () 1 from tornado.platform.asyncio import AsyncIOMainLoop ----> 2 AsyncIOMainLoop().install() ~\\AppData\\Local\\Continuum\\Anaconda3\\envs\\numismatic\\lib\\site- packages\\tornado\\ioloop.py in install(self) 179 `IOLoop` (e.g., :class:`tornado.httpclient.AsyncHTTPClient`). 180 \"\"\" --> 181 assert not IOLoop.initialized() 182 IOLoop._instance = self 183 AssertionError: Finally I found the following page: http://ipywidgets.readthedocs.io/en/stable/examples/Widget%20Asynchronous.html so I added a cell with the following code: import asyncio from ipykernel.eventloops import register_integration @register_integration('asyncio') def loop_asyncio(kernel): '''Start a kernel with asyncio event loop support.''' loop = asyncio.get_event_loop() def kernel_handler(): loop.call_soon(kernel.do_one_iteration) loop.call_later(kernel._poll_interval, kernel_handler) loop.call_soon(kernel_handler) try: if not loop.is_running(): loop.run_forever() finally: loop.run_until_complete(loop.shutdown_asyncgens()) loop.close() and in the next cell I ran: %gui asyncio That worked but I don't really understand why and how it works. Can someone please explain that to me?", "output": "EDIT FEB 21st, 2019: Problem Fixed This is no longer an issue on the latest version of Jupyter Notebook. Authors of Jupyter Notebook detailed the case here. You may now just do: import asyncio async def func(): print(\"started.\") await asyncio.sleep(2) print(\"done.\") await func() Answer below was the original response that was marked correct by the op. This was posted quite a bit ago, but in case other people are looking for an explanation and solution to the problem of running asynchronous code inside Jupyter Notebook; Jupyter's Tornado 5.0 update bricked asyncio functionalities after the addition of its own asyncio event loop: Thus, for any asyncio functionality to run on Jupyter Notebook you cannot invoke a loop.run_until_complete(...), since the loop you will receive from asyncio.get_event_loop() will be active. Instead, you must either add the task to the current event loop: import asyncio loop = asyncio.get_event_loop() loop.create_task(some_async_function()) Or get the results via run_coroutine_threadsafe: import asyncio loop = asyncio.get_event_loop() asyncio.run_coroutine_threadsafe(some_async_function(), loop)"} +{"question_id": 3847386, "score": 94, "creation_date": 1286050832, "tags": ["python"], "instruction": "How to test if a list contains another list as a contiguous subsequence?\n\nHow can I test if a list contains another list (ie. it's a contiguous subsequence). 
Say there was a function called contains: contains([1,2], [-1, 0, 1, 2]) # Returns [2, 3] (contains returns [start, end]) contains([1,3], [-1, 0, 1, 2]) # Returns False contains([1, 2], [[1, 2], 3]) # Returns False contains([[1, 2]], [[1, 2], 3]) # Returns [0, 0] Edit: contains([2, 1], [-1, 0, 1, 2]) # Returns False contains([-1, 1, 2], [-1, 0, 1, 2]) # Returns False contains([0, 1, 2], [-1, 0, 1, 2]) # Returns [1, 3]", "output": "Here is my version: def contains(small, big): for i in xrange(len(big)-len(small)+1): for j in xrange(len(small)): if big[i+j] != small[j]: break else: return i, i+len(small) return False It returns a tuple of (start, end+1) since I think that is more pythonic, as Andrew Jaffe points out in his comment. It does not slice any sublists so should be reasonably efficient. One point of interest for newbies is that it uses the else clause on the for statement - this is not something I use very often but can be invaluable in situations like this. This is identical to finding substrings in a string, so for large lists it may be more efficient to implement something like the Boyer-Moore algorithm. Note: If you are using Python3, change xrange to range."} +{"question_id": 2389846, "score": 94, "creation_date": 1267822319, "tags": ["python", "decimal"], "instruction": "Python Decimals format\n\nWhat is a good way to format a python decimal like this way? 1.00 --> '1' 1.20 --> '1.2' 1.23 --> '1.23' 1.234 --> '1.23' 1.2345 --> '1.23'", "output": "If you have Python 2.6 or newer, use format: '{0:.3g}'.format(num) For Python 2.5 or older: '%.3g'%(num) Explanation: {0}tells format to print the first argument -- in this case, num. Everything after the colon (:) specifies the format_spec. .3 sets the precision to 3. g removes insignificant zeros. See http://en.wikipedia.org/wiki/Printf#fprintf For example: tests=[(1.00, '1'), (1.2, '1.2'), (1.23, '1.23'), (1.234, '1.23'), (1.2345, '1.23')] for num, answer in tests: result = '{0:.3g}'.format(num) if result != answer: print('Error: {0} --> {1} != {2}'.format(num, result, answer)) exit() else: print('{0} --> {1}'.format(num,result)) yields 1.0 --> 1 1.2 --> 1.2 1.23 --> 1.23 1.234 --> 1.23 1.2345 --> 1.23 Using Python 3.6 or newer, you could use f-strings: In [40]: num = 1.234; f'{num:.3g}' Out[40]: '1.23'"} +{"question_id": 39173992, "score": 94, "creation_date": 1472242150, "tags": ["python", "python-2.7", "pandas"], "instruction": "Drop all data in a pandas dataframe\n\nI would like to drop all data in a pandas dataframe, but am getting TypeError: drop() takes at least 2 arguments (3 given). I essentially want a blank dataframe with just my columns headers. import pandas as pd web_stats = {'Day': [1, 2, 3, 4, 2, 6], 'Visitors': [43, 43, 34, 23, 43, 23], 'Bounce_Rate': [3, 2, 4, 3, 5, 5]} df = pd.DataFrame(web_stats) df.drop(axis=0, inplace=True) print df", "output": "You need to pass the labels to be dropped. df.drop(df.index, inplace=True) By default, it operates on axis=0. You can achieve the same with df.iloc[:0] which is much more efficient."} +{"question_id": 49574817, "score": 94, "creation_date": 1522414270, "tags": ["python", "pandas"], "instruction": "Fill a new pandas column with row numbers\n\nI have the following DataFrame data with random index values: A B 100 0 7 203 5 4 5992 0 10 2003 9 8 20 10 5 12 6 2 I would like to add a new column 'C' with row numbers. 
For example: A B C 100 0 7 0 203 5 4 1 5992 0 10 2 2003 9 8 3 20 10 5 4 12 6 2 5", "output": "Use numpy.arange or range by length of DataFrame: df['C'] = np.arange(len(df)) df['C'] = range(len(df)) Or you can use DataFrame.shape, thank you @Mehmet Burak Say\u0131c\u0131: df['C'] = np.arange(df.shape[0]) print (df) A B C 100 0 7 0 203 5 4 1 5992 0 10 2 2003 9 8 3 20 10 5 4 12 6 2 5"} +{"question_id": 44524901, "score": 94, "creation_date": 1497365400, "tags": ["python", "matrix", "pytorch", "matrix-multiplication", "tensor"], "instruction": "How to multiply matrices in PyTorch?\n\nWith numpy, I can do a simple matrix multiplication like this: a = numpy.ones((3, 2)) b = numpy.ones((2, 1)) result = a.dot(b) However, this does not work with PyTorch: a = torch.ones((3, 2)) b = torch.ones((2, 1)) result = torch.dot(a, b) This code throws the following error: RuntimeError: 1D tensors expected, but got 2D and 2D tensors How do I perform matrix multiplication in PyTorch?", "output": "Use torch.mm: torch.mm(a, b) torch.dot() behaves differently to np.dot(). There's been some discussion about what would be desirable here. Specifically, torch.dot() treats both a and b as 1D vectors (irrespective of their original shape) and computes their inner product. The error is thrown because this behaviour makes your a a vector of length 6 and your b a vector of length 2; hence their inner product can't be computed. For matrix multiplication in PyTorch, use torch.mm(). Numpy's np.dot() in contrast is more flexible; it computes the inner product for 1D arrays and performs matrix multiplication for 2D arrays. torch.matmul performs matrix multiplications if both arguments are 2D and computes their dot product if both arguments are 1D. For inputs of such dimensions, its behaviour is the same as np.dot. It also lets you do broadcasting or matrix x matrix, matrix x vector and vector x vector operations in batches. # 1D inputs, same as torch.dot a = torch.rand(n) b = torch.rand(n) torch.matmul(a, b) # torch.Size([]) # 2D inputs, same as torch.mm a = torch.rand(m, k) b = torch.rand(k, j) torch.matmul(a, b) # torch.Size([m, j])"} +{"question_id": 8632033, "score": 94, "creation_date": 1324854230, "tags": ["python", "numpy", "boolean-operations"], "instruction": "How to perform element-wise Boolean operations on NumPy arrays\n\nFor example, I would like to create a mask that masks elements with value between 40 and 60: foo = np.asanyarray(range(100)) mask = (foo < 40).__or__(foo > 60) Which just looks ugly. I can't write (foo < 40) or (foo > 60) because I end up with: ValueError Traceback (most recent call last) ... ----> 1 (foo < 40) or (foo > 60) ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() Is there a canonical way of doing element-wise Boolean operations on NumPy arrays with good looking code?", "output": "Try this: mask = (foo < 40) | (foo > 60) Note: the __or__ method in an object overloads the bitwise or operator (|), not the Boolean or operator."} +{"question_id": 17657720, "score": 94, "creation_date": 1373901440, "tags": ["python", "list-comprehension"], "instruction": "python list comprehension double for\n\nvec = [[1,2,3], [4,5,6], [7,8,9]] print [num for elem in vec for num in elem] <----- this >>> [1, 2, 3, 4, 5, 6, 7, 8, 9] This is tricking me out. I understand elem is the lists inside of the list from for elem in vic I don't quite understand the usage of num and for num in elem in the beginning and the end. How does python interpret this? 
What's the order it looks at?", "output": "Lets break it down. A simple list-comprehension: [x for x in collection] This is easy to understand if we break it into parts: [A for B in C] A is the item that will be in the resulting list B is each item in the collection C C is the collection itself. In this way, one could write: [x.lower() for x in words] In order to convert all words in a list to lowercase. It is when we complicate this with another list like so: [x for y in collection for x in y] # [A for B in C for D in E] Here, something special happens. We want our final list to include A items, and A items are found inside B items, so we have to tell the list-comprehension that. A is the item that will be in the resulting list B is each item in the collection C C is the collection itself D is each item in the collection E (in this case, also A) E is another collection (in this case, B) This logic is similar to the normal for loop: for y in collection: # for B in C: for x in y: # for D in E: (in this case: for A in B) # receive x # # receive A To expand on this, and give a great example + explanation, imagine that there is a train. The train engine (the front) is always going to be there (the result of the list-comprehension) Then, there are any number of train cars, each train car is in the form: for x in y A list comprehension could look like this: [z for b in a for c in b for d in c ... for z in y] Which would be like having this regular for-loop: for b in a: for c in b: for d in c: ... for z in y: # have z In other words, instead of going down a line and indenting, in a list-comprehension you just add the next loop on to the end. To go back to the train analogy: Engine - Car - Car - Car ... Tail What is the tail? The tail is a special thing in list-comprehensions. You don't need one, but if you have a tail, the tail is a condition, look at this example: [line for line in file if not line.startswith('#')] This would give you every line in a file as long as the line didn't start with a hash character (#), others are just skipped. The trick to using the \"tail\" of the train is that it is checked for True/False at the same time as you have your final 'Engine' or 'result' from all the loops, the above example in a regular for-loop would look like this: for line in file: if not line.startswith('#'): # have line please note: Though in my analogy of a train there is only a 'tail' at the end of the train, the condition or 'tail' can be after every 'car' or loop... for example: >>> z = [[1,2,3,4],[5,6,7,8],[9,10,11,12]] >>> [x for y in z if sum(y)>10 for x in y if x < 10] [5, 6, 7, 8, 9] In regular for-loop: >>> for y in z: if sum(y)>10: for x in y: if x < 10: print x 5 6 7 8 9"} +{"question_id": 34685905, "score": 94, "creation_date": 1452286536, "tags": ["python", "apache-spark", "pyspark", "pycharm", "homebrew"], "instruction": "How to link PyCharm with PySpark?\n\nI'm new with apache spark and apparently I installed apache-spark with homebrew in my macbook: Last login: Fri Jan 8 12:52:04 on console user@MacBook-Pro-de-User-2:~$ pyspark Python 2.7.10 (default, Jul 13 2015, 12:05:58) [GCC 4.2.1 Compatible Apple LLVM 6.1.0 (clang-602.0.53)] on darwin Type \"help\", \"copyright\", \"credits\" or \"license\" for more information. Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties 16/01/08 14:46:44 INFO SparkContext: Running Spark version 1.5.1 16/01/08 14:46:46 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... 
using builtin-java classes where applicable 16/01/08 14:46:47 INFO SecurityManager: Changing view acls to: user 16/01/08 14:46:47 INFO SecurityManager: Changing modify acls to: user 16/01/08 14:46:47 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(user); users with modify permissions: Set(user) 16/01/08 14:46:50 INFO Slf4jLogger: Slf4jLogger started 16/01/08 14:46:50 INFO Remoting: Starting remoting 16/01/08 14:46:51 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@192.168.1.64:50199] 16/01/08 14:46:51 INFO Utils: Successfully started service 'sparkDriver' on port 50199. 16/01/08 14:46:51 INFO SparkEnv: Registering MapOutputTracker 16/01/08 14:46:51 INFO SparkEnv: Registering BlockManagerMaster 16/01/08 14:46:51 INFO DiskBlockManager: Created local directory at /private/var/folders/5x/k7n54drn1csc7w0j7vchjnmc0000gn/T/blockmgr-769e6f91-f0e7-49f9-b45d-1b6382637c95 16/01/08 14:46:51 INFO MemoryStore: MemoryStore started with capacity 530.0 MB 16/01/08 14:46:52 INFO HttpFileServer: HTTP File server directory is /private/var/folders/5x/k7n54drn1csc7w0j7vchjnmc0000gn/T/spark-8e4749ea-9ae7-4137-a0e1-52e410a8e4c5/httpd-1adcd424-c8e9-4e54-a45a-a735ade00393 16/01/08 14:46:52 INFO HttpServer: Starting HTTP Server 16/01/08 14:46:52 INFO Utils: Successfully started service 'HTTP file server' on port 50200. 16/01/08 14:46:52 INFO SparkEnv: Registering OutputCommitCoordinator 16/01/08 14:46:52 INFO Utils: Successfully started service 'SparkUI' on port 4040. 16/01/08 14:46:52 INFO SparkUI: Started SparkUI at http://192.168.1.64:4040 16/01/08 14:46:53 WARN MetricsSystem: Using default name DAGScheduler for source because spark.app.id is not set. 16/01/08 14:46:53 INFO Executor: Starting executor ID driver on host localhost 16/01/08 14:46:53 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 50201. 16/01/08 14:46:53 INFO NettyBlockTransferService: Server created on 50201 16/01/08 14:46:53 INFO BlockManagerMaster: Trying to register BlockManager 16/01/08 14:46:53 INFO BlockManagerMasterEndpoint: Registering block manager localhost:50201 with 530.0 MB RAM, BlockManagerId(driver, localhost, 50201) 16/01/08 14:46:53 INFO BlockManagerMaster: Registered BlockManager Welcome to ____ __ / __/__ ___ _____/ /__ _\\ \\/ _ \\/ _ `/ __/ '_/ /__ / .__/\\_,_/_/ /_/\\_\\ version 1.5.1 /_/ Using Python version 2.7.10 (default, Jul 13 2015 12:05:58) SparkContext available as sc, HiveContext available as sqlContext. >>> I would like start playing in order to learn more about MLlib. However, I use Pycharm to write scripts in python. The problem is: when I go to Pycharm and try to call pyspark, Pycharm can not found the module. I tried adding the path to Pycharm as follows: Then from a blog I tried this: import os import sys # Path for spark source folder os.environ['SPARK_HOME']=\"/Users/user/Apps/spark-1.5.2-bin-hadoop2.4\" # Append pyspark to Python Path sys.path.append(\"/Users/user/Apps/spark-1.5.2-bin-hadoop2.4/python/pyspark\") try: from pyspark import SparkContext from pyspark import SparkConf print (\"Successfully imported Spark Modules\") except ImportError as e: print (\"Can not import Spark Modules\", e) sys.exit(1) And still can not start using PySpark with Pycharm, any idea of how to \"link\" PyCharm with apache-pyspark?. 
Update: Then I search for apache-spark and python path in order to set the environment variables of Pycharm: apache-spark path: user@MacBook-Pro-User-2:~$ brew info apache-spark apache-spark: stable 1.6.0, HEAD Engine for large-scale data processing https://spark.apache.org/ /usr/local/Cellar/apache-spark/1.5.1 (649 files, 302.9M) * Poured from bottle From: https://github.com/Homebrew/homebrew/blob/master/Library/Formula/apache-spark.rb python path: user@MacBook-Pro-User-2:~$ brew info python python: stable 2.7.11 (bottled), HEAD Interpreted, interactive, object-oriented programming language https://www.python.org /usr/local/Cellar/python/2.7.10_2 (4,965 files, 66.9M) * Then with the above information I tried to set the environment variables as follows: Any idea of how to correctly link Pycharm with pyspark? Then when I run a python script with the above configuration I have this exception: /usr/local/Cellar/python/2.7.10_2/Frameworks/Python.framework/Versions/2.7/bin/python2.7 /Users/user/PycharmProjects/spark_examples/test_1.py Traceback (most recent call last): File \"/Users/user/PycharmProjects/spark_examples/test_1.py\", line 1, in from pyspark import SparkContext ImportError: No module named pyspark UPDATE: Then I tried this configurations proposed by @zero323 Configuration 1: /usr/local/Cellar/apache-spark/1.5.1/ out: user@MacBook-Pro-de-User-2:/usr/local/Cellar/apache-spark/1.5.1$ ls CHANGES.txt NOTICE libexec/ INSTALL_RECEIPT.json README.md LICENSE bin/ Configuration 2: /usr/local/Cellar/apache-spark/1.5.1/libexec out: user@MacBook-Pro-de-User-2:/usr/local/Cellar/apache-spark/1.5.1/libexec$ ls R/ bin/ data/ examples/ python/ RELEASE conf/ ec2/ lib/ sbin/", "output": "With PySpark package (Spark 2.2.0 and later) With SPARK-1267 being merged you should be able to simplify the process by pip installing Spark in the environment you use for PyCharm development. Go to File -> Settings -> Project Interpreter Click on install button and search for PySpark Click on install package button. Manually with user provided Spark installation Create Run configuration: Go to Run -> Edit configurations Add new Python configuration Set Script path so it points to the script you want to execute Edit Environment variables field so it contains at least: SPARK_HOME - it should point to the directory with Spark installation. It should contain directories such as bin (with spark-submit, spark-shell, etc.) and conf (with spark-defaults.conf, spark-env.sh, etc.) PYTHONPATH - it should contain $SPARK_HOME/python and optionally $SPARK_HOME/python/lib/py4j-some-version.src.zip if not available otherwise. some-version should match Py4J version used by a given Spark installation (0.8.2.1 - 1.5, 0.9 - 1.6, 0.10.3 - 2.0, 0.10.4 - 2.1, 0.10.4 - 2.2, 0.10.6 - 2.3, 0.10.7 - 2.4) Apply the settings Add PySpark library to the interpreter path (required for code completion): Go to File -> Settings -> Project Interpreter Open settings for an interpreter you want to use with Spark Edit interpreter paths so it contains path to $SPARK_HOME/python (an Py4J if required) Save the settings Optionally Install or add to path type annotations matching installed Spark version to get better completion and static error detection (Disclaimer - I am an author of the project). 
Finally Use newly created configuration to run your script."} +{"question_id": 7191209, "score": 94, "creation_date": 1314278922, "tags": ["python", "regex", "python-re"], "instruction": "re.sub replace with matched content\n\nTrying to get to grips with regular expressions in Python, I'm trying to output some HTML highlighted in part of a URL. My input is images/:id/size my output should be images/:id/size If I do this in Javascript method = 'images/:id/size'; method = method.replace(/\\:([a-z]+)/, '$1') alert(method) I get the desired result, but if I do this in Python >>> method = 'images/:id/huge' >>> re.sub('\\:([a-z]+)', '$1', method) 'images/$1/huge' I don't, how do I get Python to return the correct result rather than $1? Is re.sub even the right function to do this?", "output": "Simply use \\1 instead of $1: In [1]: import re In [2]: method = 'images/:id/huge' In [3]: re.sub(r'(:[a-z]+)', r'\\1', method) Out[3]: 'images/:id/huge' Also note the use of raw strings (r'...') for regular expressions. It is not mandatory but removes the need to escape backslashes, arguably making the code slightly more readable."} +{"question_id": 54137790, "score": 94, "creation_date": 1547159207, "tags": ["python", "python-3.x", "typeerror", "zlib", "bytesio"], "instruction": "Convert from '_io.BytesIO' to a bytes-like object in python3.6?\n\nI am using this function to uncompress the body of a HTTP response if it is compressed with gzip, compress or deflate. def uncompress_body(self, compression_type, body): if compression_type == 'gzip' or compression_type == 'compress': return zlib.decompress(body) elif compression_type == 'deflate': compressor = zlib.compressobj(9, zlib.DEFLATED, -zlib.MAX_WBITS) compressed = compressor.compress(body) compressed += compressor.flush() return base64.b64encode(compressed) return body However python throws this error message. TypeError: a bytes-like object is required, not '_io.BytesIO' on this line: return zlib.decompress(body) Essentially, how do I convert from '_io.BytesIO' to a bytes-like object?", "output": "It's a file-like object. Read them: >>> b = io.BytesIO(b'hello') >>> b.read() b'hello' If the data coming in from body is too large to read into memory, you'll want to refactor your code and use zlib.decompressobj instead of zlib.decompress."} +{"question_id": 2400504, "score": 94, "creation_date": 1268042903, "tags": ["python", "regex"], "instruction": "How can I make multiple replacements in a string using a dictionary?\n\nSuppose we have: d = { '\u0421\u043f\u043e\u0440\u0442':'\u0414\u043e\u0441\u0443\u0433', 'russianA':'englishA' } s = '\u0421\u043f\u043e\u0440\u0442 russianA' How can I replace each appearance within s of any of d's keys, with the corresponding value (in this case, the result would be '\u0414\u043e\u0441\u0443\u0433 englishA')?", "output": "Using re: import re s = '\u0421\u043f\u043e\u0440\u0442 not russianA' d = { '\u0421\u043f\u043e\u0440\u0442':'\u0414\u043e\u0441\u0443\u0433', 'russianA':'englishA' } keys = (re.escape(k) for k in d.keys()) pattern = re.compile(r'\\b(' + '|'.join(keys) + r')\\b') result = pattern.sub(lambda x: d[x.group()], s) # Output: '\u0414\u043e\u0441\u0443\u0433 not englishA' This will match whole words only. 
If you don't need that, use the pattern: pattern = re.compile('|'.join(re.escape(k) for k in d.keys())) Note that in this case you should sort the words descending by length if some of your dictionary entries are substrings of others."} +{"question_id": 36680402, "score": 94, "creation_date": 1460917302, "tags": ["python", "numpy"], "instruction": "TypeError: only length-1 arrays can be converted to Python scalars while plot showing\n\nI have such Python code: import numpy as np import matplotlib.pyplot as plt def f(x): return np.int(x) x = np.arange(1, 15.1, 0.1) plt.plot(x, f(x)) plt.show() And such error: TypeError: only length-1 arrays can be converted to Python scalars How can I fix it?", "output": "The error \"only length-1 arrays can be converted to Python scalars\" is raised when the function expects a single value but you pass an array instead. np.int was an alias for the built-in int, which is deprecated in numpy v1.20. The argument for int should be a scalar and it does not accept array-like objects. In general, if you want to apply a function to each element of the array, you can use np.vectorize: import numpy as np import matplotlib.pyplot as plt def f(x): return int(x) f2 = np.vectorize(f) x = np.arange(1, 15.1, 0.1) plt.plot(x, f2(x)) plt.show() You can skip the definition of f(x) and just pass the function int to the vectorize function: f2 = np.vectorize(int). Note that np.vectorize is just a convenience function and basically a for loop. That will be inefficient over large arrays. Whenever you have the possibility, use truly vectorized functions or methods (like astype(int) as @FFT suggests)."} +{"question_id": 17901341, "score": 94, "creation_date": 1374951497, "tags": ["python", "django", "django-nonrel"], "instruction": "Django - How to make a variable available to all templates?\n\nI would like to know how to pass a variable to all my templates, without repeating the same code on every method in my views.py file? In the example below I would like to make categories (an array of category objects) available to all templates in the web app. Eg: I would like to avoid writing 'categories':categories on every method. Is it possible? One view method def front_page(request): categories = Category.objects.all() if is_logged_in(request) is False: return render_to_response('users/signup.html', {'is_logged_in': is_logged_in(request), 'categories':categories}, context_instance=RequestContext(request)) else: return render_to_response('users/front_page.html', {'is_logged_in': is_logged_in(request), 'categories':categories},context_instance=RequestContext(request)) Another view method def another_view_method(request): categories = Category.objects.all() return render_to_response('eg/front_page.html', {'is_logged_in': is_logged_in(request), 'categories':categories},context_instance=RequestContext(request))", "output": "What you want is a context processor, and it's very easy to create one. 
Assuming you have an app named custom_app, follow the next steps: Add custom_app to INSTALLED_APPS in settings.py (you've done it already, right?); Create a file named context_processors.py into custom_app folder; Add the following code to that new file: def categories_processor(request): categories = Category.objects.all() return {'categories': categories} Add context_processors.py to TEMPLATE_CONTEXT_PROCESSORS in settings.py TEMPLATE_CONTEXT_PROCESSORS += (\"custom_app.context_processors.categories_processor\", ) And now you can use {{categories}} in all the templates :D As of Django 1.8 To add a TEMPLATE_CONTEXT_PROCESSORS, in the settings you must add the next code: TEMPLATES[0]['OPTIONS']['context_processors'].append(\"custom_app.context_processors.categories_processor\") Or include that string directly in the OPTIONS.context_processors key in your TEMPLATES setting."} +{"question_id": 38876721, "score": 94, "creation_date": 1470840442, "tags": ["python", "flask"], "instruction": "Handle Flask requests concurrently with threaded=True\n\nWhat exactly does passing threaded = True to app.run() do? My application processes input from the user, and takes a bit of time to do so. During this time, the application is unable to handle other requests. I have tested my application with threaded=True and it allows me to handle multiple requests concurrently.", "output": "As of Flask 1.0, the WSGI server included with Flask is run in threaded mode by default. Prior to 1.0, or if you disable threading, the server is run in single-threaded mode, and can only handle one request at a time. Any parallel requests will have to wait until they can be handled, which can lead to issues if you tried to contact your own server from a request. With threaded=True requests are each handled in a new thread. How many threads your server can handle concurrently depends entirely on your OS and what limits it sets on the number of threads per process. The implementation uses the SocketServer.ThreadingMixIn class, which sets no limits to the number of threads it can spin up. Note that the Flask server is designed for development only. It is not a production-ready server. Don't rely on it to run your site on the wider web. Use a proper WSGI server (like gunicorn or uWSGI) instead."} +{"question_id": 55684960, "score": 93, "creation_date": 1555315949, "tags": ["python", "list"], "instruction": "Why does Python start at index -1 (as opposed to 0) when indexing a list from the end?\n\nlist = [\"a\", \"b\", \"c\", \"d\"] print(list[3]) # Number 3 is \"d\" print(list[-4]) # Number -4 is \"a\"", "output": "To explain it in another way, because -0 is equal to 0, if backward starts from 0, it is ambiguous to the interpreter. 
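A two-line check makes the ambiguity concrete: with arr = ['a', 'b', 'c', 'd'], both arr[0] and arr[-0] return 'a', because -0 == 0 - so the reverse direction has to begin at -1 instead.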
If you are confused about -, and looking for another way to index backwards more understandably, you can try ~, it is a mirror of forward: arr = [\"a\", \"b\", \"c\", \"d\"] print(arr[~0]) # d print(arr[~1]) # c The typical usages for ~ are like \"swap mirror node\" or \"find median in a sort list\": \"\"\"swap mirror node\"\"\" def reverse(arr: List[int]) -> None: for i in range(len(arr) // 2): arr[i], arr[~i] = arr[~i], arr[i] \"\"\"find median in a sort list\"\"\" def median(arr: List[float]) -> float: mid = len(arr) // 2 return (arr[mid] + arr[~mid]) / 2 \"\"\"deal with mirror pairs\"\"\" # verify the number is strobogrammatic, strobogrammatic number looks the same when rotated 180 degrees def is_strobogrammatic(num: str) -> bool: return all(num[i] + num[~i] in '696 00 11 88' for i in range(len(num) // 2 + 1)) ~ actually is a math trick of inverse code and complement code, and it is more easy to understand in some situations. Discussion about whether should use python tricks like ~: In my opinion, if it is a code maintained by yourself, you can use any trick to avoid potential bug or achieve goal easier, because of maybe a high readability and usability. But in team work, avoid using 'too clever' code, may bring troubles to your co-workers. For example, here is one concise code from Stefan Pochmann to solve this problem. I learned a lot from his code. But some are just for fun, too hackish to use. # a strobogrammatic number is a number that looks the same when rotated 180 degrees (looked at upside down) # find all strobogrammatic numbers that are of length = n def findStrobogrammatic(self, n): nums = n % 2 * list('018') or [''] while n > 1: n -= 2 # n < 2 is so genius here nums = [a + num + b for a, b in '00 11 88 69 96'.split()[n < 2:] for num in nums] return nums I have summarized python tricks like this, in case you are interested."} +{"question_id": 20021457, "score": 93, "creation_date": 1384622936, "tags": ["python", "audio", "mp3"], "instruction": "Playing mp3 song on python\n\nI want to play my song (mp3) from python, can you give me a simplest command to do that? This is not correct: import wave w = wave.open(\"e:/LOCAL/Betrayer/Metalik Klinik1-Anak Sekolah.mp3\",\"r\")", "output": "Try this. It's simplistic, but probably not the best method. from pygame import mixer # Load the popular external library mixer.init() mixer.music.load('e:/LOCAL/Betrayer/Metalik Klinik1-Anak Sekolah.mp3') mixer.music.play() Please note that pygame's support for MP3 is limited. Also, as pointed out by Samy Bencherif, there won't be any silly pygame window popup when you run the above code. Installation is simple - $pip install pygame Update: Above code will only play the music if ran interactively, since the play() call will execute instantaneously and the script will exit. To avoid this, you could instead use the following to wait for the music to finish playing and then exit the program, when running the code as a script. import time from pygame import mixer mixer.init() mixer.music.load(\"/file/path/mymusic.ogg\") mixer.music.play() while mixer.music.get_busy(): # wait for music to finish playing time.sleep(1)"} +{"question_id": 6402812, "score": 93, "creation_date": 1308492311, "tags": ["python"], "instruction": "How to convert an H:MM:SS time string to seconds in Python?\n\nBasically I have the inverse of this problem: Python Time Seconds to h:m:s I have a string in the format H:MM:SS (always 2 digits for minutes and seconds), and I need the integer number of seconds that it represents. 
How can I do this in python? For example: \"1:23:45\" would produce an output of 5025 \"0:04:15\" would produce an output of 255 \"0:00:25\" would produce an output of 25 etc", "output": "def get_sec(time_str): \"\"\"Get seconds from time.\"\"\" h, m, s = time_str.split(':') return int(h) * 3600 + int(m) * 60 + int(s) print(get_sec('1:23:45')) print(get_sec('0:04:15')) print(get_sec('0:00:25'))"} +{"question_id": 37003862, "score": 93, "creation_date": 1462277679, "tags": ["python", "python-3.x", "google-cloud-storage", "boto", "google-cloud-platform"], "instruction": "How to upload a file to Google Cloud Storage on Python 3?\n\nHow can I upload a file to Google Cloud Storage from Python 3? Eventually Python 2, if it's infeasible from Python 3. I've looked and looked, but haven't found a solution that actually works. I tried boto, but when I try to generate the necessary .boto file through gsutil config -e, it keeps saying that I need to configure authentication through gcloud auth login. However, I have done the latter a number of times, without it helping.", "output": "Use the standard gcloud library, which supports both Python 2 and Python 3. Example of Uploading File to Cloud Storage from gcloud import storage from oauth2client.service_account import ServiceAccountCredentials import os credentials_dict = { 'type': 'service_account', 'client_id': os.environ['BACKUP_CLIENT_ID'], 'client_email': os.environ['BACKUP_CLIENT_EMAIL'], 'private_key_id': os.environ['BACKUP_PRIVATE_KEY_ID'], 'private_key': os.environ['BACKUP_PRIVATE_KEY'], } credentials = ServiceAccountCredentials.from_json_keyfile_dict( credentials_dict ) client = storage.Client(credentials=credentials, project='myproject') bucket = client.get_bucket('mybucket') blob = bucket.blob('myfile') blob.upload_from_filename('myfile')"} +{"question_id": 42449814, "score": 93, "creation_date": 1487977747, "tags": ["python", "scikit-learn", "virtualenv", "jupyter-notebook"], "instruction": "Running Jupyter notebook in a virtualenv: installed sklearn module not available\n\nI have installed a created a virtualenv machinelearn and installed a few python modules (pandas, scipy and sklearn) in that environment. When I run jupyter notebook, I can import pandas and scipy in my notebooks - however, when I try to import sklearn, I get the following error message: import sklearn --------------------------------------------------------------------------- ImportError Traceback (most recent call last) in () ----> 1 import sklearn ImportError: No module named 'sklearn' I am able to import all modules, at the command line - so I know they have been successfully installed: (machinelearn) me@yourbox:~/path/to/machinelearn$ python -c \"import pandas, scipy, sklearn\" (machinelearn) me@yourbox:~/path/to/machinelearn$ How can I import sklearn in my jupyter notebook running in a virtualenv?", "output": "You probably have not installed jupyter / IPython in your virtualenv. Try the following: python -c \"import IPython\" and check that the jupyter command found in your $PATH is the one from the bin folder of your venv: which jupyter For windows users in a powershell console, you can use the following to check that the jupyter command in your $env:Path is the one from the Scripts folder of you venv: get-command jupyter Edit: if this is the problem, just run python -m pip install jupyter in your venv. 
Edit 2: actually you might also need: python -m ipykernel install --user --name=my-virtualenv-name and then switch the kernel named \"my-virtualenv-name\" in the jupyter user interface. Edit 3: maybe the --user flag in the last command is a bad idea: python -m ipykernel install --name=my-virtualenv-name"} +{"question_id": 14225608, "score": 93, "creation_date": 1357686371, "tags": ["python", "regex"], "instruction": "Python: How to use RegEx in an if statement?\n\nI have the following code which looks through the files in one directory and copies files that contain a certain string into another directory, but I am trying to use Regular Expressions as the string could be upper and lowercase or a mix of both. Here is the code that works, before I tried to use RegEx's import os import re import shutil def test(): os.chdir(\"C:/Users/David/Desktop/Test/MyFiles\") files = os.listdir(\".\") os.mkdir(\"C:/Users/David/Desktop/Test/MyFiles2\") for x in (files): inputFile = open((x), \"r\") content = inputFile.read() inputFile.close() if (\"Hello World\" in content): shutil.copy(x, \"C:/Users/David/Desktop/Test/MyFiles2\") Here is my code when I have tried to use RegEx's import os import re import shutil def test2(): os.chdir(\"C:/Users/David/Desktop/Test/MyFiles\") files = os.listdir(\".\") os.mkdir(\"C:/Users/David/Desktop/Test/MyFiles2\") regex_txt = \"facebook.com\" for x in (files): inputFile = open((x), \"r\") content = inputFile.read() inputFile.close() regex = re.compile(regex_txt, re.IGNORECASE) I'm guessing that I need a line of code that is something like if regex = re.compile(regex_txt, re.IGNORECASE) == True But I can't seem to get anything to work, if someone could point me in the right direction it would be appreciated.", "output": "import re if re.match(regex, content): blah.. You could also use re.search depending on how you want it to match. You can run this example: \"\"\" very nice interface to try regexes: https://regex101.com/ \"\"\" # %% \"\"\"Simple if statement with a regex\"\"\" import re regex = r\"\\s*Proof.\\s*\" contents = ['Proof.\\n', '\\nProof.\\n'] for content in contents: assert re.match(regex, content), f'Failed on {content=} with {regex=}' if re.match(regex, content): print(content)"} +{"question_id": 54087303, "score": 93, "creation_date": 1546933648, "tags": ["python", "python-requests"], "instruction": "How to check for \"200 OK\"?\n\nWhat is the easiest way to check whether the response received from a requests post was \"200 OK\" or whether an error has occurred? I tried doing something like this: .... resp = requests.post(my_endpoint_var, headers=header_var, data=post_data_var) print(resp) if resp == \"\": print ('OK!') else: print ('Boo!') The output on the screen is: <Response [200]> (including the \"<\" and \">\") Boo! So even though I am getting a 200, my check in the if statement is somehow not matching?", "output": "According to the docs, there's a status_code property on the response object. So you can do the following: if resp.status_code == 200: print ('OK!') else: print ('Boo!') As others have pointed out, a simpler check might be to use the ok property: if resp.ok: print ('OK!') else: print ('Boo!') That is, if you want to consider all 2xx response codes and not 200 explicitly. 
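For instance, a quick way to see the difference between the two checks (this example uses the public httpbin.org test service, which just echoes back whatever status code you ask for): import requests r = requests.get('https://httpbin.org/status/204') print(r.status_code) # 204 print(r.status_code == 200) # False print(r.ok) # True, because ok is True for any status code below 400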
You may also want to check Peter's answer for a more pythonic way to do this."} +{"question_id": 30239092, "score": 93, "creation_date": 1431611340, "tags": ["python", "input", "multiline"], "instruction": "How to get multiline input from the user\n\nI want to write a program that gets multiple lines of input and works with them line by line. Why isn't there any function like raw_input in Python 3? input does not allow the user to put lines separated by newline (Enter). It prints back only the first line. Can it be stored in a variable, or even read into a list?", "output": "In Python 3.x the raw_input() of Python 2.x has been replaced by the input() function. However, in both cases you cannot enter multi-line strings directly; for that you need to read input from the user line by line and then .join() the lines using \\n, or concatenate them with the + operator separated by \\n. To get multi-line input from the user you can do something like: no_of_lines = 5 lines = \"\" for i in range(no_of_lines): lines += input() + \"\\n\" print(lines) Or lines = [] while True: line = input() if line: lines.append(line) else: break text = '\\n'.join(lines)"} +{"question_id": 3876936, "score": 93, "creation_date": 1286399512, "tags": ["python", "fabric"], "instruction": "How to continue a task when Fabric receives an error\n\nWhen I define a task to run on several remote servers, if the task runs on server one and exits with an error, Fabric will stop and abort the task. But I want to make fabric ignore the error and run the task on the next server. How can I make it do this? For example: $ fab site1_service_gw [site1rpt1] Executing task 'site1_service_gw' [site1fep1] run: echo 'Nm123!@#' | sudo -S route [site1fep1] err: [site1fep1] err: We trust you have received the usual lecture from the local System [site1fep1] err: Administrator. It usually boils down to these three things: [site1fep1] err: [site1fep1] err: #1) Respect the privacy of others. [site1fep1] err: #2) Think before you type. [site1fep1] err: #3) With great power comes great responsibility. [site1fep1] err: root's password: [site1fep1] err: sudo: route: command not found Fatal error: run() encountered an error (return code 1) while executing 'echo 'Nm123!@#' | sudo -S route ' Aborting.", "output": "From the docs: ... Fabric defaults to a \u201cfail-fast\u201d behavior pattern: if anything goes wrong, such as a remote program returning a nonzero return value or your fabfile\u2019s Python code encountering an exception, execution will halt immediately. This is typically the desired behavior, but there are many exceptions to the rule, so Fabric provides env.warn_only, a Boolean setting. It defaults to False, meaning an error condition will result in the program aborting immediately. However, if env.warn_only is set to True at the time of failure \u2013 with, say, the settings context manager \u2013 Fabric will emit a warning message but continue executing. Looks like you can exercise fine-grained control over where errors are ignored by using the settings context manager, something like so: from fabric.api import settings sudo('mkdir tmp') # can't fail with settings(warn_only=True): sudo('touch tmp/test') # can fail sudo('rm tmp') # can't fail"} +{"question_id": 20944483, "score": 93, "creation_date": 1388990644, "tags": ["python", "python-3.x", "sorting", "dictionary"], "instruction": "Python 3 sort a dict by its values\n\nThe only methods I found work for python2 or return only a list of tuples. 
Is it possible to sort dictionary, e.g. {\"aa\": 3, \"bb\": 4, \"cc\": 2, \"dd\": 1}, by its values? Order of sorted dictionary I want to achieve is from largest to smallest. I want results to look like this: bb 4 aa 3 cc 2 dd 1 And after sorting I want to store it into a text file.", "output": "itemgetter (see other answers) is (as I know) more efficient for large dictionaries but for the common case, I believe that d.get wins. And it does not require an extra import. >>> d = {\"aa\": 3, \"bb\": 4, \"cc\": 2, \"dd\": 1} >>> for k in sorted(d, key=d.get, reverse=True): ... k, d[k] ... ('bb', 4) ('aa', 3) ('cc', 2) ('dd', 1) Note that alternatively you can set d.__getitem__ as key function which may provide a small performance boost over d.get."} +{"question_id": 15078519, "score": 93, "creation_date": 1361834848, "tags": ["python"], "instruction": "python dictionary passed as an input to a function acts like a global in that function rather than a local\n\nI am very confused by the behaviour below. Cases 1, 3, and 4 perform as I would expect, but case 2 does not. Why does case 2 allow the function to change the value of the dictionary entry globally, even though the dictionary is never returned by the function? A main reason I am using functions is to isolate everything in the function from the rest of the code, but this does not seem to be possible if I choose to use the same variable names inside of the function. I was under the understanding that anything explicitly defined in a function is local to that function, but this does not seem to be the case if the dictionary is defined and passed as an input to the function. Case 1 >>> def testfun1(a): ... a=2 ... >>> a=0 >>> testfun1(a) >>> a 0 Case 2 >>> def testfun2(b): ... b['test']=2 ... >>> b={} >>> testfun2(b) >>> b {'test': 2} Case 3 >>> def testfun3(): ... c=2 ... >>> c=0 >>> testfun3() >>> c 0 Case 4 (explained by this question: Global dictionaries don't need keyword global to modify them?) >>> def testfun4(): ... d['test']=10 ... >>> d={} >>> testfun4() >>> d {'test': 10}", "output": "Python's \"parameter evaluation strategy\" acts a bit different than the languages you're probably used to. Instead of having explicit call by value and call by reference semantics, python has call by sharing. You are essentially always passing the object itself, and the object's mutability determines whether or not it can be modified. Lists and Dicts are mutable objects. Numbers, Strings, and Tuples are not. You are passing the dictionary to the function, not a copy. Thus when you modify it, you are also modifying the original copy. To avoid this, you should first copy the dictionary before calling the function, or from within the function (passing the dictionary to the dict function should do it, i.e. testfun4(dict(d)) and defining the function as def testfun4(d):)."} +{"question_id": 52222002, "score": 93, "creation_date": 1536321133, "tags": ["python", "time"], "instruction": "What is the difference between time.perf_counter() and time.process_time()?\n\nI am using Jupyter notebook. I'm trying to measure how long will it take to count the Avogadro's number with Python. I found that time.perf_counter() and time.process_time() modules will be useful for this kind of work. So I tried both of them, but the result was totally different. What makes this kind of difference? Here is my code. 
Using time.perf_counter() import time a = 10 ** 5 def AvogadroCounting(): i = 0 while i <= a: i += 1 AvogadroCounting() t_fract = time.perf_counter() #time to count fraction of avogadro's number in Seconds print(t_fract, 'secs') And my notebook gives 693920.393636181 secs. Using time.process_time() import time a = 10 ** 5 def AvogadroCounting(): i = 0 while i <= a: i += 1 AvogadroCounting() t_fract = time.process_time() #time to count fraction of avogadro's number in Seconds print(t_fract, 'secs') and this gives 2048.768273 secs.", "output": "time.perf_counter() keeps going during sleep, time.process_time() does not. time.perf_counter() \u2192 float Return the value (in fractional seconds) of a performance counter, i.e. a clock with the highest available resolution to measure a short duration. It does include time elapsed during sleep and is system-wide. The reference point of the returned value is undefined, so that only the difference between the results of consecutive calls is valid. time.process_time() \u2192 float Return the value (in fractional seconds) of the sum of the system and user CPU time of the current process. It does not include time elapsed during sleep. It is process-wide by definition. The reference point of the returned value is undefined, so that only the difference between the results of consecutive calls is valid. See the official documentation import time def pc(): start = time.perf_counter() time.sleep(1) print(time.perf_counter()-start) def pt(): start = time.process_time() time.sleep(1) print(time.process_time()-start) pc() # 0.99872320449432 pt() # 0.0"} +{"question_id": 926574, "score": 93, "creation_date": 1243610573, "tags": ["python", "iterator", "overloading"], "instruction": "Why does defining __getitem__ on a class make it iterable in python?\n\nWhy does defining __getitem__ on a class make it iterable? For instance if I write: class B: def __getitem__(self, k): return k cb = B() for k in cb: print k I get the output: 0 1 2 3 4 5 ... I would really expect to see an error returned from for k in cb:.", "output": "If you take a look at PEP234 defining iterators, it says: An object can be iterated over with for if it implements __iter__() or __getitem__(). An object can function as an iterator if it implements next()."} +{"question_id": 41494942, "score": 93, "creation_date": 1483651478, "tags": ["python", "pandas", "matplotlib", "time-series", "seaborn"], "instruction": "Pandas dataframe groupby plot\n\nI have a dataframe which is structured as: Date ticker adj_close 0 2016-11-21 AAPL 111.730 1 2016-11-22 AAPL 111.800 2 2016-11-23 AAPL 111.230 3 2016-11-25 AAPL 111.790 4 2016-11-28 AAPL 111.570 ... 8 2016-11-21 ACN 119.680 9 2016-11-22 ACN 119.480 10 2016-11-23 ACN 119.820 11 2016-11-25 ACN 120.740 ... 
How can I plot based on the ticker the adj_close versus Date?", "output": "Simple plot, you can use: df.plot(x='Date',y='adj_close') Or you can set the index to be Date beforehand, then it's easy to plot the column you want: df.set_index('Date', inplace=True) df['adj_close'].plot() If you want a chart with one series per ticker on it, you need to groupby before: df.set_index('Date', inplace=True) df.groupby('ticker')['adj_close'].plot(legend=True) If you want a chart with individual subplots: grouped = df.groupby('ticker') ncols=2 nrows = int(np.ceil(grouped.ngroups/ncols)) fig, axes = plt.subplots(nrows=nrows, ncols=ncols, figsize=(12,4), sharey=True) for (key, ax) in zip(grouped.groups.keys(), axes.flatten()): grouped.get_group(key).plot(ax=ax) ax.legend() plt.show()"} +{"question_id": 12116830, "score": 93, "creation_date": 1345843303, "tags": ["python", "numpy"], "instruction": "Numpy slice of arbitrary dimensions\n\nI would like to slice a numpy array to obtain the i-th index in the last dimension. For a 3D array, this would be: slice = myarray[:, :, i] But I am writing a function where I can take an array of arbitrary dimensions, so for a 4D array I'd need myarray[:, :, :, i], and so on. Is there a way I can obtain this slice for any array without explicitly having to write the array dimensions?", "output": "There is ... or Ellipsis, which does exactly this: slice = myarray[..., i] Ellipsis is the python object, if you should want to use it outside the square bracket notation."} +{"question_id": 32289175, "score": 93, "creation_date": 1440871258, "tags": ["python", "tkinter", "tcl"], "instruction": "List of All Tkinter Events\n\nIn Python tkinter module, <Button-1>, <Button-2> and <Button-3> are used to identify mouse button clicks for left, middle and right buttons respectively. Likewise, <Return> is used for the return key press. Where can I find a list of all such events, including the names for the various keyboard keys? Tcl bind manual does not have those.", "output": "A general list for Bindings and Events can be found on effbot.org or in the docs provided by New Mexico Tech whereas the names of several keys are listed here in addition to the original documentation. Here's a summary of the most common events with some keypress names explained: Event Description <Button-1> Button 1 is the leftmost button, button 2 is the middle button (where available), and button 3 the rightmost button. <Button-1>, <ButtonPress-1>, and <1> are all synonyms. For mouse wheel support under Linux, use Button-4 (scroll up) and Button-5 (scroll down) <B1-Motion> The mouse is moved, with mouse button 1 being held down (use B2 for the middle button, B3 for the right button). <ButtonRelease-1> Button 1 was released. This is probably a better choice in most cases than the Button event, because if the user accidentally presses the button, they can move the mouse off the widget to avoid setting off the event. <Double-Button-1> Button 1 was double clicked. You can use Double or Triple as prefixes. <Enter> The mouse pointer entered the widget (this event doesn't mean that the user pressed the Enter key!). <Leave> The mouse pointer left the widget. <FocusIn> Keyboard focus was moved to this widget, or to a child of this widget. <FocusOut> Keyboard focus was moved from this widget to another widget. <Return> The user pressed the Enter key. For an ordinary 102-key PC-style keyboard, the special keys are Cancel (the Break key), BackSpace, Tab, Return (the Enter key), Shift_L (any Shift key), Control_L (any Control key), Alt_L (any Alt key), Pause, Caps_Lock, Escape, Prior (Page Up), Next (Page Down), End, Home, Left, Up, Right, Down, Print, Insert, Delete, F1, F2, F3, F4, F5, F6, F7, F8, F9, F10, F11, F12, Num_Lock, and Scroll_Lock. <Key> The user pressed any key. The key is provided in the char member of the event object passed to the callback (this is an empty string for special keys). a The user typed an \"a\". Most printable characters can be used as is. The exceptions are space (<space>) and less than (<less>). Note that 1 is a keyboard binding, while <1> is a button binding. <Shift-Up> The user pressed the Up arrow, while holding the Shift key pressed. You can use prefixes like Alt, Shift, and Control. <Configure> The widget changed size (or location, on some platforms). The new size is provided in the width and height attributes of the event object passed to the callback. <Activate> A widget is changing from being inactive to being active. This refers to changes in the state option of a widget such as a button changing from inactive (grayed out) to active. <Deactivate> A widget is changing from being active to being inactive. This refers to changes in the state option of a widget such as a radiobutton changing from active to inactive (grayed out). <Destroy> A widget is being destroyed. <Expose> This event occurs whenever at least some part of your application or widget becomes visible after having been covered up by another window. <KeyRelease> The user let up on a key. <Map> A widget is being mapped, that is, made visible in the application. This will happen, for example, when you call the widget's .grid() method. <Motion> The user moved the mouse pointer entirely within a widget. <MouseWheel> The user moved the mouse wheel up or down. At present, this binding works on Windows and MacOS, but not under Linux. <Unmap> A widget is being unmapped and is no longer visible. <Visibility> Happens when at least some part of the application window becomes visible on the screen."} +{"question_id": 55704058, "score": 93, "creation_date": 1555404389, "tags": ["python", "plotly"], "instruction": "Set the range of the y axis in Plotly\n\nI made a line plot with Plotly. How can I set the range of Y axis to always have it in [0; 10]? 
layout = go.Layout( title=go.layout.Title(text=\"Test\", xref=\"paper\", x=0), xaxis=go.layout.XAxis( tickmode=\"linear\", tickfont=dict(size=10), title=go.layout.xaxis.Title(font=dict(size=14, color=\"#7f7f7f\")), ), yaxis=go.layout.YAxis( title=go.layout.yaxis.Title(text=y, font=dict(size=14, color=\"#7f7f7f\")), ), ) data = [go.Scatter(x=x1, y=y1)]", "output": "Update for newer versions When setting up a figure you can use plotly's magic underscore notation and specify layout_yaxis_range=[, ] like this: fig = go.Figure(data=go.Scatter(x=x, y=y, mode='lines'), layout_yaxis_range=[-4,4]) Or if you've already got a figure named fig, you can use: fig.update_layout(yaxis_range=[-4,4]) And: fig.update(layout_yaxis_range = [-4,4]) Or: fig.update_yaxes(range = [-4,4]) Figure: Complete code: # imports import pandas as pd import plotly.graph_objs as go import numpy as np # data np.random.seed(4) x = np.linspace(0, 1, 50) y = np.cumsum(np.random.randn(50)) # plotly line chart fig = go.Figure(data=go.Scatter(x=x, y=y, mode='lines'), layout_yaxis_range=[-4,4]) fig.update_layout(yaxis_range=[-4,4]) fig.show() Original answer using plotly.offline, iplot and no magic underscore notation: When setting up a figure, use: layout = go.Layout(yaxis=dict(range=[fromValue, toValue]) Or if you've already got a figure named fig, you can use: fig.update_layout(yaxis=dict(range=[fromValue,toValue])) Plot: Complete code for Jupyter Notebook: # imports from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot import pandas as pd import plotly.graph_objs as go import numpy as np # setup init_notebook_mode(connected=True) # data np.random.seed(4) x = np.linspace(0, 1, 50) y = np.cumsum(np.random.randn(50)) # line trace = go.Scatter( x=x, y=y, ) # layout layout = go.Layout(yaxis=dict(range=[-4,4]) ) # Plot fig = go.Figure(data=[trace], layout=layout) iplot(fig) Some important details: With this setup, you can easily add an y axis title like this: # layout layout = go.Layout(yaxis=dict(range=[-4,4]), title='y Axis') ) It's a little more tricky if you'd like to format that title further. I find it easiest to actually add another element with title = go.layout.yaxis.Title(text='y Axis', font=dict(size=14, color='#7f7f7f'). As long as you do it the right way, you should not experience the situation in your comment above: Thanks. I tried it. But then I have 2 definitions of yaxis in the Layout: yaxis=dict(range=[0, 10]) and yaxis=go.layout.YAxis. Therefore an error appears. Take a look at this: Plot: Complete code with y-axis text formatting: # imports from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot import pandas as pd import plotly.graph_objs as go import numpy as np # setup init_notebook_mode(connected=True) # data np.random.seed(4) x = np.linspace(0, 1, 50) y = np.cumsum(np.random.randn(50)) # line trace = go.Scatter( x=x, y=y, ) # layout layout = go.Layout( yaxis=dict(range=[-4,4], title = go.layout.yaxis.Title(text='y Axis', font=dict(size=14, color='#7f7f7f'))) ) # Plot fig = go.Figure(data=[trace], layout=layout) iplot(fig)"} +{"question_id": 46994426, "score": 93, "creation_date": 1509224367, "tags": ["python", "arrays", "numpy", "rounding"], "instruction": "How to round a numpy array?\n\nI have a numpy array, something like below: data = np.array([ 1.60130719e-01, 9.93827160e-01, 3.63108206e-04]) and I want to round each element to two decimal places. How can I do so?", "output": "Numpy provides two identical methods to do this. 
Either use np.round(data, 2) or np.around(data, 2) as they are equivalent. See the documentation for more information. Examples: >>> import numpy as np >>> a = np.array([0.015, 0.235, 0.112]) >>> np.round(a, 2) array([0.02, 0.24, 0.11]) >>> np.around(a, 2) array([0.02, 0.24, 0.11]) >>> np.round(a, 1) array([0. , 0.2, 0.1])"} +{"question_id": 5228158, "score": 93, "creation_date": 1299556632, "tags": ["python", "generator", "combinatorics"], "instruction": "Cartesian product of a dictionary of lists\n\nI'm trying to write some code to test out the Cartesian product of a bunch of input parameters. I've looked at itertools, but its product function is not exactly what I want. Is there a simple obvious way to take a dictionary with an arbitrary number of keys and an arbitrary number of elements in each value, and then yield a dictionary with the next permutation? Input: options = {\"number\": [1,2,3], \"color\": [\"orange\",\"blue\"] } print list( my_product(options) ) Example output: [ {\"number\": 1, \"color\": \"orange\"}, {\"number\": 1, \"color\": \"blue\"}, {\"number\": 2, \"color\": \"orange\"}, {\"number\": 2, \"color\": \"blue\"}, {\"number\": 3, \"color\": \"orange\"}, {\"number\": 3, \"color\": \"blue\"} ]", "output": "Ok, thanks @dfan for telling me I was looking in the wrong place. I've got it now: from itertools import product def my_product(inp): return (dict(zip(inp.keys(), values)) for values in product(*inp.values())) EDIT: after years more Python experience, I think a better solution is to accept kwargs rather than a dictionary of inputs; the call style is more analogous to that of the original itertools.product. Also I think writing a generator function, rather than a function that returns a generator expression, makes the code clearer. So: import itertools def product_dict(**kwargs): keys = kwargs.keys() for instance in itertools.product(*kwargs.values()): yield dict(zip(keys, instance)) and if you need to pass in a dict, list(product_dict(**mydict)). The one notable change using kwargs rather than an arbitrary input class is that it prevents the keys/values from being ordered, at least until Python 3.6."} +{"question_id": 54646709, "score": 93, "creation_date": 1549963626, "tags": ["python", "scikit-learn", "pipeline"], "instruction": "Sklearn Pipeline: Get feature names after OneHotEncode In ColumnTransformer\n\nI want to get feature names after I fit the pipeline. categorical_features = ['brand', 'category_name', 'sub_category'] categorical_transformer = Pipeline(steps=[ ('imputer', SimpleImputer(strategy='constant', fill_value='missing')), ('onehot', OneHotEncoder(handle_unknown='ignore'))]) numeric_features = ['num1', 'num2', 'num3', 'num4'] numeric_transformer = Pipeline(steps=[ ('imputer', SimpleImputer(strategy='median')), ('scaler', StandardScaler())]) preprocessor = ColumnTransformer( transformers=[ ('num', numeric_transformer, numeric_features), ('cat', categorical_transformer, categorical_features)]) Then clf = Pipeline(steps=[('preprocessor', preprocessor), ('regressor', GradientBoostingRegressor())]) After fitting with pandas dataframe, I can get feature importances from clf.steps[1][1].feature_importances_ and I tried clf.steps[0][1].get_feature_names() but I got an error AttributeError: Transformer num (type Pipeline) does not provide get_feature_names. 
How can I get feature names from this?", "output": "You can access the feature_names using the following snippet: clf.named_steps['preprocessor'].transformers_[1][1]\\ .named_steps['onehot'].get_feature_names(categorical_features) Using sklearn >= 0.21 version, we can make it even simpler: clf['preprocessor'].transformers_[1][1]\\ ['onehot'].get_feature_names(categorical_features) Reproducible example: import numpy as np import pandas as pd from sklearn.impute import SimpleImputer from sklearn.preprocessing import OneHotEncoder, StandardScaler from sklearn.pipeline import Pipeline from sklearn.compose import ColumnTransformer from sklearn.linear_model import LinearRegression df = pd.DataFrame({'brand': ['aaaa', 'asdfasdf', 'sadfds', 'NaN'], 'category': ['asdf', 'asfa', 'asdfas', 'as'], 'num1': [1, 1, 0, 0], 'target': [0.2, 0.11, 1.34, 1.123]}) numeric_features = ['num1'] numeric_transformer = Pipeline(steps=[ ('imputer', SimpleImputer(strategy='median')), ('scaler', StandardScaler())]) categorical_features = ['brand', 'category'] categorical_transformer = Pipeline(steps=[ ('imputer', SimpleImputer(strategy='constant', fill_value='missing')), ('onehot', OneHotEncoder(handle_unknown='ignore'))]) preprocessor = ColumnTransformer( transformers=[ ('num', numeric_transformer, numeric_features), ('cat', categorical_transformer, categorical_features)]) clf = Pipeline(steps=[('preprocessor', preprocessor), ('regressor', LinearRegression())]) clf.fit(df.drop('target', 1), df['target']) clf.named_steps['preprocessor'].transformers_[1][1]\\ .named_steps['onehot'].get_feature_names(categorical_features) # ['brand_NaN' 'brand_aaaa' 'brand_asdfasdf' 'brand_sadfds' 'category_as' # 'category_asdf' 'category_asdfas' 'category_asfa']"} +{"question_id": 290424, "score": 93, "creation_date": 1226676600, "tags": ["python"], "instruction": "Filter a Python list by predicate\n\nI would want to do something like: >>> lst = [1, 2, 3, 4, 5] >>> lst.find(lambda x: x % 2 == 0) 2 >>> lst.findall(lambda x: x % 2 == 0) [2, 4] Is there anything nearing such behavior in Python's standard libraries? I know it's very easy to roll-your-own here, but I'm looking for a more standard way.", "output": "You can use the filter method: >>> lst = [1, 2, 3, 4, 5] >>> filter(lambda x: x % 2 == 0, lst) [2, 4] or a list comprehension: >>> lst = [1, 2, 3, 4, 5] >>> [x for x in lst if x %2 == 0] [2, 4] to find a single element, you could try: >>> next(x for x in lst if x % 2 == 0) 2 Though that would throw an exception if nothing matches, so you'd probably want to wrap it in a try/catch. The () brackets make this a generator expression rather than a list comprehension. Personally though I'd just use the regular filter/comprehension and take the first element (if there is one). These raise an exception if nothing is found filter(lambda x: x % 2 == 0, lst)[0] [x for x in lst if x %2 == 0][0] These return empty lists filter(lambda x: x % 2 == 0, lst)[:1] [x for x in lst if x %2 == 0][:1]"} +{"question_id": 16105485, "score": 93, "creation_date": 1366377205, "tags": ["python", "django"], "instruction": "Unsupported operand type(s) for *: 'float' and 'Decimal'\n\nI'm just playing around learning classes functions, etc., so I decided to create a simple function that should give me a tax amount. This is my code so far. class VAT_calculator: \"\"\" A set of methods for VAT calculations. 
\"\"\" def __init__(self, amount=None): self.amount = amount self.VAT = decimal.Decimal('0.095') def initialize(self): self.amount = 0 def total_with_VAT(self): \"\"\" Returns amount with VAT added. \"\"\" if not self.amount: msg = u\"Cannot add VAT if no amount is passed!\" raise ValidationError(msg) return (self.amount * self.VAT).quantize(self.amount, rounding=decimal.ROUND_UP) My issue is I'm getting the following error: unsupported operand type(s) for *: 'float' and 'Decimal' Why should this not work?", "output": "It seems like self.VAT is of decimal.Decimal type and self.amount is a float, a combination you can't multiply. Try decimal.Decimal(self.amount) * self.VAT instead."} +{"question_id": 29155310, "score": 93, "creation_date": 1426800334, "tags": ["python", "pandas", "dataframe", "lambda", "apply"], "instruction": "Trouble passing in lambda to apply for pandas DataFrame: \"TypeError: <lambda>() got an unexpected keyword argument 'axis'\"\n\nI'm trying to apply a function to all rows of a pandas DataFrame (actually just one column in that DataFrame). I'm sure this is a syntax error, but I'm not sure what I'm doing wrong: df['col'].apply(lambda x, y:(x - y).total_seconds(), args=[d1], axis=1) The col column contains a bunch of datetime.datetime objects and d1 is the earliest of them. I'm trying to get a column of the total number of seconds for each of the rows. I keep getting the following error TypeError: <lambda>() got an unexpected keyword argument 'axis' I don't understand why axis is getting passed to my lambda function. I've also tried doing def diff_dates(d1, d2): return (d1-d2).total_seconds() df['col'].apply(diff_dates, args=[d1], axis=1) And I get the same error.", "output": "Note there is no axis param for a Series.apply call, as distinct from a DataFrame.apply call. Series.apply(func, convert_dtype=True, args=(), **kwds) ... func : function convert_dtype : boolean, default True Try to find better dtype for elementwise function results. If False, leave as dtype=object args : tuple Positional arguments to pass to function in addition to the value **kwds Additional keyword arguments passed to func. There is one for a DataFrame, but it's unclear how you're expecting this to work when you're calling it on a Series yet expecting it to work on a row."} +{"question_id": 16483625, "score": 93, "creation_date": 1368193132, "tags": ["python", "generator", "yield"], "instruction": "for x in y(): how does this work?\n\nI was looking for code to spin a cursor in the terminal and found this. I was wondering what was happening in the code. In particular for c in spinning_cursor(): I've never seen this syntax. Is it because I am returning one element from a generator at a time with yield, and this is assigned to c? Any other examples of this for x in y() use? import sys import time def spinning_cursor(): cursor='/-\\|' i = 0 while 1: yield cursor[i] i = (i + 1) % len(cursor) for c in spinning_cursor(): sys.stdout.write(c) sys.stdout.flush() time.sleep(0.1) sys.stdout.write('\\b')", "output": "Using yield turns a function into a generator. A generator is a specialized type of iterator. for always loops over iterables, taking each element in turn and assigning it to the name(s) you listed. spinning_cursor() returns a generator, and the code inside spinning_cursor() doesn't actually run until you start iterating over the generator. 
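As a tiny standalone illustration of that last point (gen here is just a toy example, separate from the spinner code): def gen(): print('body is running') yield 1 g = gen() # prints nothing yet; you only get a generator object next(g) # now 'body is running' is printed and 1 is yielded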
Iterating over a generator means the code in the function is executed until it comes across a yield statement, at which point the result of the expression there is returned as the next value and execution is paused again. The for loop does just that, it'll call the equivalent of next() on the generator, until the generator signals it is done by raising StopIteration (which happens when the function returns). Each return value of next() is assigned, in turn, to c. You can see this by creating the generator on in the Python prompt: >>> def spinning_cursor(): ... cursor='/-\\|' ... i = 0 ... while 1: ... yield cursor[i] ... i = (i + 1) % len(cursor) ... >>> sc = spinning_cursor() >>> sc >>> next(sc) '/' >>> next(sc) '-' >>> next(sc) '\\\\' >>> next(sc) '|' This specific generator never returns, so StopIteration is never raised and the for loop will go on forever unless you kill the script. A far more boring (but more efficient) alternative would be to use itertools.cycle(): from itertools import cycle spinning_cursor = cycle('/-\\|')"} +{"question_id": 1236593, "score": 92, "creation_date": 1249523418, "tags": ["python", "django", "list", "django-templates"], "instruction": "Comma separated lists in django templates\n\nIf fruits is the list ['apples', 'oranges', 'pears'], is there a quick way using django template tags to produce \"apples, oranges, and pears\"? I know it's not difficult to do this using a loop and {% if counter.last %} statements, but because I'm going to use this repeatedly I think I'm going to have to learn how to write custom tags filters, and I don't want to reinvent the wheel if it's already been done. As an extension, my attempts to drop the Oxford Comma (ie return \"apples, oranges and pears\") are even messier.", "output": "Here's the filter I wrote to solve my problem (it doesn't include the Oxford comma) def join_with_commas(obj_list): \"\"\"Takes a list of objects and returns their string representations, separated by commas and with 'and' between the penultimate and final items For example, for a list of fruit objects: [, , ] -> 'apples, oranges and pears' \"\"\" if not obj_list: return \"\" l=len(obj_list) if l==1: return u\"%s\" % obj_list[0] else: return \", \".join(str(obj) for obj in obj_list[:l-1]) \\ + \" and \" + str(obj_list[l-1]) To use it in the template: {{ fruits|join_with_commas }}"} +{"question_id": 10551117, "score": 92, "creation_date": 1336738397, "tags": ["python", "argparse"], "instruction": "Setting options from environment variables when using argparse\n\nI have a script which has certain options that can either be passed on the command line, or from environment variables. The CLI should take precedence if both are present, and an error occur if neither are set. I could check that the option is assigned after parsing, but I prefer to let argparse to do the heavy lifting and be responsible for displaying the usage statement if parsing fails. I have come up with a couple of alternative approaches to this (which I will post below as answers so they can be discussed separately) but they feel pretty kludgey to me and I think that I am missing something. Is there an accepted \"best\" way of doing this? 
(Edit to make the desired behaviour clear when both the CLI option and environment variable are unset)", "output": "I use this pattern frequently enough that I have packaged a simple action class to handle it: import argparse import os class EnvDefault(argparse.Action): def __init__(self, envvar, required=True, default=None, **kwargs): if envvar: if envvar in os.environ: default = os.environ[envvar] if required and default: required = False super(EnvDefault, self).__init__(default=default, required=required, **kwargs) def __call__(self, parser, namespace, values, option_string=None): setattr(namespace, self.dest, values) I can then call this from my code with: import argparse from envdefault import EnvDefault parser=argparse.ArgumentParser() parser.add_argument( \"-u\", \"--url\", action=EnvDefault, envvar='URL', help=\"Specify the URL to process (can also be specified using URL environment variable)\") args=parser.parse_args()"} +{"question_id": 15173225, "score": 92, "creation_date": 1362218789, "tags": ["python", "string", "nlp", "similarity", "cosine-similarity"], "instruction": "Calculate cosine similarity given 2 sentence strings\n\nFrom Python: tf-idf-cosine: to find document similarity , it is possible to calculate document similarity using tf-idf cosine. Without importing external libraries, are that any ways to calculate cosine similarity between 2 strings? s1 = \"This is a foo bar sentence .\" s2 = \"This sentence is similar to a foo bar sentence .\" s3 = \"What is this string ? Totally not related to the other two lines .\" cosine_sim(s1, s2) # Should give high cosine similarity cosine_sim(s1, s3) # Shouldn't give high cosine similarity value cosine_sim(s2, s3) # Shouldn't give high cosine similarity value", "output": "A simple pure-Python implementation would be: import math import re from collections import Counter WORD = re.compile(r\"\\w+\") def get_cosine(vec1, vec2): intersection = set(vec1.keys()) & set(vec2.keys()) numerator = sum([vec1[x] * vec2[x] for x in intersection]) sum1 = sum([vec1[x] ** 2 for x in list(vec1.keys())]) sum2 = sum([vec2[x] ** 2 for x in list(vec2.keys())]) denominator = math.sqrt(sum1) * math.sqrt(sum2) if not denominator: return 0.0 else: return float(numerator) / denominator def text_to_vector(text): words = WORD.findall(text) return Counter(words) text1 = \"This is a foo bar sentence .\" text2 = \"This sentence is similar to a foo bar sentence .\" vector1 = text_to_vector(text1) vector2 = text_to_vector(text2) cosine = get_cosine(vector1, vector2) print(\"Cosine:\", cosine) Prints: Cosine: 0.861640436855 The cosine formula used here is described here. This does not include weighting of the words by tf-idf, but in order to use tf-idf, you need to have a reasonably large corpus from which to estimate tfidf weights. You can also develop it further, by using a more sophisticated way to extract words from a piece of text, stem or lemmatise it, etc."} +{"question_id": 12960574, "score": 92, "creation_date": 1350582284, "tags": ["python", "pandas"], "instruction": "pandas read_csv index_col=None not working with delimiters at the end of each line\n\nI am going through the 'Python for Data Analysis' book and having trouble in the 'Example: 2012 Federal Election Commision Database' section reading the data to a DataFrame. The trouble is that one of the columns of data is always being set as the index column, even when the index_col argument is set to None. Here is the link to the data : http://www.fec.gov/disclosurep/PDownload.do. 
Here is the loading code (to save time in the checking, I set the nrows=10): import pandas as pd fec = pd.read_csv('P00000001-ALL.csv',nrows=10,index_col=None) To keep it short I am excluding the data column outputs, but here is my output (please not the Index values): In [20]: fec Out[20]: Index: 10 entries, C00410118 to C00410118 Data columns: ... dtypes: float64(4), int64(3), object(11) And here is the book's output (again with data columns excluded): In [13]: fec = read_csv('P00000001-ALL.csv') In [14]: fec Out[14]: Int64Index: 1001731 entries, 0 to 1001730 ... dtypes: float64(1), int64(1), object(14) The Index values in my output are actually the first column of data in the file, which is then moving all the rest of the data to the left by one. Would anyone know how to prevent this column of data to be listed as an index? I would like to have the index just +1 increasing integers. I am fairly new to python and pandas, so I apologize for any inconvenience. Thanks.", "output": "Quick Answer Use index_col=False instead of index_col=None when you have delimiters at the end of each line to turn off index column inference and discard the last column. More Detail After looking at the data, there is a comma at the end of each line. And this quote (the documentation has been edited since the time this post was created): index_col: column number, column name, or list of column numbers/names, to use as the index (row labels) of the resulting DataFrame. By default, it will number the rows without using any column, unless there is one more data column than there are headers, in which case the first column is taken as the index. from the documentation shows that pandas believes you have n headers and n+1 data columns and is treating the first column as the index. EDIT 10/20/2014 - More information I found another valuable entry that is specifically about trailing limiters and how to simply ignore them: If a file has one more column of data than the number of column names, the first column will be used as the DataFrame\u2019s row names: ... Ordinarily, you can achieve this behavior using the index_col option. There are some exception cases when a file has been prepared with delimiters at the end of each data line, confusing the parser. To explicitly disable the index column inference and discard the last column, pass index_col=False: ..."} +{"question_id": 2408560, "score": 92, "creation_date": 1268133425, "tags": ["python", "windows", "input"], "instruction": "Non-blocking console input?\n\nI am trying to make a simple IRC client in Python (as kind of a project while I learn the language). I have a loop that I use to receive and parse what the IRC server sends me, but if I use raw_input to input stuff, it stops the loop dead in its tracks until I input something (obviously). How can I input something without the loop stopping? (I don't think I need to post the code, I just want to input something without the while 1: loop stopping.) 
I'm on Windows.", "output": "For Windows, console only, use the msvcrt module: import msvcrt num = 0 done = False while not done: print(num) num += 1 if msvcrt.kbhit(): print \"you pressed\",msvcrt.getch(),\"so now i will quit\" done = True For Linux, this article describes the following solution, it requires the termios module: import sys import select import tty import termios def isData(): return select.select([sys.stdin], [], [], 0) == ([sys.stdin], [], []) old_settings = termios.tcgetattr(sys.stdin) try: tty.setcbreak(sys.stdin.fileno()) i = 0 while 1: print(i) i += 1 if isData(): c = sys.stdin.read(1) if c == '\\x1b': # x1b is ESC break finally: termios.tcsetattr(sys.stdin, termios.TCSADRAIN, old_settings) For cross platform, or in case you want a GUI as well, you can use Pygame: import pygame from pygame.locals import * def display(str): text = font.render(str, True, (255, 255, 255), (159, 182, 205)) textRect = text.get_rect() textRect.centerx = screen.get_rect().centerx textRect.centery = screen.get_rect().centery screen.blit(text, textRect) pygame.display.update() pygame.init() screen = pygame.display.set_mode( (640,480) ) pygame.display.set_caption('Python numbers') screen.fill((159, 182, 205)) font = pygame.font.Font(None, 17) num = 0 done = False while not done: display( str(num) ) num += 1 pygame.event.pump() keys = pygame.key.get_pressed() if keys[K_ESCAPE]: done = True"} +{"question_id": 15018372, "score": 92, "creation_date": 1361515307, "tags": ["python", "selenium"], "instruction": "How to take partial screenshot with Selenium WebDriver in python?\n\nI have searched a lot for this but couldn't find a solution. Here's a similar question with a possible solution in java. Is there a similar solution in Python?", "output": "Other than Selenium, this example also requires the PIL Imaging library. Sometimes this is put in as one of the standard libraries and sometimes it's not, but if you don't have it you can install it with pip install Pillow from selenium import webdriver from PIL import Image from io import BytesIO fox = webdriver.Firefox() fox.get('http://stackoverflow.com/') # now that we have the preliminary stuff out of the way time to get that image :D element = fox.find_element_by_id('hlogo') # find part of the page you want image of location = element.location size = element.size png = fox.get_screenshot_as_png() # saves screenshot of entire page fox.quit() im = Image.open(BytesIO(png)) # uses PIL library to open image in memory left = location['x'] top = location['y'] right = location['x'] + size['width'] bottom = location['y'] + size['height'] im = im.crop((left, top, right, bottom)) # defines crop points im.save('screenshot.png') # saves new cropped image and finally the output is... the Stackoverflow logo!!! 
Now of course this would be overkill for just grabbing a static image but if your want to grab something that requires Javascript to get to this could be a viable solution."} +{"question_id": 430079, "score": 92, "creation_date": 1231542136, "tags": ["python", "string", "split"], "instruction": "How to split strings into text and number?\n\nI'd like to split strings like these 'foofo21' 'bar432' 'foobar12345' into ['foofo', '21'] ['bar', '432'] ['foobar', '12345'] Does somebody know an easy and simple way to do this in python?", "output": "I would approach this by using re.match in the following way: import re match = re.match(r\"([a-z]+)([0-9]+)\", 'foofo21', re.I) if match: items = match.groups() print(items) >> (\"foofo\", \"21\")"} +{"question_id": 34266159, "score": 92, "creation_date": 1450093258, "tags": ["python", "linux", "debian", "pip"], "instruction": "How to see sizes of installed pip packages?\n\nOn Linux Debian, how can I list all installed python pip packages and the size (amount of disk space used) that each one takes up?", "output": "Go to the package site to find the size e.g. https://pypi.python.org/pypi/pip/json Then expand releases, find the version, and look up the size (in bytes)."} +{"question_id": 51716916, "score": 92, "creation_date": 1533598410, "tags": ["python"], "instruction": "Built-in module to calculate the least common multiple\n\nI am currently using a function that accepts two numbers and uses a loop to find the least common multiple of those numbers, def lcm(x, y): \"\"\"This function takes two integers and returns the L.C.M.\"\"\" # Choose the greater number if x > y: greater = x else: greater = y while(True): if((greater % x == 0) and (greater % y == 0)): lcm = greater break greater += 1 return lcm Is there a built-in module in Python that does it instead of writing a custom function?", "output": "In Python 3.8 and earlier There is no such thing built into the stdlib. However, there is a Greatest Common Divisor function in the math library. (For Python 3.4 or 2.7, it's buried in fractions instead.) And writing an LCM on top of a GCD is pretty trivial: def lcm(a, b): return abs(a*b) // math.gcd(a, b) Or, if you're using NumPy, it's come with an lcm function for quite some time now."} +{"question_id": 10678441, "score": 92, "creation_date": 1337560964, "tags": ["python"], "instruction": "Flipping the boolean values in a list Python\n\nI have a boolean list in Python mylist = [True , True, False,...] which I want to change to the logical opposite [False, False, True , ...] Is there an inbuilt way to do this in Python (something like a call not(mylist) ) without a hand-written loop to reverse the elements?", "output": "It's easy with list comprehension: mylist = [True , True, False] [not elem for elem in mylist] yields [False, False, True]"} +{"question_id": 45529507, "score": 92, "creation_date": 1502003312, "tags": ["python", "pandas", "windows", "csv"], "instruction": "UnicodeDecodeError: 'utf-8' codec can't decode byte 0x96 in position 35: invalid start byte\n\nI am trying to read a CSV file using the below script. Past = pd.read_csv(\"C:/Users/Admin/Desktop/Python/Past.csv\", encoding='utf-8') But, I get the error \"UnicodeDecodeError: 'utf-8' codec can't decode byte 0x96 in position 35: invalid start byte\" Where is the issue? I used encoding in the script and thought it would resolve the error.", "output": "This happens because you chose the wrong encoding. 
Since you are working on a Windows machine, just replacing Past = pd.read_csv(\"C:/Users/.../Past.csv\", encoding='utf-8') with Past = pd.read_csv(\"C:/Users/.../Past.csv\", encoding='cp1252') should solve the problem."} +{"question_id": 44405708, "score": 92, "creation_date": 1496818895, "tags": ["python", "flask"], "instruction": "Flask doesn't print to console\n\nI'm new to flask, and I'm trying to add print info to debug server side code. When launch my flask app with debug=True, i can't get any info print to console I tried to use logging instead, but no success. So how to debug flask program with console. @app.route('/getJSONResult', methods=['GET', 'POST']) def getJSONResult(): if request.method == 'POST': uut = request.form['uut'] notes = request.form['notes'] temperature = request.form['temperature'] logging.info(\"enter getJSONReuslt\") print('enter getJSONReuslt') filter_by_query = {k: v for k, v in { 'uut': uut, 'notes': notes, 'temperature': temperature}.items() if v != \"\"} s = session.query(UUT_TEST_INFO).filter_by(**filter_by_query).first() return jsonify(s.serialize) if __name__ == '__main__': app.secret_key = ''.join(random.choice( string.ascii_uppercase + string.digits) for x in range(32)) app.debug = True app.run(host='127.0.0.1', port=5000) > 127.0.0.1 - - [07/Jun/2017 15:20:48] \"GET /qyer HTTP/1.1\" 200 - > 127.0.0.1 - - [07/Jun/2017 15:20:48] \"GET /static/css/bootstrap.min.css HTTP/1.1\" 200 - > 127.0.0.1 - - [07/Jun/2017 15:20:48] \"GET /static/js/bootstrap.min.js HTTP/1.1\" 200 - > 127.0.0.1 - - [07/Jun/2017 15:20:51] \"GET /static/css/bootstrap.min.css.map HTTP/1.1\" 200 - > 127.0.0.1 - - [07/Jun/2017 15:21:58] \"POST /getJSONResult HTTP/1.1\" 500 - I fixed server side 500 error issue, now request get 200 code, and console displays following info $ python project.py INFO:werkzeug: * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit) INFO:werkzeug: * Restarting with stat WARNING:werkzeug: * Debugger is active! INFO:werkzeug: * Debugger pin code: 158-624-607 INFO:werkzeug:127.0.0.1 - - [08/Jun/2017 11:33:33] \"GET /qyer HTTP/1.1\" 200 - INFO:root:Enter getJSONResult INFO:werkzeug:127.0.0.1 - - [08/Jun/2017 11:33:43] \"POST /getJSONResult HTTP/1.1\" 200 - Still no info from print command", "output": "Try this and see if it helps: For python2: from __future__ import print_function import sys print('This is error output', file=sys.stderr) print('This is standard output', file=sys.stdout) For python3 you don't need to import from future print_function: import sys print('This is error output', file=sys.stderr) print('This is standard output', file=sys.stdout) See if it helps to print to console."} +{"question_id": 69875125, "score": 92, "creation_date": 1636307629, "tags": ["python", "selenium", "selenium-webdriver", "webdriver", "findelement"], "instruction": "find_element_by_* commands are deprecated in Selenium\n\nWhen starting the function def run(driver_path): driver = webdriver.Chrome(executable_path=driver_path) driver.get('https://tproger.ru/quiz/real-programmer/') button = driver.find_element_by_class_name(\"quiz_button\") button.click() run(driver_path) I'm getting errors like these: :6: DeprecationWarning: executable_path has been deprecated, please pass in a Service object driver = webdriver.Chrome(executable_path=driver_path) :10: DeprecationWarning: find_element_by_* commands are deprecated. Please use find_element() instead button = driver.find_element_by_class_name(\"quiz_button\") but I can't understand why. 
I'm using WebDriver at the latest version for my Chrome's version. I don't why I get find_element_by_* commands are deprecated when it's in the documentation that the command exists.", "output": "This error message: DeprecationWarning: find_element_by_* commands are deprecated. Please use find_element() instead implies that the find_element_by_* commands are deprecated in the latest Selenium Python libraries. As AutomatedTester mentions: This DeprecationWarning was the reflection of the changes made with respect to the decision to simplify the APIs across the languages and this does that. Solution Instead you have to use find_element(). As an example: You have to include the following imports from selenium.webdriver.common.by import By Using class_name: button = driver.find_element_by_class_name(\"quiz_button\") Needs be replaced with: button = driver.find_element(By.CLASS_NAME, \"quiz_button\") Along the lines of, you also have to change the following: Using id: element = find_element_by_id(\"element_id\") Needs be replaced with: element = driver.find_element(By.ID, \"element_id\") Using name: element = find_element_by_name(\"element_name\") Needs be replaced with: element = driver.find_element(By.NAME, \"element_name\") Using link_text: element = find_element_by_link_text(\"element_link_text\") Needs be replaced with: element = driver.find_element(By.LINK_TEXT, \"element_link_text\") Using partial_link_text: element = find_element_by_partial_link_text(\"element_partial_link_text\") Needs be replaced with: element = driver.find_element(By.PARTIAL_LINK_TEXT, \"element_partial_link_text\") Using tag_name: element = find_element_by_tag_name(\"element_tag_name\") Needs be replaced with: element = driver.find_element(By.TAG_NAME, \"element_tag_name\") Using css_selector: element = find_element_by_css_selector(\"element_css_selector\") Needs be replaced with: element = driver.find_element(By.CSS_SELECTOR, \"element_css_selector\") Using xpath: element = find_element_by_xpath(\"element_xpath\") Needs be replaced with: element = driver.find_element(By.XPATH, \"element_xpath\") Note: If you are searching and replacing to implement the above changes, you will need to do the same thing for find_elements_*, i.e., the plural forms of find_element_*. You may also find this upgrade guide useful as it covers some other unrelated changes you may need to make when upgrading: Upgrade to Selenium 4"} +{"question_id": 4914008, "score": 92, "creation_date": 1297004081, "tags": ["python", "parsing"], "instruction": "How to efficiently parse fixed width files?\n\nI am trying to find an efficient way of parsing files that holds fixed width lines. For example, the first 20 characters represent a column, from 21:30 another one and so on. Assuming that the line holds 100 characters, what would be an efficient way to parse a line into several components? I could use string slicing per line, but it's a little bit ugly if the line is big. Are there any other fast methods?", "output": "Using the Python standard library's struct module would be fairly easy as well as fairly fast since it's written in C. The code below how it use it. It also allows columns of characters to be skipped by specifying negative values for the number of characters in the field. import struct fieldwidths = (2, -10, 24) fmtstring = ' '.join('{}{}'.format(abs(fw), 'x' if fw < 0 else 's') for fw in fieldwidths) # Convert Unicode input to bytes and the result back to Unicode string. unpack = struct.Struct(fmtstring).unpack_from # Alias. 
parse = lambda line: tuple(s.decode() for s in unpack(line.encode())) print('fmtstring: {!r}, record size: {} chars'.format(fmtstring, struct.calcsize(fmtstring))) line = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789\\n' fields = parse(line) print('fields: {}'.format(fields)) Output: fmtstring: '2s 10x 24s', recsize: 36 chars fields: ('AB', 'MNOPQRSTUVWXYZ0123456789') Here's a way to do it with string slices, as you were considering but were concerned that it might get too ugly. It is kind of complicated and speedwise it's about the same as the version based the struct module \u2014 although I have an idea about how it could be sped up (which might make the extra complexity worthwhile). See update below on that topic. from itertools import zip_longest from itertools import accumulate def make_parser(fieldwidths): cuts = tuple(cut for cut in accumulate(abs(fw) for fw in fieldwidths)) pads = tuple(fw < 0 for fw in fieldwidths) # bool values for padding fields flds = tuple(zip_longest(pads, (0,)+cuts, cuts))[:-1] # ignore final one parse = lambda line: tuple(line[i:j] for pad, i, j in flds if not pad) # Optional informational function attributes. parse.size = sum(abs(fw) for fw in fieldwidths) parse.fmtstring = ' '.join('{}{}'.format(abs(fw), 'x' if fw < 0 else 's') for fw in fieldwidths) return parse line = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789\\n' fieldwidths = (2, -10, 24) # negative widths represent ignored padding fields parse = make_parser(fieldwidths) fields = parse(line) print('format: {!r}, rec size: {} chars'.format(parse.fmtstring, parse.size)) print('fields: {}'.format(fields)) Output: format: '2s 10x 24s', rec size: 36 chars fields: ('AB', 'MNOPQRSTUVWXYZ0123456789') Update As I suspected, there is a way of making the string-slicing version of the code faster \u2014 which in Python 2.7 make it about the same speed as the version using struct, but in Python 3.x make it 233% faster (as well as the un-optimized version of itself which is about the same speed as the struct version). What the version presented above does is define a lambda function that's primarily a comprehension that generates the limits of a bunch of slices at runtime. parse = lambda line: tuple(line[i:j] for pad, i, j in flds if not pad) Which is equivalent to a statement like the following, depending on the values of i and j in the for loop, to something looking like this: parse = lambda line: tuple(line[0:2], line[12:36], line[36:51], ...) However the latter executes more than twice as fast since the slice boundaries are all constants. Fortunately it relatively easy to convert and \"compile\" the former into the latter using the built-in eval() function: def make_parser(fieldwidths): cuts = tuple(cut for cut in accumulate(abs(fw) for fw in fieldwidths)) pads = tuple(fw < 0 for fw in fieldwidths) # bool flags for padding fields flds = tuple(zip_longest(pads, (0,)+cuts, cuts))[:-1] # ignore final one slcs = ', '.join('line[{}:{}]'.format(i, j) for pad, i, j in flds if not pad) parse = eval('lambda line: ({})\\n'.format(slcs)) # Create and compile source code. # Optional informational function attributes. parse.size = sum(abs(fw) for fw in fieldwidths) parse.fmtstring = ' '.join('{}{}'.format(abs(fw), 'x' if fw < 0 else 's') for fw in fieldwidths) return parse"} +{"question_id": 18401015, "score": 92, "creation_date": 1377255053, "tags": ["python", "mongodb", "connection", "pymongo"], "instruction": "How to close a mongodb python connection?\n\nI'm doing a python script that writes some data to a mongodb. 
I need to close the connection and free some resources, when finishing. How is that done in Python?", "output": "Use close() method on your MongoClient instance: client = pymongo.MongoClient() # some code here client.close() Cleanup client resources and disconnect from MongoDB. End all server sessions created by this client by sending one or more endSessions commands. Close all sockets in the connection pools and stop the monitor threads."} +{"question_id": 3762881, "score": 92, "creation_date": 1285090458, "tags": ["python", "http-redirect", "stdout", "stdin"], "instruction": "How do I check if stdin has some data?\n\nIn Python, how do you check if sys.stdin has data or not? I found that os.isatty(0) can not only check if stdin is connected to a TTY device, but also if there is data available. But if someone uses code such as sys.stdin = cStringIO.StringIO(\"ddd\") and after that uses os.isatty(0), it still returns True. What do I need to do to check if stdin has data?", "output": "On Unix systems you can do the following: import sys import select if select.select([sys.stdin, ], [], [], 0.0)[0]: print(\"Have data!\") else: print(\"No data\") On Windows the select module may only be used with sockets though so you'd need to use an alternative mechanism."} +{"question_id": 7150998, "score": 92, "creation_date": 1314033229, "tags": ["python"], "instruction": "Where is module being imported from?\n\nAssuming I have two Python modules and path_b is in the import path: # file: path_b/my_module.py print \"I was imported from ???\" #file: path_a/app.py import my_module Is it possible to see where the module is imported from? I want an output like \"I was imported from path_a/app.py\", if I start app.py (because I need the file name). Edit: For better understanding; I could write: # file: path_b/my_module.py def foo(file): print \"I was imported from %s\" % file #file: path_a/app.py import my_module my_module.foo(__file__) So the output would be: $> python path_app.py I was imported from path_a/app.py", "output": "There may be an easier way to do this, but this works: import inspect print(inspect.getframeinfo(inspect.getouterframes(inspect.currentframe())[1][0])[0]) Note that the path will be printed relative to the current working directory if it's a parent directory of the script location."} +{"question_id": 54898578, "score": 92, "creation_date": 1551245233, "tags": ["python", "unit-testing", "testing", "integration-testing", "pytest"], "instruction": "How to keep Unit tests and Integrations tests separate in pytest\n\nAccording to Wikipedia and various articles it is best practice to divide tests into Unit tests (run first) and Integration tests (run second), where Unit tests are typically very fast and should be run with every build in a CI environment, however Integration tests take longer to run and should be more of a daily run. Is there a way to divide these in pytest? Most projects don't seem to have multiple test folders, so is there a way to make sure I only run Unit, Integration or both according to the situtation (CI vs daily builds)? When calculating test coverage, I assume I will have to run both. Am I going about this the right way in attempting to divide the tests into these categories? Is there a good example somewhere of a project that has done this?", "output": "Yes, you can mark tests with the pytest.mark decorator. 
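(A side note, not from the original answer: newer pytest releases warn about marks they do not recognize, so it can help to register the custom mark once; a minimal conftest.py sketch, assuming the tests live next to it:)
# conftest.py
def pytest_configure(config):
    # register the custom mark so `pytest -m integtest` runs without an unknown-mark warning
    config.addinivalue_line('markers', 'integtest: marks a test as an integration test')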
Example: def unit_test_1(): # assert here def unit_test_2(): # assert here @pytest.mark.integtest def integration_test(): # assert here Now, from the command line, you can run pytest -m \"not integtest\" for only the unit tests, pytest -m integtest for only the integration test and plain pytest for all. (You can also decorate your unit tests with pytest.mark.unit if you want, but I find that slightly tedious/verbose) See the documentation for more information."} +{"question_id": 41471578, "score": 92, "creation_date": 1483557256, "tags": ["python", "visual-studio-code"], "instruction": "Visual Studio Code - How to add multiple paths to python path?\n\nI am experimenting with Visual Studio Code and so far, it seems great (light, fast, etc). I am trying to get one of my Python apps running that uses a virtual environment, but also uses libraries that are not in the site-package of my virtual environment. I know that in settings.json, I can specify a python.pythonPath setting, which I have done and is pointing to a virtual environment. I also know that I can add additional paths to python.autoComplete.extraPaths, where thus far I am adding the external libraries. The problem is, when I am debugging, it's failing because it's not finding the libraries specified in python.autoComplete.extraPaths. Is there another setting that must be used for this?", "output": "This worked for me:- in your launch.json profile entry, specify a new entry called \"env\", and set PYTHONPATH yourself. \"configurations\": [ { \"name\": \"Python\", \"type\": \"python\", \"stopOnEntry\": false, \"request\": \"launch\", \"pythonPath\": \"${config.python.pythonPath}\", \"program\": \"${file}\", \"cwd\": \"${workspaceRoot}\", \"debugOptions\": [ \"WaitOnAbnormalExit\", \"WaitOnNormalExit\", \"RedirectOutput\" ], \"env\": { \"PYTHONPATH\": \"/path/a:path/b\" } } ]"} +{"question_id": 20242862, "score": 92, "creation_date": 1385556226, "tags": ["python", "unit-testing", "mocking", "nose"], "instruction": "Why python mock patch doesn't work?\n\nI have two files spike.py class T1(object): def foo(self, afd): return \"foo\" def get_foo(self): return self.foo(1) def bar(): return \"bar\" test_spike.py: from unittest import TestCase import unittest from mock import patch, MagicMock from spike import T1, bar class TestStuff(TestCase): @patch('spike.T1.foo', MagicMock(return_value='patched')) def test_foo(self): foo = T1().get_foo() self.assertEqual('patched', foo) @patch('spike.bar') def test_bar(self, mock_obj): mock_obj.return_value = 'patched' bar = bar() self.assertEqual('patched', bar) if __name__ == \"__main__\": unittest.main() When I run python test_spike.py, the first test case would pass, but the second would fail. and I switch to use nosetests test_spike.py, then both two are failed. I don't understand how this happened? These cases supposed to pass all.", "output": "For test_foo you are not using patch correctly. You should be using it like this: class TestFoo(TestCase): @patch.object(T1, 'foo', MagicMock(return_value='patched')) def test_foo(self): foo = T1().get_foo() self.assertEqual('patched', foo) that gives me: nosetests test_spike.py . ---------------------------------------------------------------------- Ran 1 test in 0.000s OK Now the second example does not work because you import bar function (get a reference to it) and then try to mock it. When you mock something you can't change what your variables hold (reference to original function). 
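A minimal sketch of that point, reusing the spike module from the question (nothing is assumed beyond what is defined above):
from mock import patch
import spike
from spike import bar    # this import copies a reference to the original function into this module

with patch('spike.bar', return_value='patched'):
    print(spike.bar())    # 'patched' - the lookup goes through the spike module, which was patched
    print(bar())          # 'bar'     - the local name still points at the original function
In other words, patch the name where it is looked up, not only where it was defined.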
To fix this you should use @falsetru suggested method like: from unittest import TestCase import unittest from mock import patch import spike class TestFoo(TestCase): @patch('spike.bar') def test_bar(self, mock_obj): mock_obj.return_value = 'patched' value = spike.bar() self.assertEqual('patched', value) if __name__ == \"__main__\": unittest.main() this gives me: python test_spike.py . ---------------------------------------------------------------------- Ran 1 test in 0.000s OK But when I try to run it with nose I get: nosetests test_spike.py F ====================================================================== FAIL: test_bar (src.test_spike.TestFoo) ---------------------------------------------------------------------- Traceback (most recent call last): File \"/home/zilva/envs/test/local/lib/python2.7/site-packages/mock/mock.py\", line 1305, in patched return func(*args, **keywargs) File \"/home/zilva/git/test/src/test_spike.py\", line 11, in test_bar self.assertEqual('patched', value) AssertionError: 'patched' != 'bar' ---------------------------------------------------------------------- Ran 1 test in 0.001s FAILED (failures=1) This happends because I am patching not in the right place. My directory structure is: test/ \u2514\u2500\u2500 src/ \u251c\u2500\u2500 spike.py \u251c\u2500\u2500 test_spike.py \u2514\u2500\u2500 __init__.py and I run tests from src directory so I should be patching using path from project root directory like: @patch('src.spike.bar') and this would give me: nosetests test_spike.py . ---------------------------------------------------------------------- Ran 1 test in 0.000s OK or if I am at test directory: nosetests src/test_spike.py . ---------------------------------------------------------------------- Ran 1 test in 0.001s OK"} +{"question_id": 1816483, "score": 92, "creation_date": 1259522406, "tags": ["python", "inheritance", "subclass", "slots"], "instruction": "How does inheritance of __slots__ in subclasses actually work?\n\nIn the Python data model reference section on slots there is a list of notes on using __slots__. I am thoroughly confused by the 1st and 6th items, because they seem to be contradicting each other. First item: When inheriting from a class without __slots__, the __dict__ attribute of that class will always be accessible, so a __slots__ definition in the subclass is meaningless. Sixth item: The action of a __slots__ declaration is limited to the class where it is defined. As a result, subclasses will have a __dict__ unless they also define __slots__ (which must only contain names of any additional slots). It seems to me these items could be better worded or shown through code, but I have been trying to wrap my head around this and am still coming up confused. I do understand how __slots__ are supposed to be used, and I am trying to get a better grasp on how they work. The Question: Can someone please explain to me in plain language what the conditions are for inheritance of slots when subclassing? (Simple code examples would be helpful but not necessary.)", "output": "As others have mentioned, the sole reason for defining __slots__ is to save some memory, when you have simple objects with a predefined set of attributes and don't want each to carry around a dictionary. This is meaningful only for classes of which you plan to have many instances, of course. The savings may not be immediately obvious -- consider...: >>> class NoSlots(object): pass ... >>> n = NoSlots() >>> class WithSlots(object): __slots__ = 'a', 'b', 'c' ... 
>>> w = WithSlots() >>> n.a = n.b = n.c = 23 >>> w.a = w.b = w.c = 23 >>> sys.getsizeof(n) 32 >>> sys.getsizeof(w) 36 From this, it would seem the with-slots size is larger than the no-slots size! But that's a mistake, because sys.getsizeof doesn't consider \"object contents\" such as the dictionary: >>> sys.getsizeof(n.__dict__) 140 Since the dict alone takes 140 bytes, clearly the \"32 bytes\" object n is alleged to take are not considering all that's involved in each instance. You can do a better job with third-party extensions such as pympler: >>> import pympler.asizeof >>> pympler.asizeof.asizeof(w) 96 >>> pympler.asizeof.asizeof(n) 288 This shows much more clearly the memory footprint that's saved by __slots__: for a simple object such as this case, it's a bit less than 200 bytes, almost 2/3 of the object's overall footprint. Now, since these days a megabyte more or less doesn't really matter all that much to most applications, this also tells you that __slots__ is not worth the bother if you're going to have just a few thousand instances around at a time -- however, for millions of instances, it sure does make a very important difference. You can also get a microscopic speedup (partly due to better cache use for small objects with __slots__): $ python -mtimeit -s'class S(object): __slots__=\"x\",\"y\"' -s's=S(); s.x=s.y=23' 's.x' 10000000 loops, best of 3: 0.37 usec per loop $ python -mtimeit -s'class S(object): pass' -s's=S(); s.x=s.y=23' 's.x' 1000000 loops, best of 3: 0.604 usec per loop $ python -mtimeit -s'class S(object): __slots__=\"x\",\"y\"' -s's=S(); s.x=s.y=23' 's.x=45' 1000000 loops, best of 3: 0.28 usec per loop $ python -mtimeit -s'class S(object): pass' -s's=S(); s.x=s.y=23' 's.x=45' 1000000 loops, best of 3: 0.332 usec per loop but this is somewhat dependent on Python version (these are the numbers I measure repeatably with 2.5; with 2.6, I see a larger relative advantage to __slots__ for setting an attribute, but none at all, indeed a tiny disadvantage, for getting it). Now, regarding inheritance: for an instance to be dict-less, all classes up its inheritance chain must also have dict-less instances. Classes with dict-less instances are those which define __slots__, plus most built-in types (built-in types whose instances have dicts are those on whose instances you can set arbitrary attributes, such as functions). Overlaps in slot names are not forbidden, but they're useless and waste some memory, since slots are inherited: >>> class A(object): __slots__='a' ... >>> class AB(A): __slots__='b' ... >>> ab=AB() >>> ab.a = ab.b = 23 >>> as you see, you can set attribute a on an AB instance -- AB itself only defines slot b, but it inherits slot a from A. Repeating the inherited slot isn't forbidden: >>> class ABRed(A): __slots__='a','b' ... 
>>> abr=ABRed() >>> abr.a = abr.b = 23 but does waste a little memory: >>> pympler.asizeof.asizeof(ab) 88 >>> pympler.asizeof.asizeof(abr) 96 so there's really no reason to do it."} +{"question_id": 28517937, "score": 92, "creation_date": 1423933200, "tags": ["python", "opencv", "numpy", "homebrew", "anaconda"], "instruction": "How can I upgrade NumPy?\n\nWhen I installed OpenCV using Homebrew (brew), I got this problem whenever I run this command to test python -c \"import cv2\": RuntimeError: module compiled against API version 9 but this version of numpy is 6 Traceback (most recent call last): File \"\", line 1, in ImportError: numpy.core.multiarray failed to import I tried to upgrade NumPy, but this is confusing: >>> import numpy >>> print numpy.__version__ 1.6.1 When I run brew to upgrade NumPy, I got this problem: brew install -u numpy Warning: numpy-1.9.1 already installed When I uninstalled it: sudo pip install numpy Requirement already satisfied (use --upgrade to upgrade): numpy in ./anaconda/lib/python2.7/site-packages I have followed this question and deleted Anaconda from my mac. pip install numpy Requirement already satisfied (use --upgrade to upgrade): numpy in /Library/Python/2.7/site-packages But nothing have changed. How can I link the NumPy version to OpenCV?", "output": "Because we have two NumPy installations in the system. One is installed by Homebrew and the second is installed by pip. So in order to solve the problem, we need to delete one and use the default NumPy install by OpenCV. Check the path, import numpy print numpy.__path__ and manually delete it using rm."} +{"question_id": 1406145, "score": 92, "creation_date": 1252598202, "tags": ["python", "winapi", "tkinter", "tk-toolkit"], "instruction": "How do I get rid of Python Tkinter root window?\n\nDo you know a smart way to hide or in any other way get rid of the root window that appears, opened by Tk()? I would like just to use a normal dialog. Should I skip the dialog and put all my components in the root window? Is it possible or desirable? Or is there a smarter solution?", "output": "Probably the vast majority of of tk-based applications place all the components in the default root window. This is the most convenient way to do it since it already exists. Choosing to hide the default window and create your own is a perfectly fine thing to do, though it requires just a tiny bit of extra work. To answer your specific question about how to hide it, use the withdraw method of the root window: import Tkinter as tk root = tk.Tk() root.withdraw() If you want to make the window visible again, call the deiconify (or wm_deiconify) method. root.deiconify() Once you are done with the dialog, you can destroy the root window along with all other tkinter widgets with the destroy method: root.destroy()"} +{"question_id": 13669252, "score": 92, "creation_date": 1354451081, "tags": ["python", "lambda", "higher-order-functions"], "instruction": "What is `lambda` in Python code? How does it work with `key` arguments to `sorted`, `sum` etc.?\n\nI saw some examples using built-in functions like sorted, sum etc. that use key=lambda. What does lambda mean here? How does it work? For the general computer science concept of a lambda, see What is a lambda (function)?. See also How are lambdas useful? 
for some discussion that no longer meets site standards but which you may find useful.", "output": "A lambda is an anonymous function: >>> f = lambda: 'foo' >>> print(f()) foo It is often used in functions such as sorted() that take a callable as a parameter (often the key keyword parameter). You could provide an existing function instead of a lambda there too, as long as it is a callable object. Take the sorted() function as an example. It'll return the given iterable in sorted order: >>> sorted(['Some', 'words', 'sort', 'differently']) ['Some', 'differently', 'sort', 'words'] but that sorts uppercased words before words that are lowercased. Using the key keyword you can change each entry so it'll be sorted differently. We could lowercase all the words before sorting, for example: >>> def lowercased(word): return word.lower() ... >>> lowercased('Some') 'some' >>> sorted(['Some', 'words', 'sort', 'differently'], key=lowercased) ['differently', 'Some', 'sort', 'words'] We had to create a separate function for that, we could not inline the def lowercased() line into the sorted() expression: >>> sorted(['Some', 'words', 'sort', 'differently'], key=def lowercased(word): return word.lower()) File \"\", line 1 sorted(['Some', 'words', 'sort', 'differently'], key=def lowercased(word): return word.lower()) ^ SyntaxError: invalid syntax A lambda on the other hand, can be specified directly, inline in the sorted() expression: >>> sorted(['Some', 'words', 'sort', 'differently'], key=lambda word: word.lower()) ['differently', 'Some', 'sort', 'words'] Lambdas are limited to one expression only, the result of which is the return value. There are loads of places in the Python library, including built-in functions, that take a callable as keyword or positional argument. There are too many to name here, and they often play a different role."} +{"question_id": 2170610, "score": 92, "creation_date": 1264914167, "tags": ["python", "elementtree"], "instruction": "access ElementTree node parent node\n\nI am using the builtin Python ElementTree module. It is straightforward to access children, but what about parent or sibling nodes? - can this be done efficiently without traversing the entire tree?", "output": "There's no direct support in the form of a parent attribute, but you can perhaps use the patterns described here to achieve the desired effect. The following one-liner is suggested (updated from the linked-to post to Python 3.8) to create a child-to-parent mapping for a whole tree, using the method xml.etree.ElementTree.Element.iter: parent_map = {c: p for p in tree.iter() for c in p}"} +{"question_id": 19523277, "score": 92, "creation_date": 1382458985, "tags": ["python", "pandas", "group-by", "rename"], "instruction": "Renaming Column Names in Pandas Groupby function\n\nQ1) I want to do a groupby, SQL-style aggregation and rename the output column: Example dataset: ID Region count 0 100 Asia 2 1 101 Europe 3 2 102 US 1 3 103 Africa 5 4 100 Russia 5 5 101 Australia 7 6 102 US 8 7 104 Asia 10 8 105 Europe 11 9 110 Africa 23 I want to group the observations of this dataset by ID and Region and summing the count for each group. So I used something like this... >>> print(df.groupby(['ID','Region'],as_index=False).count().sum()) ID Region count 0 100 Asia 2 1 100 Russia 5 2 101 Australia 7 3 101 Europe 3 4 102 US 9 5 103 Africa 5 6 104 Asia 10 7 105 Europe 11 8 110 Africa 23 On using as_index=False I am able to get \"SQL-Like\" output. 
My problem is that I am unable to rename the aggregate variable count here. So in SQL if wanted to do the above thing I would do something like this: select ID, Region, sum(count) as Total_Numbers from df group by ID, Region order by ID, Region As we see, it's very easy for me to rename the aggregate variable count to Total_Numbers in SQL. I wanted to do the same thing in Pandas but unable to find such an option in group-by function. Can somebody help? The second question (more of an observation): Q2) Is it possible to directly use column names in Pandas dataframe functions without enclosing them in quotes? I understand that the variable names are strings, so have to be inside quotes, but I see if use them outside dataframe function and as an attribute we don't require them to be inside quotes. Like df.ID.sum() etc. It's only when we use it in a DataFrame function like df.sort() or df.groupby we have to use it inside quotes. This is actually a bit of pain as in SQL or in SAS or other languages we simply use the variable name without quoting them. Any suggestion on this? Kindly reply to both questions (Q1 is the main, Q2 more of an opinion).", "output": "For the first question I think answer would be: .rename(columns= {'count':'Total_Numbers'}) or .columns = ['ID', 'Region', 'Total_Numbers'] if it is Pandas Series we have .rename(\"Total_Numbers\") As for second one I'd say the answer would be no. It's possible to use it like 'df.ID' because of python datamodel: Attribute references are translated to lookups in this dictionary, e.g., m.x is equivalent to m.dict[\"x\"]"} +{"question_id": 53781634, "score": 92, "creation_date": 1544797827, "tags": ["python", "pandas", "dataframe", "group-by", "aggregation"], "instruction": "Aggregation in Pandas\n\nHow can I perform aggregation with Pandas? No DataFrame after aggregation! What happened? How can I aggregate mainly strings columns (to lists, tuples, strings with separator)? How can I aggregate counts? How can I create a new column filled by aggregated values? I've seen these recurring questions asking about various faces of the pandas aggregate functionality. Most of the information regarding aggregation and its various use cases today is fragmented across dozens of badly worded, unsearchable posts. The aim here is to collate some of the more important points for posterity. This Q&A is meant to be the next instalment in a series of helpful user-guides: How can I pivot a dataframe? What are the 'levels', 'keys', and names arguments for in Pandas' concat function? How do I operate on a DataFrame with a Series for every column? Pandas Merging 101 Please note that this post is not meant to be a replacement for the documentation about aggregation and about groupby, so please read that as well!", "output": "Question 1 How can I perform aggregation with Pandas? Expanded aggregation documentation. Aggregating functions are the ones that reduce the dimension of the returned objects. It means output Series/DataFrame have less or same rows like original. 
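As a quick illustration of that reduction (a tiny made-up frame, separate from the df used in the rest of this answer):
import pandas as pd

tiny = pd.DataFrame({'g': ['a', 'a', 'b'], 'v': [10, 20, 5]})   # 3 rows
print(tiny.groupby('g')['v'].sum())                             # 2 rows: a -> 30, b -> 5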
Some common aggregating functions are tabulated below: Function Description mean() Compute mean of groups sum() Compute sum of group values size() Compute group sizes count() Compute count of group std() Standard deviation of groups var() Compute variance of groups sem() Standard error of the mean of groups describe() Generates descriptive statistics first() Compute first of group values last() Compute last of group values nth() Take nth value, or a subset if n is a list min() Compute min of group values max() Compute max of group values np.random.seed(123) df = pd.DataFrame({'A' : ['foo', 'foo', 'bar', 'foo', 'bar', 'foo'], 'B' : ['one', 'two', 'three','two', 'two', 'one'], 'C' : np.random.randint(5, size=6), 'D' : np.random.randint(5, size=6), 'E' : np.random.randint(5, size=6)}) print (df) A B C D E 0 foo one 2 3 0 1 foo two 4 1 0 2 bar three 2 1 1 3 foo two 1 0 3 4 bar two 3 1 4 5 foo one 2 1 0 Aggregation by filtered columns and Cython implemented functions: df1 = df.groupby(['A', 'B'], as_index=False)['C'].sum() print (df1) A B C 0 bar three 2 1 bar two 3 2 foo one 4 3 foo two 5 An aggregate function is used for all columns without being specified in the groupby function, here the A, B columns: df2 = df.groupby(['A', 'B'], as_index=False).sum() print (df2) A B C D E 0 bar three 2 1 1 1 bar two 3 1 4 2 foo one 4 4 0 3 foo two 5 1 3 You can also specify only some columns used for aggregation in a list after the groupby function: df3 = df.groupby(['A', 'B'], as_index=False)['C','D'].sum() print (df3) A B C D 0 bar three 2 1 1 bar two 3 1 2 foo one 4 4 3 foo two 5 1 Same results by using function DataFrameGroupBy.agg: df1 = df.groupby(['A', 'B'], as_index=False)['C'].agg('sum') print (df1) A B C 0 bar three 2 1 bar two 3 2 foo one 4 3 foo two 5 df2 = df.groupby(['A', 'B'], as_index=False).agg('sum') print (df2) A B C D E 0 bar three 2 1 1 1 bar two 3 1 4 2 foo one 4 4 0 3 foo two 5 1 3 For multiple functions applied for one column use a list of tuples - names of new columns and aggregated functions: df4 = (df.groupby(['A', 'B'])['C'] .agg([('average','mean'),('total','sum')]) .reset_index()) print (df4) A B average total 0 bar three 2.0 2 1 bar two 3.0 3 2 foo one 2.0 4 3 foo two 2.5 5 If want to pass multiple functions is possible pass list of tuples: df5 = (df.groupby(['A', 'B']) .agg([('average','mean'),('total','sum')])) print (df5) C D E average total average total average total A B bar three 2.0 2 1.0 1 1.0 1 two 3.0 3 1.0 1 4.0 4 foo one 2.0 4 2.0 4 0.0 0 two 2.5 5 0.5 1 1.5 3 Then get MultiIndex in columns: print (df5.columns) MultiIndex(levels=[['C', 'D', 'E'], ['average', 'total']], labels=[[0, 0, 1, 1, 2, 2], [0, 1, 0, 1, 0, 1]]) And for converting to columns, flattening MultiIndex use map with join: df5.columns = df5.columns.map('_'.join) df5 = df5.reset_index() print (df5) A B C_average C_total D_average D_total E_average E_total 0 bar three 2.0 2 1.0 1 1.0 1 1 bar two 3.0 3 1.0 1 4.0 4 2 foo one 2.0 4 2.0 4 0.0 0 3 foo two 2.5 5 0.5 1 1.5 3 Another solution is pass list of aggregate functions, then flatten MultiIndex and for another columns names use str.replace: df5 = df.groupby(['A', 'B']).agg(['mean','sum']) df5.columns = (df5.columns.map('_'.join) .str.replace('sum','total') .str.replace('mean','average')) df5 = df5.reset_index() print (df5) A B C_average C_total D_average D_total E_average E_total 0 bar three 2.0 2 1.0 1 1.0 1 1 bar two 3.0 3 1.0 1 4.0 4 2 foo one 2.0 4 2.0 4 0.0 0 3 foo two 2.5 5 0.5 1 1.5 3 If want specified each column with aggregated function 
separately pass dictionary: df6 = (df.groupby(['A', 'B'], as_index=False) .agg({'C':'sum','D':'mean'}) .rename(columns={'C':'C_total', 'D':'D_average'})) print (df6) A B C_total D_average 0 bar three 2 1.0 1 bar two 3 1.0 2 foo one 4 2.0 3 foo two 5 0.5 You can pass custom function too: def func(x): return x.iat[0] + x.iat[-1] df7 = (df.groupby(['A', 'B'], as_index=False) .agg({'C':'sum','D': func}) .rename(columns={'C':'C_total', 'D':'D_sum_first_and_last'})) print (df7) A B C_total D_sum_first_and_last 0 bar three 2 2 1 bar two 3 2 2 foo one 4 4 3 foo two 5 1 Question 2 No DataFrame after aggregation! What happened? Aggregation by two or more columns: df1 = df.groupby(['A', 'B'])['C'].sum() print (df1) A B bar three 2 two 3 foo one 4 two 5 Name: C, dtype: int32 First check the Index and type of a Pandas object: print (df1.index) MultiIndex(levels=[['bar', 'foo'], ['one', 'three', 'two']], labels=[[0, 0, 1, 1], [1, 2, 0, 2]], names=['A', 'B']) print (type(df1)) There are two solutions for how to get MultiIndex Series to columns: add parameter as_index=False df1 = df.groupby(['A', 'B'], as_index=False)['C'].sum() print (df1) A B C 0 bar three 2 1 bar two 3 2 foo one 4 3 foo two 5 use Series.reset_index: df1 = df.groupby(['A', 'B'])['C'].sum().reset_index() print (df1) A B C 0 bar three 2 1 bar two 3 2 foo one 4 3 foo two 5 If group by one column: df2 = df.groupby('A')['C'].sum() print (df2) A bar 5 foo 9 Name: C, dtype: int32 ... get Series with Index: print (df2.index) Index(['bar', 'foo'], dtype='object', name='A') print (type(df2)) And the solution is the same like in the MultiIndex Series: df2 = df.groupby('A', as_index=False)['C'].sum() print (df2) A C 0 bar 5 1 foo 9 df2 = df.groupby('A')['C'].sum().reset_index() print (df2) A C 0 bar 5 1 foo 9 Question 3 How can I aggregate mainly strings columns (to lists, tuples, strings with separator)? df = pd.DataFrame({'A' : ['a', 'c', 'b', 'b', 'a', 'c', 'b'], 'B' : ['one', 'two', 'three','two', 'two', 'one', 'three'], 'C' : ['three', 'one', 'two', 'two', 'three','two', 'one'], 'D' : [1,2,3,2,3,1,2]}) print (df) A B C D 0 a one three 1 1 c two one 2 2 b three two 3 3 b two two 2 4 a two three 3 5 c one two 1 6 b three one 2 Instead of an aggregation function, it is possible to pass list, tuple, set for converting the column: df1 = df.groupby('A')['B'].agg(list).reset_index() print (df1) A B 0 a [one, two] 1 b [three, two, three] 2 c [two, one] An alternative is use GroupBy.apply: df1 = df.groupby('A')['B'].apply(list).reset_index() print (df1) A B 0 a [one, two] 1 b [three, two, three] 2 c [two, one] For converting to strings with a separator, use .join only if it is a string column: df2 = df.groupby('A')['B'].agg(','.join).reset_index() print (df2) A B 0 a one,two 1 b three,two,three 2 c two,one If it is a numeric column, use a lambda function with astype for converting to strings: df3 = (df.groupby('A')['D'] .agg(lambda x: ','.join(x.astype(str))) .reset_index()) print (df3) A D 0 a 1,3 1 b 3,2,2 2 c 2,1 Another solution is converting to strings before groupby: df3 = (df.assign(D = df['D'].astype(str)) .groupby('A')['D'] .agg(','.join).reset_index()) print (df3) A D 0 a 1,3 1 b 3,2,2 2 c 2,1 For converting all columns, don't pass a list of column(s) after groupby. There isn't any column D, because automatic exclusion of 'nuisance' columns. It means all numeric columns are excluded. 
df4 = df.groupby('A').agg(','.join).reset_index() print (df4) A B C 0 a one,two three,three 1 b three,two,three two,two,one 2 c two,one one,two So it's necessary to convert all columns into strings, and then get all columns: df5 = (df.groupby('A') .agg(lambda x: ','.join(x.astype(str))) .reset_index()) print (df5) A B C D 0 a one,two three,three 1,3 1 b three,two,three two,two,one 3,2,2 2 c two,one one,two 2,1 Question 4 How can I aggregate counts? df = pd.DataFrame({'A' : ['a', 'c', 'b', 'b', 'a', 'c', 'b'], 'B' : ['one', 'two', 'three','two', 'two', 'one', 'three'], 'C' : ['three', np.nan, np.nan, 'two', 'three','two', 'one'], 'D' : [np.nan,2,3,2,3,np.nan,2]}) print (df) A B C D 0 a one three NaN 1 c two NaN 2.0 2 b three NaN 3.0 3 b two two 2.0 4 a two three 3.0 5 c one two NaN 6 b three one 2.0 Function GroupBy.size for size of each group: df1 = df.groupby('A').size().reset_index(name='COUNT') print (df1) A COUNT 0 a 2 1 b 3 2 c 2 Function GroupBy.count excludes missing values: df2 = df.groupby('A')['C'].count().reset_index(name='COUNT') print (df2) A COUNT 0 a 2 1 b 2 2 c 1 This function should be used for multiple columns for counting non-missing values: df3 = df.groupby('A').count().add_suffix('_COUNT').reset_index() print (df3) A B_COUNT C_COUNT D_COUNT 0 a 2 2 1 1 b 3 2 3 2 c 2 1 1 A related function is Series.value_counts. It returns the size of the object containing counts of unique values in descending order, so that the first element is the most frequently-occurring element. It excludes NaNs values by default. df4 = (df['A'].value_counts() .rename_axis('A') .reset_index(name='COUNT')) print (df4) A COUNT 0 b 3 1 a 2 2 c 2 If you want same output like using function groupby + size, add Series.sort_index: df5 = (df['A'].value_counts() .sort_index() .rename_axis('A') .reset_index(name='COUNT')) print (df5) A COUNT 0 a 2 1 b 3 2 c 2 Question 5 How can I create a new column filled by aggregated values? Method GroupBy.transform returns an object that is indexed the same (same size) as the one being grouped. See the Pandas documentation for more information. np.random.seed(123) df = pd.DataFrame({'A' : ['foo', 'foo', 'bar', 'foo', 'bar', 'foo'], 'B' : ['one', 'two', 'three','two', 'two', 'one'], 'C' : np.random.randint(5, size=6), 'D' : np.random.randint(5, size=6)}) print (df) A B C D 0 foo one 2 3 1 foo two 4 1 2 bar three 2 1 3 foo two 1 0 4 bar two 3 1 5 foo one 2 1 df['C1'] = df.groupby('A')['C'].transform('sum') df['C2'] = df.groupby(['A','B'])['C'].transform('sum') df[['C3','D3']] = df.groupby('A')['C','D'].transform('sum') df[['C4','D4']] = df.groupby(['A','B'])['C','D'].transform('sum') print (df) A B C D C1 C2 C3 D3 C4 D4 0 foo one 2 3 9 4 9 5 4 4 1 foo two 4 1 9 5 9 5 5 1 2 bar three 2 1 5 2 5 2 2 1 3 foo two 1 0 9 5 9 5 5 1 4 bar two 3 1 5 3 5 2 3 1 5 foo one 2 1 9 4 9 5 4 4"} +{"question_id": 13129618, "score": 92, "creation_date": 1351544462, "tags": ["python", "pandas", "numpy", "matplotlib"], "instruction": "Histogram values of a Pandas Series\n\nI have some values in a Python Pandas Series (type: pandas.core.series.Series) In [1]: series = pd.Series([0.0,950.0,-70.0,812.0,0.0,-90.0,0.0,0.0,-90.0,0.0,-64.0,208.0,0.0,-90.0,0.0,-80.0,0.0,0.0,-80.0,-48.0,840.0,-100.0,190.0,130.0,-100.0,-100.0,0.0,-50.0,0.0,-100.0,-100.0,0.0,-90.0,0.0,-90.0,-90.0,63.0,-90.0,0.0,0.0,-90.0,-80.0,0.0,]) In [2]: series.min() Out[2]: -100.0 In [3]: series.max() Out[3]: 950.0 I would like to get values of histogram (not necessary plotting histogram)... 
I just need to get the frequency for each interval. Let's say that my intervals are going from [-200; -150] to [950; 1000] so lower bounds are lwb = range(-200,1000,50) and upper bounds are upb = range(-150,1050,50) I don't know how to get frequency (the number of values that are inside each interval) now... I'm sure that defining lwb and upb is not necessary... but I don't know what function I should use to perform this! (after diving in Pandas doc, I think cut function can help me because it's a discretization problem... but I'm don't understand how to use it) After being able to do this, I will have a look at the way to display histogram (but that's an other problem)", "output": "You just need to use the histogram function of NumPy: import numpy as np count, division = np.histogram(series) where division is the automatically calculated border for your bins and count is the population inside each bin. If you need to fix a certain number of bins, you can use the argument bins and specify a number of bins, or give it directly the boundaries between each bin. count, division = np.histogram(series, bins = [-201,-149,949,1001]) to plot the results you can use the matplotlib function hist, but if you are working in pandas each Series has its own handle to the hist function, and you can give it the chosen binning: series.hist(bins=division) Edit: As mentioned by another poster, Pandas is built on top of NumPy. Since OP is explicitly using Pandas, we can do away with the additional import by accessing NumPy through Pandas: count, division = pd.np.histogram(series)"} +{"question_id": 16424091, "score": 92, "creation_date": 1367944203, "tags": ["python", "image", "tkinter", "tkinter-canvas"], "instruction": "Why does Tkinter image not show up if created in a function?\n\nThis code works: import tkinter root = tkinter.Tk() canvas = tkinter.Canvas(root) canvas.grid(row = 0, column = 0) photo = tkinter.PhotoImage(file = './test.gif') canvas.create_image(0, 0, image=photo) root.mainloop() It shows me the image. Now, this code compiles but it doesn't show me the image, and I don't know why, because it's the same code, in a class: import tkinter class Test: def __init__(self, master): canvas = tkinter.Canvas(master) canvas.grid(row = 0, column = 0) photo = tkinter.PhotoImage(file = './test.gif') canvas.create_image(0, 0, image=photo) root = tkinter.Tk() test = Test(root) root.mainloop()", "output": "The variable photo is a local variable which gets garbage collected after the class is instantiated. The solution involves saving a reference to the photo, for example: self.photo = tkinter.PhotoImage(...) If you do a Google search on \"tkinter image doesn't display\", the first result is this: Why do my Tkinter images not appear?"} +{"question_id": 14105452, "score": 92, "creation_date": 1356982586, "tags": ["python", "forms", "post", "flask", "bad-request"], "instruction": "What is the cause of the Bad Request Error when submitting form in Flask application?\n\nAfter reading many similar sounding problems and the relevant Flask docs, I cannot seem to figure out what is generating the following error upon submitting a form: 400 Bad Request The browser (or proxy) sent a request that this server could not understand. 
While the form always displays properly, the bad request happens when I submit an HTML form that ties to either of these functions: @app.route('/app/business', methods=['GET', 'POST']) def apply_business(): if request.method == 'POST': new_account = Business(name=request.form['name_field'], email=request.form['email_field'], account_type=\"business\", q1=request.form['q1_field'], q2=request.form['q2_field'], q3=request.form['q3_field'], q4=request.form['q4_field'], q5=request.form['q5_field'], q6=request.form['q6_field'], q7=request.form['q7_field'], account_status=\"pending\", time=datetime.datetime.utcnow()) db.session.add(new_account) db.session.commit() session['name'] = request.form['name_field'] return redirect(url_for('success')) return render_template('application.html', accounttype=\"business\") @app.route('/app/student', methods=['GET', 'POST']) def apply_student(): if request.method == 'POST': new_account = Student(name=request.form['name_field'], email=request.form['email_field'], account_type=\"student\", q1=request.form['q1_field'], q2=request.form['q2_field'], q3=request.form['q3_field'], q4=request.form['q4_field'], q5=request.form['q5_field'], q6=request.form['q6_field'], q7=request.form['q7_field'], q8=request.form['q8_field'], q9=request.form['q9_field'], q10=request.form['q10_field'], account_status=\"pending\", time=datetime.datetime.utcnow()) db.session.add(new_account) db.session.commit() session['name'] = request.form['name_field'] return redirect(url_for('success')) return render_template('application.html', accounttype=\"student\") The relevant part of HTML is apply {% if accounttype==\"business\" %} {% elif accounttype==\"student\" %} {% endif %}

Full Name:

Email Address:

... The problem for most people was not calling GET or POST, but I am doing just that in both functions, and I double checked to make sure I imported everything necessary, such as from flask import request. I also queried the database and confirmed that the additions from the form weren't added. In the Flask app, I was requesting form fields that were labeled slightly different in the HTML form. Keeping the names consistent is a must. More can be read at this question Form sending error, Flask", "output": "The solution was simple and uncovered in the comments. As addressed in this question, Form sending error, Flask, and pointed out by Sean Vieira, ...the issue is that Flask raises an HTTP error when it fails to find a key in the args and form dictionaries. What Flask assumes by default is that if you are asking for a particular key and it's not there then something got left out of the request and the entire request is invalid. In other words, if only one form element that you request in Python cannot be found in HTML, then the POST request is not valid and the error appears, in my case without any irregularities in the traceback. For me, it was a lack of consistency with spelling: in the HTML, I labeled various form inputs while in Python, when there was a POST called, I grab a nonexistent form with request.form['question1'] whereas, to be consistent with my HTML form names, it needed to be request.form['question1_field']"} +{"question_id": 45745661, "score": 92, "creation_date": 1503007707, "tags": ["python", "string"], "instruction": "lower() vs. casefold() in string matching and converting to lowercase\n\nHow do I do a case-insensitive string comparison? From what I understood from Google and the link above that both functions: lower() and casefold() will convert the string to lowercase, but casefold() will convert even the caseless letters such as the \u00df in German to ss. All of that about Greek letters, but my question in general: are there any other differences? which one is better to convert to lowercase? which one is better to check the matching strings? Part 2: firstString = \"der Flu\u00df\" secondString = \"der Fluss\" # \u00df is equivalent to ss if firstString.casefold() == secondString.casefold(): print('The strings are equal.') else: print('The strings are not equal.') In the example above should I use: lower() # the result is not equal which make sense to me Or: casefold() # which \u00df is ss and result is the # strings are equal. (since I am a beginner that still does not # make sense to me. I see different strings).", "output": "TL;DR Converting to Lowercase -> lower() Caseless String matching/comparison -> casefold() casefold() is a text normalization function like lower() that is specifically designed to remove upper- or lower-case distinctions for the purposes of comparison. It is another form of normalizing text that may initially appear to be very similar to lower() because generally, the results are the same. As of Unicode 13.0.0, only ~300 of ~150,000 characters produced differing results when passed through lower() and casefold(). @dlukes' answer has the code to identify the characters that generate those differing results. To answer your other two questions: use lower() when you specifically want to ensure a character is lowercase, like for presenting to users or persisting data use casefold() when you want to compare that result to another casefold-ed value. 
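Applied to the two strings from Part 2 of the question, the difference looks like this (a small sketch):
first_string = 'der Fluß'
second_string = 'der Fluss'

print(first_string.lower() == second_string.lower())        # False - 'ß'.lower() is still 'ß'
print(first_string.casefold() == second_string.casefold())  # True  - 'ß'.casefold() becomes 'ss'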
Other Material I suggest you take a closer look into what case folding actually is, so here's a good start: W3 Case Folding Wiki Another source: Elastic.co Case Folding Edit: I just recently found another very good related answer to a slightly different question here on SO (doing a case-insensitive string comparison) Performance Using this snippet, you can get a sense for the performance between the two: import sys from timeit import timeit unicode_codepoints = tuple(map(chr, range(sys.maxunicode))) def compute_lower(): return tuple(codepoint.lower() for codepoint in unicode_codepoints) def compute_casefold(): return tuple(codepoint.casefold() for codepoint in unicode_codepoints) timer_repeat = 1000 print(f\"time to compute lower on unicode namespace: {timeit(compute_lower, number = timer_repeat) / timer_repeat} seconds\") print(f\"time to compute casefold on unicode namespace: {timeit(compute_casefold, number = timer_repeat) / timer_repeat} seconds\") print(f\"number of distinct characters from lower: {len(set(compute_lower()))}\") print(f\"number of distinct characters from casefold: {len(set(compute_casefold()))}\") Running this, you'll get the results that the two are overwhelmingly the same in both performance and the number of distinct characters returned time to compute lower on unicode namespace: 0.137255663 seconds time to compute casefold on unicode namespace: 0.136321374 seconds number of distinct characters from lower: 1112719 number of distinct characters from casefold: 1112694 If you run the numbers, that means it takes about 1.6e-07 seconds to run the computation on a single character for either function, so there isn't a performance benefit either way."} +{"question_id": 15004772, "score": 92, "creation_date": 1361456954, "tags": ["python", "iterator", "python-itertools"], "instruction": "What is the difference between chain and chain.from_iterable in itertools?\n\nI could not find any valid example on the internet where I can see the difference between them and why to choose one over the other.", "output": "The first takes 0 or more arguments, each an iterable, the second one takes one argument which is expected to produce the iterables: from itertools import chain chain(list1, list2, list3) iterables = [list1, list2, list3] chain.from_iterable(iterables) but iterables can be any iterator that yields the iterables: def gen_iterables(): for i in range(10): yield range(i) itertools.chain.from_iterable(gen_iterables()) Using the second form is usually a case of convenience, but because it loops over the input iterables lazily, it is also the only way you can chain an infinite number of finite iterators: def gen_iterables(): while True: for i in range(5, 10): yield range(i) chain.from_iterable(gen_iterables()) The above example will give you a iterable that yields a cyclic pattern of numbers that will never stop, but will never consume more memory than what a single range() call requires."} +{"question_id": 1829216, "score": 92, "creation_date": 1259704574, "tags": ["python", "django", "django-models", "oop"], "instruction": "How can I programmatically obtain the max_length of a Django model field?\n\nSay I have a Django class something like this: class Person(models.Model): name = models.CharField(max_length=50) # ... How can I programatically obtain the max_length value for the name field?", "output": "Person._meta.get_field('name').max_length will give you this value. But having to use _meta suggests this is something you shouldn't do in normal usage. 
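For illustration, a minimal sketch of how that lookup might be used (the helper below is hypothetical, not part of the question's code):
def clip_name(value):
    # assumes the Person model from the question is importable in this scope
    max_len = Person._meta.get_field('name').max_length   # 50 for the model above
    return value[:max_len]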
Edit: as Carl pointed out, this naming is misleading and it does seem quite acceptable to use it: http://www.b-list.org/weblog/2007/nov/04/working-models/ Read more at Django Docs: https://docs.djangoproject.com/en/dev/ref/models/meta/#django.db.models.options.Options.get_field"} +{"question_id": 75804599, "score": 92, "creation_date": 1679420110, "tags": ["python", "openai-api", "chatgpt-api", "gpt-3", "gpt-4"], "instruction": "OpenAI API: How do I count tokens before(!) I send an API request?\n\nOpenAI's text models have a context length, e.g.: Curie has a context length of 2049 tokens. They provide max_tokens and stop parameters to control the length of the generated sequence. Therefore the generation stops either when stop token is obtained, or max_tokens is reached. The issue is: when generating a text, I don't know how many tokens my prompt contains. Since I do not know that, I cannot set max_tokens = 2049 - number_tokens_in_prompt. This prevents me from generating text dynamically for a wide range of text in terms of their length. What I need is to continue generating until the stop token. My questions are: How can I count the number of tokens in Python API so that I will set max_tokens parameter accordingly? Is there a way to set max_tokens to the max cap so that I won't need to count the number of prompt tokens?", "output": "How do I count tokens before(!) I send an API request? As stated in the official OpenAI article: To further explore tokenization, you can use our interactive Tokenizer tool, which allows you to calculate the number of tokens and see how text is broken into tokens. Alternatively, if you'd like to tokenize text programmatically, use tiktoken as a fast BPE tokenizer specifically used for OpenAI models. How does a tokenizer work? A tokenizer can split the text string into a list of tokens, as stated in the official OpenAI example on counting tokens with tiktoken: tiktoken is a fast open-source tokenizer by OpenAI. Given a text string (e.g., \"tiktoken is great!\") and an encoding (e.g., \"cl100k_base\"), a tokenizer can split the text string into a list of tokens (e.g., [\"t\", \"ik\", \"token\", \" is\", \" great\", \"!\"]). Splitting text strings into tokens is useful because GPT models see text in the form of tokens. Knowing how many tokens are in a text string can tell you: whether the string is too long for a text model to process and how much an OpenAI API call costs (as usage is priced by token). Which encodings does OpenAI use for its models? As of April 2024, tiktoken supports 2 encodings used by OpenAI models (source 1, source 2): Encoding name OpenAI models o200k_base \u2022 GPT-4o models (gpt-4o) cl100k_base \u2022 GPT-4 models (gpt-4)\u2022 GPT-3.5 Turbo models (gpt-3.5-turbo)\u2022 GPT Base models (davinci-002, babbage-002)\u2022 Embeddings models (text-embedding-ada-002, text-embedding-3-large, text-embedding-3-small)\u2022 Fine-tuned models (ft:gpt-4, ft:gpt-3.5-turbo, ft:davinci-002, ft:babbage-002) Note: The p50k_base and r50k_base encodings were used for models that are deprecated as of April 2024. What tokenizer libraries are out there? Official OpenAI libraries: Python: tiktoken 3rd-party libraries: Python: GPT2TokenizerFast Node.js: tiktoken, gpt4-tokenizer, gpt3-tokenizer, gpt-3-encoder .NET / C#: tryAGI.Tiktoken, SharpToken, TiktokenSharp, GPT Tokenizer Java: jtokkit, gpt2-tokenizer-java PHP: GPT-3-Encoder-PHP How do I use tiktoken? 
Install or upgrade tiktoken: pip install --upgrade tiktoken Write the code to count tokens, where you have two options. OPTION 1: Search in the table above for the correct encoding for a given OpenAI model If you run get_tokens_1.py, you'll get the following output: 9 get_tokens_1.py import tiktoken def num_tokens_from_string(string: str, encoding_name: str) -> int: encoding = tiktoken.get_encoding(encoding_name) num_tokens = len(encoding.encode(string)) return num_tokens print(num_tokens_from_string(\"Hello world, let's test tiktoken.\", \"cl100k_base\")) OPTION 2: Use tiktoken.encoding_for_model() to automatically load the correct encoding for a given OpenAI model If you run get_tokens_2.py, you'll get the following output: 9 get_tokens_2.py import tiktoken def num_tokens_from_string(string: str, encoding_name: str) -> int: encoding = tiktoken.encoding_for_model(encoding_name) num_tokens = len(encoding.encode(string)) return num_tokens print(num_tokens_from_string(\"Hello world, let's test tiktoken.\", \"gpt-3.5-turbo\")) Note: If you take a careful look at the usage field in the OpenAI API response, you'll see that it reports 10 tokens used for an identical message. That's 1 token more than tiktoken. I still haven't figured out why. I tested this in the past. As @Jota mentioned in the comment below, there still seems to be a mismatch between the token usage reported by the OpenAI API response and tiktoken."} +{"question_id": 41027315, "score": 92, "creation_date": 1481144554, "tags": ["python", "apache-spark", "dataframe", "pyspark", "apache-spark-sql"], "instruction": "Pyspark: Split multiple array columns into rows\n\nI have a dataframe which has one row, and several columns. Some of the columns are single values, and others are lists. All list columns are the same length. I want to split each list column into a separate row, while keeping any non-list column as is. Sample DF: from pyspark import Row from pyspark.sql import SQLContext from pyspark.sql.functions import explode sqlc = SQLContext(sc) df = sqlc.createDataFrame([Row(a=1, b=[1,2,3],c=[7,8,9], d='foo')]) # +---+---------+---------+---+ # | a| b| c| d| # +---+---------+---------+---+ # | 1|[1, 2, 3]|[7, 8, 9]|foo| # +---+---------+---------+---+ What I want: +---+---+----+------+ | a| b| c | d | +---+---+----+------+ | 1| 1| 7 | foo | | 1| 2| 8 | foo | | 1| 3| 9 | foo | +---+---+----+------+ If I only had one list column, this would be easy by just doing an explode: df_exploded = df.withColumn('b', explode('b')) # >>> df_exploded.show() # +---+---+---------+---+ # | a| b| c| d| # +---+---+---------+---+ # | 1| 1|[7, 8, 9]|foo| # | 1| 2|[7, 8, 9]|foo| # | 1| 3|[7, 8, 9]|foo| # +---+---+---------+---+ However, if I try to also explode the c column, I end up with a dataframe with a length the square of what I want: df_exploded_again = df_exploded.withColumn('c', explode('c')) # >>> df_exploded_again.show() # +---+---+---+---+ # | a| b| c| d| # +---+---+---+---+ # | 1| 1| 7|foo| # | 1| 1| 8|foo| # | 1| 1| 9|foo| # | 1| 2| 7|foo| # | 1| 2| 8|foo| # | 1| 2| 9|foo| # | 1| 3| 7|foo| # | 1| 3| 8|foo| # | 1| 3| 9|foo| # +---+---+---+---+ What I want is - for each column, take the nth element of the array in that column and add that to a new row. 
I've tried mapping an explode accross all columns in the dataframe, but that doesn't seem to work either: df_split = df.rdd.map(lambda col: df.withColumn(col, explode(col))).toDF()", "output": "Spark >= 2.4 You can replace zip_ udf with arrays_zip function from pyspark.sql.functions import arrays_zip, col, explode (df .withColumn(\"tmp\", arrays_zip(\"b\", \"c\")) .withColumn(\"tmp\", explode(\"tmp\")) .select(\"a\", col(\"tmp.b\"), col(\"tmp.c\"), \"d\")) Spark < 2.4 With DataFrames and UDF: from pyspark.sql.types import ArrayType, StructType, StructField, IntegerType from pyspark.sql.functions import col, udf, explode zip_ = udf( lambda x, y: list(zip(x, y)), ArrayType(StructType([ # Adjust types to reflect data types StructField(\"first\", IntegerType()), StructField(\"second\", IntegerType()) ])) ) (df .withColumn(\"tmp\", zip_(\"b\", \"c\")) # UDF output cannot be directly passed to explode .withColumn(\"tmp\", explode(\"tmp\")) .select(\"a\", col(\"tmp.first\").alias(\"b\"), col(\"tmp.second\").alias(\"c\"), \"d\")) With RDDs: (df .rdd .flatMap(lambda row: [(row.a, b, c, row.d) for b, c in zip(row.b, row.c)]) .toDF([\"a\", \"b\", \"c\", \"d\"])) Both solutions are inefficient due to Python communication overhead. If data size is fixed you can do something like this: from functools import reduce from pyspark.sql import DataFrame # Length of array n = 3 # For legacy Python you'll need a separate function # in place of method accessor reduce( DataFrame.unionAll, (df.select(\"a\", col(\"b\").getItem(i), col(\"c\").getItem(i), \"d\") for i in range(n)) ).toDF(\"a\", \"b\", \"c\", \"d\") or even: from pyspark.sql.functions import array, struct # SQL level zip of arrays of known size # followed by explode tmp = explode(array(*[ struct(col(\"b\").getItem(i).alias(\"b\"), col(\"c\").getItem(i).alias(\"c\")) for i in range(n) ])) (df .withColumn(\"tmp\", tmp) .select(\"a\", col(\"tmp\").getItem(\"b\"), col(\"tmp\").getItem(\"c\"), \"d\")) This should be significantly faster compared to UDF or RDD. Generalized to support an arbitrary number of columns: # This uses keyword only arguments # If you use legacy Python you'll have to change signature # Body of the function can stay the same def zip_and_explode(*colnames, n): return explode(array(*[ struct(*[col(c).getItem(i).alias(c) for c in colnames]) for i in range(n) ])) df.withColumn(\"tmp\", zip_and_explode(\"b\", \"c\", n=3))"} +{"question_id": 67112343, "score": 92, "creation_date": 1618504032, "tags": ["python", "path", "os.path", "pathlib"], "instruction": "pathlib.Path vs. os.path.join in Python\n\nWhen I need to define a file system path in my script, I use os.path.join to guarantee that the path will be consistent on different file systems: from os import path path_1 = path.join(\"/home\", \"test\", \"test.txt\") I also know that there is the pathlib module that basically does the same: from pathlib import Path path_2 = Path(\"/home\") / \"test\" / \"test.txt\" What is the difference between these two ways to handle paths? Which one is better?", "output": "pathlib is the more modern way since Python 3.4. The documentation for pathlib says that \"For low-level path manipulation on strings, you can also use the os.path module.\" It doesn't make much difference for joining paths, but other path commands are more convenient with pathlib compared to os.path. 
For example, to get the \"stem\" (filename without extension): os.path: splitext(basename(path))[0] pathlib: path.stem Also, you can use the same type of syntax (commas instead of slashes) to join paths with pathlib as well: path_2 = Path(\"/home\", \"test\", \"test.txt\")"} +{"question_id": 57715289, "score": 92, "creation_date": 1567100074, "tags": ["python", "ssl", "smtplib"], "instruction": "How to fix ssl.SSLError: [SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:1056)?\n\nI am trying to send an email with python, but it keeps saying ssl.SSLError: [SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:1056). Here is my code: server = smtplib.SMTP_SSL('smtp.mail.com', 587) server.login(\"something0@mail.com\", \"password\") server.sendmail( \"something0@mail.com\", \"something@mail.com\", \"email text\") server.quit() Do you know what is wrong?", "output": "The port for SSL is 465 and not 587, however when I used SSL the mail arrived to the junk mail. For me the thing that worked was to use TLS over regular SMTP instead of SMTP_SSL. Note that this is a secure method as TLS is also a cryptographic protocol (like SSL). import smtplib, ssl port = 587 # For starttls smtp_server = \"smtp.gmail.com\" sender_email = \"my@gmail.com\" receiver_email = \"your@gmail.com\" password = input(\"Type your password and press enter:\") message = \"\"\"\\ Subject: Hi there This message is sent from Python.\"\"\" context = ssl.create_default_context() with smtplib.SMTP(smtp_server, port) as server: server.ehlo() # Can be omitted server.starttls(context=context) server.ehlo() # Can be omitted server.login(sender_email, password) server.sendmail(sender_email, receiver_email, message) provided thanks to the real python tutorial."} +{"question_id": 43751455, "score": 91, "creation_date": 1493787823, "tags": ["python"], "instruction": "super(type, obj): obj must be an instance or subtype of type\n\nWhy do I get the following error, and how do I resolve it? TypeError: super(type, obj): obj must be an instance or subtype of type", "output": "Another way this error can occur is when you reload the module with the class in a Jupiter notebook. Easy solution is to restart the kernel. http://thomas-cokelaer.info/blog/2011/09/382/ Check out @Mike W's answer for more detail."} +{"question_id": 51233, "score": 91, "creation_date": 1220935135, "tags": ["python", "html"], "instruction": "How can I retrieve the page title of a webpage using Python?\n\nHow can I retrieve the page title of a webpage (title html tag) using Python?", "output": "I'll always use lxml for such tasks. You could use beautifulsoup as well. import lxml.html t = lxml.html.parse(url) print(t.find(\".//title\").text) EDIT based on comment: from urllib2 import urlopen from lxml.html import parse url = \"https://www.google.com\" page = urlopen(url) p = parse(page) print(p.find(\".//title\").text)"} +{"question_id": 5180365, "score": 91, "creation_date": 1299153178, "tags": ["python", "string"], "instruction": "Add commas into number string\n\nI have a value running through my program that puts out a number rounded to 2 decimal places at the end, like this: print (\"Total cost is: ${:0.2f}\".format(TotalAmount)) Is there a way to insert a comma value every 3 digits left of the decimal point? e.g. 
10000.00 becomes 10,000.00 or 1000000.00 becomes 1,000,000.00.", "output": "In Python 2.7 and 3.x, you can use the format syntax :, >>> total_amount = 10000 >>> print(\"{:,}\".format(total_amount)) 10,000 >>> print(\"Total cost is: ${:,.2f}\".format(total_amount)) Total cost is: $10,000.00 This is documented in PEP 378 -- Format Specifier for Thousands Separator and has an example in the Official Docs \"Using the comma as a thousands separator\""} +{"question_id": 1068246, "score": 91, "creation_date": 1246441232, "tags": ["python", "unit-testing", "python-unittest"], "instruction": "Python unittest: how to run only part of a test file?\n\nI have a test file that contains tests taking quite a lot of time (they send calculations to a cluster and wait for the result). All of these are in specific TestCase class. Since they take time and furthermore are not likely to break, I'd want to be able to choose whether this subset of tests does or doesn't run (the best way would be with a command-line argument, ie \"./tests.py --offline\" or something like that), so I could run most of the tests often and quickly and the whole set once in a while, when I have time. For now, I just use unittest.main() to start the tests.", "output": "The default unittest.main() uses the default test loader to make a TestSuite out of the module in which main is running. You don't have to use this default behavior. You can, for example, make three unittest.TestSuite instances. The \"fast\" subset. fast = TestSuite() fast.addTests(TestFastThis) fast.addTests(TestFastThat) The \"slow\" subset. slow = TestSuite() slow.addTests(TestSlowAnother) slow.addTests(TestSlowSomeMore) The \"whole\" set. alltests = unittest.TestSuite([fast, slow]) Note that I've adjusted the TestCase names to indicate Fast vs. Slow. You can subclass unittest.TestLoader to parse the names of classes and create multiple loaders. Then your main program can parse command-line arguments with optparse or argparse (available since 2.7 or 3.2) to pick which suite you want to run, fast, slow or all. Or, you can trust that sys.argv[1] is one of three values and use something as simple as this if __name__ == \"__main__\": suite = eval(sys.argv[1]) # Be careful with this line! unittest.TextTestRunner().run(suite)"} +{"question_id": 9731435, "score": 91, "creation_date": 1331868528, "tags": ["python", "celery", "django-celery", "celery-task"], "instruction": "Retry Celery tasks with exponential back off\n\nFor a task like this: from celery.decorators import task @task() def add(x, y): if not x or not y: raise Exception(\"test error\") return self.wait_until_server_responds( if it throws an exception and I want to retry it from the daemon side, how can apply an exponential back off algorithm, i.e. after 2^2, 2^3,2^4 etc seconds? 
Also is the retry maintained from the server side, such that if the worker happens to get killed then next worker that spawns will take the retry task?", "output": "The task.request.retries attribute contains the number of tries so far, so you can use this to implement exponential back-off: from celery.task import task @task(bind=True, max_retries=3) def update_status(self, auth, status): try: Twitter(auth).update_status(status) except Twitter.WhaleFail as exc: raise self.retry(exc=exc, countdown=2 ** self.request.retries) To prevent a Thundering Herd Problem, you may consider adding a random jitter to your exponential backoff: import random self.retry(exc=exc, countdown=int(random.uniform(2, 4) ** self.request.retries))"} +{"question_id": 24455615, "score": 91, "creation_date": 1403883217, "tags": ["python", "memory"], "instruction": "python- how to display size of all variables\n\nI want to print the memory size of all variables in my scope simultaneously. Something similar to: for obj in locals().values(): print sys.getsizeof(obj) But with variable names before each value so I can see which variables I need to delete or split into batches. Ideas?", "output": "You can iterate over both the key and value of a dictionary using .items() from __future__ import print_function # for Python2 import sys local_vars = list(locals().items()) for var, obj in local_vars: print(var, sys.getsizeof(obj))"} +{"question_id": 5695208, "score": 91, "creation_date": 1303062159, "tags": ["python", "list", "grouping"], "instruction": "Group list by values\n\nLet's say I have a list like this: mylist = [[\"A\",0], [\"B\",1], [\"C\",0], [\"D\",2], [\"E\",2]] How can I most elegantly group this to get this list output in Python: [[\"A\", \"C\"], [\"B\"], [\"D\", \"E\"]] So the values are grouped by the secound value but the order is preserved...", "output": "values = set(map(lambda x:x[1], mylist)) newlist = [[y[0] for y in mylist if y[1]==x] for x in values]"} +{"question_id": 2953250, "score": 91, "creation_date": 1275425607, "tags": ["python", "pep8"], "instruction": "Python PEP8: Blank lines convention\n\nI am interested in knowing what is the Python convention for newlines between the program parts? For example, consider this: import os def func1(): def func2(): What should be the ideal newline separation between: The import modules and the functions? The functions themselves? I have read PEP8, but I wanted to confirm the above two points.", "output": "Two blank lines between the import statements and other code. Two blank lines between top-level functions."} +{"question_id": 45846765, "score": 91, "creation_date": 1503512017, "tags": ["python", "json", "pandas", "dataframe", "pandas-explode"], "instruction": "Efficient way to unnest (explode) multiple list columns in a pandas DataFrame\n\nI am reading multiple JSON objects into one DataFrame. The problem is that some of the columns are lists. Also, the data is very big and because of that I cannot use the available solutions on the internet. 
They are very slow and memory-inefficient Here is how my data looks like: df = pd.DataFrame({'A': ['x1','x2','x3', 'x4'], 'B':[['v1','v2'],['v3','v4'],['v5','v6'],['v7','v8']], 'C':[['c1','c2'],['c3','c4'],['c5','c6'],['c7','c8']],'D':[['d1','d2'],['d3','d4'],['d5','d6'],['d7','d8']], 'E':[['e1','e2'],['e3','e4'],['e5','e6'],['e7','e8']]}) A B C D E 0 x1 [v1, v2] [c1, c2] [d1, d2] [e1, e2] 1 x2 [v3, v4] [c3, c4] [d3, d4] [e3, e4] 2 x3 [v5, v6] [c5, c6] [d5, d6] [e5, e6] 3 x4 [v7, v8] [c7, c8] [d7, d8] [e7, e8] And this is the shape of my data: (441079, 12) My desired output is: A B C D E 0 x1 v1 c1 d1 e1 0 x1 v2 c2 d2 e2 1 x2 v3 c3 d3 e3 1 x2 v4 c4 d4 e4 ..... EDIT: After being marked as duplicate, I would like to stress on the fact that in this question I was looking for an efficient method of exploding multiple columns. Therefore the approved answer is able to explode an arbitrary number of columns on very large datasets efficiently. Something that the answers to the other question failed to do (and that was the reason I asked this question after testing those solutions).", "output": "pandas >= 1.3 In more recent versions, pandas allows you to explode multiple columns at once using DataFrame.explode, provided all values have lists of equal size. Thus, you are able to use this: df.explode(['B', 'C', 'D', 'E']).reset_index(drop=True) A B C D E 0 x1 v1 c1 d1 e1 1 x1 v2 c2 d2 e2 2 x2 v3 c3 d3 e3 3 x2 v4 c4 d4 e4 4 x3 v5 c5 d5 e5 5 x3 v6 c6 d6 e6 6 x4 v7 c7 d7 e7 7 x4 v8 c8 d8 e8 pandas >= 0.25 For slightly older versions, you can apply Series.explode on each column. df.set_index(['A']).apply(pd.Series.explode).reset_index() A B C D E 0 x1 v1 c1 d1 e1 1 x1 v2 c2 d2 e2 2 x2 v3 c3 d3 e3 3 x2 v4 c4 d4 e4 4 x3 v5 c5 d5 e5 5 x3 v6 c6 d6 e6 6 x4 v7 c7 d7 e7 7 x4 v8 c8 d8 e8 The idea is to set as the index all columns that must NOT be exploded first, then reset the index after. Funnily enough, this happens to be faster than calling df.explode, according to my tests. YMMV. explode methods are quite performant in general: df2 = pd.concat([df] * 100, ignore_index=True) %timeit df2.explode(['B', 'C', 'D', 'E']).reset_index(drop=True) %timeit df2.set_index(['A']).apply(pd.Series.explode).reset_index() # fastest %%timeit (df2.set_index('A') .apply(lambda x: x.apply(pd.Series).stack()) .reset_index() .drop('level_1', axis=1)) 2.59 ms \u00b1 112 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 100 loops each) 1.27 ms \u00b1 239 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 1000 loops each) 120 ms \u00b1 9.48 ms per loop (mean \u00b1 std. dev. of 7 runs, 10 loops each)"} +{"question_id": 26393231, "score": 91, "creation_date": 1413412271, "tags": ["python", "web-scraping", "python-requests"], "instruction": "Using python Requests with javascript pages\n\nI am trying to use the Requests framework with python (http://docs.python-requests.org/en/latest/) but the page I am trying to get to uses javascript to fetch the info that I want. I have tried to search on the web for a solution but the fact that I am searching with the keyword javascript most of the stuff I am getting is how to scrape with the javascript language. 
Is there anyway to use the requests framework with pages that use javascript?", "output": "Good news: there is now a requests module that supports javascript: https://pypi.org/project/requests-html/ from requests_html import HTMLSession session = HTMLSession() r = session.get('http://www.yourjspage.com') r.html.render() # this call executes the js in the page As a bonus this wraps BeautifulSoup, I think, so you can do things like r.html.find('#myElementID').text which returns the content of the HTML element as you'd expect."} +{"question_id": 27181084, "score": 91, "creation_date": 1417141714, "tags": ["python", "argparse"], "instruction": "How to iterate over arguments\n\nI have such script: import argparse parser = argparse.ArgumentParser( description='Text file conversion.' ) parser.add_argument(\"inputfile\", help=\"file to process\", type=str) parser.add_argument(\"-o\", \"--out\", default=\"output.txt\", help=\"output name\") parser.add_argument(\"-t\", \"--type\", default=\"detailed\", help=\"Type of processing\") args = parser.parse_args() for arg in args: print(arg) But it doesnt work. I get error: TypeError: 'Namespace' object is not iterable How to iterate over arguments and their value?", "output": "Add vars if you want to iterate over a Namespace object: for arg in vars(args): print arg, getattr(args, arg)"} +{"question_id": 45631715, "score": 91, "creation_date": 1502444042, "tags": ["python", "google-chrome", "selenium", "google-chrome-headless"], "instruction": "Downloading with chrome headless and selenium\n\nI'm using python-selenium and Chrome 59 and trying to automate a simple download sequence. When I launch the browser normally, the download works, but when I do so in headless mode, the download doesn't work. # Headless implementation from selenium import webdriver chromeOptions = webdriver.ChromeOptions() chromeOptions.add_argument(\"headless\") driver = webdriver.Chrome(chrome_options=chromeOptions) driver.get('https://www.mockaroo.com/') driver.find_element_by_id('download').click() # ^^^ Download doesn't start # Normal Mode from selenium import webdriver driver = webdriver.Chrome() driver.get('https://www.mockaroo.com/') driver.find_element_by_id('download').click() # ^^^ Download works normally I've even tried adding a default path: prefs = {\"download.default_directory\" : \"/Users/Chetan/Desktop/\"} chromeOptions.add_argument(\"headless\") chromeOptions.add_experimental_option(\"prefs\",prefs) Adding a default path works in the normal implementation, but the same problem persists in the headless version. How do I get the download to start in headless mode?", "output": "The Chromium developers recently added a 2nd headless mode (in 2021). See https://bugs.chromium.org/p/chromium/issues/detail?id=706008#c36 They later renamed the option in 2023 for Chrome 109 -> https://github.com/chromium/chromium/commit/e9c516118e2e1923757ecb13e6d9fff36775d1f4 For Chrome 109 and above, the --headless=new flag will now allow you to get the full functionality of Chrome in the new headless mode, and you can even run extensions in it. 
(For Chrome versions 96 through 108, use --headless=chrome) Usage: (Chrome 109 and above): options.add_argument(\"--headless=new\") Usage: (Chrome 96 through Chrome 108): options.add_argument(\"--headless=chrome\") If something works in regular Chrome, it should now work with the newer headless mode too."} +{"question_id": 44576167, "score": 91, "creation_date": 1497557332, "tags": ["python", "inheritance", "overriding"], "instruction": "Force child class to override parent's methods\n\nSuppose I have a base class with unimplemented methods as follows: class Polygon(): def __init__(self): pass def perimeter(self): pass def area(self): pass Now, let's say one of my colleagues uses the Polygon class to create a subclass as follows: import math class Circle(Polygon): def __init__(self, radius): self.radius = radius def perimeter(self): return 2 * math.pi * self.radius (H/Sh)e has forgotten to implement the area() method. How can I force the subclass to implement the parent's area() method?", "output": "this could be your parent class: class Polygon(): def __init__(self): raise NotImplementedError def perimeter(self): raise NotImplementedError def area(self): raise NotImplementedError although the problem will be spotted at runtime only, when one of the instances of the child classes tries to call one of these methods. a different version is to use abc.abstractmethod. from abc import ABC, abstractmethod import math class Polygon(ABC): @abstractmethod def __init__(self): pass @abstractmethod def perimeter(self): pass @abstractmethod def area(self): pass class Circle(Polygon): def __init__(self, radius): self.radius = radius def perimeter(self): return 2 * math.pi * self.radius # def area(self): # return math.pi * self.radius**2 c = Circle(9.0) # TypeError: Can't instantiate abstract class Circle # with abstract methods area you will not be able to instantiate a Circle without it having all the methods implemented. this is the python 3 syntax; in python 2 you'd need to class Polygon(object): __metaclass__ = ABCMeta also note that for the binary special functions __eq__(), __lt__(), __add__(), ... it is better to return NotImplemented instead of raising NotImplementedError."} +{"question_id": 34160968, "score": 91, "creation_date": 1449591917, "tags": ["python", "documentation", "docstring"], "instruction": "Python Docstring: raise vs. raises\n\nI use the PyCharm IDE which assists with crafting PEP0257-compliant docstrings. It provides two attributes I don't entirely understand the distinction/use between: :raise Exception: exception explanation here :raises Exception: exception explanation here When would I use raise as opposes to raises in my docstring? Specifically, if a class required an argument that was not provided and raises a TypeError, which should be used to document that?", "output": "TL;DR raises is used to describe the possible exceptions being raised. raise is recognized by Sphinx when running autodoc and is the same as raises. Full Explanation PyCharm helps in using a few different styles of docstring comments. Three which I often use are: NumPy Format Google Format Sphinx (much more than a format) In all of these there is a special section for Raises which you can see in an older version of the PyCharm code tests: Simple NumPy Simple Google The implementation for SphinxDocString we can see here there there are numerous keywords which can be recognized. Those tags then link to the list of RAISES_TAGS which can be found here. 
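For the concrete case in the question (a TypeError raised because a required argument is missing or has the wrong type), either tag will come out the same since Sphinx treats them as synonyms; a minimal sketch of a Sphinx/reST-style docstring (the class and parameter names here are made up purely for illustration) could look like:
class Rectangle:
    def __init__(self, width, height):
        \"\"\"Create a rectangle.

        :param width: width of the rectangle
        :param height: height of the rectangle
        :raises TypeError: if width or height is missing or is not a number
        \"\"\"
        # Document the exception with :raises: (or :raise:); autodoc renders both the same way
        if not isinstance(width, (int, float)) or not isinstance(height, (int, float)):
            raise TypeError(\"width and height must be numbers\")
        self.width = width
        self.height = height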
I hope this information is useful."} +{"question_id": 33078554, "score": 91, "creation_date": 1444644703, "tags": ["python", "list", "dictionary"], "instruction": "Mapping dictionary value to list\n\nGiven the following dictionary: dct = {'a':3, 'b':3,'c':5,'d':3} How can I apply these values to a list such as: lst = ['c', 'd', 'a', 'b', 'd'] in order to get something like: lstval = [5, 3, 3, 3, 3]", "output": "Using a list comprehension: >>> [dct[k] for k in lst] [5, 3, 3, 3, 3] Using map: >>> [*map(dct.get, lst)] [5, 3, 3, 3, 3]"} +{"question_id": 13242382, "score": 91, "creation_date": 1352159787, "tags": ["python", "image-processing", "numpy", "scipy", "python-imaging-library"], "instruction": "Resampling a numpy array representing an image\n\nI am looking for how to resample a numpy array representing image data at a new size, preferably having a choice of the interpolation method (nearest, bilinear, etc.). I know there is scipy.misc.imresize which does exactly this by wrapping PIL's resize function. The only problem is that since it uses PIL, the numpy array has to conform to image formats, giving me a maximum of 4 \"color\" channels. I want to be able to resize arbitrary images, with any number of \"color\" channels. I was wondering if there is a simple way to do this in scipy/numpy, or if I need to roll my own. I have two ideas for how to concoct one myself: a function that runs scipy.misc.imresize on every channel separately create my own using scipy.ndimage.interpolation.affine_transform The first one would probably be slow for large data, and the second one does not seem to offer any other interpolation method except splines.", "output": "Based on your description, you want scipy.ndimage.zoom. Bilinear interpolation would be order=1, nearest is order=0, and cubic is the default (order=3). zoom is specifically for regularly-gridded data that you want to resample to a new resolution. As a quick example: import numpy as np import scipy.ndimage x = np.arange(9).reshape(3,3) print 'Original array:' print x print 'Resampled by a factor of 2 with nearest interpolation:' print scipy.ndimage.zoom(x, 2, order=0) print 'Resampled by a factor of 2 with bilinear interpolation:' print scipy.ndimage.zoom(x, 2, order=1) print 'Resampled by a factor of 2 with cubic interpolation:' print scipy.ndimage.zoom(x, 2, order=3) And the result: Original array: [[0 1 2] [3 4 5] [6 7 8]] Resampled by a factor of 2 with nearest interpolation: [[0 0 1 1 2 2] [0 0 1 1 2 2] [3 3 4 4 5 5] [3 3 4 4 5 5] [6 6 7 7 8 8] [6 6 7 7 8 8]] Resampled by a factor of 2 with bilinear interpolation: [[0 0 1 1 2 2] [1 2 2 2 3 3] [2 3 3 4 4 4] [4 4 4 5 5 6] [5 5 6 6 6 7] [6 6 7 7 8 8]] Resampled by a factor of 2 with cubic interpolation: [[0 0 1 1 2 2] [1 1 1 2 2 3] [2 2 3 3 4 4] [4 4 5 5 6 6] [5 6 6 7 7 7] [6 6 7 7 8 8]] Edit: As Matt S. pointed out, there are a couple of caveats for zooming multi-band images. I'm copying the portion below almost verbatim from one of my earlier answers: Zooming also works for 3D (and nD) arrays. However, be aware that if you zoom by 2x, for example, you'll zoom along all axes. data = np.arange(27).reshape(3,3,3) print 'Original:\\n', data print 'Zoomed by 2x gives an array of shape:', ndimage.zoom(data, 2).shape This yields: Original: [[[ 0 1 2] [ 3 4 5] [ 6 7 8]] [[ 9 10 11] [12 13 14] [15 16 17]] [[18 19 20] [21 22 23] [24 25 26]]] Zoomed by 2x gives an array of shape: (6, 6, 6) In the case of multi-band images, you usually don't want to interpolate along the \"z\" axis, creating new bands. 
If you have something like a 3-band, RGB image that you'd like to zoom, you can do this by specifying a sequence of tuples as the zoom factor: print 'Zoomed by 2x along the last two axes:' print ndimage.zoom(data, (1, 2, 2)) This yields: Zoomed by 2x along the last two axes: [[[ 0 0 1 1 2 2] [ 1 1 1 2 2 3] [ 2 2 3 3 4 4] [ 4 4 5 5 6 6] [ 5 6 6 7 7 7] [ 6 6 7 7 8 8]] [[ 9 9 10 10 11 11] [10 10 10 11 11 12] [11 11 12 12 13 13] [13 13 14 14 15 15] [14 15 15 16 16 16] [15 15 16 16 17 17]] [[18 18 19 19 20 20] [19 19 19 20 20 21] [20 20 21 21 22 22] [22 22 23 23 24 24] [23 24 24 25 25 25] [24 24 25 25 26 26]]]"} +{"question_id": 25557686, "score": 91, "creation_date": 1409258581, "tags": ["python", "locking", "multiprocessing", "share"], "instruction": "Python sharing a lock between processes\n\nI am attempting to use a partial function so that pool.map() can target a function that has more than one parameter (in this case a Lock() object). Here is example code (taken from an answer to a previous question of mine): from functools import partial def target(lock, iterable_item): for item in items: # Do cool stuff if (... some condition here ...): lock.acquire() # Write to stdout or logfile, etc. lock.release() def main(): iterable = [1, 2, 3, 4, 5] pool = multiprocessing.Pool() l = multiprocessing.Lock() func = partial(target, l) pool.map(func, iterable) pool.close() pool.join() However when I run this code, I get the error: Runtime Error: Lock objects should only be shared between processes through inheritance. What am I missing here? How can I share the lock between my subprocesses?", "output": "You can't pass normal multiprocessing.Lock objects to Pool methods, because they can't be pickled. There are two ways to get around this. One is to create Manager() and pass a Manager.Lock(): def main(): iterable = [1, 2, 3, 4, 5] pool = multiprocessing.Pool() m = multiprocessing.Manager() l = m.Lock() func = partial(target, l) pool.map(func, iterable) pool.close() pool.join() This is a little bit heavyweight, though; using a Manager requires spawning another process to host the Manager server. And all calls to acquire/release the lock have to be sent to that server via IPC. The other option is to pass the regular multiprocessing.Lock() at Pool creation time, using the initializer kwarg. This will make your lock instance global in all the child workers: def target(iterable_item): for item in items: # Do cool stuff if (... some condition here ...): lock.acquire() # Write to stdout or logfile, etc. lock.release() def init(l): global lock lock = l def main(): iterable = [1, 2, 3, 4, 5] l = multiprocessing.Lock() pool = multiprocessing.Pool(initializer=init, initargs=(l,)) pool.map(target, iterable) pool.close() pool.join() The second solution has the side-effect of no longer requiring partial."} +{"question_id": 9305751, "score": 91, "creation_date": 1329368049, "tags": ["python", "class", "variables"], "instruction": "How to force/ensure class attributes are a specific type?\n\nHow do I restrict a class member variable to be a specific type in Python? Longer version: I have a class that has several member variables which are set externally to the class. Due to the way they're used, they must be of specific types, either int or list. If this was C++, I would simply make them private and do type-checking in the 'set' function. Given that that isn't possible, is there any way to restrict the type of the variables so that an error/exception occurs at runtime if they're assigned a value of incorrect type? 
Or do I need to check their type within every function that uses them?", "output": "You can use a property like the other answers put it - so, if you want to constrain a single attribute, say \"bar\", and constrain it to an integer, you could write code like this: class Foo(object): def _get_bar(self): return self.__bar def _set_bar(self, value): if not isinstance(value, int): raise TypeError(\"bar must be set to an integer\") self.__bar = value bar = property(_get_bar, _set_bar) And this works: >>> f = Foo() >>> f.bar = 3 >>> f.bar 3 >>> f.bar = \"three\" Traceback (most recent call last): File \"\", line 1, in File \"\", line 6, in _set_bar TypeError: bar must be set to an integer >>> (There is also a new way of writing properties, using the \"property\" built-in as a decorator to the getter method - but I prefer the old way, like I put it above). Of course, if you have lots of attributes on your classes, and want to protect all of them in this way, it starts to get verbose. Nothing to worry about - Python's introspection abilities allow one to create a class decorator that could automate this with a minimum of lines. def getter_setter_gen(name, type_): def getter(self): return getattr(self, \"__\" + name) def setter(self, value): if not isinstance(value, type_): raise TypeError(f\"{name} attribute must be set to an instance of {type_}\") setattr(self, \"__\" + name, value) return property(getter, setter) def auto_attr_check(cls): new_dct = {} for key, value in cls.__dict__.items(): if isinstance(value, type): value = getter_setter_gen(key, value) new_dct[key] = value # Creates a new class, using the modified dictionary as the class dict: return type(cls)(cls.__name__, cls.__bases__, new_dct) And you just use auto_attr_checkas a class decorator, and declar the attributes you want in the class body to be equal to the types the attributes need to constrain too: ... ... @auto_attr_check ... class Foo(object): ... bar = int ... baz = str ... bam = float ... 
>>> f = Foo() >>> f.bar = 5; f.baz = \"hello\"; f.bam = 5.0 >>> f.bar = \"hello\" Traceback (most recent call last): File \"\", line 1, in File \"\", line 6, in setter TypeError: bar attribute must be set to an instance of >>> f.baz = 5 Traceback (most recent call last): File \"\", line 1, in File \"\", line 6, in setter TypeError: baz attribute must be set to an instance of >>> f.bam = 3 + 2j Traceback (most recent call last): File \"\", line 1, in File \"\", line 6, in setter TypeError: bam attribute must be set to an instance of >>>"} +{"question_id": 30405413, "score": 91, "creation_date": 1432326649, "tags": ["python", "datetime", "pandas", "extract", "dataframe"], "instruction": "pandas extract year from datetime: df['year'] = df['date'].year is not working\n\nI import a dataframe via read_csv, but for some reason can't extract the year or month from the series df['date'], trying that gives AttributeError: 'Series' object has no attribute 'year': date Count 6/30/2010 525 7/30/2010 136 8/31/2010 125 9/30/2010 84 10/29/2010 4469 df = pd.read_csv('sample_data.csv', parse_dates=True) df['date'] = pd.to_datetime(df['date']) df['year'] = df['date'].year df['month'] = df['date'].month UPDATE: and when I try solutions with df['date'].dt on my pandas version 0.14.1, I get \"AttributeError: 'Series' object has no attribute 'dt' \": df = pd.read_csv('sample_data.csv',parse_dates=True) df['date'] = pd.to_datetime(df['date']) df['year'] = df['date'].dt.year df['month'] = df['date'].dt.month Sorry for this question that seems repetitive - I expect the answer will make me feel like a bonehead... but I have not had any luck using answers to the similar questions on SO. FOLLOWUP: I can't seem to update my pandas 0.14.1 to a newer release in my Anaconda environment, each of the attempts below generates an invalid syntax error. I'm using Python 3.4.1 64bit. 
conda update pandas conda install pandas==0.15.2 conda install -f pandas Any ideas?", "output": "If you're running a recent-ish version of pandas then you can use the datetime accessor dt to access the datetime components: In [6]: df['date'] = pd.to_datetime(df['date']) df['year'], df['month'] = df['date'].dt.year, df['date'].dt.month df Out[6]: date Count year month 0 2010-06-30 525 2010 6 1 2010-07-30 136 2010 7 2 2010-08-31 125 2010 8 3 2010-09-30 84 2010 9 4 2010-10-29 4469 2010 10 EDIT It looks like you're running an older version of pandas in which case the following would work: In [18]: df['date'] = pd.to_datetime(df['date']) df['year'], df['month'] = df['date'].apply(lambda x: x.year), df['date'].apply(lambda x: x.month) df Out[18]: date Count year month 0 2010-06-30 525 2010 6 1 2010-07-30 136 2010 7 2 2010-08-31 125 2010 8 3 2010-09-30 84 2010 9 4 2010-10-29 4469 2010 10 Regarding why it didn't parse this into a datetime in read_csv you need to pass the ordinal position of your column ([0]) because when True it tries to parse columns [1,2,3] see the docs In [20]: t=\"\"\"date Count 6/30/2010 525 7/30/2010 136 8/31/2010 125 9/30/2010 84 10/29/2010 4469\"\"\" df = pd.read_csv(io.StringIO(t), sep='\\s+', parse_dates=[0]) df.info() Int64Index: 5 entries, 0 to 4 Data columns (total 2 columns): date 5 non-null datetime64[ns] Count 5 non-null int64 dtypes: datetime64[ns](1), int64(1) memory usage: 120.0 bytes So if you pass param parse_dates=[0] to read_csv there shouldn't be any need to call to_datetime on the 'date' column after loading."} +{"question_id": 19672352, "score": 91, "creation_date": 1383097279, "tags": ["python", "windows", "admin", "elevated-privileges"], "instruction": "How to run script with elevated privilege on windows\n\nI am writing a pyqt application which require to execute admin task. I would prefer to start my script with elevate privilege. I am aware that this question is asked many times in SO or in other forum. But the solution people are suggesting is to have a look at this SO question Request UAC elevation from within a Python script? However, I am unable to execute the sample code given in the link. I have put this code on top of the main file and tried to execute it. import os import sys import win32com.shell.shell as shell ASADMIN = 'asadmin' if sys.argv[-1] != ASADMIN: script = os.path.abspath(sys.argv[0]) params = ' '.join([script] + sys.argv[1:] + [ASADMIN]) shell.ShellExecuteEx(lpVerb='runas', lpFile=sys.executable, lpParameters=params) sys.exit(0) print \"I am root now.\" It actually ask permission to elevate but print line never get executed. Somebody can help me to run the above code successfully.", "output": "Update as of 19-02-2023 The update to the below script is now alive as a Python package by the same author. You can install it from PyPi, which lives at https://pypi.org/project/pyuac/, and the source code/home page is located at https://github.com/Preston-Landers/pyuac. Install it using: pip install pyuac pip install pypiwin32 Direct usage of the package is: import pyuac def main(): print(\"Do stuff here that requires being run as an admin.\") # The window will disappear as soon as the program exits! input(\"Press enter to close the window. >\") if __name__ == \"__main__\": if not pyuac.isUserAdmin(): print(\"Re-launching as admin!\") pyuac.runAsAdmin() else: main() # Already an admin here. 
Or, if you wish to use the decorator: from pyuac import main_requires_admin @main_requires_admin def main(): print(\"Do stuff here that requires being run as an admin.\") # The window will disappear as soon as the program exits! input(\"Press enter to close the window. >\") if __name__ == \"__main__\": main() Original answer Thank you all for your reply. I got my script working with the module/script written by Preston Landers in 2010. After two days of browsing the internet, I could find the script. It was deeply hidden in the pywin32 mailing list. With this script, it is easier to check if the user is an admin, and if not, ask for UAC/admin privileges. It provides output in separate windows to display what the code is doing. An example of how to use the code is also included in the script. For the benefit of everyone who's looking for UAC on Windows, take a look at this code. It can be used something like this from your main script:- import admin if not admin.isUserAdmin(): admin.runAsAdmin() The actual code (in the module) is:- #!/usr/bin/env python # -*- coding: utf-8; mode: python; py-indent-offset: 4; indent-tabs-mode: nil -*- # vim: fileencoding=utf-8 tabstop=4 expandtab shiftwidth=4 # (C) COPYRIGHT \u00a9 Preston Landers 2010 # Released under the same license as Python 2.6.5 import sys, os, traceback, types def isUserAdmin(): if os.name == 'nt': import ctypes # WARNING: requires Windows XP SP2 or higher! try: return ctypes.windll.shell32.IsUserAnAdmin() except: traceback.print_exc() print \"Admin check failed, assuming not an admin.\" return False elif os.name == 'posix': # Check for root on Posix return os.getuid() == 0 else: raise RuntimeError, \"Unsupported operating system for this module: %s\" % (os.name,) def runAsAdmin(cmdLine=None, wait=True): if os.name != 'nt': raise RuntimeError, \"This function is only implemented on Windows.\" import win32api, win32con, win32event, win32process from win32com.shell.shell import ShellExecuteEx from win32com.shell import shellcon python_exe = sys.executable if cmdLine is None: cmdLine = [python_exe] + sys.argv elif type(cmdLine) not in (types.TupleType,types.ListType): raise ValueError, \"cmdLine is not a sequence.\" cmd = '\"%s\"' % (cmdLine[0],) # XXX TODO: isn't there a function or something we can call to massage command line params? params = \" \".join(['\"%s\"' % (x,) for x in cmdLine[1:]]) cmdDir = '' showCmd = win32con.SW_SHOWNORMAL #showCmd = win32con.SW_HIDE lpVerb = 'runas' # causes UAC elevation prompt. # print \"Running\", cmd, params # ShellExecute() doesn't seem to allow us to fetch the PID or handle # of the process, so we can't get anything useful from it. Therefore # the more complex ShellExecuteEx() must be used. 
# procHandle = win32api.ShellExecute(0, lpVerb, cmd, params, cmdDir, showCmd) procInfo = ShellExecuteEx(nShow=showCmd, fMask=shellcon.SEE_MASK_NOCLOSEPROCESS, lpVerb=lpVerb, lpFile=cmd, lpParameters=params) if wait: procHandle = procInfo['hProcess'] obj = win32event.WaitForSingleObject(procHandle, win32event.INFINITE) rc = win32process.GetExitCodeProcess(procHandle) #print \"Process handle %s returned code %s\" % (procHandle, rc) else: rc = None return rc def test(): rc = 0 if not isUserAdmin(): print \"You're not an admin.\", os.getpid(), \"params: \", sys.argv #rc = runAsAdmin([\"c:\\\\Windows\\\\notepad.exe\"]) rc = runAsAdmin() else: print \"You are an admin!\", os.getpid(), \"params: \", sys.argv rc = 0 x = raw_input('Press Enter to exit.') return rc if __name__ == \"__main__\": sys.exit(test())"} +{"question_id": 22787209, "score": 91, "creation_date": 1396358531, "tags": ["python", "matplotlib", "seaborn", "stacked-bar-chart", "grouped-bar-chart"], "instruction": "How to have clusters of stacked bars\n\nSo here is how my data set looks like : In [1]: df1=pd.DataFrame(np.random.rand(4,2),index=[\"A\",\"B\",\"C\",\"D\"],columns=[\"I\",\"J\"]) In [2]: df2=pd.DataFrame(np.random.rand(4,2),index=[\"A\",\"B\",\"C\",\"D\"],columns=[\"I\",\"J\"]) In [3]: df1 Out[3]: I J A 0.675616 0.177597 B 0.675693 0.598682 C 0.631376 0.598966 D 0.229858 0.378817 In [4]: df2 Out[4]: I J A 0.939620 0.984616 B 0.314818 0.456252 C 0.630907 0.656341 D 0.020994 0.538303 I want to have stacked bar plot for each dataframe but since they have same index, I'd like to have 2 stacked bars per index. I've tried to plot both on the same axes : In [5]: ax = df1.plot(kind=\"bar\", stacked=True) In [5]: ax2 = df2.plot(kind=\"bar\", stacked=True, ax = ax) But it overlaps. Then I tried to concat the two dataset first : pd.concat(dict(df1 = df1, df2 = df2),axis = 1).plot(kind=\"bar\", stacked=True) but here everything is stacked My best try is : pd.concat(dict(df1 = df1, df2 = df2),axis = 0).plot(kind=\"bar\", stacked=True) Which gives : This is basically what I want, except that I want the bar ordered as (df1,A) (df2,A) (df1,B) (df2,B) etc... I guess there is a trick but I can't found it ! After @bgschiller's answer I got this : Which is almost what I want. I would like the bar to be clustered by index, in order to have something visually clear. Bonus : Having the x-label not redundant, something like : df1 df2 df1 df2 _______ _______ ... A B", "output": "I eventually found a trick (edit: see below for using seaborn and longform dataframe): Solution with pandas and matplotlib Here it is with a more complete example : import pandas as pd import matplotlib.cm as cm import numpy as np import matplotlib.pyplot as plt def plot_clustered_stacked(dfall, labels=None, title=\"multiple stacked bar plot\", H=\"/\", **kwargs): \"\"\"Given a list of dataframes, with identical columns and index, create a clustered stacked bar plot. 
labels is a list of the names of the dataframe, used for the legend title is a string for the title of the plot H is the hatch used for identification of the different dataframe\"\"\" n_df = len(dfall) n_col = len(dfall[0].columns) n_ind = len(dfall[0].index) axe = plt.subplot(111) for df in dfall : # for each data frame axe = df.plot(kind=\"bar\", linewidth=0, stacked=True, ax=axe, legend=False, grid=False, **kwargs) # make bar plots h,l = axe.get_legend_handles_labels() # get the handles we want to modify for i in range(0, n_df * n_col, n_col): # len(h) = n_col * n_df for j, pa in enumerate(h[i:i+n_col]): for rect in pa.patches: # for each index rect.set_x(rect.get_x() + 1 / float(n_df + 1) * i / float(n_col)) rect.set_hatch(H * int(i / n_col)) #edited part rect.set_width(1 / float(n_df + 1)) axe.set_xticks((np.arange(0, 2 * n_ind, 2) + 1 / float(n_df + 1)) / 2.) axe.set_xticklabels(df.index, rotation = 0) axe.set_title(title) # Add invisible data to add another legend n=[] for i in range(n_df): n.append(axe.bar(0, 0, color=\"gray\", hatch=H * i)) l1 = axe.legend(h[:n_col], l[:n_col], loc=[1.01, 0.5]) if labels is not None: l2 = plt.legend(n, labels, loc=[1.01, 0.1]) axe.add_artist(l1) return axe # create fake dataframes df1 = pd.DataFrame(np.random.rand(4, 5), index=[\"A\", \"B\", \"C\", \"D\"], columns=[\"I\", \"J\", \"K\", \"L\", \"M\"]) df2 = pd.DataFrame(np.random.rand(4, 5), index=[\"A\", \"B\", \"C\", \"D\"], columns=[\"I\", \"J\", \"K\", \"L\", \"M\"]) df3 = pd.DataFrame(np.random.rand(4, 5), index=[\"A\", \"B\", \"C\", \"D\"], columns=[\"I\", \"J\", \"K\", \"L\", \"M\"]) # Then, just call : plot_clustered_stacked([df1, df2, df3],[\"df1\", \"df2\", \"df3\"]) And it gives that : You can change the colors of the bar by passing a cmap argument: plot_clustered_stacked([df1, df2, df3], [\"df1\", \"df2\", \"df3\"], cmap=plt.cm.viridis) Solution with seaborn: Given the same df1, df2, df3, below, I convert them in a long form: df1[\"Name\"] = \"df1\" df2[\"Name\"] = \"df2\" df3[\"Name\"] = \"df3\" dfall = pd.concat([pd.melt(i.reset_index(), id_vars=[\"Name\", \"index\"]) # transform in tidy format each df for i in [df1, df2, df3]], ignore_index=True) The problem with seaborn is that it doesn't stack bars natively, so the trick is to plot the cumulative sum of each bar on top of each other: dfall.set_index([\"Name\", \"index\", \"variable\"], inplace=1) dfall[\"vcs\"] = dfall.groupby(level=[\"Name\", \"index\"]).cumsum() dfall.reset_index(inplace=True) >>> dfall.head(6) Name index variable value vcs 0 df1 A I 0.717286 0.717286 1 df1 B I 0.236867 0.236867 2 df1 C I 0.952557 0.952557 3 df1 D I 0.487995 0.487995 4 df1 A J 0.174489 0.891775 5 df1 B J 0.332001 0.568868 Then loop over each group of variable and plot the cumulative sum: c = [\"blue\", \"purple\", \"red\", \"green\", \"pink\"] for i, g in enumerate(dfall.groupby(\"variable\")): ax = sns.barplot(data=g[1], x=\"index\", y=\"vcs\", hue=\"Name\", color=c[i], zorder=-i, # so first bars stay on top edgecolor=\"k\") ax.legend_.remove() # remove the redundant legends It lacks the legend that can be added easily I think. The problem is that instead of hatches (which can be added easily) to differentiate the dataframes we have a gradient of lightness, and it's a bit too light for the first one, and I don't really know how to change that without changing each rectangle one by one (as in the first solution). Tell me if you don't understand something in the code. 
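If you do want to add the missing legend by hand, a minimal sketch (reusing the c color list, the dfall frame and the last ax returned by the plotting loop above; the placement values are only an example) could be:
import matplotlib.patches as mpatches
# one colored patch per variable, matching the colors used in the plotting loop
variable_handles = [mpatches.Patch(facecolor=col, edgecolor=\"k\", label=var)
                    for col, var in zip(c, sorted(dfall[\"variable\"].unique()))]
ax.legend(handles=variable_handles, title=\"variable\", loc=[1.01, 0.5])
A second legend for the Name level could be built the same way if you need it.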
Feel free to re-use this code which is under CC0."} +{"question_id": 11664443, "score": 91, "creation_date": 1343287769, "tags": ["python", "input", "user-input"], "instruction": "How to read multiple lines of raw input?\n\nI want to create a Python program which takes in multiple lines of user input. For example: This is a multilined input. It has multiple sentences. Each sentence is on a newline. How can I take in multiple lines of raw input?", "output": "sentinel = '' # ends when this string is seen for line in iter(input, sentinel): pass # do things here To get every line as a string you can do: '\\n'.join(iter(input, sentinel)) Python 2: '\\n'.join(iter(raw_input, sentinel))"} +{"question_id": 1060499, "score": 91, "creation_date": 1246308952, "tags": ["python", "new-style-class"], "instruction": "Difference between type(obj) and obj.__class__\n\nWhat is the difference between type(obj) and obj.__class__? Is there ever a possibility of type(obj) is not obj.__class__? I want to write a function that works generically on the supplied objects, using a default value of 1 in the same type as another parameter. Which variation, #1 or #2 below, is going to do the right thing? def f(a, b=None): if b is None: b = type(a)(1) # #1 b = a.__class__(1) # #2", "output": "type(obj) and type.__class__ do not behave the same for old style classes in Python 2: >>> class a(object): ... pass ... >>> class b(a): ... pass ... >>> class c: ... pass ... >>> ai = a() >>> bi = b() >>> ci = c() >>> type(ai) is ai.__class__ True >>> type(bi) is bi.__class__ True >>> type(ci) is ci.__class__ False >>> type(ci) >>> ci.__class__ This is explained in the documentation: For an old-style class, the statement x.__class__ provides the class of x, but type(x) is always . This reflects the fact that all old-style instances, independent of their class, are implemented with a single built-in type, called instance."} +{"question_id": 27315472, "score": 91, "creation_date": 1417780864, "tags": ["python", "json", "count", "element"], "instruction": "How to count items in JSON data\n\nHow I can get the number of elements in node of JSON data? JSON: { \"result\":[ { \"run\":[ { \"action\":\"stop\" }, { \"action\":\"start\" }, { \"action\":\"start\" } ], \"find\":true } ] } I need to get the number of elements from node data['result'][0]['run']. It should be 3, but I can't find how to do it in Python.", "output": "import json json_data = json.dumps({ \"result\":[ { \"run\":[ { \"action\":\"stop\" }, { \"action\":\"start\" }, { \"action\":\"start\" } ], \"find\": \"true\" } ] }) item_dict = json.loads(json_data) print len(item_dict['result'][0]['run']) Convert it into Python dictionary"} +{"question_id": 55673886, "score": 91, "creation_date": 1555234944, "tags": ["python", "linux", "docker", "debian", "locale"], "instruction": "What is the difference between C.UTF-8 and en_US.UTF-8 locales?\n\nI'm migrating a Python application from an Ubuntu server with a en_US.UTF-8 locale to a new Debian server which comes with C.UTF-8 already set by default. I'm trying to understand if there could be any impact from this change.", "output": "In general C is for computer, en_US is for people in the US who speak English (and other people who want the same behaviour). The for computer means that the strings are sometimes more standardized (but still in English), so an output of a program could be read from another program. 
With en_US, strings could be improved, alphabetic order could be improved (maybe by new rules of Chicago rules of style, etc.). So more user-friendly, but possibly less stable. Note: locales are not just for translation of strings, but also for collation: alphabetic order, numbers (e.g. thousand separator), currency (I think it is safe to predict that $ and 2 decimal digits will remain), months, day of weeks, etc. In your case, it is just the UTF-8 version of both locales. In general it should not matter. I usually prefer en_US.UTF-8, but usually it doesn't matter, and in your case (server app), it should only change log and error messages (if you use locale.setlocale(). You should handle client locales inside your app. Programs that read from other programs should set C before opening the pipe, so it should not really matter. As you see, probably it doesn't matter. You may also use the POSIX locale, also defined in Debian. You get the list of installed locales with locale -a. Note: Micro-optimization will prescribe C/C.UTF-8 locale: no translation of files (gettext), and simple rules on collation and number formatting, but this should be visible only on the server side."} +{"question_id": 49605231, "score": 91, "creation_date": 1522642474, "tags": ["python", "numpy", "gpu"], "instruction": "Does Numpy automatically detect and use GPU?\n\nI have a few basic questions about using Numpy with GPU (nvidia GTX 1080 Ti). I'm new to GPU, and would like to make sure I'm properly using the GPU to accelerate Numpy/Python. I searched on the internet for a while, but didn't find a simple tutorial that addressed my questions. I'd appreciate it if someone can give me some pointers: 1) Does Numpy/Python automatically detect the presence of GPU and utilize it to speed up matrix computation (e.g. numpy.multiply, numpy.linalg.inv, ... etc)? Or do I have code in a specific way to exploit the GPU for fast computation? 2) Can someone recommend a good tutorial/introductory material on using Numpy/Python with GPU (nvidia's)? Thanks a lot!", "output": "Does Numpy/Python automatically detect the presence of GPU and utilize it to speed up matrix computation (e.g. numpy.multiply, numpy.linalg.inv, ... etc)? No. Or do I have code in a specific way to exploit the GPU for fast computation? Yes. Search for Numba, CuPy, Theano, PyTorch or PyCUDA for different paradigms for accelerating Python with GPUs."} +{"question_id": 35727134, "score": 91, "creation_date": 1456846861, "tags": ["python", "python-2.7", "import"], "instruction": "Module imports and __init__.py\n\nI am trying to understand what the best practices are with regards to Python's (v2.7) import mechanics. I have a project that has started to grow a bit and lets say my code is organized as follows: foo/ __init__.py Foo.py module1.py module2.py module3.py The package name is foo and underneath it I have module Foo.py which contains code for the class Foo. Hence I am using the same name for the package, module and class which might not be very clever to start with. __init__.py is empty and class Foo needs to import module1, module2 and module3 hence part of my Foo.py file looks like: # foo/Foo.py import module1 import module2 import module3 class Foo(object): def __init__(self): .... .... if __name__ == '__main__': foo_obj = Foo() However I later revisited this and I thought it would be better to have all imports in the __init__.py file. Hence my __init__.py now looks like: # foo/__init__.py import Foo import module1 import module2 import module3 .... .... 
and my Foo.py only needs to import foo: # foo/Foo.py import foo While this looks convenient since it is a one liner, I am a bit worried that it might be creating circular imports. What I mean is that when the script Foo.py is run it will import everything it can and then __init__.py will be called which will import Foo.py again (is that correct?). Additionally using the same name for package, module and class makes things more confusing. Does it make sense the way I have done it? Or am I asking for trouble?", "output": "A couple things you could do to improve your organization, if only to adhere to some popular Python conventions and standards. If you search this topic, you will inevitably run across people recommending the PEP8 guidelines. These are the de facto canonical standards for organizing python code. Modules should have short, all-lowercase names. Underscores can be used in the module name if it improves readability. Python packages should also have short, all-lowercase names, although the use of underscores is discouraged. Based on these guidelines, your project modules should be named like this: foo/ __init__.py foo.py module1.py module2.py module3.py I find it's generally best to avoid importing modules unnecessarily in __init__.py unless you're doing it for namespace reasons. For example, if you want the namespace for your package to look like this from foo import Foo instead of from foo.foo import Foo Then it makes sense to put from .foo import Foo in your __init__.py. As your package gets larger, some users may not want to use all of the sub-packages and modules, so it doesn't make sense to force the user to wait for all those modules to load by implicitly importing them in your __init__.py. Also, you have to consider whether you even want module1, module2, and module3 as part of your external API. Are they only used by Foo and not intended to be for end users? If they're only used internally, then don't include them in the __init__.py I'd also recommend using absolute or explicit relative imports for importing sub-modules. For example, in foo.py Absolute from foo import module1 from foo import module2 from foo import module3 Explicit Relative from . import module1 from . import module2 from . import module3 This will prevent any possible naming issues with other packages and modules. It will also make it easier if you decide to support Python3, since the implicit relative import syntax you're currently using is not supported in Python3. Also, files inside your package generally shouldn't contain a if __name__ == '__main__' This is because running a file as a script means it won't be considered part of the package that it belongs to, so it won't be able to make relative imports. The best way to provide executable scripts to users is by using the scripts or console_scripts feature of setuptools. The way you organize your scripts can be different depending on which method you use, but I generally organize mine like this: foo/ __init__.py foo.py ... scripts/ foo_script.py setup.py"} +{"question_id": 604749, "score": 91, "creation_date": 1236042704, "tags": ["python", "webcam"], "instruction": "How do I access my webcam in Python?\n\nI would like to access my webcam from Python. I tried using the VideoCapture extension (tutorial), but that didn't work very well for me, I had to work around some problems such as it's a bit slow with resolutions >320x230, and sometimes it returns None for no apparent reason. 
Is there a better way to access my webcam from Python?", "output": "OpenCV has support for getting data from a webcam, and it comes with Python wrappers by default, you also need to install numpy for the OpenCV Python extension (called cv2) to work. As of 2019, you can install both of these libraries with pip: pip install numpy pip install opencv-python More information on using OpenCV with Python. An example copied from Displaying webcam feed using opencv and python: import cv2 cv2.namedWindow(\"preview\") vc = cv2.VideoCapture(0) if vc.isOpened(): # try to get the first frame rval, frame = vc.read() else: rval = False while rval: cv2.imshow(\"preview\", frame) rval, frame = vc.read() key = cv2.waitKey(20) if key == 27: # exit on ESC break vc.release() cv2.destroyWindow(\"preview\")"} +{"question_id": 23332147, "score": 90, "creation_date": 1398656645, "tags": ["python", "macos", "amazon-web-services", "pip"], "instruction": "awscli not added to path after installation\n\nI installed the aws cli according to the offical Amazon directions. sudo pip install awscli However, aws is nowhere to be found in my path. The installation seems to have been successful. There are a number of files located at /Library/Frameworks/Python.framework/Versions/3.3/lib/python3.3/site-packages/awscli, however there are no executables named aws. My python version is 3.3.4, my pip version is 1.5.4, and running this command on OS X 10.9. What could be wrong? Thanks!", "output": "Improving the OP's Answer The OP answered their own question, but the exact location of the executable is more likely to be different than it is to be the same. So, let's break down WHY his solution worked so you can apply it to yourself. From the problem There are a number of files located at /Library/Frameworks/Python.framework/Versions/3.3/lib/python3.3/site-packages/awscli, however there are no executables named aws. From the solution The solution was to add /Library/Frameworks/Python.framework/Versions/3.3/bin to the my PATH. Let's learn something Compare those paths to find their commonality: /Library/Frameworks/Python.framework/Versions/3.3/lib/python3.3/site-packages/awscli /Library/Frameworks/Python.framework/Versions/3.3/bin Notice that they diverge at lib vs. bin. And consider that the OP stated, \"there are no executables named aws.\" That brings us to our first learning lessons: Executables tend to not be in lib folders. Look for bin folders that share a common lineage. In this case I would have suggested looking for bin folders via: find /Library/Frameworks/Python.framework -type d -name bin But, if you are going to do that, you might as well just search for your executable via: find /Library/Frameworks/Python.framework -type f -perm -100 -name aws # the `-` in `perm -100` means not an exact match of 100 # but any octal that includes 100 But wait How did OP know to look in their /Library/Frameworks/Python.framework/Versions/3.3/lib/python3.3/site-packages/? The easiest answer is also our next learning lesson: Ask your python where things are installed. Here is how I do that: $ python -c 'import awscli; print(awscli)' $ python3 -c 'import awscli; print(awscli)' I have 2 Pythons and neither of them use the same paths or even path patterns as the OP. 
Apply what we've learned $ find /System/Library/Frameworks/Python.framework -type d -name bin /System/Library/Frameworks/Python.framework/Versions/2.7/bin /System/Library/Frameworks/Python.framework/Versions/3.6/bin $ find /System/Library/Frameworks/Python.framework -type f -perm -100 -name aws /System/Library/Frameworks/Python.framework/Versions/2.7/bin/aws /System/Library/Frameworks/Python.framework/Versions/3.6/bin/aws As you can see, I have 2 bin folders and 2 aws executables. I probably want to use the Python3.6 version. However, if I'm doing local trial and error work for a remote system that uses the Python2.7 version, I'm going to want to use that. And this is exactly why I have 2 version installed."} +{"question_id": 4095319, "score": 90, "creation_date": 1288863163, "tags": ["python", "unit-testing"], "instruction": "Unittest tests order\n\nHow can I be sure of the unittest methods' order? Is the alphabetical or numeric prefixes the proper way? class TestFoo(TestCase): def test_1(self): ... def test_2(self): ... or class TestFoo(TestCase): def test_a(self): ... def test_b(self): ...", "output": "You can disable it by setting sortTestMethodsUsing to None: import unittest unittest.TestLoader.sortTestMethodsUsing = None For pure unit tests, you folks are right; but for component tests and integration tests... I do not agree that you shall assume nothing about the state. What if you are testing the state? For example, your test validates that a service is auto-started upon installation. If in your setup, you start the service, then do the assertion, and then you are no longer testing the state, but you are testing the \"service start\" functionality. Another example is when your setup takes a long time or requires a lot of space and it just becomes impractical to run the setup frequently. Many developers tend to use \"unit test\" frameworks for component testing...so stop and ask yourself, am I doing unit testing or component testing?"} +{"question_id": 43184887, "score": 90, "creation_date": 1491223060, "tags": ["python", "opencv"], "instruction": "DLL load failed error when importing cv2\n\nI have installed opencv on my windows machine using python 3.6 without any issues, using: pip install opencv-python but when I try to import cv2 I get the following error ImportError: DLL load failed: The specified module could not be found. I have seen this post It says cv2 doesn't work with python 3 I was wondering if this has been fixed or if there is a way around it", "output": "You can download the latest OpenCV 3.2.0 for Python 3.6 on Windows 32-bit or 64-bit machine, look for file starts withopencv_python\u20113.2.0\u2011cp36\u2011cp36m, from this unofficial site. Then type below command to install it: pip install opencv_python\u20113.2.0\u2011cp36\u2011cp36m\u2011win32.whl (32-bit version) pip install opencv_python\u20113.2.0\u2011cp36\u2011cp36m\u2011win_amd64.whl (64-bit version) I think it would be easier. Update on 2017-09-15: OpenCV 3.3.0 wheel files are now available in the unofficial site and replaced OpenCV 3.2.0. Update on 2018-02-15: OpenCV 3.4.0 wheel files are now available in the unofficial site and replaced OpenCV 3.3.0. Update on 2018-06-19: OpenCV 3.4.1 wheel files are now available in the unofficial site with CPython 3.5/3.6/3.7 support, and replaced OpenCV 3.4.0. Update on 2018-10-03: OpenCV 3.4.3 wheel files are now available in the unofficial site with CPython 3.5/3.6/3.7 support, and replaced OpenCV 3.4.1. 
Update on 2019-01-30: OpenCV 4.0.1 wheel files are now available in the unofficial site with CPython 3.5/3.6/3.7 support. Update on 2019-06-10: OpenCV 3.4.6 and OpenCV 4.1.0 wheel files are now available in the unofficial site with CPython 3.5/3.6/3.7 support. Update on 2023-02-11: OpenCV 4.5.5 wheel files are now available in the unofficial site with CPython 3.7/3.8/3.9/3.10/3.11 support."} +{"question_id": 38841109, "score": 90, "creation_date": 1470709571, "tags": ["python", "django", "django-rest-framework", "ibm-cloud", "django-csrf"], "instruction": "CSRF validation does not work on Django using HTTPS\n\nI am developing an application which the frontend is an AngularJS API that makes requests to the backend API developed in Django Rest Framework. The frontend is on the domain: https://front.bluemix.net And my backend is on the domain: https://back.bluemix.net I am having problems making requests from the frontend API to the backend API. The error is this: Error: CSRF Failed: Referer checking failed - https://front.bluemix.net does not match any trusted origins. I am using CORS and I have already included the following lines in my settings.py in the Django backend API: ALLOWED_HOSTS = [] CORS_ALLOW_CREDENTIALS = True CORS_ORIGIN_ALLOW_ALL = True CORS_ALLOW_CREDENTIALS = True CSRF_TRUSTED_ORIGINS = ['https://front.bluemix.net/'] CORS_REPLACE_HTTPS_REFERER = True CSRF_COOKIE_DOMAIN = 'bluemix.net' CORS_ORIGIN_WHITELIST = ( 'https://front.bluemix.net/', 'front.bluemix.net', 'bluemix.net', ) Anyone knows how to solve this problem?", "output": "Django 4.0 and above For Django 4.0 and above, CSRF_TRUSTED_ORIGINS must include scheme and host, e.g.: CSRF_TRUSTED_ORIGINS = ['https://front.bluemix.net'] Django 3.2 and lower For Django 3.2 and lower, CSRF_TRUSTED_ORIGINS must contain only the hostname, without a scheme: CSRF_TRUSTED_ORIGINS = ['front.bluemix.net'] You probably also need to put something in ALLOWED_HOSTS..."} +{"question_id": 13996302, "score": 90, "creation_date": 1356119340, "tags": ["python", "pandas", "group-by", "rolling-computation", "rolling-sum"], "instruction": "How to use rolling functions for GroupBy objects\n\nI have a time series object grouped of the type . grouped.sum() gives the desired result but I cannot get rolling_sum to work with the groupby object. Is there any way to apply rolling functions to groupby objects? For example: x = range(0, 6) id = ['a', 'a', 'a', 'b', 'b', 'b'] df = DataFrame(zip(id, x), columns = ['id', 'x']) df.groupby('id').sum() id x a 3 b 12 However, I would like to have something like: id x 0 a 0 1 a 1 2 a 3 3 b 3 4 b 7 5 b 12", "output": "cumulative sum To answer the question directly, the cumsum method would produced the desired series: In [17]: df Out[17]: id x 0 a 0 1 a 1 2 a 2 3 b 3 4 b 4 5 b 5 In [18]: df.groupby('id').x.cumsum() Out[18]: 0 0 1 1 2 3 3 3 4 7 5 12 Name: x, dtype: int64 pandas rolling functions per group More generally, any rolling function can be applied to each group as follows (using the new .rolling method as commented by @kekert). Note that the return type is a multi-indexed series, which is different from previous (deprecated) pd.rolling_* methods. 
In [10]: df.groupby('id')['x'].rolling(2, min_periods=1).sum() Out[10]: id a 0 0.00 1 1.00 2 3.00 b 3 3.00 4 7.00 5 9.00 Name: x, dtype: float64 To apply the per-group rolling function and receive result in original dataframe order, transform should be used instead: In [16]: df.groupby('id')['x'].transform(lambda s: s.rolling(2, min_periods=1).sum()) Out[16]: 0 0 1 1 2 3 3 3 4 7 5 9 Name: x, dtype: int64 deprecated approach For reference, here's how the now deprecated pandas.rolling_mean behaved: In [16]: df.groupby('id')['x'].apply(pd.rolling_mean, 2, min_periods=1) Out[16]: 0 0.0 1 0.5 2 1.5 3 3.0 4 3.5 5 4.5"} +{"question_id": 12881926, "score": 90, "creation_date": 1350215385, "tags": ["python", "opencv"], "instruction": "Create a new RGB OpenCV image using Python?\n\nUsing OpenCV in Python, how can I create a new RGB image? I don't want to load the image from a file, just create an empty image ready to work with.", "output": "The new cv2 interface for Python integrates numpy arrays into the OpenCV framework, which makes operations much simpler as they are represented with simple multidimensional arrays. For example, your question would be answered with: import cv2 # Not actually necessary if you just want to create an image. import numpy as np blank_image = np.zeros((height,width,3), np.uint8) This initialises an RGB-image that is just black. Now, for example, if you wanted to set the left half of the image to blue and the right half to green , you could do so easily: blank_image[:,0:width//2] = (255,0,0) # (B, G, R) blank_image[:,width//2:width] = (0,255,0) If you want to save yourself a lot of trouble in future, as well as having to ask questions such as this one, I would strongly recommend using the cv2 interface rather than the older cv one. I made the change recently and have never looked back. You can read more about cv2 at the OpenCV Change Logs."} +{"question_id": 44409084, "score": 90, "creation_date": 1496828760, "tags": ["python", "numpy"], "instruction": "How to zip two 1d numpy array to 2d numpy array\n\nI have two numpy 1d arrays, e.g: a = np.array([1,2,3,4,5]) b = np.array([6,7,8,9,10]) Then how can I get one 2d array [[1,6], [2,7], [3,8], [4,9], [5, 10]]?", "output": "The answer lies in your question: np.array(list(zip(a,b))) Edit: Although my post gives the answer as requested by the OP, the conversion to list and back to NumPy array takes some overhead (noticeable for large arrays). Hence, dstack would be a computationally efficient alternative (ref. @zipa's answer). I was unaware of dstack at the time of posting this answer so credits to @zipa for introducing it to this post. Edit 2: As can be seen in the duplicate question, np.c_ is even shorter than np.dstack. >>> import numpy as np >>> a = np.arange(1, 6) >>> b = np.arange(6, 11) >>> >>> a array([1, 2, 3, 4, 5]) >>> b array([ 6, 7, 8, 9, 10]) >>> np.c_[a, b] array([[ 1, 6], [ 2, 7], [ 3, 8], [ 4, 9], [ 5, 10]])"} +{"question_id": 11759269, "score": 90, "creation_date": 1343824281, "tags": ["python", "static-methods"], "instruction": "Calling static method in python\n\nI have a class Person and a static method in that class called call_person: class Person: def call_person(): print \"hello person\" In the python console I import the class Person and call Person.call_person(). But it is giving me error that says 'module' object has no attribute 'call_person'. 
Can anyone please let me know why I am getting this error?", "output": "You need to do something like: class Person: @staticmethod def call_person(): print(\"hello person\") # Calling static methods works on classes as well as instances of that class Person.call_person() # calling on class p = Person() p.call_person() # calling on instance of class Depending on what you want to do, a classmethod might be more appropriate: class Person: @classmethod def call_person(cls): print(\"hello person\", cls) p = Person().call_person() # using classmethod on instance Person.call_person() # using classmethod on class The difference here is that in the second example, the class itself is passed as the first argument to the method (as opposed to a regular method where the instance is the first argument, or a staticmethod which doesn't receive any additional arguments). Now to answer your actual question. I'm betting that you aren't finding your method because you have put the class Person into a module Person.py. Then: import Person # Person class is available as Person.Person Person.Person.call_person() # this should work Person.Person().call_person() # this should work as well Alternatively, you might want to import the class Person from the module Person: from Person import Person Person.call_person() This all gets a little confusing as to what is a module and what is a class. Typically, I try to avoid giving classes the same name as the module that they live in. However, this is apparently not looked down on too much as the datetime module in the standard library contains a datetime class. Finally, it is worth pointing out that you don't need a class for this simple example: # Person.py def call_person(): print(\"Hello person\") Now in another file, import it: import Person Person.call_person() # 'Hello person'"} +{"question_id": 17972938, "score": 90, "creation_date": 1375280308, "tags": ["python", "python-2.7", "pandas"], "instruction": "Check if a string in a Pandas DataFrame column is in a list of strings\n\nIf I have a frame like this frame = pd.DataFrame({ \"a\": [\"the cat is blue\", \"the sky is green\", \"the dog is black\"] }) and I want to check if any of those rows contain a certain word I just have to do this. frame[\"b\"] = ( frame.a.str.contains(\"dog\") | frame.a.str.contains(\"cat\") | frame.a.str.contains(\"fish\") ) frame[\"b\"] outputs: 0 True 1 False 2 True Name: b, dtype: bool If I decide to make a list: mylist = [\"dog\", \"cat\", \"fish\"] How would I check that the rows contain a certain word in the list?", "output": "frame = pd.DataFrame({'a' : ['the cat is blue', 'the sky is green', 'the dog is black']}) frame a 0 the cat is blue 1 the sky is green 2 the dog is black The str.contains method accepts a regular expression pattern: mylist = ['dog', 'cat', 'fish'] pattern = '|'.join(mylist) pattern 'dog|cat|fish' frame.a.str.contains(pattern) 0 True 1 False 2 True Name: a, dtype: bool Because regex patterns are supported, you can also embed flags: frame = pd.DataFrame({'a' : ['Cat Mr. Nibbles is blue', 'the sky is green', 'the dog is black']}) frame a 0 Cat Mr. 
Nibbles is blue 1 the sky is green 2 the dog is black pattern = '|'.join([f'(?i){animal}' for animal in mylist]) # python 3.6+ pattern '(?i)dog|(?i)cat|(?i)fish' frame.a.str.contains(pattern) 0 True # Because of the (?i) flag, 'Cat' is also matched to 'cat' 1 False 2 True"} +{"question_id": 14694501, "score": 90, "creation_date": 1360007802, "tags": ["python", "matplotlib", "subplot"], "instruction": "Delete a subplot\n\nI'm trying to figure out a way of deleting (dynamically) subplots in matplotlib. I see they have a remove method, but I get the error NotImplementedError: cannot remove artist I'm surprised that I can't find this anywhere. Does anyone know how to do this? from matplotlib import pyplot as plt fig, axs = plt.subplots(1,3) axs[0].plot([1,2],[3,4]) axs[2].plot([0,1],[2,3]) plt.draw() plt.tight_layout()", "output": "Use fig.delaxes or plt.delaxes to remove unwanted subplots fig, axs = plt.subplots(1,3) axs[0].plot([1,2],[3,4]) axs[2].plot([0,1],[2,3]) fig.delaxes(axs[1]) plt.draw() plt.tight_layout()"} +{"question_id": 10867882, "score": 90, "creation_date": 1338697450, "tags": ["python", "python-3.x", "tuples", "iterable-unpacking"], "instruction": "How are tuples unpacked in for loops?\n\nI stumbled across the following code: for i, a in enumerate(attributes): labels.append(Label(root, text = a, justify = LEFT).grid(sticky = W)) e = Entry(root) e.grid(column=1, row=i) entries.append(e) entries[i].insert(INSERT,\"text to insert\") I don't understand the i, a bit, and searching for information on for didn't yield any useful results. When I try and experiment with the code I get the error: ValueError: need more than 1 value to unpack Does anyone know what it does, or a more specific term associated with it that I can google to learn more?", "output": "You could google \"tuple unpacking\". This can be used in various places in Python. The simplest is in assignment: >>> x = (1,2) >>> a, b = x >>> a 1 >>> b 2 In a for-loop it works similarly. If each element of the iterable is a tuple, then you can specify two variables, and each element in the loop will be unpacked to the two. >>> x = [(1,2), (3,4), (5,6)] >>> for item in x: ... print \"A tuple\", item A tuple (1, 2) A tuple (3, 4) A tuple (5, 6) >>> for a, b in x: ... print \"First\", a, \"then\", b First 1 then 2 First 3 then 4 First 5 then 6 The enumerate function creates an iterable of tuples, so it can be used this way."} +{"question_id": 43332057, "score": 90, "creation_date": 1491854656, "tags": ["python", "pandas", "string", "removing-whitespace"], "instruction": "Strip whitespace from strings in a column\n\nI am using python csvkit to compare 2 files like this: df1 = pd.read_csv('input1.csv', sep=',\\s+', delimiter=',', encoding=\"utf-8\") df2 = pd.read_csv('input2.csv', sep=',\\s,', delimiter=',', encoding=\"utf-8\") df3 = pd.merge(df1,df2, on='employee_id', how='right') df3.to_csv('output.csv', encoding='utf-8', index=False) Currently I am running the file through a script before hand that strips spaces from the employee_id column. 
An example of employee_ids: 37 78973 3 23787 2 22 3 123 Is there a way to get csvkit to do it and save me a step?", "output": "You can strip() an entire Series in Pandas using .str.strip(): df1['employee_id'] = df1['employee_id'].str.strip() df2['employee_id'] = df2['employee_id'].str.strip() This will remove leading/trailing whitespaces on the employee_id column in both df1 and df2 Alternatively, modify the read_csv lines to use skipinitialspace=True df1 = pd.read_csv('input1.csv', sep=',\\s+', delimiter=',', encoding=\"utf-8\", skipinitialspace=True) df2 = pd.read_csv('input2.csv', sep=',\\s,', delimiter=',', encoding=\"utf-8\", skipinitialspace=True) It looks like you are attempting to remove spaces in a string containing numbers, which can be accomplished with pandas.Series.str.replace: df1['employee_id'] = df1['employee_id'].str.replace(\" \", \"\") df2['employee_id'] = df2['employee_id'].str.replace(\" \", \"\")"} +{"question_id": 24306205, "score": 90, "creation_date": 1403179547, "tags": ["python", "shell", "subprocess", "pipe"], "instruction": "File not found error when launching a subprocess containing piped commands\n\nI need to run the command date | grep -o -w '\"+tz+\"'' | wc -w using Python on my localhost. I am using subprocess module for the same and using the check_output method as I need to capture the output for the same. However it is throwing me an error : Traceback (most recent call last): File \"test.py\", line 47, in check_timezone() File \"test.py\", line 40, in check_timezone count = subprocess.check_output(command) File \"/usr/lib/python2.7/subprocess.py\", line 537, in check_output process = Popen(stdout=PIPE, *popenargs, **kwargs) File \"/usr/lib/python2.7/subprocess.py\", line 679, in __init__ errread, errwrite) File \"/usr/lib/python2.7/subprocess.py\", line 1249, in _execute_child raise child_exception- OSError: [Errno 2] No such file or directory", "output": "You have to add shell=True to execute a shell command. check_output is trying to find an executable called: date | grep -o -w '\"+tz+\"'' | wc -w and cannot find it. (no idea why you removed the essential information from the error message). See the difference between: >>> subprocess.check_output('date | grep 1') Traceback (most recent call last): File \"\", line 1, in File \"/usr/lib/python3.4/subprocess.py\", line 603, in check_output with Popen(*popenargs, stdout=PIPE, **kwargs) as process: File \"/usr/lib/python3.4/subprocess.py\", line 848, in __init__ restore_signals, start_new_session) File \"/usr/lib/python3.4/subprocess.py\", line 1446, in _execute_child raise child_exception_type(errno_num, err_msg) FileNotFoundError: [Errno 2] No such file or directory: 'date | grep 1' And: >>> subprocess.check_output('date | grep 1', shell=True) b'gio 19 giu 2014, 14.15.35, CEST\\n' Read the documentation about the Frequently Used Arguments for more information about the shell argument and how it changes the interpretation of the other arguments. Note that you should try to avoid using shell=True since spawning a shell can be a security hazard (even if you do not execute untrusted input attacks like Shellshock can still be performed!). The documentation for the subprocess module has a little section about replacing the shell pipeline. 
You can do so by spawning the two processes in python and use subprocess.PIPE: date_proc = subprocess.Popen(['date'], stdout=subprocess.PIPE) grep_proc = subprocess.check_output(['grep', '1'], stdin=date_proc.stdout, stdout=subprocess.PIPE) date_proc.stdout.close() output = grep_proc.communicate()[0] You can write some simple wrapper function to easily define pipelines: import subprocess from shlex import split from collections import namedtuple from functools import reduce proc_output = namedtuple('proc_output', 'stdout stderr') def pipeline(starter_command, *commands): if not commands: try: starter_command, *commands = starter_command.split('|') except AttributeError: pass starter_command = _parse(starter_command) starter = subprocess.Popen(starter_command, stdout=subprocess.PIPE) last_proc = reduce(_create_pipe, map(_parse, commands), starter) return proc_output(*last_proc.communicate()) def _create_pipe(previous, command): proc = subprocess.Popen(command, stdin=previous.stdout, stdout=subprocess.PIPE) previous.stdout.close() return proc def _parse(cmd): try: return split(cmd) except Exception: return cmd With this in place you can write pipeline('date | grep 1') or pipeline('date', 'grep 1') or pipeline(['date'], ['grep', '1'])"} +{"question_id": 35134162, "score": 90, "creation_date": 1454340285, "tags": ["python", "python-2.7", "elasticsearch"], "instruction": "How to delete an Elasticsearch Index using Python?\n\nI have Python 2.7 and Elasticsearch 2.1.1. I used the following to delete an index: es.delete(index='researchtest', doc_type='test') but this gives me this message: return func(*args, params=params, **kwargs) TypeError: delete() takes at least 4 arguments (4 given) I also tried this technique: es.delete_by_query( index='researchtest', doc_type='test', body='{\"query\":{\"match_all\":{}}}' ) but I get this message: AttributeError: 'Elasticsearch' object has no attribute 'delete_by_query' Has the API changed for 2.1.1 for python? https://elasticsearch-py.readthedocs.org/en/master/api.html#elasticsearch.client.IndicesClient.delete", "output": "For ES 8+ use: from elasticsearch import Elasticsearch es = Elasticsearch() es.options(ignore_status=[400,404]).indices.delete(index='test-index') For older versions use this notation: from elasticsearch import Elasticsearch es = Elasticsearch() es.indices.delete(index='test-index', ignore=[400, 404])"} +{"question_id": 3939660, "score": 90, "creation_date": 1287119802, "tags": ["python", "math", "primes", "sieve-of-eratosthenes"], "instruction": "Sieve of Eratosthenes - Finding Primes Python\n\nJust to clarify, this is not a homework problem :) I wanted to find primes for a math application I am building & came across Sieve of Eratosthenes approach. I have written an implementation of it in Python. But it's terribly slow. For say, if I want to find all primes less than 2 million. It takes > 20 mins. (I stopped it at this point). How can I speed this up? def primes_sieve(limit): limitn = limit+1 primes = range(2, limitn) for i in primes: factors = range(i, limitn, i) for f in factors[1:]: if f in primes: primes.remove(f) return primes print primes_sieve(2000) UPDATE: I ended up doing profiling on this code & found that quite a lot of time was spent on removing an element from the list. Quite understandable considering it has to traverse the entire list (worst-case) to find the element & then remove it and then readjust the list (maybe some copy goes on?). Anyway, I chucked out list for dictionary. 
My new implementation - def primes_sieve1(limit): limitn = limit+1 primes = dict() for i in range(2, limitn): primes[i] = True for i in primes: factors = range(i,limitn, i) for f in factors[1:]: primes[f] = False return [i for i in primes if primes[i]==True] print primes_sieve1(2000000)", "output": "You're not quite implementing the correct algorithm: In your first example, primes_sieve doesn't maintain a list of primality flags to strike/unset (as in the algorithm), but instead resizes a list of integers continuously, which is very expensive: removing an item from a list requires shifting all subsequent items down by one. In the second example, primes_sieve1 maintains a dictionary of primality flags, which is a step in the right direction, but it iterates over the dictionary in undefined order, and redundantly strikes out factors of factors (instead of only factors of primes, as in the algorithm). You could fix this by sorting the keys, and skipping non-primes (which already makes it an order of magnitude faster), but it's still much more efficient to just use a list directly. The correct algorithm (with a list instead of a dictionary) looks something like: def primes_sieve2(limit): a = [True] * limit # Initialize the primality list a[0] = a[1] = False for (i, isprime) in enumerate(a): if isprime: yield i for n in range(i*i, limit, i): # Mark factors non-prime a[n] = False (Note that this also includes the algorithmic optimization of starting the non-prime marking at the prime's square (i*i) instead of its double.)"} +{"question_id": 54351740, "score": 90, "creation_date": 1548348898, "tags": ["python", "string-formatting", "string-interpolation", "f-string"], "instruction": "How can I use f-string with a variable, not with a string literal?\n\nI want to use f-string with my string variable, not with string defined with a string literal, \"...\". Here is my code: name=[\"deep\",\"mahesh\",\"nirbhay\"] user_input = r\"certi_{element}\" # this string I ask from user for element in name: print(f\"{user_input}\") This code gives output: certi_{element} certi_{element} certi_{element} But I want: certi_{deep} certi_{mahesh} certi_{nirbhay} How can I do this? See also: How to postpone/defer the evaluation of f-strings?", "output": "If you define: def fstr(template): return eval(f\"f'{template}'\") Then you can do: name=[\"deep\",\"mahesh\",\"nirbhay\"] user_input = r\"certi_{element}\" # this string i ask from user for element in name: print(fstr(user_input)) Which gives as output: certi_deep certi_mahesh certi_nirbhay But be aware that users can use expressions in the template, like e.g.: import os # assume you have used os somewhere user_input = r\"certi_{os.environ}\" for element in name: print(fstr(user_input)) You definitely don't want this! Therefore, a much safer option is to define: def fstr(template, **kwargs): return eval(f\"f'{template}'\", kwargs) Arbitrary code is no longer possible, but users can still use string expressions like: user_input = r\"certi_{element.upper()*2}\" for element in name: print(fstr(user_input, element=element)) Gives as output: certi_DEEPDEEP certi_MAHESHMAHESH certi_NIRBHAYNIRBHAY Which may be desired in some cases."} +{"question_id": 10192758, "score": 90, "creation_date": 1334671325, "tags": ["python", "compilation", "package"], "instruction": "How to get the list of options that Python was compiled with?\n\nYou can compile Python in various ways. I'd like to find out with which options my Python was compiled. 
Concrete use-case: was my Python compiled with readline? I know I can see this by doing \"import readline\", but I'd like to see a list of compilation setting for my Python binary. Edit: I mean the Python executable and not source code written by myself.", "output": "There is a module to see the system config import sysconfig print(sysconfig.get_config_vars()) It offers an interface to get individual variables as well. sysconfig.get_config_var('HAVE_LIBREADLINE') It is also accessible via shell: python3 -m sysconfig | less Edit: before python2.7, you have to use import distutils.sysconfig print distutils.sysconfig.get_config_vars()"} +{"question_id": 37534440, "score": 90, "creation_date": 1464647988, "tags": ["python", "jupyter-notebook", "command-line-arguments", "papermill"], "instruction": "Passing command line arguments to argv in jupyter/ipython notebook\n\nI'm wondering if it's possible to populate sys.argv (or some other structure) with command line arguments in a jupyter/ipython notebook, similar to how it's done through a python script. For instance, if I were to run a python script as follows: python test.py False Then sys.argv would contain the argument False. But if I run a jupyter notebook in a similar manner: jupyter notebook test.ipynb False Then the command line argument gets lost. Is there any way to access this argument from within the notebook itself?", "output": "If the goal is to run a notebook with configurable arguments passed from commandline, I think the easiest way is to use environment variables, like this: NB_ARGS=some_args jupyter nbconvert --execute --to html --template full some_notebook.ipynb Then in the notebook, you can import os and use os.environ['NB_ARGS']. The variable value can be some text that contains key-value pairs or json for example."} +{"question_id": 67637004, "score": 90, "creation_date": 1621600670, "tags": ["python", "flask", "gunicorn"], "instruction": "Gunicorn worker terminated with signal 9\n\nI am running a Flask application and hosting it on Kubernetes from a Docker container. Gunicorn is managing workers that reply to API requests. The following warning message is a regular occurrence, and it seems like requests are being canceled for some reason. On Kubernetes, the pod is showing no odd behavior or restarts and stays within 80% of its memory and CPU limits. [2021-03-31 16:30:31 +0200] [1] [WARNING] Worker with pid 26 was terminated due to signal 9 How can we find out why these workers are killed?", "output": "I encountered the same warning message. [WARNING] Worker with pid 71 was terminated due to signal 9 I came across this faq, which says that \"A common cause of SIGKILL is when OOM killer terminates a process due to low memory condition.\" I used dmesg realized that indeed it was killed because it was running out of memory. Out of memory: Killed process 776660 (gunicorn)"} +{"question_id": 42279063, "score": 90, "creation_date": 1487261348, "tags": ["python", "pycharm", "argparse", "python-typing"], "instruction": "Python: Typehints for argparse.Namespace objects\n\nIs there a way to have Python static analyzers (e.g. in PyCharm, other IDEs) pick up on Typehints on argparse.Namespace objects? 
Example: parser = argparse.ArgumentParser() parser.add_argument('--somearg') parsed = parser.parse_args(['--somearg','someval']) # type: argparse.Namespace the_arg = parsed.somearg # <- Pycharm complains that parsed object has no attribute 'somearg' If I remove the type declaration in the inline comment, PyCharm doesn't complain, but it also doesn't pick up on invalid attributes. For example: parser = argparse.ArgumentParser() parser.add_argument('--somearg') parsed = parser.parse_args(['--somearg','someval']) # no typehint the_arg = parsed.somaerg # <- typo in attribute, but no complaint in PyCharm. Raises AttributeError when executed. Any ideas? Update Inspired by Austin's answer below, the simplest solution I could find is one using namedtuples: from collections import namedtuple ArgNamespace = namedtuple('ArgNamespace', ['some_arg', 'another_arg']) parser = argparse.ArgumentParser() parser.add_argument('--some-arg') parser.add_argument('--another-arg') parsed = parser.parse_args(['--some-arg', 'val1', '--another-arg', 'val2']) # type: ArgNamespace x = parsed.some_arg # good... y = parsed.another_arg # still good... z = parsed.aint_no_arg # Flagged by PyCharm! While this is satisfactory, I still don't like having to repeat the argument names. If the argument list grows considerably, it will be tedious updating both locations. What would be ideal is somehow extracting the arguments from the parser object like the following: parser = argparse.ArgumentParser() parser.add_argument('--some-arg') parser.add_argument('--another-arg') MagicNamespace = parser.magically_extract_namespace() parsed = parser.parse_args(['--some-arg', 'val1', '--another-arg', 'val2']) # type: MagicNamespace I haven't been able to find anything in the argparse module that could make this possible, and I'm still unsure if any static analysis tool could be clever enough to get those values and not bring the IDE to a grinding halt. Still searching... Update 2 Per hpaulj's comment, the closest thing I could find to the method described above that would \"magically\" extract the attributes of the parsed object is something that would extract the dest attribute from each of the parser's _actions.: parser = argparse.ArgumentParser() parser.add_argument('--some-arg') parser.add_argument('--another-arg') MagicNamespace = namedtuple('MagicNamespace', [act.dest for act in parser._actions]) parsed = parser.parse_args(['--some-arg', 'val1', '--another-arg', 'val2']) # type: MagicNamespace But this still does not cause attribute errors to get flagged in static analysis. This is true also true if I pass namespace=MagicNamespace in the parser.parse_args call.", "output": "Typed argument parser was made for exactly this purpose. It wraps argparse. Your example is implemented as: from tap import Tap class ArgumentParser(Tap): somearg: str parsed = ArgumentParser().parse_args(['--somearg', 'someval']) the_arg = parsed.somearg Here's a picture of it in action. 
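For readers who want to stay on the standard library, a hedged sketch (not from the original answer) is to declare an annotated argparse.Namespace subclass and pass an instance of it through the namespace= argument of parse_args, so static analyzers have declared attributes to check:

import argparse

class Args(argparse.Namespace):
    somearg: str  # declared here so IDEs/type checkers know the attribute exists

parser = argparse.ArgumentParser()
parser.add_argument('--somearg')
parsed = parser.parse_args(['--somearg', 'someval'], namespace=Args())
print(parsed.somearg)

The argument names still have to be repeated in the class, so this mainly trades the namedtuple duplication from the question for a class that type checkers understand.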
It's on PyPI and can be installed with: pip install typed-argument-parser Full disclosure: I'm one of the creators of this library."} +{"question_id": 40339886, "score": 90, "creation_date": 1477907734, "tags": ["python", "pandas", "dataframe", "concatenation", "nan"], "instruction": "pandas concat generates nan values\n\nI am curious why a simple concatenation of two dataframes in pandas: initId.shape # (66441, 1) initId.isnull().sum() # 0 ypred.shape # (66441, 1) ypred.isnull().sum() # 0 of the same shape and both without NaN values foo = pd.concat([initId, ypred], join='outer', axis=1) foo.shape # (83384, 2) foo.isnull().sum() # 16943 can result in a lot of NaN values if joined. How can I fix this problem and prevent NaN values being introduced? Trying to reproduce it like aaa = pd.DataFrame([0,1,0,1,0,0], columns=['prediction']) bbb = pd.DataFrame([0,0,1,0,1,1], columns=['groundTruth']) pd.concat([aaa, bbb], axis=1) failed e.g. worked just fine as no NaN values were introduced.", "output": "I think there is problem with different index values, so where concat cannot align get NaN: aaa = pd.DataFrame([0,1,0,1,0,0], columns=['prediction'], index=[4,5,8,7,10,12]) print(aaa) prediction 4 0 5 1 8 0 7 1 10 0 12 0 bbb = pd.DataFrame([0,0,1,0,1,1], columns=['groundTruth']) print(bbb) groundTruth 0 0 1 0 2 1 3 0 4 1 5 1 print (pd.concat([aaa, bbb], axis=1)) prediction groundTruth 0 NaN 0.0 1 NaN 0.0 2 NaN 1.0 3 NaN 0.0 4 0.0 1.0 5 1.0 1.0 7 1.0 NaN 8 0.0 NaN 10 0.0 NaN 12 0.0 NaN Solution is reset_index if indexes values are not necessary: aaa.reset_index(drop=True, inplace=True) bbb.reset_index(drop=True, inplace=True) print(aaa) prediction 0 0 1 1 2 0 3 1 4 0 5 0 print(bbb) groundTruth 0 0 1 0 2 1 3 0 4 1 5 1 print (pd.concat([aaa, bbb], axis=1)) prediction groundTruth 0 0 0 1 1 0 2 0 1 3 1 0 4 0 1 5 0 1 EDIT: If need same index like aaa and length of DataFrames is same use: bbb.index = aaa.index print (pd.concat([aaa, bbb], axis=1)) prediction groundTruth 4 0 0 5 1 0 8 0 1 7 1 0 10 0 1 12 0 1"} +{"question_id": 1077307, "score": 90, "creation_date": 1246579627, "tags": ["python", "iterator", "generator"], "instruction": "Why is there no first(iterable) built-in function in Python?\n\nI'm wondering if there's a reason that there's no first(iterable) in the Python built-in functions, somewhat similar to any(iterable) and all(iterable) (it may be tucked in a stdlib module somewhere, but I don't see it in itertools). first would perform a short-circuit generator evaluation so that unnecessary (and a potentially infinite number of) operations can be avoided; i.e. def identity(item): return item def first(iterable, predicate=identity): for item in iterable: if predicate(item): return item raise ValueError('No satisfactory value found') This way you can express things like: denominators = (2, 3, 4, 5) lcd = first(i for i in itertools.count(1) if all(i % denominators == 0 for denominator in denominators)) Clearly you can't do list(generator)[0] in that case, since the generator doesn't terminate. Or if you have a bunch of regexes to match against (useful when they all have the same groupdict interface): match = first(regex.match(big_text) for regex in regexes) You save a lot of unnecessary processing by avoiding list(generator)[0] and short-circuiting on a positive match.", "output": "In Python 2, if you have an iterator, you can just call its next method. 
Something like: >>> (5*x for x in xrange(2,4)).next() 10 In Python 3, you can use the next built-in with an iterator: >>> next(5*x for x in range(2,4)) 10"} +{"question_id": 21096436, "score": 89, "creation_date": 1389630825, "tags": ["python", "curl", "virtualenv", "pip", "pycurl"], "instruction": "SSL backend error when using OpenSSL\n\nI was trying to install pycurl in a virtualenv using pip and I got this error ImportError: pycurl: libcurl link-time ssl backend (openssl) is different from compile-time ssl backend (none/other) I read some documentation saying that \"To fix this, you need to tell setup.py what SSL backend is used\" (source) although I am not sure how to do this since I installed pycurl using pip. How can I specify the SSL backend when installing pycurl with pip? Thanks", "output": "for most people After reading their INSTALLATION file, I was able to solve my problem by setting an environment variable and did a reinstall # remove existing `pycurl` installation pip uninstall pycurl # export variable with your link-time ssl backend (which is openssl above) export PYCURL_SSL_LIBRARY=openssl # then, re-install `pycurl` with **no cache** pip install pycurl --no-cache-dir There could be other solution out there but this works perfectly for me on a virtualenv and pip installation. Some people have a different error message complaining about nss instead of openssl ImportError: pycurl: libcurl link-time ssl backend (nss) (the key part is nss) so you have to do something different for this error message: pip uninstall pycurl pip install --no-cache-dir --compile --compile-options=\"--with-nss\" pycurl"} +{"question_id": 44650888, "score": 89, "creation_date": 1497955433, "tags": ["python", "image", "opencv", "image-processing"], "instruction": "Resize an image without distortion OpenCV\n\nI am using Python 3 and latest version of OpenCV. I am trying to resize an image using the resize function provided, but after resizing, the image is very distorted. Code: import cv2 file = \"/home/tanmay/Desktop/test_image.png\" img = cv2.imread(file , 0) print(img.shape) cv2.imshow('img' , img) k = cv2.waitKey(0) if k == 27: cv2.destroyWindow('img') resize_img = cv2.resize(img , (28 , 28)) cv2.imshow('img' , resize_img) x = cv2.waitKey(0) if x == 27: cv2.destroyWindow('img') The original image is 480 x 640 (RGB, therefore I passed the 0 to get it to grayscale). Is there any way I can resize it and avoid the distortion using OpenCV or any other library perhaps? I intend to make a handwritten digit recogniser and I have trained my neural network using the MNIST data, therefore I need the image to be 28x28.", "output": "You may try below. The function will keep the aspect rate of the original image. def image_resize(image, width = None, height = None, inter = cv2.INTER_AREA): # initialize the dimensions of the image to be resized and # grab the image size dim = None (h, w) = image.shape[:2] # if both the width and height are None, then return the # original image if width is None and height is None: return image # check to see if the width is None if width is None: # calculate the ratio of the height and construct the # dimensions r = height / float(h) dim = (int(w * r), height) # otherwise, the height is None else: # calculate the ratio of the width and construct the # dimensions r = width / float(w) dim = (width, int(h * r)) # resize the image resized = cv2.resize(image, dim, interpolation = inter) # return the resized image return resized Here is an example usage. 
image = image_resize(image, height = 800)"} +{"question_id": 62267544, "score": 89, "creation_date": 1591636143, "tags": ["python", "pydantic"], "instruction": "Generate pydantic model from a dict\n\nIs there a straight-forward approach to generate a Pydantic model from a dictionary? Here is a sample of the data I have. { 'id': '424c015f-7170-4ac5-8f59-096b83fe5f5806082020', 'contacts': [{ 'displayName': 'Norma Fisher', 'id': '544aa395-0e63-4f9a-8cd4-767b3040146d' }], 'startTime': '2020-06-08T09:38:00+00:00' } Expecting a model similar to ... class NewModel(BaseModel): id: str contacts: list startTime: str", "output": "In Pydantic 2, you can use MyModel.model_validate(my_dict) to generate a model from a dictionary. According to the documentation \u2013 this is very similar to the __init__ method of the model, except it takes a dict rather than keyword arguments. If you're Pydantic 1, the method is parse_obj instead."} +{"question_id": 14694482, "score": 89, "creation_date": 1360007710, "tags": ["python", "html", "web-scraping", "text", "beautifulsoup"], "instruction": "Converting html to text with Python\n\nI am trying to convert an html block to text using Python. Input:

Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Aenean commodo ligula eget dolor. Aenean massa Consectetuer adipiscing elit. Some Link Aenean commodo ligula eget dolor. Aenean massa Aenean massa.Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Aenean commodo ligula eget dolor. Aenean massa Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Aenean commodo ligula eget dolor. Aenean massa Consectetuer adipiscing elit. Aenean commodo ligula eget dolor. Aenean massa
Desired output: Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Aenean commodo ligula eget dolor. Aenean massa Consectetuer adipiscing elit. Some Link Aenean commodo ligula eget dolor. Aenean massa Aenean massa.Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Aenean commodo ligula eget dolor. Aenean massa Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Aenean commodo ligula eget dolor. Aenean massa Consectetuer adipiscing elit. Aenean commodo ligula eget dolor. Aenean massa I tried the html2text module without much success: #!/usr/bin/env python import urllib2 import html2text from BeautifulSoup import BeautifulSoup soup = BeautifulSoup(urllib2.urlopen('http://example.com/page.html').read()) txt = soup.find('div', {'class' : 'body'}) print(html2text.html2text(txt)) The txt object produces the html block above. I'd like to convert it to text and print it on the screen.", "output": "soup.get_text() outputs what you want: from bs4 import BeautifulSoup soup = BeautifulSoup(html) print(soup.get_text()) output: Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Aenean commodo ligula eget dolor. Aenean massa Consectetuer adipiscing elit. Some Link Aenean commodo ligula eget dolor. Aenean massa Aenean massa.Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Aenean commodo ligula eget dolor. Aenean massa Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Aenean commodo ligula eget dolor. Aenean massa Consectetuer adipiscing elit. Aenean commodo ligula eget dolor. Aenean massa To keep newlines: print(soup.get_text('\\n')) To be identical to your example, you can replace a newline with two newlines: soup.get_text().replace('\\n','\\n\\n')"} +{"question_id": 41908379, "score": 89, "creation_date": 1485596773, "tags": ["python", "matplotlib", "keras"], "instruction": "Keras - Plot training, validation and test set accuracy\n\nI want to plot the output of this simple neural network: model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) history = model.fit(x_test, y_test, nb_epoch=10, validation_split=0.2, shuffle=True) model.test_on_batch(x_test, y_test) model.metrics_names I have plotted accuracy and loss of training and validation: print(history.history.keys()) # \"Accuracy\" plt.plot(history.history['acc']) plt.plot(history.history['val_acc']) plt.title('model accuracy') plt.ylabel('accuracy') plt.xlabel('epoch') plt.legend(['train', 'validation'], loc='upper left') plt.show() # \"Loss\" plt.plot(history.history['loss']) plt.plot(history.history['val_loss']) plt.title('model loss') plt.ylabel('loss') plt.xlabel('epoch') plt.legend(['train', 'validation'], loc='upper left') plt.show() Now I want to add and plot test set's accuracy from model.test_on_batch(x_test, y_test), but from model.metrics_names I obtain the same value 'acc' utilized for plotting accuracy on training data plt.plot(history.history['acc']). How could I plot test set's accuracy?", "output": "You could do it this way also .... regressor.compile(optimizer = 'adam', loss = 'mean_squared_error',metrics=['accuracy']) earlyStopCallBack = EarlyStopping(monitor='loss', patience=3) history=regressor.fit(X_train, y_train, validation_data=(X_test, y_test), epochs = EPOCHS, batch_size = BATCHSIZE, callbacks=[earlyStopCallBack]) For the plotting - I like plotly ... 
so import plotly.graph_objects as go from plotly.subplots import make_subplots # Create figure with secondary y-axis fig = make_subplots(specs=[[{\"secondary_y\": True}]]) # Add traces fig.add_trace( go.Scatter( y=history.history['val_loss'], name=\"val_loss\"), secondary_y=False, ) fig.add_trace( go.Scatter( y=history.history['loss'], name=\"loss\"), secondary_y=False, ) fig.add_trace( go.Scatter( y=history.history['val_accuracy'], name=\"val accuracy\"), secondary_y=True, ) fig.add_trace( go.Scatter( y=history.history['accuracy'], name=\"val accuracy\"), secondary_y=True, ) # Add figure title fig.update_layout( title_text=\"Loss/Accuracy of LSTM Model\" ) # Set x-axis title fig.update_xaxes(title_text=\"Epoch\") # Set y-axes titles fig.update_yaxes(title_text=\"primary Loss\", secondary_y=False) fig.update_yaxes(title_text=\"secondary Accuracy\", secondary_y=True) fig.show() Nothing wrong with either of the proceeding methods. Please note the Plotly graph has two scales , 1 for loss the other for accuracy."} +{"question_id": 6284396, "score": 89, "creation_date": 1307562719, "tags": ["python", "permutation", "python-itertools"], "instruction": "Permutations with unique values\n\nitertools.permutations generates where its elements are treated as unique based on their position, not on their value. So basically I want to avoid duplicates like this: >>> list(itertools.permutations([1, 1, 1])) [(1, 1, 1), (1, 1, 1), (1, 1, 1), (1, 1, 1), (1, 1, 1), (1, 1, 1)] Filtering afterwards is not possible because the amount of permutations is too large in my case. Does anybody know of a suitable algorithm for this? What I basically want is the following: x = itertools.product((0, 1, 'x'), repeat=X) x = sorted(x, key=functools.partial(count_elements, elem='x')) which is not possible because sorted creates a list and the output of itertools.product is too large.", "output": "class unique_element: def __init__(self,value,occurrences): self.value = value self.occurrences = occurrences def perm_unique(elements): eset=set(elements) listunique = [unique_element(i,elements.count(i)) for i in eset] u=len(elements) return perm_unique_helper(listunique,[0]*u,u-1) def perm_unique_helper(listunique,result_list,d): if d < 0: yield tuple(result_list) else: for i in listunique: if i.occurrences > 0: result_list[d]=i.value i.occurrences-=1 for g in perm_unique_helper(listunique,result_list,d-1): yield g i.occurrences+=1 a = list(perm_unique([1,1,2])) print(a) result: [(2, 1, 1), (1, 2, 1), (1, 1, 2)] EDIT (how this works): I rewrote the above program to be longer but more readable. I usually have a hard time explaining how something works, but let me try. In order to understand how this works, you have to understand a similar but simpler program that would yield all permutations with repetitions. def permutations_with_replacement(elements,n): return permutations_helper(elements,[0]*n,n-1)#this is generator def permutations_helper(elements,result_list,d): if d<0: yield tuple(result_list) else: for i in elements: result_list[d]=i all_permutations = permutations_helper(elements,result_list,d-1)#this is generator for g in all_permutations: yield g This program is obviously much simpler: d stands for depth in permutations_helper and has two functions. One function is the stopping condition of our recursive algorithm, and the other is for the result list that is passed around. Instead of returning each result, we yield it. 
If there were no function/operator yield we would have to push the result in some queue at the point of the stopping condition. But this way, once the stopping condition is met, the result is propagated through all stacks up to the caller. That is the purpose of for g in perm_unique_helper(listunique,result_list,d-1): yield g so each result is propagated up to caller. Back to the original program: we have a list of unique elements. Before we can use each element, we have to check how many of them are still available to push onto result_list. Working with this program is very similar to permutations_with_replacement. The difference is that each element cannot be repeated more times than it is in perm_unique_helper."} +{"question_id": 56467667, "score": 89, "creation_date": 1559767042, "tags": ["python", "pymupdf", "mupdf"], "instruction": "How do I resolve \"No module named 'frontend'\" when trying to use PyMuPDF/fitz?\n\nI have installed PymuPDF/fitz because am trying to extract images from PDF files. However, upon running the code below, I am seeing No module named 'frontend'. import fitz doc = fitz.open(pdf_path) for i in range(len(doc)): for img in doc.getPageImageList(i): xref = img[0] pix = fitz.Pixmap(doc, xref) if pix.n < 5: # this is GRAY or RGB pix.writePNG(\"p%s-%s.png\" % (i, xref)) else: # CMYK: convert to RGB first pix1 = fitz.Pixmap(fitz.csRGB, pix) pix1.writePNG(\"p%s-%s.png\" % (i, xref)) pix1 = None pix = None I have searched but there isn't single report of this kind of error. I have installed PyMuPDF, muPDF and fitz modules. Here is the error in full: Traceback (most recent call last): File \"/home/waqar/PycharmProjects/predator/ExtractFileImage.py\", line 1, in import fitz File \"/home/waqar/anaconda3/envs/retinanet/lib/python3.6/site-packages/fitz/__init__.py\", line 1, in from frontend import * ModuleNotFoundError: No module named 'frontend'", "output": "I've solved it by: pip install PyMuPDF This will actually allow the import of the fitz you appear to want. (There's another fitz, which is probably not what you want if you're manipulating PDF files.) NOTE: If you get RuntimeError: Directory 'static/' does not exist after install than do: pip uninstall fitz for more info see: raise RuntimeError(f\"Directory '{directory}' does not exist\") RuntimeError: Directory 'static/' does not exist from import fitz"} +{"question_id": 16533078, "score": 89, "creation_date": 1368489907, "tags": ["python", "opencv"], "instruction": "Clone an image in cv2 Python\n\nI'm new to OpenCV. What is the Python function which act the same as cv::clone() in C++? I just try to get a rect by rectImg = img[10:20, 10:20] but when I draw a line on it, I find the line appear both on img and the rectImage, so, how can I get this done?", "output": "Abid Rahman K's answer is correct, but you say that you are using cv2 which inherently uses NumPy arrays. So, to make a complete different copy of say \"myImage\": newImage = myImage.copy() The above is enough. There isn't any need to import NumPy (numpy)."} +{"question_id": 53623472, "score": 89, "creation_date": 1543969981, "tags": ["python", "matplotlib", "pytorch"], "instruction": "How do I display a single image in PyTorch?\n\nHow do I display a PyTorch Tensor of shape (3, 224, 224) representing a 224x224 RGB image? 
Using plt.imshow(image) gives the error: TypeError: Invalid dimensions for image data", "output": "Given a Tensor representing the image, use .permute() to put the channels as the last dimension when passing them to matplotlib: plt.imshow(tensor_image.permute(1, 2, 0)) Note: permute does not copy or allocate memory, and from_numpy() doesn't either."} +{"question_id": 13310594, "score": 89, "creation_date": 1352473799, "tags": ["python", "matplotlib", "colorbar", "imshow"], "instruction": "Positioning the colorbar\n\nI have a matplotlib plot with a colorbar attached. I want to position the colorbar so that it is horizontal, and underneath my plot. I have almost done this via the following: plt.colorbar(orientation=\"horizontal\",fraction=0.07,anchor=(1.0,0.0)) But the colorbar is still overlapping with the plot slightly (and the labels of the x axis). I want to move the colorbar further down, but I can't figure out how to do it.", "output": "Edit: Updated for matplotlib version >= 3. Three great ways to do this have already been shared in this answer. The matplotlib documentation advises to use inset_locator. This would work as follows: import matplotlib.pyplot as plt from mpl_toolkits.axes_grid1.inset_locator import inset_axes import numpy as np rng = np.random.default_rng(1) fig, ax = plt.subplots(figsize=(4,4)) im = ax.imshow(rng.random((11, 16))) ax.set_xlabel(\"x label\") axins = inset_axes(ax, width=\"100%\", height=\"5%\", loc='lower center', borderpad=-5 ) fig.colorbar(im, cax=axins, orientation=\"horizontal\")"} +{"question_id": 8389636, "score": 89, "creation_date": 1323107642, "tags": ["python", "matplotlib", "legend"], "instruction": "creating over 20 unique legend colors using matplotlib\n\nI am plotting 20 different lines on a single plot using matplotlib. I use a for loop for plotting and label every line with its key and then use the legend function for key in dict.keys(): plot(x,dict[key], label = key) graph.legend() But using this way, the graph repeats a lot of colors in the legend. Is there any way to ensure a unique color is assigned to each line using matplotlib and over 20 lines? thanks", "output": "The answer to your question is related to two other SO questions. The answer to How to pick a new color for each plotted line within a figure in matplotlib? explains how to define the default list of colors that is cycled through to pick the next color to plot. This is done with the Axes.set_color_cycle method. You want to get the correct list of colors though, and this is most easily done using a color map, as is explained in the answer to this question: Create a color generator from given colormap in matplotlib. There a color map takes a value from 0 to 1 and returns a color. So for your 20 lines, you want to cycle from 0 to 1 in steps of 1/20. Specifically you want to cycle form 0 to 19/20, because 1 maps back to 0. This is done in this example: import matplotlib.pyplot as plt import numpy as np NUM_COLORS = 20 cm = plt.get_cmap('gist_rainbow') fig = plt.figure() ax = fig.add_subplot(111) ax.set_prop_cycle(color=[cm(1.*i/NUM_COLORS) for i in range(NUM_COLORS)]) for i in range(NUM_COLORS): ax.plot(np.arange(10)*(i+1)) fig.savefig('moreColors.png') plt.show() This is the resulting figure: Alternative, better (debatable) solution There is an alternative way that uses a ScalarMappable object to convert a range of values to colors. The advantage of this method is that you can use a non-linear Normalization to convert from line index to actual color. 
The following code produces the same exact result: import matplotlib.pyplot as plt import matplotlib.cm as mplcm import matplotlib.colors as colors import numpy as np NUM_COLORS = 20 cm = plt.get_cmap('gist_rainbow') cNorm = colors.Normalize(vmin=0, vmax=NUM_COLORS-1) scalarMap = mplcm.ScalarMappable(norm=cNorm, cmap=cm) fig = plt.figure() ax = fig.add_subplot(111) # old way: #ax.set_prop_cycle(color=[cm(1.*i/NUM_COLORS) for i in range(NUM_COLORS)]) # new way: ax.set_prop_cycle(color=[scalarMap.to_rgba(i) for i in range(NUM_COLORS)]) for i in range(NUM_COLORS): ax.plot(np.arange(10)*(i+1)) fig.savefig('moreColors.png') plt.show()"} +{"question_id": 24199729, "score": 89, "creation_date": 1402643900, "tags": ["python", "mongodb", "pymongo"], "instruction": "pymongo.errors.CursorNotFound: cursor id '...' not valid at server\n\nI am trying to fetch some ids that exist in a mongo database with the following code: client = MongoClient('xx.xx.xx.xx', xxx) db = client.test_database db = client['...'] collection = db.test_collection collection = db[\"...\"] for cursor in collection.find({ \"$and\" : [{ \"followers\" : { \"$gt\" : 2000 } }, { \"followers\" : { \"$lt\" : 3000 } }, { \"list_followers\" : { \"$exists\" : False } }] }): print cursor['screenname'] print cursor['_id']['uid'] id = cursor['_id']['uid'] However, after a short while, I am receive this error: pymongo.errors.CursorNotFound: cursor id '...' not valid at server. I found this article which refers to that problem. Nevertheless it is not clear to me which solution to take. Is it possible to use find().batch_size(30)? What exactly does the above command do? Can I take all the database ids using batch_size?", "output": "You're getting this error because the cursor is timing out on the server (after 10 minutes of inactivity). From the pymongo documentation: Cursors in MongoDB can timeout on the server if they\u2019ve been open for a long time without any operations being performed on them. This can lead to an CursorNotFound exception being raised when attempting to iterate the cursor. When you call the collection.find method it queries a collection and it returns a cursor to the documents. To get the documents you iterate the cursor. When you iterate over the cursor the driver is actually making requests to the MongoDB server to fetch more data from the server. The amount of data returned in each request is set by the batch_size() method. From the documentation: Limits the number of documents returned in one batch. Each batch requires a round trip to the server. It can be adjusted to optimize performance and limit data transfer. Setting the batch_size to a lower value will help you with the timeout errors errors, but it will increase the number of times you're going to get access the MongoDB server to get all the documents. The default batch size: For most queries, the first batch returns 101 documents or just enough documents to exceed 1 megabyte. Batch size will not exceed the maximum BSON document size (16 MB). There is no universal \"right\" batch size. You should test with different values and see what is the appropriate value for your use case i.e. how many documents can you process in a 10 minute window. The last resort will be that you set no_cursor_timeout=True. But you need to be sure that the cursor is closed after you finish processing the data. 
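One way to guarantee that the cursor gets closed (a minimal sketch; process_document is a placeholder and not part of the original answer) is a try/finally block:

cursor = collection.find({"x": 1}, no_cursor_timeout=True)
try:
    for doc in cursor:
        process_document(doc)  # placeholder for your per-document work
finally:
    cursor.close()  # always release the server-side cursor, even if an error occurs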
How to avoid it without try/except: cursor = collection.find( {\"x\": 1}, no_cursor_timeout=True ) for doc in cursor: # do something with doc cursor.close()"} +{"question_id": 3786881, "score": 89, "creation_date": 1285330027, "tags": ["python", "methods"], "instruction": "What is a \"method\" in Python?\n\nCan anyone, please, explain to me in very simple terms what a \"method\" is in Python? The thing is in many Python tutorials for beginners this word is used in such way as if the beginner already knew what a method is in the context of Python. While I am of course familiar with the general meaning of this word, I have no clue what this term means in Python. So, please, explain to me what the \"Pythonian\" method is all about. Some very simple example code would be very much appreciated as a picture is worth thousand words.", "output": "It's a function which is a member of a class: class C: def my_method(self): print(\"I am a C\") c = C() c.my_method() # Prints(\"I am a C\") Simple as that! (There are also some alternative kinds of method, allowing you to control the relationship between the class and the function. But I'm guessing from your question that you're not asking about that, but rather just the basics.)"} +{"question_id": 6102019, "score": 89, "creation_date": 1306179273, "tags": ["python", "regex", "types"], "instruction": "Type of compiled regex object in python\n\nWhat is the type of the compiled regular expression in python? In particular, I want to evaluate isinstance(re.compile(''), ???) to be true, for introspection purposes. One solution I had was, have some global constant REGEX_TYPE = type(re.compile('')), but it doesn't seem very elegant. EDIT: The reason I want to do this is because I have list of strings and compiled regex objects. I want to \"match\" a string against list, by for each string in the list, try to check for string equality. for each regex in the list, try to check whether the string matches the given pattern. and the code that I came up with was: for allowed in alloweds: if isinstance(allowed, basestring) and allowed == input: ignored = False break elif isinstance(allowed, REGEX_TYPE) and allowed.match(input): ignored = False break", "output": "When the type of something isn't well specified, there's nothing wrong with using the type builtin to discover the answer at runtime: >>> import re >>> retype = type(re.compile('hello, world')) >>> isinstance(re.compile('goodbye'), retype) True >>> isinstance(12, retype) False >>> Discovering the type at runtime protects you from having to access private attributes and against future changes to the return type. There's nothing inelegant about using type here, though there may be something inelegant about wanting to know the type at all. That said, with the passage of time, the context of this question has shifted. With contemporary versions of Python, the return type of re.compile is now re.Pattern. The general question about what to do if the type of something is not well-specified is still valid but in this particular case, the type of re.compile(...) is now well-specified."} +{"question_id": 34220532, "score": 89, "creation_date": 1449827506, "tags": ["python", "tensorflow", "neural-network", "deep-learning", "variable-assignment"], "instruction": "How to assign a value to a TensorFlow variable?\n\nI am trying to assign a new value to a tensorflow variable in python. 
import tensorflow as tf import numpy as np x = tf.Variable(0) init = tf.initialize_all_variables() sess = tf.InteractiveSession() sess.run(init) print(x.eval()) x.assign(1) print(x.eval()) But the output I get is 0 0 So the value has not changed. What am I missing?", "output": "In TF1, the statement x.assign(1) does not actually assign the value 1 to x, but rather creates a tf.Operation that you have to explicitly run to update the variable.* A call to Operation.run() or Session.run() can be used to run the operation: assign_op = x.assign(1) sess.run(assign_op) # or `assign_op.op.run()` print(x.eval()) # ==> 1 (* In fact, it returns a tf.Tensor, corresponding to the updated value of the variable, to make it easier to chain assignments.) However, in TF2 x.assign(1) will now assign the value eagerly: x.assign(1) print(x.numpy()) # ==> 1"} +{"question_id": 31306080, "score": 89, "creation_date": 1436401117, "tags": ["python", "unit-testing", "pytest"], "instruction": "Pytest monkeypatch isn't working on imported function\n\nSuppose there are two packages in a project: some_package and another_package. # some_package/foo.py: def bar(): print('hello') # another_package/function.py from some_package.foo import bar def call_bar(): # ... code ... bar() # ... code ... I want to test another_package.function.call_bar mocking out some_package.foo.bar because it has some network I/O I want to avoid. Here is a test: # tests/test_bar.py from another_package.function import call_bar def test_bar(monkeypatch): monkeypatch.setattr('some_package.foo.bar', lambda: print('patched')) call_bar() assert True To my surprise it outputs hello instead of patched. I tried to debug this thing, putting an IPDB breakpoint in the test. When I manually import some_package.foo.bar after the breakpoint and call bar() I get patched. On my real project the situation is even more interesting. If I invoke pytest in the project root my function isn't patched, but when I specify tests/test_bar.py as an argument - it works. As far as I understand it has something to do with the from some_package.foo import bar statement. If it's being executed before monkeypatching is happening then patching fails. But on the condensed test setup from the example above patching does not work in both cases. And why does it work in IPDB REPL after hitting a breakpoint?", "output": "Named importation creates a new name for the object. If you then replace the old name for the object the new name is unaffected. Import the module and use module.bar instead. That will always use the current object. import module def func_under_test(): module.foo() def test_func(): monkeypatch.setattr(...) func_under_test"} +{"question_id": 40001892, "score": 89, "creation_date": 1476284324, "tags": ["python", "python-3.x", "arguments"], "instruction": "Reading named command arguments\n\nCan I use argparse to read named command line arguments that do not need to be in a specific order? I browsed through the documentation but most of it focused on displaying content based on the arguments provided (such as --h). Right now, my script reads ordered, unnamed arguments: myscript.py foo-val bar-val using sys.argv: foo = sys.argv[1] bar = sys.argv[2] But I would like to change the input so that it is order agnostic by naming arguments: myscript.py --bar=bar-val --foo=foo-val", "output": "You can use the Optional Arguments like so. 
With this program: #!/usr/bin/env python3 import argparse, sys parser=argparse.ArgumentParser() parser.add_argument(\"--bar\", help=\"Do the bar option\") parser.add_argument(\"--foo\", help=\"Foo the program\") args=parser.parse_args() print(f\"Args: {args}\\nCommand Line: {sys.argv}\\nfoo: {args.foo}\") print(f\"Dict format: {vars(args)}\") Make it executable: $ chmod +x prog.py Then if you call it with: $ ./prog.py --bar=bar-val --foo foo-val It prints: Args: Namespace(bar='bar-val', foo='foo-val') Command Line: ['./prog.py', '--bar=bar-val', '--foo', 'foo-val'] foo: foo-val Dict format: {'bar': 'bar-val', 'foo': 'foo-val'} Or, if the user wants help argparse builds that too: $ ./prog.py -h usage: prog.py [-h] [--bar BAR] [--foo FOO] options: -h, --help show this help message and exit --bar BAR Do the bar option --foo FOO Foo the program 2022-08-30: Updated to Python3 this answer..."} +{"question_id": 13793399, "score": 89, "creation_date": 1355098990, "tags": ["python", "sql", "postgresql", "sql-injection", "psycopg2"], "instruction": "Passing table name as a parameter in psycopg2\n\nI have the following code, using pscyopg2: sql = 'select %s from %s where utctime > %s and utctime < %s order by utctime asc;' data = (dataItems, voyage, dateRangeLower, dateRangeUpper) rows = cur.mogrify(sql, data) This outputs: select 'waterTemp, airTemp, utctime' from 'ss2012_t02' where utctime > '2012-05-03T17:01:35+00:00'::timestamptz and utctime < '2012-05-01T17:01:35+00:00'::timestamptz order by utctime asc; When I execute this, it falls over - this is understandable, as the quotes around the table name are illegal. Is there a way to legally pass the table name as a parameter, or do I need to do a (explicitly warned against) string concatenation, ie: voyage = 'ss2012_t02' sql = 'select %s from ' + voyage + ' where utctime > %s and utctime < %s order by utctime asc;' Cheers for any insights.", "output": "According to the official documentation: If you need to generate dynamically an SQL query (for instance choosing dynamically a table name) you can use the facilities provided by the psycopg2.sql module. The sql module is new in psycopg2 version 2.7. It has the following syntax: from psycopg2 import sql cur.execute( sql.SQL(\"insert into {table} values (%s, %s)\") .format(table=sql.Identifier('my_table')), [10, 20]) More on: https://www.psycopg.org/docs/sql.html#module-usage [Update 2017-03-24: AsIs should NOT be used to represent table or fields names, the new sql module should be used instead: https://stackoverflow.com/a/42980069/5285608 ] Also, according to psycopg2 documentation: Warning: Never, never, NEVER use Python string concatenation (+) or string parameters interpolation (%) to pass variables to a SQL query string. Not even at gunpoint."} +{"question_id": 19556139, "score": 89, "creation_date": 1382583732, "tags": ["python", "sql", "django", "shell"], "instruction": "Get SQL query count during a Django shell session\n\nIs there a way to print the number of raw SQL queries performed by the Django ORM during a Django shell session? 
This sort of information is already provided by the Django debug toolbar (e.g, 5 QUERIES in 5.83MS but it's not obvious how to get it from the shell.", "output": "You can use connection.queries: >>> from django.conf import settings >>> settings.DEBUG = True >>> from django.db import connection >>> Model.objects.count() >>> print(len(connection.queries)) 1"} +{"question_id": 12018992, "score": 89, "creation_date": 1345296767, "tags": ["python", "python-2.7"], "instruction": "Print Combining Strings and Numbers\n\nTo print strings and numbers in Python, is there any other way than doing something like: first = 10 second = 20 print \"First number is %(first)d and second number is %(second)d\" % {\"first\": first, \"second\":second}", "output": "Using print function without parentheses works with older versions of Python but is no longer supported on Python3, so you have to put the arguments inside parentheses. However, there are workarounds, as mentioned in the answers to this question. Since the support for Python2 has ended in Jan 1st 2020, the answer has been modified to be compatible with Python3. You could do any of these (and there may be other ways): (1) print(\"First number is {} and second number is {}\".format(first, second)) (1b) print(\"First number is {first} and number is {second}\".format(first=first, second=second)) or (2) print('First number is', first, 'second number is', second) (Note: A space will be automatically added afterwards when separated from a comma) or (3) print('First number %d and second number is %d' % (first, second)) or (4) print('First number is ' + str(first) + ' second number is' + str(second)) Using format() (1/1b) is preferred where available."} +{"question_id": 24193174, "score": 89, "creation_date": 1402604024, "tags": ["python", "matplotlib", "pandas"], "instruction": "Reset color cycle in Matplotlib\n\nSay I have data about 3 trading strategies, each with and without transaction costs. I want to plot, on the same axes, the time series of each of the 6 variants (3 strategies * 2 trading costs). I would like the \"with transaction cost\" lines to be plotted with alpha=1 and linewidth=1 while I want the \"no transaction costs\" to be plotted with alpha=0.25 and linewidth=5. But I would like the color to be the same for both versions of each strategy. I would like something along the lines of: fig, ax = plt.subplots(1, 1, figsize=(10, 10)) for c in with_transaction_frame.columns: ax.plot(with_transaction_frame[c], label=c, alpha=1, linewidth=1) ****SOME MAGIC GOES HERE TO RESET THE COLOR CYCLE for c in no_transaction_frame.columns: ax.plot(no_transaction_frame[c], label=c, alpha=0.25, linewidth=5) ax.legend() What is the appropriate code to put on the indicated line to reset the color cycle so it is \"back to the start\" when the second loop is invoked?", "output": "In Matplotlib <1.5.0, you can reset the colorcycle to the original with Axes.set_color_cycle. Looking at the code for this, there is a function to do the actual work: def set_color_cycle(self, clist=None): if clist is None: clist = rcParams['axes.color_cycle'] self.color_cycle = itertools.cycle(clist) And a method on the Axes which uses it: def set_color_cycle(self, clist): \"\"\" Set the color cycle for any future plot commands on this Axes. *clist* is a list of mpl color specifiers. 
\"\"\" self._get_lines.set_color_cycle(clist) self._get_patches_for_fill.set_color_cycle(clist) This basically means you can call the set_color_cycle with None as the only argument, and it will be replaced with the default cycle found in rcParams['axes.color_cycle']. I tried this with the following code and got the expected result: import matplotlib.pyplot as plt import numpy as np for i in range(3): plt.plot(np.arange(10) + i) # for Matplotlib version < 1.5 plt.gca().set_color_cycle(None) # for Matplotlib version >= 1.5 plt.gca().set_prop_cycle(None) for i in range(3): plt.plot(np.arange(10, 1, -1) + i) plt.show()"} +{"question_id": 11476713, "score": 89, "creation_date": 1342205999, "tags": ["python", "string"], "instruction": "Determining how many times a substring occurs in a string in Python\n\nI am trying to figure out how many times a string occurs in a string. For example: nStr = '000123000123' Say the string I want to find is 123. Obviously it occurs twice in nStr but I am having trouble implementing this logic into Python. What I have got at the moment: pattern = '123' count = a = 0 while pattern in nStr[a:]: a = nStr[a:].find(pattern)+1 count += 1 return count The answer it should return is 2. I'm stuck in an infinite loop at the moment. I was just made aware that count is a much better way to do it but out of curiosity, does anyone see a way to do it similar to what I have already got?", "output": "Use str.count: >>> nStr = '000123000123' >>> nStr.count('123') 2 A working version of your code: nStr = '000123000123' pattern = '123' count = 0 flag = True start = 0 while flag: a = nStr.find(pattern, start) # find() returns -1 if the word is not found, #start i the starting index from the search starts(default value is 0) if a == -1: #if pattern not found set flag to False flag = False else: # if word is found increase count and set starting index to a+1 count += 1 start = a + 1 print(count)"} +{"question_id": 20961287, "score": 89, "creation_date": 1389051198, "tags": ["python", "matplotlib", "ipython"], "instruction": "What is %pylab?\n\nI keep seeing people use %pylab in various code snippits, particularly with iPython. However, I cannot see where %pylab is mentioned anywhere in Learning Python (and the few other Python books I have) and am not really sure what it means. I'm sure the answer is simple, but can anyone enlighten me?", "output": "%pylab is a magic function in ipython. Magic functions in ipython always begin with the percent sign (%) followed without any spaces by a small text string; in essence, ipython magic functions define shortcuts particularly useful for interactive work, e.g., to give you an idea of how magic functions work in python, a few of my favorites: to view cwd directory contents: %ls to run a script in ipython using an empty namespace, type space then a script name: %run to execute a code snippet (particularly for multi-line snippets which would usually cause an _IndentationError_ to be thrown): %paste When the %pylab magic function is entered at the IPython prompt, it triggers the import of various modules within Matplotlib. Which modules? well, the ones subsumed under the pylab interface. The awesome Matplotlib plotting library has two distinct interfaces: a pythonic one, and the original MATLAB-like one intended for plotting at the interactive prompt. The former is usually imported like so: from matplotlib import pyplot as PLT Indeed, pyplot has its own magic python magic function %pyplot Why two different interfaces? 
Matplotlib's original interface was pylab; only later was the pythonic interface added. Scripting and app development were not the primary uses cases for Matplotlib when the project began, plotting in the python shell was. Apparently John Hunter (Matplotlib's creator) wanted to include interactive plotting in python so he submitted a patch to Fernando Perez's (FP) IPython project. FP was a Ph.D student at the time and informed JH that he would not able to review the path for some time. As a result, JH created Matplotlib. The significance is that Matplotlib began as a shell-based plotting scheme. the pylab interface is indeed more suitable for interactive work: from pylab import * x, y = arange(10), cos(x/2) plot(x, y) show() and using the pyplot interface: from matplotlib import pyplot as PLT import numpy as NP x, y = NP.arange(10), NP.cos(x/2) fig = PLT.figure() ax1 = fig.add_subplot(111) ax1.plot(x, y) PLT.show()"} +{"question_id": 4840182, "score": 89, "creation_date": 1296344481, "tags": ["python"], "instruction": "setup.py and adding file to /bin/\n\nI can't figure out how to make setup.py add a script to the the user's /bin or /usr/bin or whatever. E.g., I'd like to add a myscript.py to /usr/bin so that the user can call myscript.py from any directory.", "output": "The Python documentation explains it under the installing scripts section. Scripts are files containing Python source code, intended to be started from the command line. setup(..., scripts=['scripts/xmlproc_parse', 'scripts/xmlproc_val'] ) As mentioned here, beside scripts, there is an entry_points mechanism, which is more cross-platform. With entry_points you connect a command line tool name with a function of your choice, whereas scripts could point to any file (e.g. a shell script)."} +{"question_id": 64943693, "score": 89, "creation_date": 1605965971, "tags": ["python", "fastapi"], "instruction": "What are the best practices for structuring a FastAPI project?\n\nThe problem that I want to solve related the project setup: Good names of directories so that their purpose is clear. Keeping all project files (including virtualenv) in one place, so I can easily copy, move, archive, remove the whole project, or estimate disk space usage. Creating multiple copies of some selected file sets such as entire application, repository, or virtualenv, while keeping a single copy of other files that I don't want to clone. Deploying the right set of files to the server simply by resyncing selected one dir. handling both frontend and backend nicely.", "output": "Harsha already mentioned my project generator but I think it can be helpful for future readers to explain the ideas behind of it. If you are going to serve your frontend something like yarn or npm. You should not worry about the structure between them. With something like axios or the Javascript's fetch you can easily talk with your backend from anywhere. When it comes to structuring the backend, if you want to render templates with Jinja, you can have something that is close to MVC Pattern. 
your_project \u251c\u2500\u2500 __init__.py \u251c\u2500\u2500 main.py \u251c\u2500\u2500 core \u2502 \u251c\u2500\u2500 models \u2502 \u2502 \u251c\u2500\u2500 database.py \u2502 \u2502 \u2514\u2500\u2500 __init__.py \u2502 \u251c\u2500\u2500 schemas \u2502 \u2502 \u251c\u2500\u2500 __init__.py \u2502 \u2502 \u2514\u2500\u2500 schema.py \u2502 \u2514\u2500\u2500 settings.py \u251c\u2500\u2500 tests \u2502 \u251c\u2500\u2500 __init__.py \u2502 \u2514\u2500\u2500 v1 \u2502 \u251c\u2500\u2500 __init__.py \u2502 \u2514\u2500\u2500 test_v1.py \u2514\u2500\u2500 v1 \u251c\u2500\u2500 api.py \u251c\u2500\u2500 endpoints \u2502 \u251c\u2500\u2500 endpoint.py \u2502 \u2514\u2500\u2500 __init__.py \u2514\u2500\u2500 __init__.py By using __init__ everywhere, we can access the variables from all over the app, just like Django. Let's break the folders into parts: Core models database.py schemas users.py something.py settings.py views (Add this if you are going to render templates) v1_views.py v2_views.py tests v1 v2 Models It is for your database models, by doing this you can import the same database session or object from v1 and v2. Schemas Schemas are your Pydantic models, we call them schemas because they are actually used for creating OpenAPI schemas; since FastAPI is based on the OpenAPI specification, we use schemas everywhere, from Swagger generation to an endpoint's expected request body. settings.py It is for Pydantic's Settings Management, which is extremely useful: you can use the same variables without redeclaring them. To see how it could be useful for you, check out our documentation for Settings and Environment Variables Views This is optional: if you are going to render your frontend with Jinja, you can have something close to the MVC pattern Core views v1_views.py v2_views.py It would look something like this if you want to add views. Tests It is good to have your tests inside your backend folder. APIs Create them independently by APIRouter, instead of gathering all your APIs inside one file. Notes You can use absolute imports for all your importing since we are using __init__ everywhere, see Python's packaging docs. So assume you are trying to import v1's endpoint.py from v2, you can simply do from my_project.v1.endpoints.endpoint import something"} +{"question_id": 78129981, "score": 89, "creation_date": 1709927540, "tags": ["python", "swift", "xcode"], "instruction": "Logging Error: Failed to initialize logging system. Log messages may be missing.?\n\nLogging Error: Failed to initialize logging system. Log messages may be missing. If this issue persists, try setting IDEPreferLogStreaming=YES in the active scheme actions environment variables. Has anyone else encountered this message? Where is IDEPreferLogStreaming located? I don't know what any of this means. It's building my app successfully but then loading it like it's a computer using floppy discs (crazy slow). Any ideas? I tried wiping my OS and reinstalling. I've reinstalled Xcode twice now. Nothing. A colleague of mine is working on the same SwiftUI project with no issues.", "output": "To find IDEPreferLogStreaming, you need to go to Product -> Scheme -> Edit Scheme and then add it as a new Environment Variable yourself. IDEPreferLogStreaming=YES For me it didn't solve the issue though --- [Edit: it works for me now as well. Probably I was too quick saying it doesn't.
Thanks for your feedback.]"} +{"question_id": 59933946, "score": 89, "creation_date": 1580138286, "tags": ["python", "generics", "python-typing"], "instruction": "Difference between TypeVar('T', A, B) and TypeVar('T', bound=Union[A, B])\n\nWhat's the difference between the following two TypeVars? from typing import TypeVar, Union class A: pass class B: pass T = TypeVar(\"T\", A, B) T = TypeVar(\"T\", bound=Union[A, B]) I believe that in Python 3.12 this is the difference between these two bounds class Foo[T: (A, B)]: ... class Foo[T: A | B]: ... Here's an example of something I don't get: this passes type checking... T = TypeVar(\"T\", bound=Union[A, B]) class AA(A): pass class X(Generic[T]): pass class XA(X[A]): pass class XAA(X[AA]): pass ...but with T = TypeVar(\"T\", A, B), it fails with error: Value of type variable \"T\" of \"X\" cannot be \"AA\" Related: this question on the difference between Union[A, B] and TypeVar(\"T\", A, B).", "output": "When you do T = TypeVar(\"T\", bound=Union[A, B]), you are saying T can be bound to either Union[A, B] or any subtype of Union[A, B]. It's upper-bounded to the union. So for example, if you had a function of type def f(x: T) -> T, it would be legal to pass in values of any of the following types: Union[A, B] (or a union of any subtypes of A and B such as Union[A, BChild]) A (or any subtype of A) B (or any subtype of B) This is how generics behave in most programming languages: they let you impose a single upper bound. But when you do T = TypeVar(\"T\", A, B), you are basically saying T must be either upper-bounded by A or upper-bounded by B. That is, instead of establishing a single upper-bound, you get to establish multiple! So this means while it would be legal to pass in values of either types A or B into f, it would not be legal to pass in Union[A, B] since the union is neither upper-bounded by A nor B. So for example, suppose you had a iterable that could contain either ints or strs. If you want this iterable to contain any arbitrary mixture of ints or strs, you only need a single upper-bound of a Union[int, str]. For example: from typing import TypeVar, Union, List, Iterable mix1: List[Union[int, str]] = [1, \"a\", 3] mix2: List[Union[int, str]] = [4, \"x\", \"y\"] all_ints = [1, 2, 3] all_strs = [\"a\", \"b\", \"c\"] T1 = TypeVar('T1', bound=Union[int, str]) def concat1(x: Iterable[T1], y: Iterable[T1]) -> List[T1]: out: List[T1] = [] out.extend(x) out.extend(y) return out # Type checks a1 = concat1(mix1, mix2) # Also type checks (though your type checker may need a hint to deduce # you really do want a union) a2: List[Union[int, str]] = concat1(all_ints, all_strs) # Also type checks a3 = concat1(all_strs, all_strs) In contrast, if you want to enforce that the function will accept either a list of all ints or all strs but never a mixture of either, you'll need multiple upper bounds. T2 = TypeVar('T2', int, str) def concat2(x: Iterable[T2], y: Iterable[T2]) -> List[T2]: out: List[T2] = [] out.extend(x) out.extend(y) return out # Does NOT type check b1 = concat2(mix1, mix2) # Also does NOT type check b2 = concat2(all_ints, all_strs) # But this type checks b3 = concat2(all_ints, all_ints)"} +{"question_id": 39847884, "score": 89, "creation_date": 1475570731, "tags": ["python", "pycharm", "compiler-warnings", "suppress-warnings"], "instruction": "Can I get PyCharm to suppress a particular warning on a single line?\n\nPyCharm provides some helpful warnings on code style, conventions and logical gotchas. 
It also provides a notification if I try to commit code with warnings (or errors). Sometimes I consciously ignore these warnings for particular lines of code (for various reasons, typically to account for implementation details of third-party libraries). I want to suppress the warning, but just for that line (if the warning crops up on a different line where I'm not being deliberate, I want to know about it!) How can I do that in PyCharm? (Following a universal Python convention strongly preferable.)", "output": "To suppress PyCharm code inspections for a particular line of code you can use the following construct: # noinspection INSPECTION_NAME your_line_of_code_to_suppress where the name of the inspection (INSPECTION_NAME above) you can take from the list of inspection names (they are pretty descriptive). To suppress pylint command line messages explicitly you have to use different comments/commands, as described here (pylint error names)."} +{"question_id": 6900520, "score": 89, "creation_date": 1312211611, "tags": ["python", "operating-system"], "instruction": "which one should I use: os.sep or os.path.sep?\n\nThey are same, but which one should I use? http://docs.python.org/library/os.html: os.sep The character used by the operating system to separate pathname components. This is '/' for POSIX and '\\' for Windows. Note that knowing this is not sufficient to be able to parse or concatenate pathnames \u2014 use os.path.split() and os.path.join() \u2014 but it is occasionally useful. Also available via os.path.", "output": "I'd use os.path.sep to make it very clear that it's the path separator\u2026 But consistency is more important, so if one is already being used, use that. Otherwise, pick one and use it all the time. Edit: Just to make sure you're not reinventing the wheel, though, the path module already has join, split, dirname, and basename functions\u2026 So you should rarely need to use os.sep: >>> os.path.join(\"foo\", \"bar\", \"baz\") 'foo/bar/baz' >>> os.path.split(_) ('foo/bar', 'baz')"} +{"question_id": 8850415, "score": 89, "creation_date": 1326456897, "tags": ["python", "django", "django-models"], "instruction": "Django - FileField check if None\n\nI have a model with an optional file field class MyModel(models.Model): name = models.CharField(max_length=50) sound = models.FileField(upload_to='audio/', blank=True) Let's put a value >>> test = MyModel(name='machin') >>> test.save() Why do I get that ? >>> test.sound >>> test.sound is None False How can I check if there is a file set ?", "output": "if test.sound.name: print \"I have a sound file\" else: print \"no sound\" Also, FileField's boolean value will be False when there's no file: bool(test.sound) == False when test.sound.name is falsy."} +{"question_id": 58151507, "score": 89, "creation_date": 1569720664, "tags": ["python", "pytorch", "normalize"], "instruction": "Why Pytorch officially use mean=[0.485, 0.456, 0.406] and std=[0.229, 0.224, 0.225] to normalize images?\n\nIn this page (https://pytorch.org/vision/stable/models.html), it says that \"All pre-trained models expect input images normalized in the same way, i.e. mini-batches of 3-channel RGB images of shape (3 x H x W), where H and W are expected to be at least 224. The images have to be loaded in to a range of [0, 1] and then normalized using mean = [0.485, 0.456, 0.406] and std = [0.229, 0.224, 0.225]\". Shouldn't the usual mean and std of normalization be [0.5, 0.5, 0.5] and [0.5, 0.5, 0.5]? 
Why is it setting such strange values?", "output": "Using the mean and std of Imagenet is a common practice. They are calculated based on millions of images. If you want to train from scratch on your own dataset, you can calculate the new mean and std. Otherwise, using the Imagenet pretrianed model with its own mean and std is recommended."} +{"question_id": 34073370, "score": 89, "creation_date": 1449166745, "tags": ["python", "generator"], "instruction": "Best way to receive the 'return' value from a python generator\n\nSince Python 3.3, if a generator function returns a value, that becomes the value for the StopIteration exception that is raised. This can be collected a number of ways: The value of a yield from expression, which implies the enclosing function is also a generator. Wrapping a call to next() or .send() in a try/except block. However, if I'm simply wanting to iterate over the generator in a for loop - the easiest way - there doesn't appear to be a way to collect the value of the StopIteration exception, and thus the return value. Im using a simple example where the generator yields values, and returns some kind of summary at the end (running totals, averages, timing statistics, etc). for i in produce_values(): do_something(i) values_summary = ....?? One way is to handle the loop myself: values_iter = produce_values() try: while True: i = next(values_iter) do_something(i) except StopIteration as e: values_summary = e.value But this throws away the simplicity of the for loop. I can't use yield from since that requires the calling code to be, itself, a generator. Is there a simpler way than the roll-ones-own for loop shown above?", "output": "You can think of the value attribute of StopIteration (and arguably StopIteration itself) as implementation details, not designed to be used in \"normal\" code. Have a look at PEP 380 that specifies the yield from feature of Python 3.3: It discusses that some alternatives of using StopIteration to carry the return value where considered. Since you are not supposed to get the return value in an ordinary for loop, there is no syntax for it. The same way as you are not supposed to catch the StopIteration explicitly. A nice solution for your situation would be a small utility class (might be useful enough for the standard library): class Generator: def __init__(self, gen): self.gen = gen def __iter__(self): self.value = yield from self.gen return self.value This wraps any generator and catches its return value to be inspected later: >>> def test(): ... yield 1 ... return 2 ... >>> gen = Generator(test()) >>> for i in gen: ... print(i) ... 1 >>> print(gen.value) 2 The line return self.value in __iter__() ensures that the return value will be propagated correctly when Generator instances are nested (e.g. gen = Generator(Generator(test())))."} +{"question_id": 25651990, "score": 89, "creation_date": 1409771543, "tags": ["python", "subprocess", "python-3.4"], "instruction": "OSError: [WinError 193] %1 is not a valid Win32 application\n\nI am trying to call a Python file \"hello.py\" from within the python interpreter with subprocess. But I am unable to resolve this error. [Python 3.4.1]. 
import subprocess subprocess.call(['hello.py', 'htmlfilename.htm']) Traceback (most recent call last): File \"\", line 1, in subprocess.call(['hello.py', 'htmlfilename.htm']) File \"C:\\Python34\\lib\\subprocess.py\", line 537, in call with Popen(*popenargs, **kwargs) as p: File \"C:\\Python34\\lib\\subprocess.py\", line 858, in __init__ restore_signals, start_new_session) File \"C:\\Python34\\lib\\subprocess.py\", line 1111, in _execute_child startupinfo) OSError: [WinError 193] %1 is not a valid Win32 application Also is there any alternate way to \"call a python script with arguments\" other than using subprocess?", "output": "%1 refers to the first argument of the command, for some reason it was not replaced with hello.py. The file hello.py is not an executable file. You need to specify the executable: subprocess.call(['python.exe', 'hello.py', 'htmlfilename.htm']) You'll need python.exe to be visible on the search path, or you could pass the full path to the executable file that is running the calling script: import sys subprocess.call([sys.executable, 'hello.py', 'htmlfilename.htm'])"} +{"question_id": 3951840, "score": 89, "creation_date": 1287284512, "tags": ["python"], "instruction": "How to invoke a function on an object dynamically by name?\n\nIn Python, say I have a string that contains the name of a class function that I know a particular object will have, how can I invoke it? That is: obj = MyClass() # this class has a method doStuff() func = \"doStuff\" # how to call obj.doStuff() using the func variable?", "output": "Use the getattr built-in function. See the documentation obj = MyClass() try: func = getattr(obj, \"dostuff\") func() except AttributeError: print(\"dostuff not found\")"} +{"question_id": 4212877, "score": 89, "creation_date": 1290068944, "tags": ["python", "asynchronous", "nonblocking", "tornado"], "instruction": "When and how to use Tornado? When is it useless?\n\nOk, Tornado is non-blocking and quite fast and it can handle a lot of standing requests easily. But I guess it's not a silver bullet and if we just blindly run Django-based or any other site with Tornado it won't give any performance boost. I couldn't find comprehensive explanation of this, so I'm asking it here: When should Tornado be used? When is it useless? When using it, what should be taken into account? How can we make inefficient site using Tornado? There is a server and a webframework. When should we use framework and when can we replace it with other one?", "output": "There is a server and a webframework. When should we use framework and when can we replace it with other one? This distinction is a bit blurry. If you are only serving static pages, you would use one of the fast servers like lighthttpd. Otherwise, most servers provide a varying complexity of framework to develop web applications. Tornado is a good web framework. Twisted is even more capable and is considered a good networking framework. It has support for lot of protocols. Tornado and Twisted are frameworks that provide support non-blocking, asynchronous web / networking application development. When should Tornado be used? When is it useless? When using it, what should be taken into account? By its very nature, Async / Non-Blocking I/O works great when it is I/O intensive and not computation intensive. Most web / networking applications suits well for this model. If your application demands certain computational intensive task to be done then it has to be delegated to some other service that can handle it better. 
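As a purely illustrative sketch (not part of the original answer, and assuming a modern Tornado 5+ where IOLoop.run_in_executor is available), a blocking call can be pushed onto a thread pool so the event loop keeps serving other requests; blocking_call() here is just a stand-in:
import time
from concurrent.futures import ThreadPoolExecutor
from tornado.ioloop import IOLoop

pool = ThreadPoolExecutor(max_workers=4)  # workers dedicated to the blocking/heavy calls

def blocking_call(data):
    time.sleep(2)  # stand-in for a slow library call or heavy computation
    return data

async def handle(data):
    # run the blocking work off the event loop so Tornado can keep serving requests
    return await IOLoop.current().run_in_executor(pool, blocking_call, data)
With the heavy lifting delegated like that, the I/O loop itself stays responsive.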
While Tornado / Twisted can do the job of web server, responding to web requests. How can we make inefficient site using Tornado? Do any thing computational intensive task Introduce blocking operations But I guess it's not a silver bullet and if we just blindly run Django-based or any other site with Tornado it won't give any performance boost. Performance is usually a characteristic of complete web application architecture. You can bring down the performance with most web frameworks, if the application is not designed properly. Think about caching, load balancing etc. Tornado and Twisted provide reasonable performance and they are good for building performant web applications. You can check out the testimonials for both twisted and tornado to see what they are capable of."} +{"question_id": 26454649, "score": 88, "creation_date": 1413748073, "tags": ["python", "rounding"], "instruction": "Python - round up to the nearest ten\n\nIf I get the number 46 and I want to round up to the nearest ten. How do can I do this in python? 46 goes to 50.", "output": "You can use math.ceil() to round up, and then multiply by 10 Python 2 import math def roundup(x): return int(math.ceil(x / 10.0)) * 10 Python 3 (the only difference is you no longer need to cast the result as an int) import math def roundup(x): return math.ceil(x / 10.0) * 10 To use just do >>roundup(45) 50"} +{"question_id": 15715912, "score": 88, "creation_date": 1364626640, "tags": ["python", "list"], "instruction": "Remove the last N elements of a list\n\nIs there a a better way to remove the last N elements of a list. for i in range(0,n): lst.pop( )", "output": "if you wish to remove the last n elements, in other words, keep first len - n elements: lst = lst[:len(lst)-n] Note: This is not an in memory operation. It would create a shallow copy."} +{"question_id": 6455560, "score": 88, "creation_date": 1308838754, "tags": ["python", "sqlalchemy", "pylons", "pyramid"], "instruction": "How to get column names from SQLAlchemy result (declarative syntax)\n\nI am working in a pyramid project and I've the table in SQLAlchemy in declarative syntax \"\"\"models.py\"\"\" class Projects(Base): __tablename__ = 'projects' __table_args__ = {'autoload': True} I get the results by using \"\"\"\"views.py\"\"\" session = DBSession() row_data = session.query(Projects).filter_by(id=1).one() How can I get the column names from this result. PS: I am unable to use this method since I am using the declarative syntax.", "output": "The difference is between ORM and non-ORM, not declarative, which is just a helper for the ORM. Query has an attribute column_descriptions that was added for this purpose: https://docs.sqlalchemy.org/en/20/orm/queryguide/query.html#sqlalchemy.orm.Query.column_descriptions"} +{"question_id": 12680080, "score": 88, "creation_date": 1349121182, "tags": ["python", "enums"], "instruction": "python enums with attributes\n\nConsider: class Item: def __init__(self, a, b): self.a = a self.b = b class Items: GREEN = Item('a', 'b') BLUE = Item('c', 'd') Is there a way to adapt the ideas for simple enums to this case? (see this question) Ideally, as in Java, I would like to cram it all into one class. Java model: enum EnumWithAttrs { GREEN(\"a\", \"b\"), BLUE(\"c\", \"d\"); EnumWithAttrs(String a, String b) { this.a = a; this.b = b; } private String a; private String b; /* accessors and other java noise */ }", "output": "Python 3.4 has a new Enum data type (which has been backported as enum34 and enhanced as aenum1). 
Both enum34 and aenum2 easily support your use case: aenum (Python 2/3) import aenum class EnumWithAttrs(aenum.AutoNumberEnum): _init_ = 'a b' GREEN = 'a', 'b' BLUE = 'c', 'd' enum34 (Python 2/3) or standard library enum (Python 3.4+) import enum class EnumWithAttrs(enum.Enum): def __new__(cls, *args, **kwds): value = len(cls.__members__) + 1 obj = object.__new__(cls) obj._value_ = value return obj def __init__(self, a, b): self.a = a self.b = b GREEN = 'a', 'b' BLUE = 'c', 'd' And in use: >>> EnumWithAttrs.BLUE >>> EnumWithAttrs.BLUE.a 'c' 1 Disclosure: I am the author of the Python stdlib Enum, the enum34 backport, and the Advanced Enumeration (aenum) library. 2 aenum also supports NamedConstants and metaclass-based NamedTuples."} +{"question_id": 475871, "score": 88, "creation_date": 1232795599, "tags": ["python", "oop"], "instruction": "What difference does it make to use \"self\" to define a member in a Python class?\n\nHow do these 2 classes differ? class A(): x = 3 class B(): def __init__(self): self.x = 3 Is there any significant difference?", "output": "A.x is a class variable. B's self.x is an instance variable. i.e. A's x is shared between instances. It would be easier to demonstrate the difference with something that can be modified like a list: class A: x = [] def add(self): self.x.append(1) class B: def __init__(self): self.x = [] def add(self): self.x.append(1) x = A() y = A() x.add() y.add() print(\"A x:\", x.x) x = B() y = B() x.add() y.add() print(\"B x:\", x.x) Output A x: [1, 1] B x: [1]"} +{"question_id": 14596884, "score": 88, "creation_date": 1359521195, "tags": ["python", "python-2.7"], "instruction": "Remove text between () and []\n\nI have a very long string of text with () and [] in it. I'm trying to remove the characters between the parentheses and brackets but I cannot figure out how. The list is similar to this: x = \"This is a sentence. (once a day) [twice a day]\" This list isn't what I'm working with but is very similar and a lot shorter.", "output": "This should work for parentheses. Regular expressions will \"consume\" the text it has matched so it won't work for nested parentheses. import re regex = re.compile(\".*?\\((.*?)\\)\") result = re.findall(regex, mystring) or this would find one set of parentheses, simply loop to find more: start = mystring.find(\"(\") end = mystring.find(\")\") if start != -1 and end != -1: result = mystring[start+1:end]"} +{"question_id": 5236296, "score": 88, "creation_date": 1299606660, "tags": ["python", "list", "dictionary"], "instruction": "How do I convert a list of dicts to a dict?\n\nHow can I convert a list of dicts to a dict? 
Below is the list of dicts: data = [{'name': 'John Doe', 'age': 37, 'sex': 'M'}, {'name': 'Lisa Simpson', 'age': 17, 'sex': 'F'}, {'name': 'Bill Clinton', 'age': 57, 'sex': 'M'}] I want to convert this to: data = {'John Doe': {'name': 'John Doe', 'age': 37, 'sex': 'M'}, 'Lisa Simpson': {'name': 'Lisa Simpson', 'age': 17, 'sex': 'F'}, 'Bill Clinton': {'name': 'Bill Clinton', 'age': 57, 'sex': 'M'}}", "output": "A possible solution using names as the new keys: new_dict = {} for item in data: name = item['name'] new_dict[name] = item With Python 3.x you can also use dict comprehensions for the same approach in a more nice way: new_dict = {item['name']:item for item in data} As suggested in a comment by Paul McGuire, if you don't want the name in the inner dict, you can do: new_dict = {} for item in data: name = item.pop('name') new_dict[name] = item"} +{"question_id": 13673060, "score": 88, "creation_date": 1354477501, "tags": ["python"], "instruction": "Split string into strings by length?\n\nIs there a way to take a string that is 4*x characters long, and cut it into 4 strings, each x characters long, without knowing the length of the string? For example: >>>x = \"qwertyui\" >>>split(x, one, two, three, four) >>>two 'er'", "output": ">>> x = \"qwertyui\" >>> chunks, chunk_size = len(x), len(x)//4 >>> [ x[i:i+chunk_size] for i in range(0, chunks, chunk_size) ] ['qw', 'er', 'ty', 'ui']"} +{"question_id": 31928506, "score": 88, "creation_date": 1439238377, "tags": ["python", "elasticsearch"], "instruction": "How to get a list of all indexes in python-elasticsearch\n\nHow would I get a list of the names of an index in Python? Here is what I have so far: >>> es=e.es >>> es >>> es.indices # how to get a list of all indexes in this cluster?", "output": "This question comes up when searching for information on retrieving aliases using the python-elasticsearch library. The accepted answer says to use get_aliases but that method has been removed (as of 2017). To get aliases, you can use the following: es.indices.get_alias(\"*\") UPDATE The latest usage should be with a keyword arg: es.indices.get_alias(index=\"*\")"} +{"question_id": 21753841, "score": 88, "creation_date": 1392293341, "tags": ["python", "numpy", "scipy"], "instruction": "Factorial in numpy and scipy\n\nHow can I import factorial function from numpy and scipy separately in order to see which one is faster? I already imported factorial from python itself by import math. But, it does not work for numpy and scipy.", "output": "You can import them like this: In [7]: import scipy, numpy, math In [8]: scipy.math.factorial, numpy.math.factorial, math.factorial Out[8]: (, , ) scipy.math.factorial and numpy.math.factorial seem to simply be aliases/references for/to math.factorial, that is scipy.math.factorial is math.factorial and numpy.math.factorial is math.factorial should both give True."} +{"question_id": 29950557, "score": 88, "creation_date": 1430328451, "tags": ["python", "numpy", "suppress-warnings", "divide-by-zero"], "instruction": "Ignore divide by 0 warning in NumPy\n\nI have a function for statistic issues: import numpy as np from scipy.special import gamma as Gamma def Foo(xdata): ... return x1 * ( ( #R is a numpy vector ( ((R - x2)/beta) ** (x3 -1) ) * ( np.exp( - ((R - x2) / x4) ) ) / ( x4 * Gamma(x3)) ).real ) Sometimes I get from the shell the following warning: RuntimeWarning: divide by zero encountered in... I use the numpy isinf function to correct the results of the function in other files, so I do not need this warning. 
Is there a way to ignore the message? In other words, I do not want the shell to print this message. I do not want to disable all python warnings, just this one.", "output": "You can disable the warning with numpy.seterr. Put this before the possible division by zero: np.seterr(divide='ignore') That'll disable zero division warnings globally. If you just want to disable them for a little bit, you can use numpy.errstate in a with clause: with np.errstate(divide='ignore'): # some code here For a zero by zero division (undetermined, results in a NaN), the error behaviour has changed with numpy version 1.8.0: this is now considered \"invalid\", while previously it was \"divide\". Thus, if there is a chance you your numerator could be zero as well, use np.seterr(divide='ignore', invalid='ignore') or with np.errstate(divide='ignore', invalid='ignore'): # some code here See the \"Compatibility\" section in the release notes, last paragraph before the \"New Features\" section: Comparing NaN floating point numbers now raises the invalid runtime warning. If a NaN is expected the warning can be ignored using np.errstate."} +{"question_id": 12186388, "score": 88, "creation_date": 1346273810, "tags": ["python"], "instruction": "Get value at list/array index or \"None\" if out of range in Python\n\nIs there clean way to get the value at a list index or None if the index is out or range in Python? The obvious way to do it would be this: if len(the_list) > i: return the_list[i] else: return None However, the verbosity reduces code readability. Is there a clean, simple, one-liner that can be used instead?", "output": "Try: try: return the_list[i] except IndexError: return None Or, one liner: l[i] if i < len(l) else None Example: >>> l=list(range(5)) >>> i=6 >>> print(l[i] if i < len(l) else None) None >>> i=2 >>> print(l[i] if i < len(l) else None) 2"} +{"question_id": 3765533, "score": 88, "creation_date": 1285116283, "tags": ["python", "arrays", "list", "dictionary"], "instruction": "Python Array with String Indices\n\nIs it possible to use strings as indices in an array in python? For example: myArray = [] myArray[\"john\"] = \"johns value\" myArray[\"jeff\"] = \"jeffs value\" print myArray[\"john\"]", "output": "What you want is called an associative array. In python these are called dictionaries. Dictionaries are sometimes found in other languages as \u201cassociative memories\u201d or \u201cassociative arrays\u201d. Unlike sequences, which are indexed by a range of numbers, dictionaries are indexed by keys, which can be any immutable type; strings and numbers can always be keys. myDict = {} myDict[\"john\"] = \"johns value\" myDict[\"jeff\"] = \"jeffs value\" Alternative way to create the above dict: myDict = {\"john\": \"johns value\", \"jeff\": \"jeffs value\"} Accessing values: print(myDict[\"jeff\"]) # => \"jeffs value\" Getting the keys (in Python v2): print(myDict.keys()) # => [\"john\", \"jeff\"] In Python 3, you'll get a dict_keys, which is a view and a bit more efficient (see views docs and PEP 3106 for details). print(myDict.keys()) # => dict_keys(['john', 'jeff']) If you want to learn about python dictionary internals, I recommend this ~25 min video presentation: https://www.youtube.com/watch?v=C4Kc8xzcA68. 
It's called the \"The Mighty Dictionary\"."} +{"question_id": 10837017, "score": 88, "creation_date": 1338479472, "tags": ["python", "python-2.7"], "instruction": "How do I make a fixed size formatted string in python?\n\nI want to create a formatted string with fixed size with fixed position between fields. An example explains better, here there are clearly 3 distinct fields and the string is a fixed size: XXX 123 98.00 YYYYY 3 1.00 ZZ 42 123.34 How can I apply such formatting to a string in python (2.7)?", "output": "Sure, use the .format method. E.g., print('{:10s} {:3d} {:7.2f}'.format('xxx', 123, 98)) print('{:10s} {:3d} {:7.2f}'.format('yyyy', 3, 1.0)) print('{:10s} {:3d} {:7.2f}'.format('zz', 42, 123.34)) will print xxx 123 98.00 yyyy 3 1.00 zz 42 123.34 You can adjust the field sizes as desired. Note that .format works independently of print to format a string. I just used print to display the strings. Brief explanation: 10s format a string with 10 spaces, left justified by default 3d format an integer reserving 3 spaces, right justified by default 7.2f format a float, reserving 7 spaces, 2 after the decimal point, right justfied by default. There are many additional options to position/format strings (padding, left/right justify etc), String Formatting Operations will provide more information. Update for f-string mode. E.g., text, number, other_number = 'xxx', 123, 98 print(f'{text:10} {number:3d} {other_number:7.2f}') For right alignment print(f'{text:>10} {number:3d} {other_number:7.2f}')"} +{"question_id": 25553919, "score": 88, "creation_date": 1409244166, "tags": ["python", "multiprocessing", "pool", "map-function"], "instruction": "Passing multiple parameters to pool.map() function in Python\n\nI need some way to use a function within pool.map() that accepts more than one parameter. As per my understanding, the target function of pool.map() can only have one iterable as a parameter but is there a way that I can pass other parameters in as well? In this case, I need to pass in a few configuration variables, like my Lock() and logging information to the target function. I have tried to do some research and I think that I may be able to use partial functions to get it to work? However I don't fully understand how these work. Any help would be greatly appreciated! Here is a simple example of what I want to do: def target(items, lock): for item in items: # Do cool stuff if (... some condition here ...): lock.acquire() # Write to stdout or logfile, etc. lock.release() def main(): iterable = [1, 2, 3, 4, 5] pool = multiprocessing.Pool() pool.map(target(PASS PARAMS HERE), iterable) pool.close() pool.join()", "output": "You can use functools.partial for this (as you suspected): from functools import partial def target(lock, iterable_item): for item in iterable_item: # Do cool stuff if (... some condition here ...): lock.acquire() # Write to stdout or logfile, etc. 
lock.release() def main(): iterable = [1, 2, 3, 4, 5] pool = multiprocessing.Pool() l = multiprocessing.Lock() func = partial(target, l) pool.map(func, iterable) pool.close() pool.join() Example: def f(a, b, c): print(\"{} {} {}\".format(a, b, c)) def main(): iterable = [1, 2, 3, 4, 5] pool = multiprocessing.Pool() a = \"hi\" b = \"there\" func = partial(f, a, b) pool.map(func, iterable) pool.close() pool.join() if __name__ == \"__main__\": main() Output: hi there 1 hi there 2 hi there 3 hi there 4 hi there 5"} +{"question_id": 15696461, "score": 88, "creation_date": 1364524285, "tags": ["python", "python-2.7", "import"], "instruction": "Import Python Script Into Another?\n\nI'm going through Zed Shaw's Learn Python The Hard Way and I'm on lesson 26. In this lesson we have to fix some code, and the code calls functions from another script. He says that we don't have to import them to pass the test, but I'm curious as to how we would do so. Link to the lesson | Link to the code to correct And here are the particular lines of code that call on a previous script: words = ex25.break_words(sentence) sorted_words = ex25.sort_words(words) print_first_word(words) print_last_word(words) print_first_word(sorted_words) print_last_word(sorted_words) sorted_words = ex25.sort_sentence(sentence) print sorted_words print_first_and_last(sentence) print_first_a_last_sorted(sentence) Code to Correct: This is the code from the course, that's being referenced Do not edit the question to correct the code def break_words(stuff): \"\"\"This function will break up words for us.\"\"\" words = stuff.split(' ') return words def sort_words(words): \"\"\"Sorts the words.\"\"\" return sorted(words) def print_first_word(words) \"\"\"Prints the first word after popping it off.\"\"\" word = words.poop(0) print word def print_last_word(words): \"\"\"Prints the last word after popping it off.\"\"\" word = words.pop(-1 print word def sort_sentence(sentence): \"\"\"Takes in a full sentence and returns the sorted words.\"\"\" words = break_words(sentence) return sort_words(words) def print_first_and_last(sentence): \"\"\"Prints the first and last words of the sentence.\"\"\" words = break_words(sentence) print_first_word(words) print_last_word(words) def print_first_and_last_sorted(sentence): \"\"\"Sorts the words then prints the first and last one.\"\"\" words = sort_sentence(sentence) print_first_word(words) print_last_word(words) print \"Let's practice everything.\" print 'You\\'d need to know \\'bout escapes with \\\\ that do \\n newlines and \\t tabs.' poem = \"\"\" \\tThe lovely world with logic so firmly planted cannot discern \\n the needs of love nor comprehend passion from intuition and requires an explantion \\n\\t\\twhere there is none. 
\"\"\" print \"--------------\" print poem print \"--------------\" five = 10 - 2 + 3 - 5 print \"This should be five: %s\" % five def secret_formula(started): jelly_beans = started * 500 jars = jelly_beans \\ 1000 crates = jars / 100 return jelly_beans, jars, crates start_point = 10000 beans, jars, crates == secret_formula(start-point) print \"With a starting point of: %d\" % start_point print \"We'd have %d jeans, %d jars, and %d crates.\" % (beans, jars, crates) start_point = start_point / 10 print \"We can also do that this way:\" print \"We'd have %d beans, %d jars, and %d crabapples.\" % secret_formula(start_pont sentence = \"All god\\tthings come to those who weight.\" words = ex25.break_words(sentence) sorted_words = ex25.sort_words(words) print_first_word(words) print_last_word(words) .print_first_word(sorted_words) print_last_word(sorted_words) sorted_words = ex25.sort_sentence(sentence) prin sorted_words print_irst_and_last(sentence) print_first_a_last_sorted(senence)", "output": "It depends on how the code in the first file is structured. If it's just a bunch of functions, like: # first.py def foo(): print(\"foo\") def bar(): print(\"bar\") Then you could import it and use the functions as follows: # second.py import first first.foo() # prints \"foo\" first.bar() # prints \"bar\" or # second.py from first import foo, bar foo() # prints \"foo\" bar() # prints \"bar\" or, to import all the names defined in first.py: # second.py from first import * foo() # prints \"foo\" bar() # prints \"bar\" Note: This assumes the two files are in the same directory. It gets a bit more complicated when you want to import names (functions, classes, etc) from modules in other directories or packages."} +{"question_id": 10354163, "score": 88, "creation_date": 1335543133, "tags": ["python", "first-class-functions"], "instruction": "Assigning a function to a variable\n\nLet's say I have a function def x(): print(20) Now I want to assign the function to a variable called y, so that if I use the y it calls the function x again. if i simply do the assignment y = x(), it returns None.", "output": "You simply don't call the function. >>> def x(): >>> print(20) >>> y = x >>> y() 20 The brackets tell Python that you are calling the function, so when you put them there, it calls the function and assigns y the value returned by x (which in this case is None)."} +{"question_id": 14718135, "score": 88, "creation_date": 1360102976, "tags": ["python", "cpython"], "instruction": "How can I tell which python implementation I'm using?\n\nPython has a few different implementations: CPython, Jython, PyPy, etc. I want to programmatically determine which implementation my code is running on. How can I do that? To be specific, I'm looking for a function, say, get_implementation_name(), that could be used like so: impl_name = get_implementation_name() if impl_name == \"CPython\": print \"I can abuse CPython implementation details. (I'm a bad, bad man.)\" elif impl_name == \"PyPy\": print \"Can't count on reference-counting garbage collection here...\" else: print \"I better be careful...\"", "output": "In [50]: import platform In [52]: platform.python_implementation() Out[52]: 'CPython' Docs: platform.python_implementation"} +{"question_id": 41135033, "score": 88, "creation_date": 1481690808, "tags": ["python"], "instruction": "type hinting within a class\n\nclass Node: def append_child(self, node: Node): if node != None: self.first_child = node self.child_nodes += [node] How do I do node: Node? 
Because when I run it, it says name 'Node' is not defined. Should I just remove the : Node and instance check it inside the function? But then how could I access node's properties (which I would expect to be instance of Node class)? I don't know how implement type casting in Python, BTW.", "output": "\"self\" references in type checking are typically done using strings: class Node: def append_child(self, node: 'Node'): if node != None: self.first_child = node self.child_nodes += [node] This is described in the \"Forward references\" section of PEP-0484. Please note that this doesn't do any type-checking or casting. This is a type hint which python (normally) disregards completely1. However, third party tools (e.g. mypy), use type hints to do static analysis on your code and can generate errors before runtime. Also, starting with python3.7, you can implicitly convert all of your type-hints to strings within a module by using the from __future__ import annotations (and in python4.0, this will be the default). 1The hints are introspectable -- So you could use them to build some kind of runtime checker using decorators or the like if you really wanted to, but python doesn't do this by default."} +{"question_id": 69381312, "score": 88, "creation_date": 1632938110, "tags": ["python", "importerror", "python-3.10", "python-collections"], "instruction": "ImportError: cannot import name '...' from 'collections' using Python 3.10\n\nI am trying to run my program which uses various dependencies, but since upgrading to Python 3.10 this does not work anymore. When I run \"python3\" in the terminal and from there import my dependencies I get an error: ImportError: cannot import name 'Mapping' from 'collections' (/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/collections/__init__.py) This seems to be a general problem, but here is the traceback of my specific case: Traceback (most recent call last): File \"/Users/mk/Flasktut/app.py\", line 2, in from flask import Flask, render_template File \"/Users/mk/Flasktut/env/lib/python3.10/site-packages/flask/__init__.py\", line 14, in from jinja2 import escape File \"/Users/mk/Flasktut/env/lib/python3.10/site-packages/jinja2/__init__.py\", line 33, in from jinja2.environment import Environment, Template File \"/Users/mk/Flasktut/env/lib/python3.10/site-packages/jinja2/environment.py\", line 16, in from jinja2.defaults import BLOCK_START_STRING, \\ File \"/Users/mk/Flasktut/env/lib/python3.10/site-packages/jinja2/defaults.py\", line 32, in from jinja2.tests import TESTS as DEFAULT_TESTS File \"/Users/mk/Flasktut/env/lib/python3.10/site-packages/jinja2/tests.py\", line 13, in from collections import Mapping ImportError: cannot import name 'Mapping' from 'collections' (/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/collections/__init__.py)", "output": "Change: from collections import Mapping to from collections.abc import Mapping"} +{"question_id": 19246103, "score": 88, "creation_date": 1381230465, "tags": ["python", "sockets", "namespaces", "ip"], "instruction": "socket.error:[errno 99] cannot assign requested address and namespace in python\n\nMy server software says errno99: cannot assign requested address while using an ip address other than 127.0.0.1 for binding. But if the IP address is 127.0.0.1 it works. Is it related to namespaces? I am executing my server and client codes in another python program by calling execfile(). 
I am actually editing the mininet source code.I edited net.py and inside that I used execfile('server.py') execfile('client1.py') and execfile('client2.py').So as soon as \"sudo mn --topo single,3\" is called along with the creation of 3 hosts my server and client codes will get executed.I have given my server and client codes below. #server code import select import socket import sys backlog = 5 size = 1024 server = socket.socket(socket.AF_INET, socket.SOCK_STREAM) server.bind((\"10.0.0.1\",9999)) server.listen(backlog) input = [server] running = 1 while running: inputready,outputready,exceptready = select.select(input,[],[]) for s in inputready: if s == server: client, address = server.accept() input.append(client) else: l = s.recv(1024) sys.stdout.write(l) server.close() #client code import socket import select import sys import time while(1) : s,addr=server1.accept() data=int(s.recv(4)) s = socket.socket() s.connect((\"10.0.0.1\",9999)) while (1): f=open (\"hello1.txt\", \"rb\") l = f.read(1024) s.send(l) l = f.read(1024) time.sleep(5) s.close()", "output": "Stripping things down to basics this is what you would want to test with: import socket server = socket.socket() server.bind((\"10.0.0.1\", 6677)) server.listen(4) client_socket, client_address = server.accept() print(client_address, \"has connected\") while True: recvieved_data = client_socket.recv(1024) print(recvieved_data) This works assuming a few things: Your local IP address (on the server) is 10.0.0.1 (This video shows you how) No other software is listening on port 6677 Also note the basic concept of IP addresses: Try the following, open the start menu, in the \"search\" field type cmd and press enter. Once the black console opens up type ping www.google.com and this should give you and IP address for google. This address is googles local IP and they bind to that and obviously you can not bind to an IP address owned by google. With that in mind, you own your own set of IP addresses. First you have the local IP of the server, but then you have the local IP of your house. In the below picture 192.168.1.50 is the local IP of the server which you can bind to. You still own 83.55.102.40 but the problem is that it's owned by the Router and not your server. So even if you visit http://whatsmyip.com and that tells you that your IP is 83.55.102.40 that is not the case because it can only see where you're coming from.. and you're accessing your internet from a router. In order for your friends to access your server (which is bound to 192.168.1.50) you need to forward port 6677 to 192.168.1.50 and this is done in your router. Assuming you are behind one. If you're in school there's other dilemmas and routers in the way most likely."} +{"question_id": 70730831, "score": 88, "creation_date": 1642342052, "tags": ["python", "c++", "python-3.x", "rounding", "integer-division"], "instruction": "What's the mathematical reason behind Python choosing to round integer division toward negative infinity?\n\nI know Python // rounds towards negative infinity and in C++ / is truncating, rounding towards 0. And here's what I know so far: |remainder| -12 / 10 = -1, - 2 // C++ -12 // 10 = -2, + 8 # Python 12 / -10 = -1, 2 // C++ 12 // -10 = -2, - 8 # Python 12 / 10 = 1, 2 // Both 12 // 10 = 1, 2 -12 / -10 = 1, - 2 // Both = 2, + 8 C++: 1. m%(-n) == m%n 2. -m%n == -(m%n) 3. (m/n)*n + m%n == m Python: 1. m%(-n) == -8 == -(-m%n) 2. (m//n)*n + m%n == m But why Python // choose to round towards negative infinity? 
I didn't find any resources explain that, but only find and hear people say vaguely: \"for mathematics reasons\". For example, in Why is -1/2 evaluated to 0 in C++, but -1 in Python?: People dealing with these things in the abstract tend to feel that rounding toward negative infinity makes more sense (that means it's compatible with the modulo function as defined in mathematics, rather than % having a somewhat funny meaning). But I don't see C++ 's / not being compatible with the modulo function. In C++, (m/n)*n + m%n == m also applies. So what's the (mathematical) reason behind Python choosing rounding towards negative infinity? See also Guido van Rossum's old blog post on the topic.", "output": "But why Python // choose to round towards negative infinity? I'm not sure if the reason why this choice was originally made is documented anywhere (although, for all I know, it could be explained in great length in some PEP somewhere), but we can certainly come up with various reasons why it makes sense. One reason is simply that rounding towards negative (or positive!) infinity means that all numbers get rounded the same way, whereas rounding towards zero makes zero special. The mathematical way of saying this is that rounding down towards \u2212\u221e is translation invariant, i.e. it satisfies the equation: round_down(x + k) == round_down(x) + k for all real numbers x and all integers k. Rounding towards zero does not, since, for example: round_to_zero(0.5 - 1) != round_to_zero(0.5) - 1 Of course, other arguments exist too, such as the argument you quote based on compatibility with (how we would like) the % operator (to behave) \u2014 more on that below. Indeed, I would say the real question here is why Python's int() function is not defined to round floating point arguments towards negative infinity, so that m // n would equal int(m / n). (I suspect \"historical reasons\".) Then again, it's not that big of a deal, since Python does at least have math.floor() that does satisfy m // n == math.floor(m / n). But I don't see C++ 's / not being compatible with the modulo function. In C++, (m/n)*n + m%n == m also applies. True, but retaining that identity while having / round towards zero requires defining % in an awkward way for negative numbers. In particular, we lose both of the following useful mathematical properties of Python's %: 0 <= m % n < n for all m and all positive n; and (m + k * n) % n == m % n for all integers m, n and k. These properties are useful because one of the main uses of % is \"wrapping around\" a number m to a limited range of length n. For example, let's say we're trying to calculate directions: let's say heading is our current compass heading in degrees (counted clockwise from due north, with 0 <= heading < 360) and that we want to calculate our new heading after turning angle degrees (where angle > 0 if we turn clockwise, or angle < 0 if we turn counterclockwise). Using Python's % operator, we can calculate our new heading simply as: heading = (heading + angle) % 360 and this will simply work in all cases. However, if we try to to use this formula in C++, with its different rounding rules and correspondingly different % operator, we'll find that the wrap-around doesn't always work as expected! For example, if we start facing northwest (heading = 315) and turn 90\u00b0 clockwise (angle = 90), we'll indeed end up facing northeast (heading = 45). 
But if then try to turn back 90\u00b0 counterclockwise (angle = -90), with C++'s % operator we won't end up back at heading = 315 as expected, but instead at heading = -45! To get the correct wrap-around behavior using the C++ % operator, we'll instead need to write the formula as something like: heading = (heading + angle) % 360; if (heading < 0) heading += 360; or as: heading = ((heading + angle) % 360) + 360) % 360; (The simpler formula heading = (heading + angle + 360) % 360 will only work if we can always guarantee that heading + angle >= -360.) This is the price you pay for having a non-translation-invariant rounding rule for division, and consequently a non-translation-invariant % operator."} +{"question_id": 16698935, "score": 88, "creation_date": 1369246669, "tags": ["python", "xml", "xslt", "converters"], "instruction": "How to transform an XML file using XSLT in Python?\n\nGood day! Need to convert xml using xslt in Python. I have a sample code in php. How to implement this in Python or where to find something similar? Thank you! $xmlFileName = dirname(__FILE__).\"example.fb2\"; $xml = new DOMDocument(); $xml->load($xmlFileName); $xslFileName = dirname(__FILE__).\"example.xsl\"; $xsl = new DOMDocument; $xsl->load($xslFileName); // Configure the transformer $proc = new XSLTProcessor(); $proc->importStyleSheet($xsl); // attach the xsl rules echo $proc->transformToXML($xml);", "output": "Using lxml, import lxml.etree as ET dom = ET.parse(xml_filename) xslt = ET.parse(xsl_filename) transform = ET.XSLT(xslt) newdom = transform(dom) print(ET.tostring(newdom, pretty_print=True))"} +{"question_id": 6169522, "score": 88, "creation_date": 1306693000, "tags": ["python", "virtualenv", "zlib", "ubuntu-10.10"], "instruction": "no module named zlib\n\nFirst, please bear with me. I have hard time telling others my problem and this is a long thread... I am using pythonbrew to run multiple versions of python in Ubuntu 10.10. For installing pythonbrew and how it works, please refers to this link below http://www.howopensource.com/2011/05/how-to-install-and-manage-different-versions-of-python-in-linux/ After reading a couple stackoverflow threads, I finally found the file called Setup under this directory: ~/.pythonbrew/pythons/Python-2.7.1/lib/python2.7/config In this Setup file I see # Andrew Kuchling's zlib module. # This require zlib 1.1.3 (or later). # See http://www.gzip.org/zlib/ # zlib zlibmodule.c -I$(prefix)/include -L$(exec_prefix)/lib -lz I uncommented the last line, then I ran python -v again. However, I received the same error when I tried import zlib, so I guess I have to do something to install zlib into the lib. But I am clueless about what I need to do. Can someone please direct me in the right direction??? Thank you very much! I am doing this because I want to use different version of python in different virtualenv I created. When I did virtualenv -p python2.7 I received no module named zlib. jwxie518@jwxie518-P5E-VM-DO:~$ virtualenv -p python2.7 --no-site-packages testenv Running virtualenv with interpreter /home/jwxie518/.pythonbrew/pythons/Python-2.7.1/bin/python2.7 Traceback (most recent call last): File \"/usr/local/lib/python2.6/dist-packages/virtualenv.py\", line 17, in import zlib ImportError: No module named zlib EDIT I have to install 2.7.1 by appending --force. I am developing Django, and I need some of these missing modules, for example sqlite3, and to create my virtualenv I definitely need zlib. If I just use the system default (2.6.6), I have no problem. 
To do this with system default, all I need to do is virtualenv --no-site-packages testenv Thanks! (2nd edit) I installed 3.2 also and I tested it without problem, so I guess my problem comes down to how to install the missing module(s). jwxie518@jwxie518-P5E-VM-DO:~$ virtualenv -p python3.2 testenv Running virtualenv with interpreter /home/jwxie518/.pythonbrew/pythons/Python-3.2/bin/python3.2 New python executable in testenv/bin/python3.2 Also creating executable in testenv/bin/python Installing distribute..................................................................................................................................................................................................................................................................................................................................done. Installing pip...............done. jwxie518@jwxie518-P5E-VM-DO:~$ virtualenv -p python3.2 --no-site-packages testenv Running virtualenv with interpreter /home/jwxie518/.pythonbrew/pythons/Python-3.2/bin/python3.2 New python executable in testenv/bin/python3.2 Also creating executable in testenv/bin/python Installing distribute..................................................................................................................................................................................................................................................................................................................................done. Installing pip...............done.", "output": "Sounds like you need to install the devel package for zlib, probably want to do something like # ubuntu 12,14,16,18,20.04+ sudo apt-get install zlib1g-dev Instead of using python-brew you might want to consider just compiling by hand, it's not very hard. Just download the source, and configure, make, make install. You'll want to at least set --prefix to somewhere, so it'll get installed where you want. ./configure --prefix=/opt/python2.7 + other options make make install You can check what configuration options are available with ./configure --help and see what your system python was compiled with by doing: python -c \"import sysconfig; print sysconfig.get_config_var('CONFIG_ARGS')\" The key is to make sure you have the development packages installed for your system, so that Python will be able to build the zlib, sqlite3, etc modules. The python docs cover the build process in more detail: http://docs.python.org/using/unix.html#building-python."} +{"question_id": 15710515, "score": 88, "creation_date": 1364586887, "tags": ["python", "python-3.x", "string-formatting"], "instruction": "Python 3 bytes formatting\n\nIn Python 3, one can format a string like: \"{0}, {1}, {2}\".format(1, 2, 3) But how to format bytes? b\"{0}, {1}, {2}\".format(1, 2, 3) raises AttributeError: 'bytes' object has no attribute 'format'. If there is no format method for bytes, how to do the formatting or \"rewriting\" of bytes?", "output": "As of Python 3.5, % formatting will work for bytes, too! This was part of PEP 461, authored by Ethan Furman: PEP: 461 Title: Adding % formatting to bytes and bytearray Version: $Revision$ Last-Modified: $Date$ Author: Ethan Furman Status: Draft Type: Standards Track Content-Type: text/x-rst Created: 2014-01-13 Python-Version: 3.5 Post-History: 2014-01-14, 2014-01-15, 2014-01-17, 2014-02-22, 2014-03-25, 2014-03-27 Resolution: Abstract ======== This PEP proposes adding % formatting operations similar to Python 2's ``str`` type to ``bytes`` and ``bytearray`` [1]_ [2]_. 
Rationale ========= While interpolation is usually thought of as a string operation, there are cases where interpolation on ``bytes`` or ``bytearrays`` make sense, and the work needed to make up for this missing functionality detracts from the overall readability of the code. Motivation ========== With Python 3 and the split between ``str`` and ``bytes``, one small but important area of programming became slightly more difficult, and much more painful -- wire format protocols [3]_. This area of programming is characterized by a mixture of binary data and ASCII compatible segments of text (aka ASCII-encoded text). Bringing back a restricted %-interpolation for ``bytes`` and ``bytearray`` will aid both in writing new wire format code, and in porting Python 2 wire format code. Common use-cases include ``dbf`` and ``pdf`` file formats, ``email`` formats, and ``FTP`` and ``HTTP`` communications, among many others. PEP 461 was accepted by Guido van Rossum on March 27, 2014: Accepted. Congrats with marshalling yet another quite contentious discussion, and putting up with my last-minute block-headedness! From this, we can obviously conclude that % is no longer scheduled for deprecation (as was announced with Python 3.1)."} +{"question_id": 3752618, "score": 88, "creation_date": 1284994463, "tags": ["python", "list", "iteration"], "instruction": "Adding element to list while iterating\n\nI know that it is not allowed to remove elements while iterating a list, but is it allowed to add elements to a python list while iterating. Here is an example: for a in myarr: if somecond(a): myarr.append(newObj()) I have tried this in my code and it seems to work fine, however I don't know if it's because I am just lucky and that it will break at some point in the future? I prefer not to copy the list since myarr is huge, and therefore it would be too slow. Also I need to check the appended objects with somecond(). At some point somecond(a) will be false, so there can not be an infinite loop. Each object in myarr has a size, and each time somecond(a) is true and a new object is appended to the list, the new object will have a size smaller than a. somecond() has an epsilon for how small objects can be and if they are too small it will return \"false\".", "output": "You could use the islice from itertools to create an iterator over a smaller portion of the list. Then you can append entries to the list without impacting the items you're iterating over: islice(myarr, 0, len(myarr)-1) Even better, you don't even have to iterate over all the elements. You can increment a step size."} +{"question_id": 22770352, "score": 88, "creation_date": 1396293778, "tags": ["python", "r", "time-series", "statsmodels", "forecasting"], "instruction": "auto.arima() equivalent for python\n\nI am trying to predict weekly sales using ARMA ARIMA models. I could not find a function for tuning the order(p,d,q) in statsmodels. Currently R has a function forecast::auto.arima() which will tune the (p,d,q) parameters. How do I go about choosing the right order for my model? Are there any libraries available in python for this purpose?", "output": "You can implement a number of approaches: ARIMAResults include aic and bic. By their definition, (see here and here), these criteria penalize for the number of parameters in the model. So you may use these numbers to compare the models. Also scipy has optimize.brute which does grid search on the specified parameters space. 
So a workflow like this should work: def objfunc(order, exog, endog): from statsmodels.tsa.arima.model import ARIMA fit = ARIMA(endog, order, exog).fit() return fit.aic() from scipy.optimize import brute grid = (slice(1, 3, 1), slice(1, 3, 1), slice(1, 3, 1)) brute(objfunc, grid, args=(exog, endog), finish=None) Make sure you call brute with finish=None. You may obtain pvalues from ARIMAResults. So a sort of step-forward algorithm is easy to implement where the degree of the model is increased across the dimension which obtains lowest p-value for the added parameter. Use ARIMAResults.predict to cross-validate alternative models. The best approach would be to keep the tail of the time series (say most recent 5% of data) out of sample, and use these points to obtain the test error of the fitted models."} +{"question_id": 28492103, "score": 88, "creation_date": 1423798920, "tags": ["python", "multithreading", "python-asyncio", "aiohttp"], "instruction": "How to combine python asyncio with threads?\n\nI have successfully built a RESTful microservice with Python asyncio and aiohttp that listens to a POST event to collect realtime events from various feeders. It then builds an in-memory structure to cache the last 24h of events in a nested defaultdict/deque structure. Now I would like to periodically checkpoint that structure to disc, preferably using pickle. Since the memory structure can be >100MB I would like to avoid holding up my incoming event processing for the time it takes to checkpoint the structure. I'd rather create a snapshot copy (e.g. deepcopy) of the structure and then take my time to write it to disk and repeat on a preset time interval. I have been searching for examples on how to combine threads (and is a thread even the best solution for this?) and asyncio for that purpose but could not find something that would help me. Any pointers to get started are much appreciated!", "output": "It's pretty simple to delegate a method to a thread or sub-process using loop.run_in_executor: import asyncio import time from concurrent.futures import ProcessPoolExecutor def cpu_bound_operation(x): time.sleep(x) # This is some operation that is CPU-bound async def main(): loop = asyncio.get_running_loop() # Run cpu_bound_operation in the ProcessPoolExecutor # This will make your coroutine block, but won't block # the event loop; other coroutines can run in meantime. await loop.run_in_executor(p, cpu_bound_operation, 5) p = ProcessPoolExecutor(2) # Create a Process Pool with 2 processes asyncio.run(main()) As for whether to use a ProcessPoolExecutor or ThreadPoolExecutor, that's kind of hard to say; pickling a large object will definitely eat some CPU cycles, which initially would make you think ProcessPoolExecutor is the way to go. However, passing your 100MB object to a Process in the pool would require pickling the instance in your main process, sending the bytes to the child process via IPC, unpickling it in the child, and then pickling it again so you can write it to disk. Given that, my guess is the pickling/unpickling overhead will be large enough that you're better off using a ThreadPoolExecutor, even though you're going to take a performance hit because of the GIL. That said, it's very simple to test both ways and find out for sure, so you might as well do that."} +{"question_id": 35908987, "score": 88, "creation_date": 1457590546, "tags": ["python", "python-multiprocessing"], "instruction": "multiprocessing: map vs map_async\n\nWhat's the difference between using map and map_async? 
Are they not running the same function after distributing the items from the list to 4 processes? So is it wrong to presume both are running asynchronous and parallel? def f(x): return 2*x p=Pool(4) l=[1,2,3,4] out1=p.map(f,l) #vs out2=p.map_async(f,l)", "output": "There are four choices to mapping jobs to processes. You have to consider multi-args, concurrency, blocking, and ordering. map and map_async only differ with respect to blocking. map_async is non-blocking where as map is blocking So let's say you had a function from multiprocessing import Pool import time def f(x): print(x*x) if __name__ == '__main__': pool = Pool(processes=4) pool.map(f, range(10)) r = pool.map_async(f, range(10)) # DO STUFF print('HERE') print('MORE') r.wait() print('DONE') Example output: 0 1 4 9 16 25 36 49 64 81 0 HERE 1 4 MORE 16 25 36 9 49 64 81 DONE pool.map(f, range(10)) will wait for all 10 of those function calls to finish so we see all the prints in a row. r = pool.map_async(f, range(10)) will execute them asynchronously and only block when r.wait() is called so we see HERE and MORE in between but DONE will always be at the end."} +{"question_id": 24664435, "score": 88, "creation_date": 1404943050, "tags": ["python", "linux", "anaconda"], "instruction": "Use the default Python rather than the Anaconda installation when called from the terminal\n\nI recently installed the Anaconda version of Python. Now when I type python into the terminal it opens the Anaconda distribution rather than the default distribution. How do I get it to use the default version for the command python on Linux (Ubuntu 12.04 (Precise Pangolin))?", "output": "Anaconda adds the path to your .bashrc, so it is found first. You can add the path to your default Python instance to .bashrc or remove the path to Anaconda if you don't want to use it. You can also use the full path /usr/bin/python in Bash to use the default Python interpreter. If you leave your .bashrc file as is, any command you run using python will use the Anaconda interpreter. If you want, you could also use an alias for each interpreter. You will see something like export PATH=$HOME/anaconda/bin:$PATH in your .bashrc file. So basically, if you want to use Anaconda as your main everyday interpreter, use the full path to your default Python or create an alias. If you want it the other way around, remove the export PATH=.... from bashrc and use full path to Anaconda Python interpreter."} +{"question_id": 24715230, "score": 88, "creation_date": 1405184040, "tags": ["python", "scikit-learn", "random-forest", "one-hot-encoding"], "instruction": "Can sklearn random forest directly handle categorical features?\n\nSay I have a categorical feature, color, which takes the values ['red', 'blue', 'green', 'orange'], and I want to use it to predict something in a random forest. If I one-hot encode it (i.e. I change it to four dummy variables), how do I tell sklearn that the four dummy variables are really one variable? Specifically, when sklearn is randomly selecting features to use at different nodes, it should either include the red, blue, green and orange dummies together, or it shouldn't include any of them. I've heard that there's no way to do this, but I'd imagine there must be a way to deal with categorical variables without arbitrarily coding them as numbers or something like that.", "output": "No, there isn't. 
Somebody's working on this and the patch might be merged into mainline some day, but right now there's no support for categorical variables in scikit-learn except dummy (one-hot) encoding."} +{"question_id": 67085041, "score": 88, "creation_date": 1618370326, "tags": ["python", "version", "dry", "python-poetry", "pyproject.toml"], "instruction": "How to specify version in only one place when using pyproject.toml?\n\nMy package version is defined in two places: __version__ = 1.2.3 in mypackage/__init__.py version = \"1.2.3\" in pyproject.toml (I am using Poetry) I have to update both whenever I bump the version which is annoying and not DRY. Is there a way to make Python read the version from the TOML, or to make the TOML read the version from Python?", "output": "After you have installed your project - either in editable mode by poetry install or from the wheel - you can access several metadata via importlib.metadata (importlib_metadata for python < 3.8). So keep the version only in the pyproject.toml and use this in your python code: import importlib.metadata __version__ = importlib.metadata.version(\"mypackage\")"} +{"question_id": 43193880, "score": 88, "creation_date": 1491252140, "tags": ["python", "pandas"], "instruction": "How to get row number in dataframe in Pandas?\n\nHow can I get the number of the row in a dataframe that contains a certain value in a certain column using Pandas? For example, I have the following dataframe: ClientID LastName 0 34 Johnson 1 67 Smith 2 53 Brows How can I find the number of the row that has 'Smith' in 'LastName' column?", "output": "Note that a dataframe's index could be out of order, or not even numerical at all. If you don't want to use the current index and instead renumber the rows sequentially, then you can use df.reset_index() together with the suggestions below To get all indices that matches 'Smith' >>> df[df['LastName'] == 'Smith'].index Int64Index([1], dtype='int64') or as a numpy array >>> df[df['LastName'] == 'Smith'].index.to_numpy() # .values on older versions array([1]) or if there is only one and you want the integer, you can subset >>> df[df['LastName'] == 'Smith'].index[0] 1 You could use the same boolean expressions with .loc, but it is not needed unless you also want to select a certain column, which is redundant when you only want the row number/index."} +{"question_id": 18842779, "score": 88, "creation_date": 1379399647, "tags": ["python", "string", "optimization", "concatenation", "string-concatenation"], "instruction": "String concatenation without '+' operator\n\nI was playing with python and I realized we don't need to use '+' operator to concatenate static strings. But it fails if I assign it to a variable. For example: string1 = 'Hello' 'World' #1 works fine string2 = 'Hello' + 'World' #2 also works fine string3 = 'Hello' string4 = 'World' string5 = string3 string4 #3 causes syntax error string6 = string3 + string4 #4 works fine Now I have two questions: Why statement 3 does not work while statement 1 does? Is there any technical difference such as calculation speed etc. between statement 1 and 2?", "output": "From the Python 3.12 docs: Multiple adjacent string literals (delimited by whitespace), possibly using different quoting conventions, are allowed, and their meaning is the same as their concatenation. Thus, \"hello\" 'world' is equivalent to \"helloworld\". Statement 3 doesn't work because: The \u2018+\u2019 operator must be used to concatenate string expressions at run time. 
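To make the literal-versus-expression distinction concrete, here is a small illustrative snippet (the variable names are hypothetical, not taken from the docs):

greeting = "Hello" "World"     # two adjacent literals, joined at compile time -> "HelloWorld"
prefix = "Hello"
# combined = prefix "World"    # SyntaxError: prefix is a name (an expression), not a literal
combined = prefix + "World"    # runtime expressions need the + operator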
Notice that the title of the subheader in the docs is \"string literal concatenation\" too. This only works for string literals, not other objects. There's probably no difference. If there is, it's probably extremely tiny and nothing that anyone should worry about. Also, understand that there can be dangers to this: >>> def foo(bar, baz=None): ... return bar ... >>> foo(\"bob\" ... \"bill\") 'bobbill' This is a perfect example of where Errors should never pass silently. What if I wanted \"bill\" to be the argument baz? I have forgotton a comma, but no error is raised. Instead, concatenation has taken place."} +{"question_id": 59732335, "score": 88, "creation_date": 1578999187, "tags": ["python", "python-3.x", "docker"], "instruction": "Is there any disadvantage in using PYTHONDONTWRITEBYTECODE in Docker?\n\nIn many Docker tutorials based on Python (such as: this one) they use the option PYTHONDONTWRITEBYTECODE in order to make Python avoid to write .pyc files on the import of source modules (This is equivalent to specifying the -B option). What are the risks and advantages of setting this option up?", "output": "When you run a single python process in the container, which does not spawn other python processes itself during its lifetime, then there is no \"risk\" in doing that. Storing byte code on disk is used to compile python into byte code just upon the first invocation of a program and its dependent libraries to save that step upon the following invocations. In a container the process runs just once, therefore setting this option makes sense."} +{"question_id": 58612306, "score": 87, "creation_date": 1572368894, "tags": ["python", "python-3.x", "winapi", "pip", "pywin32"], "instruction": "How to fix \"ImportError: DLL load failed\" while importing win32api\n\nI'm setting up an autoclicker in Python 3.8 and I need win32api for GetAsyncKeyState but it always gives me this error: >>> import win32api Traceback (most recent call last): File \"\", line 1, in ImportError: DLL load failed while importing win32api: The specified module could not be found. I'm on Windows 10 Home 64x. I've already tried pip install pypiwin32 And it successfully installs but nothing changes. I tried uninstalling and re-installing python as well. I also tried installing 'django' in the same way and it actually works when I import django, so I think it's a win32api issue only. >>> import win32api I expect the output to be none, but the actual output is always that error ^^", "output": "Run Scripts\\pywin32_postinstall.py -install in an Admin command prompt ref: https://github.com/mhammond/pywin32/issues/1431 edit: User @JoyfulPanda gave a warning: Running this script with admin rights will also copy pythoncom37.dll, pywintypes37.dll (corresponding to the pywin32 version), into C:\\WINDOWS\\system32, which effectively overwrites the corresponding DLL versions from Anaconda already there. This later causes problem when openning (on Windows) \"Start Menu > Anaconda3 (64-bit) > Anaconda Prompt (a_virtual_env_name)\". At least Anaconda 2019.07 has pywin32 223 installed by default. Pywin32 224 may work, but 225-228 causes problem for Anaconda (2019.07)"} +{"question_id": 28431765, "score": 87, "creation_date": 1423571469, "tags": ["python", "selenium", "webdriver", "phantomjs"], "instruction": "Open web in new tab Selenium + Python\n\nSo I am trying to open websites on new tabs inside my WebDriver. I want to do this, because opening a new WebDriver for each website takes about 3.5secs using PhantomJS, I want more speed... 
I'm using a multiprocess python script, and I want to get some elements from each page, so the workflow is like this: Open Browser Loop throught my array For element in array -> Open website in new tab -> do my business -> close it But I can't find any way to achieve this. Here's the code I'm using. It takes forever between websites, I need it to be fast... Other tools are allowed, but I don't know too many tools for scrapping website content that loads with JavaScript (divs created when some event is triggered on load etc) That's why I need Selenium... BeautifulSoup can't be used for some of my pages. #!/usr/bin/env python import multiprocessing, time, pika, json, traceback, logging, sys, os, itertools, urllib, urllib2, cStringIO, mysql.connector, shutil, hashlib, socket, urllib2, re from selenium import webdriver from selenium.webdriver.common.keys import Keys from PIL import Image from os import listdir from os.path import isfile, join from bs4 import BeautifulSoup from pprint import pprint def getPhantomData(parameters): try: # We create WebDriver browser = webdriver.Firefox() # Navigate to URL browser.get(parameters['target_url']) # Find all links by Selector links = browser.find_elements_by_css_selector(parameters['selector']) result = [] for link in links: # Extract link attribute and append to our list result.append(link.get_attribute(parameters['attribute'])) browser.close() browser.quit() return json.dumps({'data': result}) except Exception, err: browser.close() browser.quit() print err def callback(ch, method, properties, body): parameters = json.loads(body) message = getPhantomData(parameters) if message['data']: ch.basic_ack(delivery_tag=method.delivery_tag) else: ch.basic_reject(delivery_tag=method.delivery_tag, requeue=True) def consume(): credentials = pika.PlainCredentials('invitado', 'invitado') rabbit = pika.ConnectionParameters('localhost',5672,'/',credentials) connection = pika.BlockingConnection(rabbit) channel = connection.channel() # Conectamos al canal channel.queue_declare(queue='com.stuff.images', durable=True) channel.basic_consume(callback,queue='com.stuff.images') print ' [*] Waiting for messages. To exit press CTRL^C' try: channel.start_consuming() except KeyboardInterrupt: pass workers = 5 pool = multiprocessing.Pool(processes=workers) for i in xrange(0, workers): pool.apply_async(consume) try: while True: continue except KeyboardInterrupt: print ' [*] Exiting...' pool.terminate() pool.join()", "output": "Editor's note: This answer no longer works for new Selenium versions. Refer to this comment. You can achieve the opening/closing of a tab by the combination of keys COMMAND + T or COMMAND + W (OSX). On other OSs you can use CONTROL + T / CONTROL + W. In selenium you can emulate such behavior. You will need to create one webdriver and as many tabs as the tests you need. Here it is the code. from selenium import webdriver from selenium.webdriver.common.keys import Keys driver = webdriver.Firefox() driver.get(\"http://www.google.com/\") #open tab driver.find_element_by_tag_name('body').send_keys(Keys.COMMAND + 't') # You can use (Keys.CONTROL + 't') on other OSs # Load a page driver.get('http://stackoverflow.com/') # Make the tests... # close the tab # (Keys.CONTROL + 'w') on other OSs. 
driver.find_element_by_tag_name('body').send_keys(Keys.COMMAND + 'w') driver.close()"} +{"question_id": 17109608, "score": 87, "creation_date": 1371216448, "tags": ["python", "python-2.7", "python-3.x", "matplotlib"], "instruction": "Change figure size and figure format in matplotlib\n\nI want to obtain fig1 exactly of 4 by 3 inch sized, and in tiff format correcting the program below: import matplotlib.pyplot as plt list1 = [3,4,5,6,9,12] list2 = [8,12,14,15,17,20] plt.plot(list1, list2) plt.savefig('fig1.png', dpi = 300) plt.close()", "output": "You can set the figure size if you explicitly create the figure with plt.figure(figsize=(3,4)) You need to set figure size before calling plt.plot() To change the format of the saved figure just change the extension in the file name. However, I don't know if any of matplotlib backends support tiff"} +{"question_id": 44203397, "score": 87, "creation_date": 1495806857, "tags": ["python", "utf-8", "python-requests"], "instruction": "python requests.get() returns improperly decoded text instead of UTF-8?\n\nWhen the content-type of the server is 'Content-Type:text/html', requests.get() returns improperly encoded data. However, if we have the content type explicitly as 'Content-Type:text/html; charset=utf-8', it returns properly encoded data. Also, when we use urllib.urlopen(), it returns properly encoded data. Has anyone noticed this before? Why does requests.get() behave like this?", "output": "From requests documentation: When you make a request, Requests makes educated guesses about the encoding of the response based on the HTTP headers. The text encoding guessed by Requests is used when you access r.text. You can find out what encoding Requests is using, and change it, using the r.encoding property. >>> r.encoding 'utf-8' >>> r.encoding = 'ISO-8859-1' Check the encoding requests used for your page, and if it's not the right one - try to force it to be the one you need. Regarding the differences between requests and urllib.urlopen - they probably use different ways to guess the encoding. Thats all."} +{"question_id": 32971241, "score": 87, "creation_date": 1444138045, "tags": ["python", "c++", "opencv"], "instruction": "How to get image width and height in OpenCV?\n\nI want to get image width and height, how can I do that in OpenCV? For example: Mat src = imread(\"path_to_image\"); cout << src.width; Is that right?", "output": "You can use rows and cols: cout << \"Width : \" << src.cols << endl; cout << \"Height: \" << src.rows << endl; or size(): cout << \"Width : \" << src.size().width << endl; cout << \"Height: \" << src.size().height << endl; or size cout << \"Width : \" << src.size[1] << endl; cout << \"Height: \" << src.size[0] << endl;"} +{"question_id": 558216, "score": 87, "creation_date": 1234896560, "tags": ["python", "math", "floating-point", "numpy"], "instruction": "Function to determine if two numbers are nearly equal when rounded to n significant decimal digits\n\nI have been asked to test a library provided by a 3rd party. The library is known to be accurate to n significant figures. Any less-significant errors can safely be ignored. I want to write a function to help me compare the results: def nearlyequal(a, b, sigfig=5): The purpose of this function is to determine if two floating-point numbers (a and b) are approximately equal. The function will return True if a==b (exact match) or if a and b have the same value when rounded to sigfig significant-figures when written in decimal. Can anybody suggest a good implementation? 
I've written a mini unit-test. Unless you can see a bug in my tests then a good implementation should pass the following: assert nearlyequal(1, 1, 5) assert nearlyequal(1.0, 1.0, 5) assert nearlyequal(1.0, 1.0, 5) assert nearlyequal(-1e-9, 1e-9, 5) assert nearlyequal(1e9, 1e9 + 1 , 5) assert not nearlyequal(1e4, 1e4 + 1, 5) assert nearlyequal(0.0, 1e-15, 5) assert not nearlyequal(0.0, 1e-4, 6) Additional notes: Values a and b might be of type int, float or numpy.float64. Values a and b will always be of the same type. It's vital that conversion does not introduce additional error into the function. Lets keep this numerical, so functions that convert to strings or use non-mathematical tricks are not ideal. This program will be audited by somebody who is a mathematician who will want to be able to prove that the function does what it is supposed to do. Speed... I've got to compare a lot of numbers so the faster the better. I've got numpy, scipy and the standard-library. Anything else will be hard for me to get, especially for such a small part of the project.", "output": "There is a function assert_approx_equal in numpy.testing (source here) which may be a good starting point. def assert_approx_equal(actual, desired, significant=7, err_msg='', verbose=True): \"\"\" Raise an assertion if two items are not equal up to significant digits. .. note:: It is recommended to use one of `assert_allclose`, `assert_array_almost_equal_nulp` or `assert_array_max_ulp` instead of this function for more consistent floating point comparisons. Given two numbers, check that they are approximately equal. Approximately equal is defined as the number of significant digits that agree."} +{"question_id": 19611729, "score": 87, "creation_date": 1382820411, "tags": ["python", "pandas", "google-sheets", "google-drive-api", "google-apps"], "instruction": "Getting Google Spreadsheet CSV into A Pandas Dataframe\n\nI uploaded a file to Google spreadsheets (to make a publically accessible example IPython Notebook, with data) I was using the file in it's native form could be read into a Pandas Dataframe. So now I use the following code to read the spreadsheet, works fine but just comes in as string,, and I'm not having any luck trying to get it back into a dataframe (you can get the data) import requests r = requests.get('https://docs.google.com/spreadsheet/ccc?key=0Ak1ecr7i0wotdGJmTURJRnZLYlV3M2daNTRubTdwTXc&output=csv') data = r.content The data ends up looking like: (1st row headers) ',City,region,Res_Comm,mkt_type,Quradate,National_exp,Alabama_exp,Sales_exp,Inventory_exp,Price_exp,Credit_exp\\n0,Dothan,South_Central-Montgomery-Auburn-Wiregrass-Dothan,Residential,Rural,1/15/2010,2,2,3,2,3,3\\n10,Foley,South_Mobile-Baldwin,Residential,Suburban_Urban,1/15/2010,4,4,4,4,4,3\\n12,Birmingham,North_Central-Birmingham-Tuscaloosa-Anniston,Commercial,Suburban_Urban,1/15/2010,2,2,3,2,2,3\\n The native pandas code that brings in the disk resident file looks like: df = pd.io.parsers.read_csv('/home/tom/Dropbox/Projects/annonallanswerswithmaster1012013.csv',index_col=0,parse_dates=['Quradate']) A \"clean\" solution would be helpful to many to provide an easy way to share datasets for Pandas use! I tried a bunch of alternative with no success and I'm pretty sure I'm missing something obvious again. 
Just a Update note The new Google spreadsheet has a different URL pattern Just use this in place of the URL in the above example and or the below answer and you should be fine here is an example: https://docs.google.com/spreadsheets/d/177_dFZ0i-duGxLiyg6tnwNDKruAYE-_Dd8vAQziipJQ/export?format=csv&id see solution below from @Max Ghenis which just used pd.read_csv, no need for StringIO or requests...", "output": "You can use read_csv() on a StringIO object: from io import BytesIO import requests import pandas as pd r = requests.get('https://docs.google.com/spreadsheet/ccc?key=0Ak1ecr7i0wotdGJmTURJRnZLYlV3M2daNTRubTdwTXc&output=csv') data = r.content In [10]: df = pd.read_csv(BytesIO(data), index_col=0,parse_dates=['Quradate']) In [11]: df.head() Out[11]: City region Res_Comm \\ 0 Dothan South_Central-Montgomery-Auburn-Wiregrass-Dothan Residential 10 Foley South_Mobile-Baldwin Residential 12 Birmingham North_Central-Birmingham-Tuscaloosa-Anniston Commercial 38 Brent North_Central-Birmingham-Tuscaloosa-Anniston Residential 44 Athens North_Huntsville-Decatur-Florence Residential mkt_type Quradate National_exp Alabama_exp Sales_exp \\ 0 Rural 2010-01-15 00:00:00 2 2 3 10 Suburban_Urban 2010-01-15 00:00:00 4 4 4 12 Suburban_Urban 2010-01-15 00:00:00 2 2 3 38 Rural 2010-01-15 00:00:00 3 3 3 44 Suburban_Urban 2010-01-15 00:00:00 4 5 4 Inventory_exp Price_exp Credit_exp 0 2 3 3 10 4 4 3 12 2 2 3 38 3 3 2 44 4 4 4"} +{"question_id": 3663450, "score": 87, "creation_date": 1283901917, "tags": ["python", "string"], "instruction": "Remove substring only at the end of string\n\nI have a bunch of strings, some of them have ' rec'. I want to remove that only if those are the last 4 characters. So in other words I have somestring = 'this is some string rec' and I want it to become somestring = 'this is some string' What is the Python way to approach this?", "output": "def rchop(s, suffix): if suffix and s.endswith(suffix): return s[:-len(suffix)] return s somestring = 'this is some string rec' rchop(somestring, ' rec') # returns 'this is some string'"} +{"question_id": 7670280, "score": 87, "creation_date": 1317874971, "tags": ["python", "plot", "tree", "data-visualization", "visualization"], "instruction": "Tree plotting in Python\n\nI want to plot trees using Python. Decision trees, Organizational charts, etc. Any library that helps me with that?", "output": "There's graphviz - http://www.graphviz.org/. It uses the \"DOT\" language to plot graphs. You can either generate the DOT code yourself, or use pydot - https://github.com/pydot/pydot. You could also use networkx - http://networkx.lanl.gov/tutorial/tutorial.html#drawing-graphs, which make it easy to draw to either graphviz or matplotlib. networkx + matplotlib + graphviz gives you the most flexibility and power, but you need to install a lot. If you want a quick solution, try: Install Graphviz. open('hello.dot','w').write(\"digraph G {Hello->World}\") import subprocess subprocess.call([\"path/to/dot.exe\",\"-Tpng\",\"hello.dot\",\"-o\",\"graph1.png\"]) # I think this is right - try it form the command line to debug Then you install pydot, because pydot already does this for you. Then you can use networkx to \"drive\" pydot."} +{"question_id": 33767627, "score": 87, "creation_date": 1447796775, "tags": ["python", "python-unittest"], "instruction": "Write unittest for console print\n\nFunction foo prints to console. I want to test the console print. How can I achieve this in Python? 
I need to test this function, which has no return statement: def foo(inStr): print \"hi\"+inStr My test: def test_foo(): cmdProcess = subprocess.Popen(foo(\"test\"), stdout=subprocess.PIPE) cmdOut = cmdProcess.communicate()[0] self.assertEquals(\"hitest\", cmdOut)", "output": "You can easily capture standard output by just temporarily redirecting sys.stdout to a StringIO object, as follows: import StringIO import sys def foo(inStr): print \"hi\"+inStr def test_foo(): capturedOutput = StringIO.StringIO() # Make StringIO. sys.stdout = capturedOutput # Redirect stdout. foo('test') # Call function. sys.stdout = sys.__stdout__ # Reset redirect. print 'Captured', capturedOutput.getvalue() # Now works. test_foo() The output of this program is: Captured hitest This shows that the redirection successfully captured the output and that you were able to restore the output stream to what it was before you began the capture. Note that the code above in for Python 2.7, as the question indicates. Python 3 is slightly different: import io import sys def foo(inStr): print (\"hi\"+inStr) def test_foo(): capturedOutput = io.StringIO() # Create StringIO. sys.stdout = capturedOutput # Redirect stdout. foo('test') # Call function. sys.stdout = sys.__stdout__ # Reset redirect. print ('Captured', capturedOutput.getvalue()) # Now works. test_foo() You also need to be careful if your function can raise an exception since that may mean that the redirection reset may not be done. This may then affect later tests. One way to solve that would be to wrap the function call in a try/except block and, on exception, reset the redirection and re-throw the exception. The single-line call to your function would then become something like this, with the other code remaining as-is: try: foo('test') # Call function. except: sys.stdout = sys.__stdout__ # Reset redirect. raise"} +{"question_id": 7624765, "score": 87, "creation_date": 1317533006, "tags": ["python", "image-processing", "opencv", "computer-vision"], "instruction": "Converting an OpenCV Image to Black and White\n\nHow do you convert a grayscale OpenCV image to black and white? I see a similar question has already been asked, but I'm using OpenCV 2.3, and the proposed solution no longer seems to work. I'm trying to convert a greyscale image to black and white, so that anything not absolutely black is white, and use this as a mask for surf.detect(), in order to ignore keypoints found on the edge of the black mask area. The following Python gets me almost there, but the threshold value sent to Threshold() doesn't appear to have any effect. If I set it to 0 or 16 or 128 or 255, the result is the same, with all pixels with a value > 128 becoming white, and everything else becoming black. What am I doing wrong? import cv, cv2 fn = 'myfile.jpg' im_gray = cv2.imread(fn, cv.CV_LOAD_IMAGE_GRAYSCALE) im_gray_mat = cv.fromarray(im_gray) im_bw = cv.CreateImage(cv.GetSize(im_gray_mat), cv.IPL_DEPTH_8U, 1); im_bw_mat = cv.GetMat(im_bw) threshold = 0 # 128#255# HAS NO EFFECT!?!? cv.Threshold(im_gray_mat, im_bw_mat, threshold, 255, cv.CV_THRESH_BINARY | cv.CV_THRESH_OTSU); cv2.imshow('', np.asarray(im_bw_mat)) cv2.waitKey()", "output": "Step-by-step answer similar to the one you refer to, using the new cv2 Python bindings: 1. Read a grayscale image import cv2 im_gray = cv2.imread('grayscale_image.png', cv2.IMREAD_GRAYSCALE) 2. 
Convert grayscale image to binary (thresh, im_bw) = cv2.threshold(im_gray, 128, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU) which determines the threshold automatically from the image using Otsu's method, or if you already know the threshold you can use: thresh = 127 im_bw = cv2.threshold(im_gray, thresh, 255, cv2.THRESH_BINARY)[1] 3. Save to disk cv2.imwrite('bw_image.png', im_bw)"} +{"question_id": 54392995, "score": 87, "creation_date": 1548624177, "tags": ["python", "windows", "conda"], "instruction": "RemoveError: 'requests' is a dependency of conda and cannot be removed from conda's operating environment\n\nHaving trouble installing a third party library and I have not seen this error before using Windows 10 with Anaconda installed: C:\\Users\\XYZ>conda env create -f python3.6-environment-windows.yml Collecting package metadata: done Solving environment: done Downloading and Extracting Packages certifi-2018.1.18 | 144 KB | ############################################################################ | 100% mkl-2018.0.1 | 155.2 MB | ############################################################################ | 100% pytz-2018.9 | 229 KB | ############################################################################ | 100% icc_rt-2019.0.0 | 9.4 MB | ############################################################################ | 100% icu-58.2 | 21.8 MB | ############################################################################ | 100% pip-9.0.1 | 1.7 MB | ############################################################################ | 100% xz-5.2.3 | 348 KB | ############################################################################ | 100% sip-4.18.1 | 269 KB | ############################################################################ | 100% libpng-1.6.36 | 1.3 MB | ############################################################################ | 100% vc-14 | 985 B | ############################################################################ | 100% numpy-1.14.0 | 3.7 MB | ############################################################################ | 100% python-3.6.4 | 17.6 MB | ############################################################################ | 100% jpeg-9c | 314 KB | ############################################################################ | 100% wheel-0.30.0 | 85 KB | ############################################################################ | 100% wincertstore-0.2 | 13 KB | ############################################################################ | 100% freetype-2.9.1 | 475 KB | ############################################################################ | 100% scipy-1.0.0 | 13.0 MB | ############################################################################ | 100% pyparsing-2.3.1 | 54 KB | ############################################################################ | 100% kiwisolver-1.0.1 | 60 KB | ############################################################################ | 100% qt-5.6.2 | 55.6 MB | ############################################################################ | 100% python-dateutil-2.7. | 218 KB | ############################################################################ | 100% vs2015_runtime-14.0. 
| 1.9 MB | ############################################################################ | 100% ca-certificates-2017 | 489 KB | ############################################################################ | 100% tk-8.6.7 | 3.5 MB | ############################################################################ | 100% setuptools-38.4.0 | 540 KB | ############################################################################ | 100% matplotlib-2.2.2 | 6.5 MB | ############################################################################ | 100% six-1.12.0 | 21 KB | ############################################################################ | 100% openssl-1.0.2n | 5.4 MB | ############################################################################ | 100% pyqt-5.6.0 | 4.5 MB | ############################################################################ | 100% zlib-1.2.11 | 236 KB | ############################################################################ | 100% tornado-5.1.1 | 665 KB | ############################################################################ | 100% sqlite-3.22.0 | 907 KB | ############################################################################ | 100% cycler-0.10.0 | 8 KB | ############################################################################ | 100% Preparing transaction: done Verifying transaction: failed RemoveError: 'requests' is a dependency of conda and cannot be removed from conda's operating environment. RemoveError: 'setuptools' is a dependency of conda and cannot be removed from conda's operating environment. In reference to the instructions here - https://enigma.co/catalyst/install.html#installing-with-conda", "output": "running conda update conda before solved the problem for me"} +{"question_id": 27264250, "score": 87, "creation_date": 1417584307, "tags": ["python", "python-typing"], "instruction": "How can I annotate a generator in Python 3?\n\nPython 3.x supports (optional) function annotations: def add_ints(x: int, y: int) -> int: return x + y I sometimes encounter problems as to how to represent a given \"type\", and this time, I have a function that returns a generator: def myfunc(x: [int]) -> \"generator that returns ints\": # ^~~~~~~~~~~~~~~~~~~~~~~~~~ return (n for n in x if n % 2 == 0) How should I annotate the return value?", "output": "The collections.abc module defines the collections.abc.Generator type, which you can use like: Generator[yield_type, send_type, return_type] where send_type and return_type default to None. (If your generator neither sends nor returns, it can just be annotated as Generator[yield_type]. Though it has been deprecated since 3.9, typing.Generator behaves very similarly."} +{"question_id": 2690324, "score": 87, "creation_date": 1271935769, "tags": ["python", "list", "directory", "operating-system"], "instruction": "List Directories and get the name of the Directory\n\nI am trying to get the code to list all the directories in a folder, change directory into that folder and get the name of the current folder. The code I have so far is below and isn't working at the minute. I seem to be getting the parent folder name. import os for directories in os.listdir(os.getcwd()): dir = os.path.join('/home/user/workspace', directories) os.chdir(dir) current = os.path.dirname(dir) new = str(current).split(\"-\")[0] print new I also have other files in the folder but I do not want to list them. I have tried the below code but I haven't got it working yet either. 
for directories in os.path.isdir(os.listdir(os.getcwd())): Can anyone see where I am going wrong? Thanks Got it working but it seems a bit round about. import os os.chdir('/home/user/workspace') all_subdirs = [d for d in os.listdir('.') if os.path.isdir(d)] for dirs in all_subdirs: dir = os.path.join('/home/user/workspace', dirs) os.chdir(dir) current = os.getcwd() new = str(current).split(\"/\")[4] print new", "output": "This will print all the subdirectories of the current directory: print ([name for name in os.listdir(\".\") if os.path.isdir(name)]) I'm not sure what you're doing with split(\"-\"), but perhaps this code will help you find a solution? If you want the full pathnames of the directories, use abspath: print ([os.path.abspath(name) for name in os.listdir(\".\") if os.path.isdir(name)]) # needs outside parenthesis Note that these pieces of code will only get the immediate subdirectories. If you want sub-sub-directories and so on, you should use walk as others have suggested."} +{"question_id": 23527887, "score": 87, "creation_date": 1399494997, "tags": ["python", "excel", "openpyxl"], "instruction": "getting sheet names from openpyxl\n\nI have a moderately large xlsx file (around 14 MB) and OpenOffice hangs trying to open it. I was trying to use openpyxl to read the content, following this tutorial. The code snippet is as follows: from openpyxl import load_workbook wb = load_workbook(filename = 'large_file.xlsx', use_iterators = True) ws = wb.get_sheet_by_name(name = 'big_data') The problem is, I don't know the sheet name, and Sheet1/Sheet2.. etc. didn't work (returned NoneType object). I could not find a documentation telling me How to get the sheet names for an xlsx files using openpyxl. Can anyone help me?", "output": "Use the sheetnames property: sheetnames Returns the list of the names of worksheets in this workbook. Names are returned in the worksheets order. Type: list of strings print (wb.sheetnames) You can also get worksheet objects from wb.worksheets: ws = wb.worksheets[0]"} +{"question_id": 20907180, "score": 87, "creation_date": 1388763265, "tags": ["python", "google-chrome", "logging", "selenium"], "instruction": "Getting console.log output from Chrome with Selenium Python API bindings\n\nI'm using Selenium to run tests in Chrome via the Python API bindings, and I'm having trouble figuring out how to configure Chrome to make the console.log output from the loaded test available. I see that there are get_log() and log_types() methods on the WebDriver object, and I've seen Get chrome's console log which shows how to do things in Java. But I don't see an equivalent of Java's LoggingPreferences type in the Python API. Is there some way to accomplish what I need?", "output": "Ok, finally figured it out: from selenium import webdriver from selenium.webdriver.common.desired_capabilities import DesiredCapabilities # enable browser logging d = DesiredCapabilities.CHROME d['loggingPrefs'] = { 'browser':'ALL' } driver = webdriver.Chrome(desired_capabilities=d) # load the desired webpage driver.get('http://foo.com') # print messages for entry in driver.get_log('browser'): print(entry) Entries whose source field equals 'console-api' correspond to console messages, and the message itself is stored in the message field. 
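For example, a minimal sketch of filtering on that field (assuming the driver set up above, and using the entry keys described here):

# keep only the messages that came from console.log and friends
console_messages = [entry['message']
                    for entry in driver.get_log('browser')
                    if entry.get('source') == 'console-api']
print(console_messages)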
Starting from chromedriver 75.0.3770.8, you have to use goog:loggingPrefs instead of loggingPrefs: d['goog:loggingPrefs'] = { 'browser':'ALL' }"} +{"question_id": 59661904, "score": 87, "creation_date": 1578565653, "tags": ["python", "python-3.x", "string", "python-3.8", "f-string"], "instruction": "What does = (equal) do in f-strings inside the expression curly brackets?\n\nThe usage of {} in Python f-strings is well known to execute pieces of code and give the result in string format. However, what does the '=' at the end of the expression mean? log_file = open(\"log_aug_19.txt\", \"w\") console_error = '...stuff...' # the real code generates it with regex log_file.write(f'{console_error=}')", "output": "This is actually a brand-new feature as of Python 3.8. Added an = specifier to f-strings. An f-string such as f'{expr=}' will expand to the text of the expression, an equal sign, then the representation of the evaluated expression. Essentially, it facilitates the frequent use-case of print-debugging, so, whereas we would normally have to write: f\"some_var={some_var}\" we can now write: f\"{some_var=}\" So, as a demonstration, using a shiny-new Python 3.8.0 REPL: >>> print(f\"{foo=}\") foo=42 >>>"} +{"question_id": 5713218, "score": 87, "creation_date": 1303199445, "tags": ["python"], "instruction": "What is the difference between pop() vs del for deleting a dictionary item? (python)\n\nIn Python there are at least two methods to delete an item from a dict using a key. d = {\"keyA\": 123, \"keyB\": 456, \"keyC\": 789} #remove via pop d.pop(\"keyA\") #remove via del del d[\"keyB\"] Both methods would remove the item from the dict. I wonder what the difference between these methods is and in what kinds of situations I should use one or the other.", "output": "d.pop(key) -- consider using this if you need the value for the item being deleted, and/or you want to specify a default value and don't want an exception raised if the key doesn't exist. e.g., value = d.pop(key, None) del d[key] -- consider using this if you are certain that the key exists (or you expect an exception if it doesn't), and you don't need the value of the item being deleted. e.g., del d[key] From the official Python language documentation: del d[key] Remove d[key] from d. Raises a KeyError if key is not in the map. pop(key[, default]) If key is in the dictionary, remove it and return its value, else return default. If default is not given and key is not in the dictionary, a KeyError is raised. https://docs.python.org/3/library/stdtypes.html#mapping-types-dict"} +{"question_id": 50132703, "score": 87, "creation_date": 1525257744, "tags": ["python", "unit-testing", "pytest"], "instruction": "Pytest fixture for a class through self not as method argument\n\nOften I'll write a test class that uses a pytest fixture in every method. Here's an example. I'd like to be able to avoid having to write the fixture name in the signature of every method. It's not DRY. How can this be done? I would like to be able to access the fixture by giving the fixture as an attribute of the test class. In this example, I would like to see the google fixture as an attribute of TestGoogle. Is this possible?
from bs4 import BeautifulSoup import pytest import requests @pytest.fixture() def google(): return requests.get(\"https://www.google.com\") class TestGoogle: def test_alive(self, google): assert google.status_code == 200 def test_html_title(self, google): soup = BeautifulSoup(google.content, \"html.parser\") assert soup.title.text.upper() == \"GOOGLE\"", "output": "Sure, just use an autouse fixture. Here is the relevant spot in pytest docs. In your example, the change would be introducing an extra fixture (I named it _request_google_page): from bs4 import BeautifulSoup import pytest import requests @pytest.fixture() def google(): return requests.get(\"https://www.google.com\") class TestGoogle: @pytest.fixture(autouse=True) def _request_google_page(self, google): self._response = google def test_alive(self): assert self._response.status_code == 200 def test_html_title(self): soup = BeautifulSoup(self._response.content, \"html.parser\") assert soup.title.text.upper() == \"GOOGLE\" You could even drop the google fixture completely and move the code to _request_google_page: @pytest.fixture(autouse=True) def _request_google_page(self): self._response = requests.get(\"https://www.google.com\") Note that _request_google_page will be called once per test by default, so each test will get a new response. If you want the response to be initialized once and reused throughout all tests in the TestGoogle class, adjust the fixture scopes (scope='class' for _request_google_page and scope='module' or scope='session' for google). Example: from bs4 import BeautifulSoup import pytest import requests @pytest.fixture(scope='module') def google(): return requests.get(\"https://www.google.com\") @pytest.fixture(autouse=True, scope='class') def _request_google_page(request, google): request.cls._response = google class TestGoogle: def test_alive(self): assert self._response.status_code == 200 def test_html_title(self): soup = BeautifulSoup(self._response.content, \"html.parser\") assert soup.title.text.upper() == \"GOOGLE\""} +{"question_id": 50444346, "score": 87, "creation_date": 1526889487, "tags": ["python", "regex", "string", "pandas", "numpy"], "instruction": "Fast punctuation removal with pandas\n\nThis is a self-answered post. Below I outline a common problem in the NLP domain and propose a few performant methods to solve it. Oftentimes the need arises to remove punctuation during text cleaning and pre-processing. Punctuation is defined as any character in string.punctuation: >>> import string string.punctuation '!\"#$%&\\'()*+,-./:;<=>?@[\\\\]^_`{|}~' This is a common enough problem and has been asked before ad nauseam. The most idiomatic solution uses pandas str.replace. However, for situations which involve a lot of text, a more performant solution may need to be considered. What are some good, performant alternatives to str.replace when dealing with hundreds of thousands of records?", "output": "Setup For the purpose of demonstration, let's consider this DataFrame. df = pd.DataFrame({'text':['a..b?!??', '%hgh&12','abc123!!!', '$$$1234']}) df text 0 a..b?!?? 1 %hgh&12 2 abc123!!! 3 $$$1234 Below, I list the alternatives, one by one, in increasing order of performance str.replace This option is included to establish the default method as a benchmark for comparing other, more performant solutions. This uses pandas in-built str.replace function which performs regex-based replacement. 
df['text'] = df['text'].str.replace(r'[^\\w\\s]+', '') df text 0 ab 1 hgh12 2 abc123 3 1234 This is very easy to code, and is quite readable, but slow. regex.sub This involves using the sub function from the re library. Pre-compile a regex pattern for performance, and call regex.sub inside a list comprehension. Convert df['text'] to a list beforehand if you can spare some memory, you'll get a nice little performance boost out of this. import re p = re.compile(r'[^\\w\\s]+') df['text'] = [p.sub('', x) for x in df['text'].tolist()] df text 0 ab 1 hgh12 2 abc123 3 1234 Note: If your data has NaN values, this (as well as the next method below) will not work as is. See the section on \"Other Considerations\". str.translate python's str.translate function is implemented in C, and is therefore very fast. How this works is: First, join all your strings together to form one huge string using a single (or more) character separator that you choose. You must use a character/substring that you can guarantee will not belong inside your data. Perform str.translate on the large string, removing punctuation (the separator from step 1 excluded). Split the string on the separator that was used to join in step 1. The resultant list must have the same length as your initial column. Here, in this example, we consider the pipe separator |. If your data contains the pipe, then you must choose another separator. import string punct = '!\"#$%&\\'()*+,-./:;<=>?@[\\\\]^_`{}~' # `|` is not present here transtab = str.maketrans(dict.fromkeys(punct, '')) df['text'] = '|'.join(df['text'].tolist()).translate(transtab).split('|') df text 0 ab 1 hgh12 2 abc123 3 1234 Performance str.translate performs the best, by far. Note that the graph below includes another variant Series.str.translate from MaxU's answer. (Interestingly, I reran this a second time, and the results are slightly different from before. During the second run, it seems re.sub was winning out over str.translate for really small amounts of data.) There is an inherent risk involved with using translate (particularly, the problem of automating the process of deciding which separator to use is non-trivial), but the trade-offs are worth the risk. Other Considerations Handling NaNs with list comprehension methods; Note that this method (and the next) will only work as long as your data does not have NaNs. When handling NaNs, you will have to determine the indices of non-null values and replace those only. Try something like this: df = pd.DataFrame({'text': [ 'a..b?!??', np.nan, '%hgh&12','abc123!!!', '$$$1234', np.nan]}) idx = np.flatnonzero(df['text'].notna()) col_idx = df.columns.get_loc('text') df.iloc[idx,col_idx] = [ p.sub('', x) for x in df.iloc[idx,col_idx].tolist()] df text 0 ab 1 NaN 2 hgh12 3 abc123 4 1234 5 NaN Dealing with DataFrames; If you are dealing with DataFrames, where every column requires replacement, the procedure is simple: v = pd.Series(df.values.ravel()) df[:] = translate(v).values.reshape(df.shape) Or, v = df.stack() v[:] = translate(v) df = v.unstack() Note that the translate function is defined below in with the benchmarking code. Every solution has tradeoffs, so deciding what solution best fits your needs will depend on what you're willing to sacrifice. Two very common considerations are performance (which we've already seen), and memory usage. str.translate is a memory-hungry solution, so use with caution. Another consideration is the complexity of your regex. 
Sometimes, you may want to remove anything that is not alphanumeric or whitespace. Other times, you will need to retain certain characters, such as hyphens, colons, and sentence terminators [.!?]. Specifying these explicitly adds complexity to your regex, which may in turn impact the performance of these solutions. Make sure you test these solutions on your data before deciding what to use. Lastly, unicode characters will be removed with this solution. You may want to tweak your regex (if using a regex-based solution), or just go with str.translate otherwise. For even more performance (for larger N), take a look at this answer by Paul Panzer. Appendix Functions def pd_replace(df): return df.assign(text=df['text'].str.replace(r'[^\\w\\s]+', '')) def re_sub(df): p = re.compile(r'[^\\w\\s]+') return df.assign(text=[p.sub('', x) for x in df['text'].tolist()]) def translate(df): punct = string.punctuation.replace('|', '') transtab = str.maketrans(dict.fromkeys(punct, '')) return df.assign( text='|'.join(df['text'].tolist()).translate(transtab).split('|') ) # MaxU's version (https://stackoverflow.com/a/50444659/4909087) def pd_translate(df): punct = string.punctuation.replace('|', '') transtab = str.maketrans(dict.fromkeys(punct, '')) return df.assign(text=df['text'].str.translate(transtab)) Performance Benchmarking Code from timeit import timeit import pandas as pd import matplotlib.pyplot as plt res = pd.DataFrame( index=['pd_replace', 're_sub', 'translate', 'pd_translate'], columns=[10, 50, 100, 500, 1000, 5000, 10000, 50000], dtype=float ) for f in res.index: for c in res.columns: l = ['a..b?!??', '%hgh&12','abc123!!!', '$$$1234'] * c df = pd.DataFrame({'text' : l}) stmt = '{}(df)'.format(f) setp = 'from __main__ import df, {}'.format(f) res.at[f, c] = timeit(stmt, setp, number=30) ax = res.div(res.min()).T.plot(loglog=True) ax.set_xlabel(\"N\"); ax.set_ylabel(\"time (relative)\"); plt.show()"} +{"question_id": 5568555, "score": 87, "creation_date": 1302102111, "tags": ["python", "multithreading", "python-multithreading"], "instruction": "Thread vs. Threading\n\nWhat's the difference between the threading and thread modules in Python?", "output": "In Python 3, thread has been renamed to _thread. It is infrastructure code that is used to implement threading, and normal Python code shouldn't be going anywhere near it. _thread exposes a fairly raw view of the underlying OS level threads. This is almost never what you want, hence the rename in Py3k to indicate that it is really just an implementation detail. threading adds some additional automatic accounting, as well as several convenience utilities, all of which makes it the preferred option for standard Python code. Note: as mentioned in Jeril's alternate answer, when considering the use of threads in Python for execution of background tasks, also take a look at concurrent.futures.ThreadPoolExecutor as an even higher level API than threading: https://docs.python.org/3/library/concurrent.futures.html#concurrent.futures.ThreadPoolExecutor"} +{"question_id": 63793662, "score": 87, "creation_date": 1599567093, "tags": ["python", "fastapi", "python-dataclasses", "pydantic"], "instruction": "How to give a Pydantic list field a default value?\n\nI want to create a Pydantic model in which there is a list field, which, when left uninitialized, has a default value of an empty list. Is there an idiomatic way to do this?
For Python's built-in dataclass objects you can use field(default_factory=list), however in my own experiments this seems to prevent my Pydantic models from being pickled. A naive implementation might be, something like this: from pydantic import BaseModel class Foo(BaseModel): defaulted_list_field: Sequence[str] = [] # Bad! But we all know not to use a mutable value like the empty-list literal as a default. So what's the correct way to give a Pydantic list-field a default value?", "output": "For pydantic you can use mutable default value, like: class Foo(BaseModel): defaulted_list_field: List[str] = [] f1, f2 = Foo(), Foo() f1.defaulted_list_field.append(\"hey!\") print(f1) # defaulted_list_field=['hey!'] print(f2) # defaulted_list_field=[] It will be handled correctly (deep copy) and each model instance will have its own empty list. See \"Fields with non-hashable default values\" from the documentation. Pydantic also has default_factory parameter. In the case of an empty list, the result will be identical, it is rather used when declaring a field with a default value, you may want it to be dynamic (i.e. different for each model). from typing import List from pydantic import BaseModel, Field from uuid import UUID, uuid4 class Foo(BaseModel): defaulted_list_field: List[str] = Field(default_factory=list) uid: UUID = Field(default_factory=uuid4)"} +{"question_id": 2533120, "score": 87, "creation_date": 1269784305, "tags": ["python", "input"], "instruction": "Show default value for editing on Python input possible?\n\nIs it possible for python to accept input like this: Folder name: Download But instead of the user typing \"Download\" it is already there as a initial value. If the user wants to edit it as \"Downloads\" all he has to do is add a 's' and press enter. Using normal input command: folder=input('Folder name: ') all I can get is a blank prompt: Folder name: Is there a simple way to do this that I'm missing?", "output": "The standard library functions input() and raw_input() don't have this functionality. If you're using Linux you can use the readline module to define an input function that uses a prefill value and advanced line editing: import readline def rlinput(prompt, prefill=''): readline.set_startup_hook(lambda: readline.insert_text(prefill)) try: return input(prompt) # or raw_input in Python 2 finally: readline.set_startup_hook()"} +{"question_id": 39741429, "score": 87, "creation_date": 1475049631, "tags": ["python", "pandas"], "instruction": "Pandas replace a character in all column names\n\nI have data frames with column names (coming from .csv files) containing ( and ) and I'd like to replace them with _. How can I do that in place for all columns?", "output": "Use str.replace: df.columns = df.columns.str.replace(\"[()]\", \"_\", regex=True) Sample: df = pd.DataFrame({'(A)':[1,2,3], '(B)':[4,5,6], 'C)':[7,8,9]}) print (df) (A) (B) C) 0 1 4 7 1 2 5 8 2 3 6 9 df.columns = df.columns.str.replace(r\"[()]\", \"_\", regex=True) print (df) _A_ _B_ C_ 0 1 4 7 1 2 5 8 2 3 6 9"} +{"question_id": 22700606, "score": 87, "creation_date": 1395959388, "tags": ["python", "methods", "python-sphinx", "restructuredtext", "autodoc"], "instruction": "How would I cross-reference a function generated by autodoc in Sphinx?\n\nI am using the Sphinx autodoc feature to generate documentation based on the docstrings of my Python library. 
The syntax for cross referencing is found here A label must precede the section in order to allow that section to be referenced from other areas of the documentation. What I have is a .rst (ReStructeredText) file for one of my classes. It uses .. autoclass:: classname :members: To generate documentation for the class. My question is, how would I reference the auto-generated methods of the class from another .rst document in the documentation? If I try to place a label within the method's docstring, Sphinx complains. If I try to place a label before the method heading, Sphinx doesn't recognize it. Is there a simple way to do this, or will I have to explicitly write in my class file the method name and precede that with a label? Here is an example a reference within the [Python documentation2 doing what I need (I am assuming it used the autodoc feature, though I don't know for sure)", "output": "You don't need to add labels. In order to refer to a Python class, method, or other documented object, use the markup provided by the Python domain. For example, the following defines a cross-reference to the mymethod method: :py:meth:`mymodule.MyClass.mymethod` Or even simpler (since the Python domain is the default): :meth:`mymodule.MyClass.mymethod` The documentation of TextWrapper.wrap that you link to in the question includes two cross-references of this kind (click on \"Show Source\" to see the reST markup)."} +{"question_id": 46826218, "score": 87, "creation_date": 1508403573, "tags": ["python", "pytorch", "tensor"], "instruction": "PyTorch: How to get the shape of a Tensor as a list of int\n\nIn numpy, V.shape gives a tuple of ints of dimensions of V. In tensorflow V.get_shape().as_list() gives a list of integers of the dimensions of V. In pytorch, V.size() gives a size object, but how do I convert it to ints?", "output": "For PyTorch v1.0 and possibly above: >>> import torch >>> var = torch.tensor([[1,0], [0,1]]) # Using .size function, returns a torch.Size object. >>> var.size() torch.Size([2, 2]) >>> type(var.size()) # Similarly, using .shape >>> var.shape torch.Size([2, 2]) >>> type(var.shape) You can cast any torch.Size object to a native Python list: >>> list(var.size()) [2, 2] >>> type(list(var.size())) In PyTorch v0.3 and 0.4: Simply list(var.size()), e.g.: >>> import torch >>> from torch.autograd import Variable >>> from torch import IntTensor >>> var = Variable(IntTensor([[1,0],[0,1]])) >>> var Variable containing: 1 0 0 1 [torch.IntTensor of size 2x2] >>> var.size() torch.Size([2, 2]) >>> list(var.size()) [2, 2]"} +{"question_id": 3481554, "score": 87, "creation_date": 1281747640, "tags": ["python", "operators"], "instruction": "\"x not in\" vs. \"not x in\"\n\nI've noticed that both of these work the same: if x not in list and if not x in list. Is there some sort of difference between the two in certain cases? Is there a reason for having both, or is it just because it's more natural for some people to write one or the other? 
Which one am I more likely to see in other people's code?", "output": "The two forms make identical bytecode, as you can clearly verify: >>> import dis >>> dis.dis(compile('if x not in d: pass', '', 'exec')) 1 0 LOAD_NAME 0 (x) 3 LOAD_NAME 1 (d) 6 COMPARE_OP 7 (not in) 9 JUMP_IF_FALSE 4 (to 16) 12 POP_TOP 13 JUMP_FORWARD 1 (to 17) >> 16 POP_TOP >> 17 LOAD_CONST 0 (None) 20 RETURN_VALUE >>> dis.dis(compile('if not x in d: pass', '', 'exec')) 1 0 LOAD_NAME 0 (x) 3 LOAD_NAME 1 (d) 6 COMPARE_OP 7 (not in) 9 JUMP_IF_FALSE 4 (to 16) 12 POP_TOP 13 JUMP_FORWARD 1 (to 17) >> 16 POP_TOP >> 17 LOAD_CONST 0 (None) 20 RETURN_VALUE so obviously they're semantically identical. As a matter of style, PEP 8 does not mention the issue. Personally, I strongly prefer the if x not in y form -- that makes it immediately clear that not in is a single operator, and \"reads like English\". if not x in y may mislead some readers into thinking it means if (not x) in y (inverting the value of \"x\"), it reads a bit less like English, and it has absolutely no compensating advantages. The if x not in y form also makes the code much more readable when you have multiple conditions. For example, consider the statement if not x in y and x in m. A casual reader might think it means if not (x in y and x in m), but would get a surprise since the code actually executes as if (not x in y) and (x in m). So by moving the not next to the in operator it belongs to, you therefore make it clear that the not in only applies to that particular condition, rather than all of them. Therefore, if x not in y and x in m has multiple readability benefits and is easier to understand regardless of programming experience. Furthermore, not in is the official name of the \"not in\" bytecode operator (as seen in the disassembly above), regardless of how the user writes it, which further confirms that not in is Python's preferred style. Let's also consider a similar statement. The is not condition. If you attempt to write if not x is True, it becomes really obvious how silly and backwards that pattern is. Neither of them make any sense logically, but it becomes extra obvious in that is not example. In short, if X is not Y and if X not in Y are the preferred ways of writing these kinds of Python statements."} +{"question_id": 63277123, "score": 87, "creation_date": 1596690022, "tags": ["python", "pip"], "instruction": "What does the error message about pip --use-feature=2020-resolver mean?\n\nI'm trying to install jupyter on Ubuntu 16.04.6 x64 on DigitalOcean droplet. It is giving me the following error message, and I can't understand what this means. ERROR: After October 2020 you may experience errors when installing or updating packages. This is because pip will change the way that it resolves dependency conflicts. We recommend you use --use-feature=2020-resolver to test your packages with the new resolver before it becomes the default. jsonschema 3.2.0 requires six>=1.11.0, but you'll have six 1.10.0 which is incompatible Any help would be greatly appreciated!", "output": "According to this announcement, pip will introduce a new dependency resolver in October 2020, which will be more robust but might break some existing setups. Therefore they are suggesting users to try running their pip install scripts at least once (in dev mode) with this option: --use-feature=2020-resolver to anticipate any potential issue before the new resolver becomes the default in October 2020 with pip version 20.3. 
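For instance, an opt-in test run could look like this (illustrative only; the flag is exactly the one quoted above, and requirements.txt stands in for whatever dependency file your project uses):
python -m pip install --use-feature=2020-resolver -r requirements.txt
The announcement itself reads: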
On behalf of the PyPA, I am pleased to announce that we have just released pip 20.2, a new version of pip. You can install it by running python -m pip install --upgrade pip. The highlights for this release are: The beta of the next-generation dependency resolver is available Faster installations from wheel files Improved handling of wheels containing non-ASCII file contents Faster pip list using parallelized network operations Installed packages now contain metadata about whether they were directly requested by the user (PEP 376\u2019s REQUESTED file) The new dependency resolver is off by default because it is not yet ready for everyday use. The new dependency resolver is significantly stricter and more consistent when it receives incompatible instructions, and reduces support for certain kinds of constraints files, so some workarounds and workflows may break. Please test it with the --use-feature=2020-resolver flag. Please see our guide on how to test and migrate, and how to report issues . We are preparing to change the default dependency resolution behavior and make the new resolver the default in pip 20.3 (in October 2020)."} +{"question_id": 4778685, "score": 87, "creation_date": 1295843772, "tags": ["python", "django", "django-permissions"], "instruction": "How do I use Django groups and permissions?\n\nI understand the basic user stuff. I know authentication, login, creating accounts, etc. But now I want to work on groups and permissions. Where is the documentation for django groups/permissions? This is not it: http://docs.djangoproject.com/en/dev/topics/auth/", "output": "I suppose the first question you need to ask are what permissions do you need and what sort. By what sort, I mean do you want Model- or Object-level. To clarify the difference say you have a model Car. If you want to give permissions on all cars, then Model-level is appropriate, but if you want to give permissions on a per-car basis you want Object-level. You may need both, and this isn't a problem as we'll see. For Model permissions, Django handles these for you... mostly. For each model Django will create permissions in the form 'appname.permissionname_modelname'. If you have an app called 'drivers' with the Car model then one permission would be 'drivers.delete_car'. The permissions that Django automatically creates will be create, change, and delete. For some strange reason they decided not to include read permissions from CRUD, you will have to do this yourself. Note that Django decided to change CRUD's 'update' to 'change' for some reason. To add more permissions to a model, say read permissions, you use the Meta class: class Car( models.Model ): # model stuff here class Meta: permissions = ( ( \"read_car\", \"Can read Car\" ), ) Note that permissions is a set of tuples, where the tuple items are the permission as described above and a description of that permission. You don't have to follow the permname_modelname convention but I usually stick with it. Finally, to check permissions, you can use has_perm: obj.has_perm( 'drivers.read_car' ) Where obj is either a User or Group instance. I think it is simpler to write a function for this: def has_model_permissions( entity, model, perms, app ): for p in perms: if not entity.has_perm( \"%s.%s_%s\" % ( app, p, model.__name__ ) ): return False return True Where entity is the object to check permissions on (Group or User), model is the instance of a model, perms is a list of permissions as strings to check (e.g. 
['read', 'change']), and app is the application name as a string. To do the same check as has_perm above you'd call something like this: result = has_model_permissions( myuser, mycar, ['read'], 'drivers' ) If you need to use object or row permissions (they mean the same thing), then Django can't really help you by itself. The nice thing is that you can use both model and object permissions side-by-side. If you want object permissions you'll have to either write your own (if using 1.2+) or find a project someone else has written, one I like is django-objectpermissions from washingtontimes."} +{"question_id": 18666885, "score": 87, "creation_date": 1378503097, "tags": ["python", "pycharm", "docstring", "sphinx-napoleon"], "instruction": "Custom PyCharm docstring stubs (i.e. for google docstring or numpydoc formats)\n\nDoes PyCharm 2.7 (or will PyCharm 3) have support for custom docstring and doctest stubs? If so, how does one go about writing this specific type of custom extension? My current project has standardized on using the Google Python Style Guide (http://google-styleguide.googlecode.com/svn/trunk/pyguide.html). I love PyCharm's docstring support, but it's only two supported formats right now are epytext and reStructureText. I want, and am willing to write myself, a PyCharm plugin that creates a documentation comment stub formatted in either Google or Numpydoc style (https://pypi.python.org/pypi/sphinxcontrib-napoleon/). Of special importance here is incorporating the type inference abilities that PyCharm has with the other two documentation types.", "output": "With PyCharm 5.0 we finally got to select Google and NumPy Style Python Docstrings templates. It is also mentioned in the whatsnew section for PyCharm 5.0. How to change the Docstring Format: File --> Settings --> Tools --> Python Integrated Tools There you can choose from the available Docstrings formats: Plain, Epytext, reStructuredText, NumPy, Google As pointed out by jstol: for Mac users, this is under PyCharm -> Preferences -> Tools -> Python Integrated Tools."} +{"question_id": 62256014, "score": 87, "creation_date": 1591596582, "tags": ["python", "unicode"], "instruction": "Does Python forbid two similarly looking Unicode identifiers?\n\nI was playing around with Unicode identifiers and stumbled upon this: >>> \ud835\udc53, x = 1, 2 >>> \ud835\udc53, x (1, 2) >>> \ud835\udc53, f = 1, 2 >>> \ud835\udc53, f (2, 2) What's going on here? Why does Python replace the object referenced by \ud835\udc53, but only sometimes? Where is that behavior described?", "output": "PEP 3131 -- Supporting Non-ASCII Identifiers says All identifiers are converted into the normal form NFKC while parsing; comparison of identifiers is based on NFKC. You can use unicodedata to test the conversions: import unicodedata unicodedata.normalize('NFKC', '\ud835\udc53') # f which would indicate that '\ud835\udc53' gets converted to 'f' in parsing. Leading to the expected: \ud835\udc53 = \"Some String\" print(f) # \"Some String\""} +{"question_id": 31159157, "score": 87, "creation_date": 1435747693, "tags": ["python", "machine-learning", "scikit-learn"], "instruction": "Different result with roc_auc_score() and auc()\n\nI have trouble understanding the difference (if there is one) between roc_auc_score() and auc() in scikit-learn. Im tying to predict a binary output with imbalanced classes (around 1.5% for Y=1). 
Classifier model_logit = LogisticRegression(class_weight='auto') model_logit.fit(X_train_ridge, Y_train) Roc curve false_positive_rate, true_positive_rate, thresholds = roc_curve(Y_test, clf.predict_proba(xtest)[:,1]) AUC's auc(false_positive_rate, true_positive_rate) Out[490]: 0.82338034042531527 and roc_auc_score(Y_test, clf.predict(xtest)) Out[493]: 0.75944737191205602 Somebody can explain this difference ? I thought both were just calculating the area under the ROC curve. Might be because of the imbalanced dataset but I could not figure out why. Thanks!", "output": "AUC is not always area under the curve of a ROC curve. Area Under the Curve is an (abstract) area under some curve, so it is a more general thing than AUROC. With imbalanced classes, it may be better to find AUC for a precision-recall curve. See sklearn source for roc_auc_score: def roc_auc_score(y_true, y_score, average=\"macro\", sample_weight=None): # <...> docstring <...> def _binary_roc_auc_score(y_true, y_score, sample_weight=None): # <...> bla-bla <...> fpr, tpr, tresholds = roc_curve(y_true, y_score, sample_weight=sample_weight) return auc(fpr, tpr, reorder=True) return _average_binary_score( _binary_roc_auc_score, y_true, y_score, average, sample_weight=sample_weight) As you can see, this first gets a roc curve, and then calls auc() to get the area. I guess your problem is the predict_proba() call. For a normal predict() the outputs are always the same: import numpy as np from sklearn.linear_model import LogisticRegression from sklearn.metrics import roc_curve, auc, roc_auc_score est = LogisticRegression(class_weight='auto') X = np.random.rand(10, 2) y = np.random.randint(2, size=10) est.fit(X, y) false_positive_rate, true_positive_rate, thresholds = roc_curve(y, est.predict(X)) print auc(false_positive_rate, true_positive_rate) # 0.857142857143 print roc_auc_score(y, est.predict(X)) # 0.857142857143 If you change the above for this, you'll sometimes get different outputs: false_positive_rate, true_positive_rate, thresholds = roc_curve(y, est.predict_proba(X)[:,1]) # may differ print auc(false_positive_rate, true_positive_rate) print roc_auc_score(y, est.predict(X))"} +{"question_id": 3269686, "score": 86, "creation_date": 1279326809, "tags": ["python", "string", "encoding"], "instruction": "Short rot13 function - Python\n\nI am searching for a short and cool rot13 function in Python ;-) I've written this function: def rot13(s): chars = \"abcdefghijklmnopqrstuvwxyz\" trans = chars[13:]+chars[:13] rot_char = lambda c: trans[chars.find(c)] if chars.find(c)>-1 else c return ''.join( rot_char(c) for c in s ) Can anyone make it better? E.g supporting uppercase characters.", "output": "maketrans()/translate() solutions\u2026 Python 2.x import string rot13 = string.maketrans( \"ABCDEFGHIJKLMabcdefghijklmNOPQRSTUVWXYZnopqrstuvwxyz\", \"NOPQRSTUVWXYZnopqrstuvwxyzABCDEFGHIJKLMabcdefghijklm\") string.translate(\"Hello World!\", rot13) # 'Uryyb Jbeyq!' 
Python 3.x rot13 = str.maketrans( 'ABCDEFGHIJKLMabcdefghijklmNOPQRSTUVWXYZnopqrstuvwxyz', 'NOPQRSTUVWXYZnopqrstuvwxyzABCDEFGHIJKLMabcdefghijklm') 'Hello World!'.translate(rot13) # 'Uryyb Jbeyq!'"} +{"question_id": 12168648, "score": 86, "creation_date": 1346194221, "tags": ["python", "pandas", "dataframe"], "instruction": "How to add a column with values 1 to len(df) to a dataframe\n\nThe index that I have in the dataframe (with 30 rows) is of the form: Int64Index([171, 174, 173, 172, 199, \u2026, 175, 200]) The index is not strictly increasing because the data frame is the output of a sort(). I want to add a column which is the series: [1, 2, 3, 4, 5, \u2026, 30] How should I go about doing that?", "output": "How about this: from pandas import * idx = Int64Index([171, 174, 173]) df = DataFrame(index = idx, data =([1,2,3])) print df It gives me: 0 171 1 174 2 173 3 Is this what you are looking for?"} +{"question_id": 19684434, "score": 86, "creation_date": 1383141814, "tags": ["python", "function", "arguments"], "instruction": "Best way to check function arguments?\n\nI'm looking for an efficient way to check variables of a Python function. For example, I'd like to check arguments type and value. Is there a module for this? Or should I use something like decorators, or any specific idiom? def my_function(a, b, c): \"\"\"An example function I'd like to check the arguments of.\"\"\" # check that a is an int # check that 0 < b < 10 # check that c is not an empty string", "output": "The most Pythonic idiom is to clearly document what the function expects and then just try to use whatever gets passed to your function and either let exceptions propagate or just catch attribute errors and raise a TypeError instead. Type-checking should be avoided as much as possible as it goes against duck-typing. Value testing can be OK \u2013 depending on the context. The only place where validation really makes sense is at system or subsystem entry point, such as web forms, command line arguments, etc. Everywhere else, as long as your functions are properly documented, it's the caller's responsibility to pass appropriate arguments."} +{"question_id": 21498694, "score": 86, "creation_date": 1391261721, "tags": ["python", "url", "flask"], "instruction": "Flask: get current route\n\nIn Flask, when I have several routes for the same function, how can I know which route is used at the moment? For example: @app.route(\"/antitop/\") @app.route(\"/top/\") @requires_auth def show_top(): .... How can I know, that now route was called using /top/ or /antitop/? UPDATE I know about request.path I don't want use it, because the request can be rather complex, and I want repeat the routing logic in the function. I think that the solution with url_rule it the best one.", "output": "the most 'flasky' way to check which route triggered your view is, by request.url_rule. from flask import request rule = request.url_rule if 'antitop' in rule.rule: # request by '/antitop' elif 'top' in rule.rule: # request by '/top'"} +{"question_id": 65101442, "score": 86, "creation_date": 1606876762, "tags": ["python", "visual-studio-code", "vscode-extensions", "formatter", "python-black"], "instruction": "Formatter black is not working on my VSCode...but why?\n\nI have started using Python and Django and I am very new in this field. And, this is my first time to ask a question here...I do apologise in advance if there is a known solution to this issue... 
When I installed and set VSCode formatter 'black' (after setting linter as flake8), the tutorial video tutor's side showed a pop-up like 'formatter autopep8 is not installed. install?', but mine did not show that message. So what I did was... manually input 'pipenv install black --dev --pre' on the terminal. manually input \"python.formatting.provider\": \"black\", to 'settings.json' on '.vscode' folder. Setting(VSCode) -> flake8, Python > Linting: Flake8 Enabled (Also modified in: workspace), (ticked the box) Whether to lint Python files using flake8 The bottom code is from settings.json (on vscode folder). { \"python.linting.pylintEnabled\": false, \"python.linting.flake8Enabled\": true, \"python.linting.enabled\": true, \"python.formatting.provider\": \"black\", # input manually \"python.linting.flake8Args\": [\"--max-line-length=88\"] # input manually } I found a 'black formatter' document. https://github.com/psf/black & it stated... python -m black {source_file_or_directory} & I get the following error message. Usage: __main__.py [OPTIONS] [SRC]... Try '__main__.py -h' for help. Error: Invalid value for '[SRC]...': Path '{source_file_or_directory}' does not exist. Yes, honestly, I am not sure which source_file_or_directory I should set...but above all now I am afraid whether I am on the right track or not. Can I hear your advice? At least some direction to go, please. Thanks..", "output": "Update 2023-09-15: Now VSCode has a Microsoft official Black Formatter extension. It will probably solve your problems. Original answer: I use Black from inside VSCode and it rocks. It frees mental cycles that you would spend deciding how to format your code. It's best to use it from your favorite editor. Just run from the command line if you need to format a lot of files at once. First, check if you have this in your VSCode settings.json (open it with Ctrl-P + settings): \"python.formatting.provider\": \"black\", \"editor.formatOnSave\": true, Remember that there may be 2 settings.json files: one in your home dir, and one in your project (.vscode/settings.json). The one inside the project prevails. That said, these kinds of problems usually are about using a python interpreter where black isn't installed. I recommend the use of virtual environments, but first check your python interpreter on the status bar: If you didn't explicitly select an interpreter, do it now by clicking on the Python version in your status bar. You can also do it with Ctrl-P + \"Python: Select Interpreter\". The status bar should change after selecting it. Now open a new terminal. Since you selected your interpreter, your virtual environment should be automatically activated by VSCode. Run python using your interpreter path and try to import black: $ python Python 3.7.3 (default, Mar 27 2019, 22:11:17) [GCC 7.3.0] :: Anaconda, Inc. on linux Type \"help\", \"copyright\", \"credits\" or \"license\" for more information. >>> import black >>> Failed import? Problem solved. Just install black using the interpreter from the venv: python -m pip install black. You can also install using Conda, but in my experience VSCode works better with pip. Still not working? Click on the \"OUTPUT\" tab (sibling of the TERMINAL tab) and try to get more info in the \"Log\" output (if you use the newer Black plugin it may be called \"Black Formatter\").
Select it in the pull down menu:"} +{"question_id": 17682567, "score": 86, "creation_date": 1373993676, "tags": ["python", "django-models", "django-admin"], "instruction": "How to add a calculated field to a Django model\n\nI have a simple Employee model that includes firstname, lastname and middlename fields. On the admin side and likely elsewhere, I would like to display that as: lastname, firstname middlename To me the logical place to do this is in the model by creating a calculated field as such: from django.db import models from django.contrib import admin class Employee(models.Model): lastname = models.CharField(\"Last\", max_length=64) firstname = models.CharField(\"First\", max_length=64) middlename = models.CharField(\"Middle\", max_length=64) clocknumber = models.CharField(max_length=16) name = ''.join( [lastname.value_to_string(), ',', firstname.value_to_string(), ' ', middlename.value_to_string()]) class Meta: ordering = ['lastname','firstname', 'middlename'] class EmployeeAdmin(admin.ModelAdmin): list_display = ('clocknumber','name') fieldsets = [(\"Name\", {\"fields\":((\"lastname\", \"firstname\", \"middlename\"), \"clocknumber\")}), ] admin.site.register(Employee, EmployeeAdmin) Ultimately what I think I need is to get the value of the name fields as strings. The error I am getting is value_to_string() takes exactly 2 arguments (1 given). Value to string wants self, obj. I am not sure what obj means. There must be an easy way to do this, I am sure I am not the first to want to do this. Edit: Below is my code modified to Daniel's answer. The error I get is: django.core.exceptions.ImproperlyConfigured: EmployeeAdmin.list_display[1], 'name' is not a callable or an attribute of 'EmployeeAdmin' of found in the model 'Employee'. from django.db import models from django.contrib import admin class Employee(models.Model): lastname = models.CharField(\"Last\", max_length=64) firstname = models.CharField(\"First\", max_length=64) middlename = models.CharField(\"Middle\", max_length=64) clocknumber = models.CharField(max_length=16) @property def name(self): return ''.join( [self.lastname,' ,', self.firstname, ' ', self.middlename]) class Meta: ordering = ['lastname','firstname', 'middlename'] class EmployeeAdmin(admin.ModelAdmin): list_display = ('clocknumber','name') fieldsets = [(\"Name\", {\"fields\":((\"lastname\", \"firstname\", \"middlename\"), \"clocknumber\")}), ] admin.site.register(Employee, EmployeeAdmin)", "output": "Ok... Daniel Roseman's answer seemed like it should have worked. As is always the case, you find what you're looking for after you post the question. From the Django 1.5 docs I found this example that worked right out of the box. Thanks to all for your help. 
Here is the code that worked: from django.db import models from django.contrib import admin class Employee(models.Model): lastname = models.CharField(\"Last\", max_length=64) firstname = models.CharField(\"First\", max_length=64) middlename = models.CharField(\"Middle\", max_length=64) clocknumber = models.CharField(max_length=16) def _get_full_name(self): \"Returns the person's full name.\" return '%s, %s %s' % (self.lastname, self.firstname, self.middlename) full_name = property(_get_full_name) class Meta: ordering = ['lastname','firstname', 'middlename'] class EmployeeAdmin(admin.ModelAdmin): list_display = ('clocknumber','full_name') fieldsets = [(\"Name\", {\"fields\":((\"lastname\", \"firstname\", \"middlename\"), \"clocknumber\")}), ] admin.site.register(Employee, EmployeeAdmin)"} +{"question_id": 14951356, "score": 86, "creation_date": 1361256114, "tags": ["python", "django", "django-testing"], "instruction": "Django : Testing if the page has redirected to the desired url\n\nIn my django app, I have an authentication system. So, If I do not log in and try to access some profile's personal info, I get redirected to a login page. Now, I need to write a test case for this. The responses from the browsers I get is : GET /myprofile/data/some_id/ HTTP/1.1 302 0 GET /account/login?next=/myprofile/data/some_id/ HTTP/1.1 301 0 GET /account/login?next=/myprofile/data/some_id/ HTTP/1.1 200 6533 How do I write my test ? This what I have so far: self.client.login(user=\"user\", password=\"passwd\") response = self.client.get('/myprofile/data/some_id/') self.assertEqual(response.status,200) self.client.logout() response = self.client.get('/myprofile/data/some_id/') What could possibly come next ?", "output": "Django 1.4: https://docs.djangoproject.com/en/1.4/topics/testing/#django.test.TestCase.assertRedirects Django 2.0: https://docs.djangoproject.com/en/2.0/topics/testing/tools/#django.test.SimpleTestCase.assertRedirects SimpleTestCase.assertRedirects(response, expected_url, status_code=302, target_status_code=200, msg_prefix='', fetch_redirect_response=True) Asserts that the response returned a status_code redirect status, redirected to expected_url (including any GET data), and that the final page was received with target_status_code. If your request used the follow argument, the expected_url and target_status_code will be the url and status code for the final point of the redirect chain. If fetch_redirect_response is False, the final page won\u2019t be loaded. Since the test client can\u2019t fetch external URLs, this is particularly useful if expected_url isn\u2019t part of your Django app. Scheme is handled correctly when making comparisons between two URLs. If there isn\u2019t any scheme specified in the location where we are redirected to, the original request\u2019s scheme is used. If present, the scheme in expected_url is the one used to make the comparisons to."} +{"question_id": 10357992, "score": 86, "creation_date": 1335561443, "tags": ["python", "audio", "numpy", "matplotlib"], "instruction": "How to generate audio from a numpy array?\n\nI want to create \"heart rate monitor\" effect from a 2D array in numpy and want the tone to reflect the values in the array.", "output": "You can use the write function from scipy.io.wavfile to create a wav file which you can then play however you wish. 
Note that the array must be integers, so if you have floats, you might want to scale them appropriately: import numpy as np from scipy.io.wavfile import write rate = 44100 data = np.random.uniform(-1, 1, rate) # 1 second worth of random samples between -1 and 1 scaled = np.int16(data / np.max(np.abs(data)) * 32767) write('test.wav', rate, scaled) If you want Python to actually play audio, then this page provides an overview of some of the packages/modules."} +{"question_id": 40336601, "score": 86, "creation_date": 1477888233, "tags": ["python", "arrays"], "instruction": "Python appending array to an array\n\nI am currently working on DES implementation.In one part of the code I have to append array to an array.Below is my code: C0=[1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1] def Fiestel(): C=[] C.append(C0) temp=[0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1] C.append(temp) print(C) Fiestel() How do I append an array to an existing array.I even tried declaring C as 2d array.Thankx in advance for helping. Each element is an array itself.", "output": "You can append the elements of one list to another with the \"+=\" operator. Note that the \"+\" operator creates a new list. a = [1, 2, 3] b = [10, 20] a = a + b # Create a new list a+b and assign back to a. print a # [1, 2, 3, 10, 20] # Equivalently: a = [1, 2, 3] b = [10, 20] a += b print a # [1, 2, 3, 10, 20] If you want to append the lists and keep them as lists, then try: result = [] result.append(a) result.append(b) print result # [[1, 2, 3], [10, 20]]"} +{"question_id": 26672532, "score": 86, "creation_date": 1414751505, "tags": ["python", "list-comprehension"], "instruction": "How to set local variable in list comprehension?\n\nI have a method that takes a list and returns an object: # input a list, returns an object def map_to_obj(lst): a_list = f(lst) return a_list[0] if a_list else None I want to get a list that contains all the mapped elements that aren't None. Like this: v_list = [v1, v2, v3, v4] [map_to_obj(v) for v in v_list if map_to_obj(v)] But it doesn't seem good to call the map_to_obj method twice in the list comprehension. Is there a way to have local variables in list comprehensions so that it can have better performance? Or does the compiler optimize it automatically? Here is what I want: (sml like) [let mapped = map_to_obj(v) in for v in v_list if mapped end]", "output": "Starting in Python 3.8, and the introduction of assignment expressions (PEP 572) (:= operator), it's possible to use a local variable within a list comprehension in order to avoid calling the same function twice. In our case, we can name the evaluation of map_to_obj(v) as a variable o while using the result of the expression to filter the list; and thus use o as the mapped value: [o for v in [v1, v2, v3, v4] if (o := map_to_obj(v))]"} +{"question_id": 37038014, "score": 86, "creation_date": 1462396019, "tags": ["python", "apache-spark", "pyspark"], "instruction": "Pyspark replace strings in Spark dataframe column\n\nI'd like to perform some basic stemming on a Spark Dataframe column by replacing substrings. What's the quickest way to do this? In my current use case, I have a list of addresses that I want to normalize. 
For example this dataframe: id address 1 2 foo lane 2 10 bar lane 3 24 pants ln Would become id address 1 2 foo ln 2 10 bar ln 3 24 pants ln", "output": "For Spark 1.5 or later, you can use the functions package: from pyspark.sql.functions import regexp_replace newDf = df.withColumn('address', regexp_replace('address', 'lane', 'ln')) Quick explanation: The function withColumn is called to add (or replace, if the name exists) a column to the data frame. The function regexp_replace will generate a new column by replacing all substrings that match the pattern."} +{"question_id": 39839112, "score": 86, "creation_date": 1475523315, "tags": ["python", "pandas", "scikit-learn", "feature-extraction", "feature-selection"], "instruction": "The easiest way for getting feature names after running SelectKBest in Scikit Learn\n\nI'm trying to conduct a supervised machine-learning experiment using the SelectKBest feature of scikit-learn, but I'm not sure how to create a new dataframe after finding the best features: Let's assume I would like to conduct the experiment selecting 5 best features: from sklearn.feature_selection import SelectKBest, f_classif select_k_best_classifier = SelectKBest(score_func=f_classif, k=5).fit_transform(features_dataframe, targeted_class) Now, if I add the line: import pandas as pd dataframe = pd.DataFrame(select_k_best_classifier) I receive a new dataframe without feature names (only index starting from 0 to 4), but I want to create a dataframe with the new selected features, in a way like this: dataframe = pd.DataFrame(fit_transofrmed_features, columns=features_names) My question is how to create the features_names list? I know that I should use: select_k_best_classifier.get_support() Which returns an array of boolean values, where true values indices represent the column that should be selected in the original dataframe. How should I use this boolean array with the array of all features names I can get via the method feature_names = list(features_dataframe.columns.values) ?", "output": "You can do the following : mask = select_k_best_classifier.get_support() #list of booleans new_features = [] # The list of your K best features for bool_val, feature in zip(mask, feature_names): if bool_val: new_features.append(feature) Then change the name of your features: dataframe = pd.DataFrame(fit_transformed_features, columns=new_features)"} +{"question_id": 22102999, "score": 86, "creation_date": 1393611980, "tags": ["python", "linux", "memory"], "instruction": "Get total physical memory in Python\n\nHow can I get the total physical memory within Python in a distribution agnostic fashion? I don't need used memory, just the total physical memory.", "output": "your best bet for a cross-platform solution is to use the psutil package (available on PyPI). import psutil psutil.virtual_memory().total # total physical memory in Bytes Documentation for virtual_memory is here."} +{"question_id": 38748257, "score": 86, "creation_date": 1470239276, "tags": ["python", "pytest", "pytest-django"], "instruction": "Disable autouse fixtures on specific pytest marks\n\nIs it possible to prevent the execution of \"function scoped\" fixtures with autouse=True on specific marks only? 
I have the following fixture set to autouse so that all outgoing requests are automatically mocked out: @pytest.fixture(autouse=True) def no_requests(monkeypatch): monkeypatch.setattr(\"requests.sessions.Session.request\", MagicMock()) But I have a mark called endtoend that I use to define a series of tests that are allowed to make external requests for more robust end to end testing. I would like to inject no_requests in all tests (the vast majority), but not in tests like the following: @pytest.mark.endtoend def test_api_returns_ok(): assert make_request().status_code == 200 Is this possible?", "output": "You can also use the request object in your fixture to check the markers used on the test, and don't do anything if a specific marker is set: import pytest @pytest.fixture(autouse=True) def autofixt(request): if 'noautofixt' in request.keywords: return print(\"patching stuff\") def test1(): pass @pytest.mark.noautofixt def test2(): pass Output with -vs: x.py::test1 patching stuff PASSED x.py::test2 PASSED"} +{"question_id": 11103856, "score": 86, "creation_date": 1340118311, "tags": ["python", "regex"], "instruction": "re.findall which returns a dict of named capturing groups?\n\nInspired by a now-deleted question; given a regex with named groups, is there a method like findall which returns a list of dict with the named capturing groups instead of a list of tuple? Given: >>> import re >>> text = \"bob sue jon richard harry\" >>> pat = re.compile('(?P[a-z]+)\\s+(?P[a-z]+)') >>> pat.findall(text) [('bob', 'sue'), ('jon', 'richard')] Should instead give: [{'name': 'bob', 'name2': 'sue'}, {'name': 'jon', 'name2': 'richard'}]", "output": "Using Pattern.finditer() then Match.groupdict(): >>> import re >>> s = \"bob sue jon richard harry\" >>> r = re.compile('(?P[a-z]+)\\s+(?P[a-z]+)') >>> [m.groupdict() for m in r.finditer(s)] [{'name2': 'sue', 'name': 'bob'}, {'name2': 'richard', 'name': 'jon'}]"} +{"question_id": 37083058, "score": 86, "creation_date": 1462579493, "tags": ["python", "google-custom-search"], "instruction": "Programmatically searching google in Python using custom search\n\nI have a snippet of code using the pygoogle python module that allows me to programmatically search for some term in google succintly: g = pygoogle(search_term) g.pages = 1 results = g.get_urls()[0:10] I just found out that this has been discontinued unfortunately and replaced by something called the google custom search. I looked at the other related questions on SO but didn't find anything I could use. I have two questions: 1) Does google custom search allow me to do exactly what I am doing in the three lines above? 2) If yes - where can I find example code to do exactly what I am doing above? If no then what is the alternative to do what I did using pygoogle?", "output": "It is possible to do this. The setup is... not very straightforward, but the end result is that you can search the entire web from python with few lines of code. There are 3 main steps in total. #1st step: get Google API key The pygoogle's page states: Unfortunately, Google no longer supports the SOAP API for search, nor do they provide new license keys. In a nutshell, PyGoogle is pretty much dead at this point. You can use their AJAX API instead. Take a look here for sample code: http://dcortesi.com/2008/05/28/google-ajax-search-api-example-python-code/ ... but you actually can't use AJAX API either. You have to get a Google API key. 
https://developers.google.com/api-client-library/python/guide/aaa_apikeys For simple experimental use I suggest \"server key\". #2nd step: setup Custom Search Engine so that you can search the entire web Indeed, the old API is not available. The best new API that is available is Custom Search. It seems to support only searching within specific domains, however, after following this SO answer you can search the whole web: From the Google Custom Search homepage ( http://www.google.com/cse/ ), click Create a Custom Search Engine. Type a name and description for your search engine. Under Define your search engine, in the Sites to Search box, enter at least one valid URL (For now, just put www.anyurl.com to get past this screen. More on this later ). Select the CSE edition you want and accept the Terms of Service, then click Next. Select the layout option you want, and then click Next. Click any of the links under the Next steps section to navigate to your Control panel. In the left-hand menu, under Control Panel, click Basics. In the Search Preferences section, select Search the entire web but emphasize included sites. Click Save Changes. In the left-hand menu, under Control Panel, click Sites. Delete the site you entered during the initial setup process. This approach is also recommended by Google: https://support.google.com/customsearch/answer/2631040 #3rd step: install Google API client for Python pip install google-api-python-client, more info here: repo: https://github.com/google/google-api-python-client more info: https://developers.google.com/api-client-library/python/apis/customsearch/v1 complete docs: https://api-python-client-doc.appspot.com/ #4th step (bonus): do the search So, after setting this up, you can follow the code samples from few places: simple example: https://github.com/google/google-api-python-client/blob/master/samples/customsearch/main.py cse() function docs: https://google-api-client-libraries.appspot.com/documentation/customsearch/v1/python/latest/customsearch_v1.cse.html and end up with this: from googleapiclient.discovery import build import pprint my_api_key = \"Google API key\" my_cse_id = \"Custom Search Engine ID\" def google_search(search_term, api_key, cse_id, **kwargs): service = build(\"customsearch\", \"v1\", developerKey=api_key) res = service.cse().list(q=search_term, cx=cse_id, **kwargs).execute() return res['items'] results = google_search( 'stackoverflow site:en.wikipedia.org', my_api_key, my_cse_id, num=10) for result in results: pprint.pprint(result) After some tweaking you could write some functions that behave exactly like your snippet, but I'll skip this step here. Note that num has an upper limit of 10 as per docs. Updating start in a loop might be necessary."} +{"question_id": 8205558, "score": 86, "creation_date": 1321830564, "tags": ["python", "python-2.7", "class", "boolean", "python-2.x"], "instruction": "Defining \"boolness\" of a class in python\n\nWhy doesn't this work as one may have naively expected? class Foo(object): def __init__(self): self.bar = 3 def __bool__(self): return self.bar > 10 foo = Foo() if foo: print 'x' else: print 'y' (The output is x)", "output": "For Python 2-3 compatibility, add a line after the class definition block to alias the method: class Foo(object): ... Foo.__nonzero__ = Foo.__bool__ or include the alias directly in the class definition: class Foo(object): def __bool__(self): ... 
__nonzero__ = __bool__ Of course this would also work the other way around, but I think the name __nonzero__ is just a legacy of the original C-ishness of Python's interpretation of objects as truthy or falsy based on their equivalence with zero. Just add the statement above and the code will work with regardless of the version of Python (and the __nonzero__ definition can be dropped when support for 2.x is no longer needed)."} +{"question_id": 30945784, "score": 86, "creation_date": 1434741657, "tags": ["python", "string", "replace"], "instruction": "How to remove all characters before a specific character in Python?\n\nI'd like to remove all characters before a designated character or set of characters (for example): intro = \"<>I'm Tom.\" Now I'd like to remove the <> before I'm (or more specifically, I). Any suggestions?", "output": "The included Python re has a sub function. Just match all the chars up to I, then replace the matched chars with I. import re re.sub(r'^.*?I', 'I', stri)"} +{"question_id": 1614059, "score": 86, "creation_date": 1256310130, "tags": ["python", "text-to-speech"], "instruction": "How to make Python speak\n\nHow could I make Python say some text? I could use Festival with subprocess but I won't be able to control it (or maybe in interactive mode, but it won't be clean). Is there a Python TTS library? Like an API for Festival, eSpeak, ... ?", "output": "You should try using the PyTTSx package since PyTTS is outdated. PyTTSx works with Python 2. For Python 3, install the PyTTSx3 package. http://pypi.python.org/pypi/pyttsx/ https://pypi.org/project/pyttsx3/"} +{"question_id": 15770488, "score": 86, "creation_date": 1364923928, "tags": ["python", "django", "django-rest-framework"], "instruction": "Return the current user with Django Rest Framework\n\nI am currently developing an API using Django. However, I would like to create a view that returns the current User with the following endpoint: /users/current/. To do so, I created a list view and filtered the queryset on the user that made the request. That works, but the result is a list, not a single object. Combined with pagination, the result looks way too complicated and inconsistent compared to other endpoints. I also tried to create a detail view and filtering the queryset, but DRF complains that I provided no pk or slug.", "output": "With something like this you're probably best off breaking out of the generic views and writing the view yourself. @api_view(['GET']) def current_user(request): serializer = UserSerializer(request.user) return Response(serializer.data) You could also do the same thing using a class based view like so... class CurrentUserView(APIView): def get(self, request): serializer = UserSerializer(request.user) return Response(serializer.data) Of course, there's also no requirement that you use a serializer, you could equally well just pull out the fields you need from the user instance. @api_view(['GET']) def current_user(request): user = request.user return Response({ 'username': user.username, 'email': user.email, ... 
})"} +{"question_id": 2216974, "score": 86, "creation_date": 1265551188, "tags": ["python", "django", "django-forms"], "instruction": "Django ModelForm for Many-to-Many fields\n\nConsider the following models and form: class Pizza(models.Model): name = models.CharField(max_length=50) class Topping(models.Model): name = models.CharField(max_length=50) ison = models.ManyToManyField(Pizza, blank=True) class ToppingForm(forms.ModelForm): class Meta: model = Topping When you view the ToppingForm it lets you choose what pizzas the toppings go on and everything is just dandy. My questions is: How do I define a ModelForm for Pizza that lets me take advantage of the Many-to-Many relationship between Pizza and Topping and lets me choose what Toppings go on the Pizza?", "output": "I guess you would have here to add a new ModelMultipleChoiceField to your PizzaForm, and manually link that form field with the model field, as Django won't do that automatically for you. The following snippet might be helpful : class PizzaForm(forms.ModelForm): class Meta: model = Pizza # Representing the many to many related field in Pizza toppings = forms.ModelMultipleChoiceField(queryset=Topping.objects.all()) # Overriding __init__ here allows us to provide initial # data for 'toppings' field def __init__(self, *args, **kwargs): # Only in case we build the form from an instance # (otherwise, 'toppings' list should be empty) if kwargs.get('instance'): # We get the 'initial' keyword argument or initialize it # as a dict if it didn't exist. initial = kwargs.setdefault('initial', {}) # The widget for a ModelMultipleChoiceField expects # a list of primary key for the selected data. initial['toppings'] = [t.pk for t in kwargs['instance'].topping_set.all()] forms.ModelForm.__init__(self, *args, **kwargs) # Overriding save allows us to process the value of 'toppings' field def save(self, commit=True): # Get the unsave Pizza instance instance = forms.ModelForm.save(self, False) # Prepare a 'save_m2m' method for the form, old_save_m2m = self.save_m2m def save_m2m(): old_save_m2m() # This is where we actually link the pizza with toppings instance.topping_set.clear() instance.topping_set.add(*self.cleaned_data['toppings']) self.save_m2m = save_m2m # Do we need to save all changes now? if commit: instance.save() self.save_m2m() return instance This PizzaForm can then be used everywhere, even in the admin : # yourapp/admin.py from django.contrib.admin import site, ModelAdmin from yourapp.models import Pizza from yourapp.forms import PizzaForm class PizzaAdmin(ModelAdmin): form = PizzaForm site.register(Pizza, PizzaAdmin) Note The save() method might be a bit too verbose, but you can simplify it if you don't need to support the commit=False situation, it will then be like that : def save(self): instance = forms.ModelForm.save(self) instance.topping_set.clear() instance.topping_set.add(*self.cleaned_data['toppings']) return instance"} +{"question_id": 40097590, "score": 86, "creation_date": 1476749758, "tags": ["python", "string", "ascii"], "instruction": "Detect whether a Python string is a number or a letter\n\nHow can I detect either numbers or letters in a string? I am aware you use the ASCII codes, but what functions take advantage of them?", "output": "Check if string is nonnegative digit (integer) and alphabet You may use str.isdigit() and str.isalpha() to check whether a given string is a nonnegative integer (0 or greater) and alphabetical character, respectively. 
Sample Results: # For alphabet >>> 'A'.isdigit() False >>> 'A'.isalpha() True # For digit >>> '1'.isdigit() True >>> '1'.isalpha() False Check for strings as positive/negative - integer/float str.isdigit() returns False if the string is a negative number or a float number. For example: # returns `False` for float >>> '123.3'.isdigit() False # returns `False` for negative number >>> '-123'.isdigit() False If you want to also check for the negative integers and float, then you may write a custom function to check for it as: def is_number(n): try: float(n) # Type-casting the string to `float`. # If string is not a valid `float`, # it'll raise `ValueError` exception except ValueError: return False return True Sample Run: >>> is_number('123') # positive integer number True >>> is_number('123.4') # positive float number True >>> is_number('-123') # negative integer number True >>> is_number('-123.4') # negative `float` number True >>> is_number('abc') # `False` for \"some random\" string False Discard \"NaN\" (not a number) strings while checking for number The above functions will return True for the \"NAN\" (Not a number) string because for Python it is valid float representing it is not a number. For example: >>> is_number('NaN') True In order to check whether the number is \"NaN\", you may use math.isnan() as: >>> import math >>> nan_num = float('nan') >>> math.isnan(nan_num) True Or if you don't want to import additional library to check this, then you may simply check it via comparing it with itself using ==. Python returns False when nan float is compared with itself. For example: # `nan_num` variable is taken from above example >>> nan_num == nan_num False Hence, above function is_number can be updated to return False for \"NaN\" as: def is_number(n): is_number = True try: num = float(n) # check for \"nan\" floats is_number = num == num # or use `math.isnan(num)` except ValueError: is_number = False return is_number Sample Run: >>> is_number('Nan') # not a number \"Nan\" string False >>> is_number('nan') # not a number string \"nan\" with all lower cased False >>> is_number('123') # positive integer True >>> is_number('-123') # negative integer True >>> is_number('-1.12') # negative `float` True >>> is_number('abc') # \"some random\" string False Allow Complex Number like \"1+2j\" to be treated as valid number The above function will still return you False for the complex numbers. If you want your is_number function to treat complex numbers as valid number, then you need to type cast your passed string to complex() instead of float(). Then your is_number function will look like: def is_number(n): is_number = True try: # v type-casting the number here as `complex`, instead of `float` num = complex(n) is_number = num == num except ValueError: is_number = False return is_number Sample Run: >>> is_number('1+2j') # Valid True # : complex number >>> is_number('1+ 2j') # Invalid False # : string with space in complex number represetantion # is treated as invalid complex number >>> is_number('123') # Valid True # : positive integer >>> is_number('-123') # Valid True # : negative integer >>> is_number('abc') # Invalid False # : some random string, not a valid number >>> is_number('nan') # Invalid False # : not a number \"nan\" string PS: Each operation for each check depending on the type of number comes with additional overhead. 
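To put that overhead remark in rough numbers, a small micro-benchmark is sketched below (assumption: the float()/complex() casts stand in for the is_number helpers defined above; absolute timings are machine- and version-dependent, only the relative ordering is meant to be illustrative):

import timeit
# Hedged micro-benchmark: compares the bare str.isdigit() check with the
# casts that the is_number helpers above rely on.
print(timeit.timeit("'123'.isdigit()"))   # plain string method, the cheapest check
print(timeit.timeit("float('123')"))      # cast used by the float-based is_number
print(timeit.timeit("complex('123')"))    # cast used by the complex-number variant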
Choose the version of is_number function which fits your requirement."} +{"question_id": 19326004, "score": 86, "creation_date": 1381520589, "tags": ["python", "local-variables"], "instruction": "Access a function variable outside the function without using \"global\"\n\nI am trying to access a local function variable outside the function in Python. I can make code like this work with global variables: bye = '' def hi(): global bye bye = 5 sigh = 10 hi() print(bye) Next, I tried this code, hoping to access bye outside hi() without using global bye: def hi(): bye = 5 sigh = 10 return hi() x = hi() print(x.bye) This gives AttributeError: 'NoneType' object has no attribute 'bye'. Next, I tried: def hi(): bye = 5 sigh = 10 return bye hi() x = hi() print(x.bye) This didn't improve matters; I get AttributeError: 'int' object has no attribute 'bye'. Is there a way to access a local function variable (bye) outside its function (hi()) without using globals and also without printing out the sigh variable? How can I do it?", "output": "You could do something along these lines (which worked in both Python v2.7.17 and v3.8.1 when I tested it/them): def hi(): # other code... hi.bye = 42 # Create function attribute. sigh = 10 hi() print(hi.bye) # -> 42 Functions are objects in Python and can have arbitrary attributes assigned to them. If you're going to be doing this kind of thing often, you could implement something more generic by creating a function decorator that adds a this argument to each call to the decorated function. This additional argument will give functions a way to reference themselves without needing to explicitly embed (hardcode) their name into the rest of the definition and is similar to the instance argument that class methods automatically receive as their first argument which is usually named self \u2014 I picked something different to avoid confusion, but like the self argument, it can be named whatever you wish. Here's an example of that approach: def add_this_arg(func): def wrapped(*args, **kwargs): return func(wrapped, *args, **kwargs) return wrapped @add_this_arg def hi(this, that): # other code... this.bye = 2 * that # Create function attribute. sigh = 10 hi(21) print(hi.bye) # -> 42 Note This doesn't work for class methods. Just use the instance argument, named self by convention, that's already passed to methods instead of the method's name. You can reference class-level attributes through type(self). See Function's attributes when in a class."} +{"question_id": 6063755, "score": 86, "creation_date": 1305832612, "tags": ["python"], "instruction": "Increment a Python floating point value by the smallest possible amount\n\nHow can I increment a floating point value in python by the smallest possible amount? Background: I'm using floating point values as dictionary keys. Occasionally, very occasionally (and perhaps never, but not certainly never), there will be collisions. I would like to resolve these by incrementing the floating point value by as small an amount as possible. How can I do this? In C, I would twiddle the bits of the mantissa to achieve this, but I assume that isn't possible in Python.", "output": "Python 3.9 and above Starting with Python 3.9, released 2020-10-05, you can use the math.nextafter function: math.nextafter(x, y) Return the next floating-point value after x towards y. If x is equal to y, return y. Examples: math.nextafter(x, math.inf) goes up: towards positive infinity. math.nextafter(x, -math.inf) goes down: towards minus infinity. 
math.nextafter(x, 0.0) goes towards zero. math.nextafter(x, math.copysign(math.inf, x)) goes away from zero. See also math.ulp(). A simpler alternative to math.copysign(math.inf, x) is to simply substitute 2*x."} +{"question_id": 283707, "score": 86, "creation_date": 1226490586, "tags": ["python", "file", "file-io", "filesize", "tarfile"], "instruction": "Size of an open file object\n\nIs there a way to find the size of a file object that is currently open? Specifically, I am working with the tarfile module to create tarfiles, but I don't want my tarfile to exceed a certain size. As far as I know, tarfile objects are file-like objects, so I imagine a generic solution would work.", "output": "Do this: # seek to the end of the open file, f f.seek(0, os.SEEK_END) # get the current file position in the stream # - this is the size of the open file, in bytes f.tell() f.tell() documentation is here: https://docs.python.org/3/library/io.html#io.IOBase.tell Full example in a Python interpeter: $ ls -la chardet-1.0.1.tgz -rwxr-xr-x 1 vinko vinko 179218 2008-10-20 17:49 chardet-1.0.1.tgz $ python Python 2.5.1 (r251:54863, Jul 31 2008, 22:53:39) [GCC 4.1.2 (Ubuntu 4.1.2-0ubuntu4)] on linux2 Type \"help\", \"copyright\", \"credits\" or \"license\" for more information. >>> f = open('chardet-1.0.1.tgz','rb') >>> f.seek(0, os.SEEK_END) >>> f.tell() 179218L Adding ChrisJY's idea to the example >>> import os >>> os.fstat(f.fileno()).st_size 179218L >>> Note: Based on the comments, f.seek(0, os.SEEK_END) is must before calling f.tell(), without which it would return a size of 0. The reason is that f.seek(0, os.SEEK_END) moves the file object's position to the end of the file."} +{"question_id": 16044229, "score": 86, "creation_date": 1366136149, "tags": ["python", "pygame", "keyboard"], "instruction": "How to get keyboard input in pygame?\n\nI am making a game in pygame 1.9.2. It's a faily simple game in which a ship moves between five columns of bad guys who attack by moving slowly downward. I am attempting to make it so that the ship moves left and right with the left and right arrow keys. Here is my code: keys=pygame.key.get_pressed() if keys[K_LEFT]: location-=1 if location==-1: location=0 if keys[K_RIGHT]: location+=1 if location==5: location=4 It works too well. The ship moves too fast. It is near impossible to have it move only one location, left or right. How can i make it so the ship only moves once every time the key is pressed?", "output": "You can get the events from pygame and then watch out for the KEYDOWN event, instead of looking at the keys returned by get_pressed()(which gives you keys that are currently pressed down, whereas the KEYDOWN event shows you which keys were pressed down on that frame). What's happening with your code right now is that if your game is rendering at 30fps, and you hold down the left arrow key for half a second, you're updating the location 15 times. events = pygame.event.get() for event in events: if event.type == pygame.KEYDOWN: if event.key == pygame.K_LEFT: location -= 1 if event.key == pygame.K_RIGHT: location += 1 To support continuous movement while a key is being held down, you would have to establish some sort of limitation, either based on a forced maximum frame rate of the game loop or by a counter which only allows you to move every so many ticks of the loop. 
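For the first of those options, a minimal sketch (assuming a bare-bones pygame loop rather than the asker's actual game) caps the frame rate with pygame.time.Clock so a held key is only re-applied a bounded number of times per second; the counter-based option is shown right after this sketch:

import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))
clock = pygame.time.Clock()
location = 2
running = True
while running:
    clock.tick(30)  # limit the loop to at most 30 iterations per second
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
    keys = pygame.key.get_pressed()
    if keys[pygame.K_LEFT]:
        location = max(location - 1, 0)  # ship still moves while held, just at a bounded rate
    if keys[pygame.K_RIGHT]:
        location = min(location + 1, 4)
pygame.quit()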
move_ticker = 0 keys=pygame.key.get_pressed() if keys[K_LEFT]: if move_ticker == 0: move_ticker = 10 location -= 1 if location == -1: location = 0 if keys[K_RIGHT]: if move_ticker == 0: move_ticker = 10 location+=1 if location == 5: location = 4 Then somewhere during the game loop you would do something like this: if move_ticker > 0: move_ticker -= 1 This would only let you move once every 10 frames (so if you move, the ticker gets set to 10, and after 10 frames it will allow you to move again)"} +{"question_id": 6466711, "score": 86, "creation_date": 1308912412, "tags": ["python", "operating-system"], "instruction": "What is the return value of os.system() in Python?\n\nI came across this: >>> import os >>> os.system('ls') file.txt README 0 What is the return value of os.system()? Why do I get 0?", "output": "The return value of os.system is OS-dependent. On Unix, the return value is a 16-bit number that contains two different pieces of information. From the documentation: a 16-bit number, whose low byte is the signal number that killed the process, and whose high byte is the exit status (if the signal number is zero) So if the signal number (low byte) is 0, it would, in theory, be safe to shift the result by 8 bits (result >> 8) to get the error code. The function os.WEXITSTATUS does exactly this. If the error code is 0, that usually means that the process exited without errors. On Windows, the documentation specifies that the return value of os.system is shell-dependent. If the shell is cmd.exe (the default one), the value is the return code of the process. Again, 0 would mean that there weren't errors. For others error codes: on Linux on Windows"} +{"question_id": 17583443, "score": 86, "creation_date": 1373504674, "tags": ["python", "setuptools", "distutils"], "instruction": "What is the correct way to share package version with setup.py and the package?\n\nWith distutils, setuptools, etc. a package version is specified in setup.py: # file: setup.py ... setup( name='foobar', version='1.0.0', # other attributes ) I would like to be able to access the same version number from within the package: >>> import foobar >>> foobar.__version__ '1.0.0' I could add __version__ = '1.0.0' to my package's __init__.py, but I would also like to include additional imports in my package to create a simplified interface to the package: # file: __init__.py from foobar import foo from foobar.bar import Bar __version__ = '1.0.0' and # file: setup.py from foobar import __version__ ... setup( name='foobar', version=__version__, # other attributes ) However, these additional imports can cause the installation of foobar to fail if they import other packages that are not yet installed. 
What is the correct way to share package version with setup.py and the package?", "output": "Set the version in setup.py only, and read your own version with importlib.metadata.version(), effectively querying the installed package metadata: file: setup.py setup( name='foobar', version='1.0.0', # other attributes ) file: __init__.py from importlib.metadata import version __version__ = version('foobar') To make this work in all cases, where you could end up running this without having installed it, test for PackageNotFoundError and use importlib.metadata.distribution() the introspect the distribution location: from importlib.metadata import distribution, version, PackageNotFoundError import os.path try: _dist = distribution('foobar') # Normalize case for Windows systems dist_loc = os.path.normcase(_dist.locate_file(\"\")) here = os.path.normcase(__file__) if not here.startswith(os.path.join(dist_loc, 'foobar')): # not installed, but there is another version that *is* raise PackageNotFoundError except PackageNotFoundError: __version__ = 'Please install this project with setup.py' else: __version__ = _dist.version"} +{"question_id": 53952214, "score": 86, "creation_date": 1545955442, "tags": ["python", "python-2.7", "python-venv"], "instruction": "how can I find out which python virtual environment I am using?\n\nI have several virtual environment in my computer and sometimes I am in doubt about which python virtual environment I am using. Is there an easy way to find out which virtual environment I am connected to?", "output": "You can use sys.prefix to determine which virtualenv you're in. import sys print(sys.prefix) from the sys docs A string giving the site-specific directory prefix where the platform independent Python files are installed"} +{"question_id": 9007877, "score": 86, "creation_date": 1327515146, "tags": ["python", "numpy", "sorting"], "instruction": "Sort array's rows by another array in Python\n\nI'm trying to sort the rows of one array by the values of another. For example: import numpy as np arr1 = np.random.normal(1, 1, 80) arr2 = np.random.normal(1,1, (80,100)) I want to sort arr1 in descending order, and to have the current relationship between arr1 and arr2 to be maintained (ie, after sorting both, the rows of arr1[0] and arr2[0, :] are the same).", "output": "Use argsort as follows: arr1inds = arr1.argsort() sorted_arr1 = arr1[arr1inds[::-1]] sorted_arr2 = arr2[arr1inds[::-1]] This example sorts in descending order."} +{"question_id": 14359557, "score": 86, "creation_date": 1358342871, "tags": ["python", "yaml", "pyyaml"], "instruction": "Reading YAML file with Python results in yaml.composer.ComposerError: expected a single document in the stream\n\nI have a yaml file that looks like --- level_1: \"test\" level_2: 'NetApp, SOFS, ZFS Creation' request: 341570 --- level_1: \"test\" level_2: 'NetApp, SOFS, ZFS Creation' request: 341569 --- level_1: \"test\" level_2: 'NetApp, SOFS, ZFS Creation' request: 341568 I am able to read this correctly in Perl using YAML but not in python using YAML. 
It fails with the error: expected a single document in the stream Program: import yaml stram = open(\"test\", \"r\") print yaml.load(stram) Error: Traceback (most recent call last): File \"abcd\", line 4, in print yaml.load(stram) File \"/usr/local/pkgs/python-2.6.5/lib/python2.6/site-packages/yaml/__init__.py\", line 58, in load return loader.get_single_data() File \"/usr/local/pkgs/python-2.6.5/lib/python2.6/site-packages/yaml/constructor.py\", line 42, in get_single_data node = self.get_single_node() File \"/usr/local/pkgs/python-2.6.5/lib/python2.6/site-packages/yaml/composer.py\", line 43, in get_single_node event.start_mark) yaml.composer.ComposerError: expected a single document in the stream in \"test\", line 2, column 1 but found another document in \"test\", line 5, column 1", "output": "The yaml documents are separated by ---, and if any stream (e.g. a file) contains more than one document then you should use the yaml.load_all function rather than yaml.load. The code: import yaml stream = open(\"test\", \"r\") docs = yaml.load_all(stream, yaml.FullLoader) for doc in docs: for k,v in doc.items(): print k, \"->\", v print \"\\n\", results in for the input file as provided in the question: request -> 341570 level_1 -> test level_2 -> NetApp, SOFS, ZFS Creation request -> 341569 level_1 -> test level_2 -> NetApp, SOFS, ZFS Creation request -> 341568 level_1 -> test level_2 -> NetApp, SOFS, ZFS Creation"} +{"question_id": 52494128, "score": 86, "creation_date": 1537865432, "tags": ["python", "python-3.x"], "instruction": "Call function without optional arguments if they are None\n\nThere's a function which takes optional arguments. def alpha(p1=\"foo\", p2=\"bar\"): print('{0},{1}'.format(p1, p2)) Let me iterate over what happens when we use that function in different ways: >>> alpha() foo,bar >>> alpha(\"FOO\") FOO,bar >>> alpha(p2=\"BAR\") foo,BAR >>> alpha(p1=\"FOO\", p2=None) FOO,None Now consider the case where I want to call it like alpha(\"FOO\", myp2) and myp2 will either contain a value to be passed, or be None. But even though the function handles p2=None, I want it to use its default value \"bar\" instead. Maybe that's worded confusingly, so let me reword that: If myp2 is None, call alpha(\"FOO\"). Else, call alpha(\"FOO\", myp2). The distinction is relevant because alpha(\"FOO\", None) has a different result than alpha(\"FOO\"). How can I concisely (but readably) make this distinction? One possibility would usually be to check for None within alpha, which would be encouraged because that would make the code safer. But assume that alpha is used in other places where it is actually supposed to handle None as it does. I'd like to handle that on the caller-side. One possibility is to do a case distinction: if myp2 is None: alpha(\"FOO\") else: alpha(\"FOO\", myp2) But that can quickly become much code when there are multiple such arguments. (exponentially, 2^n) Another possibility is to simply do alpha(\"FOO\", myp2 or \"bar\"), but that requires us to know the default value. Usually, I'd probably go with this approach, but I might later change the default values for alpha and this call would then need to be updated manually in order to still call it with the (new) default value. I am using python 3.4 but it would be best if your answers can provide a good way that works in any python version. 
The question is technically finished here, but I reword some requirement again, since the first answer did gloss over that: I want the behaviour of alpha with its default values \"foo\", \"bar\" preserved in general, so it is (probably) not an option to change alpha itself. In yet again other words, assume that alpha is being used somewhere else as alpha(\"FOO\", None) where the output FOO,None is expected behaviour.", "output": "Pass the arguments as keyword arguments from a dictionary using argument unpacking. Keyword arguments can be passed as a dict using the ** operator. To filter out the arguments with None as their value use dictionary comprehension. kwargs = dict(p1='FOO', p2=None) alpha(**{k: v for k, v in kwargs.items() if v is not None})"} +{"question_id": 50842144, "score": 86, "creation_date": 1528907438, "tags": ["python", "requirements.txt"], "instruction": "Requirements.txt greater than equal to and then less than?\n\nI have this line in my requirements file django>=1.10,<1.11 Does that mean I need to have Django version >= 1.10 and then less than 1.11?", "output": "Yes. The pip manual [doc] has a section on the format of \"requirement specifiers\". These are documented in PEP-508 [pep] and PEP-440 [pep]: The comparison operator determines the kind of version clause: ~=: Compatible release clause ==: Version matching clause !=: Version exclusion clause <=, >=: Inclusive ordered comparison clause <, >: Exclusive ordered comparison clause ===: Arbitrary equality clause. The comma (\",\") is equivalent to a logical and operator: a candidate version must match all given version clauses in order to match the specifier as a whole. So in your case it means that the Django version is 1.10 or higher and not 1.11 or higher (so 1.10 is fine, 1.10.1, as well, but not 1.11, 1.11.1, or 2.0.1). Although a requirements.txt file often looks like: Django==4.2 this is often not a good idea, because you pin the version to 4.2, so any security patches are not installed if you recreate the virtual environment. If the package follows semantic versioning (semver), you can rewrite this to: Django>=4.2,<4.3 Because of the semantic versioning, this normally guarantees that the package has the same interface and expected behavior."} +{"question_id": 2074514, "score": 86, "creation_date": 1263586861, "tags": ["python", "django", "django-queryset", "greatest-n-per-group"], "instruction": "Django Query That Get Most Recent Objects From Different Categories\n\nI have two models A and B. All B objects have a foreign key to an A object. Given a set of A objects, is there anyway to use the ORM to get a set of B objects containing the most recent object created for each A object. 
Here's an simplified example: class Bakery(models.Model): town = models.CharField(max_length=255) class Cake(models.Model): bakery = models.ForeignKey(Bakery, on_delete=models.CASCADE) baked_at = models.DateTimeField() So I'm looking for a query that returns the most recent cake baked in each bakery in Anytown, USA.", "output": "As far as I know, there is no one-step way of doing this in Django ORM, but you can split it into two queries: from django.db.models import Max bakeries = Bakery.objects.annotate( hottest_cake_baked_at=Max('cake__baked_at') ) hottest_cakes = Cake.objects.filter( baked_at__in=[b.hottest_cake_baked_at for b in bakeries] ) If id's of cakes are progressing along with bake_at timestamps, you can simplify and disambiguate the above code (in case two cakes arrives at the same time you can get both of them): from django.db.models import Max hottest_cake_ids = Bakery.objects.annotate( hottest_cake_id=Max('cake__id') ).values_list('hottest_cak\u200c\u200be_id', flat=True) hottest_cakes = Cake.objects.filter(id__in=hottest_cake_ids) BTW credits for this goes to Daniel Roseman, who once answered similar question of mine: http://groups.google.pl/group/django-users/browse_thread/thread/3b3cd4cbad478d34/3e4c87f336696054?hl=pl&q= If the above method is too slow, then I know also second method - you can write custom SQL producing only those Cakes, that are hottest in relevant Bakeries, define it as database VIEW, and then write unmanaged Django model for it. It's also mentioned in the above django-users thread. Direct link to the original concept is here: http://web.archive.org/web/20130203180037/http://wolfram.kriesing.de/blog/index.php/2007/django-nice-and-critical-article#comment-48425 Hope this helps."} +{"question_id": 53845024, "score": 86, "creation_date": 1545196813, "tags": ["python", "python-typing"], "instruction": "Defining a recursive type hint in Python?\n\nLet's say I have a function that accepts a Garthok, an Iterable[Garthok], an Iterable[Iterable[Garthok]], etc. def narfle_the_garthoks(arg): if isinstance(arg, Iterable): for value in arg: narfle_the_garthoks(arg) else: arg.narfle() Is there any way to specify a type hint for arg that indicates that it accepts any level of Iterables of Garthoks? I suspect not, but thought I'd check if I'm missing something. As a workaround, I'm just specifying a few levels deep, and then ending with Iterable[Any]. Union[Garthok, Iterable[Union[Garthok, Iterable[Union[Garthok, Iterable[Union[Garthok, Iterable[Any]]]]]]]]", "output": "You can specify recursive types in the typing language by using type aliases and forward reference strings, Garthoks = Union[Garthok, Iterable['Garthoks']] Mypy supports recursive types by default since v0.990, and Pyright/Pylance since v2020.9.4. Some types of forward references are handled by PEP 563. You can use them starting from Python 3.7 by doing from __future__ import annotations \u2013 Konstantin As of Python 3.12, __future__.annotations/stringifying is not necessary if the type is defined using a type statement: type Garthoks = Garthok | Iterable[Garthoks]"} +{"question_id": 28280308, "score": 86, "creation_date": 1422888489, "tags": ["python", "debugging", "spyder"], "instruction": "How do I debug efficiently with Spyder in Python?\n\nI like Python and I like Spyder but I find debugging with Spyder terrible! Every time I put a break point, I need to press two buttons: first the debug and then the continue button (it pauses at first line automatically) which is annoying. 
Moreover, rather than having the standard iPython console with auto completion etc I have a lousy ipdb>> console which is just garbage. The worst thing is that this console freezes very frequently even if I write prints or simple evaluation to try to figure out what is the bug. This is much worse than MATLAB. Last but not least, if I call a function from within the ipdb>> console, and put a breakpoint in it, it will not stop there. It seems like I have to put the breakpoint there before I start the debugging (Ctrl+F5). Do you have a solution or maybe can you tell me how you debug Python scripts and functions? I am using fresh install of Anaconda on a Windows 8.1 64bit.", "output": "(Spyder maintainer here) After our 4.2.0 version, released in November 2020, the debugging experience in Spyder is quite good. What we provide now is what people coming from Matlab would expect from a debugger, i.e. something that works like IPython and lets you inspect and plot variables at the current breakpoint or frame. Now about your points: If there is a breakpoint present in the file you're trying to debug, then Spyder enters in debug mode and continues until the first breakpoint is met. If it's present in another file, then you still need to press first Debug and then Continue. IPdb is the IPython debugger console. In Spyder 4.2.0 or above it comes with code completion, syntax highlighting, history browsing of commands with the up/down arrows (separate from the IPython history), multi-line evaluation of code, and inline and interactive plots with Matplotlib. This is fixed now. Also, to avoid clashes between Python code and Pdb commands, if you have (for instance) a variable called n and write n in the prompt to see its value, we will show it instead of running the n Pdb command. To run that command instead, you have to prefix it with an exclamation mark, like this: !n This is fixed too. You can set breakpoints in IPdb and they will be taken into account in your current session."} +{"question_id": 74508024, "score": 86, "creation_date": 1668946731, "tags": ["python", "python-packaging", "requirements.txt", "pyproject.toml"], "instruction": "Is requirements.txt still needed when using pyproject.toml?\n\nSince mid 2022 it is now possible to get rid of setup.py, setup.cfg in favor of pyproject.toml. Editable installs work with recent versions of setuptools and pip and even the official packaging tutorial switched away from setup.py to pyproject.toml. However, documentation regarding requirements.txt seems to be have been also removed, and I wonder where to put the pinned requirements now? As a refresher: It used to be common practice to put the dependencies (without version pinning) in setup.py avoiding issues when this package gets installed with other packages needing the same dependencies but with conflicting version requirements. For packaging libraries a setup.py was usually sufficient. For deployments (i.e. non libraries) you usually also provided a requirements.txt with version-pinned dependencies. So you don't accidentally get the latest and greatest but the exact versions of dependencies that that package has been tested with. So my question is, did anything change? Do you still put the pinned requirements in the requirements.txt when used together with pyproject.toml? Or is there an extra section for that in pyproject.toml? Is there some documentation on that somewhere?", "output": "Quoting myself from here My current assumption is: [...] 
you put your (mostly unpinned) dependencies to pyproject.toml instead of setup.py, so you library can be installed as a dependency of something else without causing much troubles because of issues resolving version constraints. On top of that, for \"deployable applications\" (for lack of a better term), you still want to maintain a separate requirements.txt with exact version pinning. Which has been confirmed by a Python Packaging Authority (PyPA) member and clarification of PyPA's recommendations should be updated accordingly at some point."} +{"question_id": 50125472, "score": 85, "creation_date": 1525219759, "tags": ["python", "ssl", "https", "anaconda", "conda"], "instruction": "Issues with installing python libraries on Windows : CondaHTTPError: HTTP 000 CONNECTION FAILED for url conda install -c anaconda pymongo Fetching package metadata ... CondaHTTPError: HTTP 000 CONNECTION FAILED for url Elapsed: - An HTTP error occurred when trying to retrieve this URL. HTTP errors are often intermittent, and a simple retry will get you on your way. ConnectTimeout(MaxRetryError(\"HTTPSConnectionPool(host='conda.anaconda.org', por t=443): Max retries exceeded with url: /anaconda/win-64/repodata.json (Caused by ConnectTimeoutError(, 'Connection to conda.anaconda.org timed out. (connect timeout=9. 15)'))\",),) Steps taken to resolve: 1. Update C:\\Users\\\\xxxxxxx\\.condarc file with the following: channels: - defaults ssl_verify: false proxy_servers: http: http://sproxy.fg.xxx.com:1000 https: https://sproxy.fg.xxx.com:1000 2. (C:\\Users\\xxxxxxx\\AppData\\Local\\Continuum\\anaconda3) C:\\Users\\xxxxxxx> conda config --set ssl_verify False Additional Info: (C:\\Users\\xxxxxxx\\AppData\\Local\\Continuum\\anaconda3) C:\\Users\\xxxxxxx> conda info Current conda install: platform : win-64 conda version : 4.3.27 conda is private : False conda-env version : 4.3.27 conda-build version : 3.0.22 python version : 3.6.2.final.0 requests version : 2.18.4 config file : C:\\Users\\xxxxxxx\\.condarc netrc file : None offline mode : False user-agent : conda/4.3.27 requests/2.18.4 CPython/3.6.2 Windows/7 W indows/6.1.7601 administrator : False A number of posts online simply reinstalled Anaconda, any other options apart from a fresh install?", "output": "This works a charm:- Just copy these:- libcrypto-1_1-x64.dll libssl-1_1-x64.dll from D:\\Anaconda3\\Library\\bin to D:\\Anaconda3\\DLLs."} +{"question_id": 11339210, "score": 85, "creation_date": 1341471320, "tags": ["python", "string", "integer"], "instruction": "How to get integer values from a string in Python?\n\nSuppose I had a string string1 = \"498results should get\" Now I need to get only integer values from the string like 498. Here I don't want to use list slicing because the integer values may increase like these examples: string2 = \"49867results should get\" string3 = \"497543results should get\" So I want to get only integer values out from the string exactly in the same order. I mean like 498,49867,497543 from string1,string2,string3 respectively. 
Can anyone let me know how to do this in a one or two lines?", "output": ">>> import re >>> string1 = \"498results should get\" >>> int(re.search(r'\\d+', string1).group()) 498 If there are multiple integers in the string: >>> list(map(int, re.findall(r'\\d+', string1))) [498]"} +{"question_id": 28430904, "score": 85, "creation_date": 1423568909, "tags": ["python", "arrays", "numpy"], "instruction": "Set numpy array elements to zero if they are above a specific threshold\n\nSay, I have a numpy array consists of 10 elements, for example: a = np.array([2, 23, 15, 7, 9, 11, 17, 19, 5, 3]) Now I want to efficiently set all a values higher than 10 to 0, so I'll get: [2, 0, 0, 7, 9, 0, 0, 0, 5, 3] Because I currently use a for loop, which is very slow: # Zero values below \"threshold value\". def flat_values(sig, tv): \"\"\" :param sig: signal. :param tv: threshold value. :return: \"\"\" for i in np.arange(np.size(sig)): if sig[i] < tv: sig[i] = 0 return sig How can I achieve that in the most efficient way, having in mind big arrays of, say, 10^6 elements?", "output": "Generally, list comprehensions are faster than for loops in python (because python knows that it doesn't need to care for a lot of things that might happen in a regular for loop): a = [0 if a_ > thresh else a_ for a_ in a] but, as @unutbu correctly pointed out, numpy allows list indexing, and element-wise comparison giving you index lists, so: super_threshold_indices = a > thresh a[super_threshold_indices] = 0 would be even faster. Generally, when applying methods on vectors of data, have a look at numpy.ufuncs, which often perform much better than python functions that you map using any native mechanism."} +{"question_id": 27315592, "score": 85, "creation_date": 1417781329, "tags": ["python", "django"], "instruction": "@csrf_exempt does not work on generic view based class\n\nclass ChromeLoginView(View): def get(self, request): return JsonResponse({'status': request.user.is_authenticated()}) @method_decorator(csrf_exempt) def post(self, request): username = request.POST['username'] password = request.POST['password'] user = authenticate(username=username, password=password) if user is not None: if user.is_active: login(request, user) return JsonResponse({'status': True}) return JsonResponse({'status': False}) I am expecting that the post does stopped by csrf, but it return 403 error. But if remove that decorator and do this in the URLConf url(r'^chrome_login/', csrf_exempt(ChromeLoginView.as_view()), name='chrome_login'), it will work. What happened here? didn't it supposed to work, because I guess that's what method_decorator do. I'm using python3.4 and django1.7.1 Any advice would be great.", "output": "As @knbk said, this is the dispatch() method that must be decorated. 
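For Django versions before 1.9, one hedged way to apply that is to override dispatch() and decorate the override; the sketch below reuses the view from the question:

from django.contrib.auth import authenticate, login
from django.http import JsonResponse
from django.utils.decorators import method_decorator
from django.views.decorators.csrf import csrf_exempt
from django.views.generic import View

class ChromeLoginView(View):
    @method_decorator(csrf_exempt)
    def dispatch(self, request, *args, **kwargs):
        # CSRF checks are skipped before View.dispatch routes to get()/post()
        return super(ChromeLoginView, self).dispatch(request, *args, **kwargs)

    def get(self, request):
        return JsonResponse({'status': request.user.is_authenticated()})

    def post(self, request):
        user = authenticate(username=request.POST['username'],
                            password=request.POST['password'])
        if user is not None and user.is_active:
            login(request, user)
            return JsonResponse({'status': True})
        return JsonResponse({'status': False})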
Since Django 1.9, you can use the method_decorator directly on a class: from django.utils.decorators import method_decorator @method_decorator(csrf_exempt, name='dispatch') class ChromeLoginView(View): def get(self, request): return JsonResponse({'status': request.user.is_authenticated()}) def post(self, request): username = request.POST['username'] password = request.POST['password'] user = authenticate(username=username, password=password) if user is not None: if user.is_active: login(request, user) return JsonResponse({'status': True}) return JsonResponse({'status': False}) This avoids overriding the dispatch() method only to decorate it."} +{"question_id": 37879010, "score": 85, "creation_date": 1466158717, "tags": ["python", "selenium-webdriver", "web-scraping", "selenium-firefoxdriver"], "instruction": "Selenium-Debugging: Element is not clickable at point (X,Y)\n\nI try to scrape this site by Selenium. I want to click in \"Next Page\" buttom, for this I do: driver.find_element_by_class_name('pagination-r').click() it works for many pages but not for all, I got this error WebDriverException: Message: Element is not clickable at point (918, 13). Other element would receive the click:
always for this page I read this question and I tried this driver.implicitly_wait(10) el = driver.find_element_by_class_name('pagination-r') action = webdriver.common.action_chains.ActionChains(driver) action.move_to_element_with_offset(el, 918, 13) action.click() action.perform() but I got the same error", "output": "Another element is covering the element you are trying to click. You could use execute_script() to click on this. element = driver.find_element_by_class_name('pagination-r') driver.execute_script(\"arguments[0].click();\", element)"} +{"question_id": 404795, "score": 85, "creation_date": 1230803456, "tags": ["java", "python", "error-handling", "idioms"], "instruction": "LBYL vs EAFP in Java?\n\nI was recently teaching myself Python and discovered the LBYL/EAFP idioms with regards to error checking before code execution. In Python, it seems the accepted style is EAFP, and it seems to work well with the language. LBYL (Look Before You Leap): def safe_divide_1(x, y): if y == 0: print \"Divide-by-0 attempt detected\" return None else: return x/y EAFP (it's Easier to Ask Forgiveness than Permission): def safe_divide_2(x, y): try: return x/y except ZeroDivisionError: print \"Divide-by-0 attempt detected\" return None My question is this: I had never even heard of using EAFP as the primary data validation construct, coming from a Java and C++ background. Is EAFP something that is wise to use in Java? Or is there too much overhead from exceptions? I know that there is only overhead when an exception is actually thrown, so I'm unsure as to why the simpler method of EAFP is not used. Is it just preference?", "output": "Personally, and I think this is backed up by convention, EAFP is never a good way to go. You can look at it as an equivalent to the following: if (o != null) o.doSomething(); else // handle as opposed to: try { o.doSomething() } catch (NullPointerException npe) { // handle } Moreover, consider the following: if (a != null) if (b != null) if (c != null) a.getB().getC().doSomething(); else // handle c null else // handle b null else // handle a null This may look a lot less elegant (and yes this is a crude example - bear with me), but it gives you much greater granularity in handling the error, as opposed to wrapping it all in a try-catch to get that NullPointerException, and then try to figure out where and why you got it. The way I see it EAFP should never be used, except for rare situations. Also, since you raised the issue: yes, the try-catch block does incur some overhead even if the exception is not thrown."} +{"question_id": 67244301, "score": 85, "creation_date": 1619277272, "tags": ["python", "python-3.x", "pip"], "instruction": "\"WARNING: Value for scheme.data does not match\" when I try to update pip or install packages\n\nI have a M1 Mac and I just noticed that when I try to upgrade pip or install any packages I get a series of warnings: user@mac01 ~ $python3 -m pip install --upgrade pip WARNING: Value for scheme.platlib does not match. Please report this to distutils: /opt/homebrew/lib/python3.9/site-packages sysconfig: /opt/homebrew/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages WARNING: Value for scheme.purelib does not match. Please report this to distutils: /opt/homebrew/lib/python3.9/site-packages sysconfig: /opt/homebrew/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages WARNING: Value for scheme.headers does not match. 
Please report this to distutils: /opt/homebrew/include/python3.9/UNKNOWN sysconfig: /opt/homebrew/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/include/python3.9 WARNING: Value for scheme.scripts does not match. Please report this to distutils: /opt/homebrew/bin sysconfig: /opt/homebrew/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/bin WARNING: Value for scheme.data does not match. Please report this to distutils: /opt/homebrew sysconfig: /opt/homebrew/opt/python@3.9/Frameworks/Python.framework/Versions/3.9 WARNING: Additional context: user = False home = None root = None prefix = None Requirement already satisfied: pip in /opt/homebrew/lib/python3.9/site-packages (21.1) WARNING: Value for scheme.platlib does not match. Please report this to distutils: /opt/homebrew/lib/python3.9/site-packages sysconfig: /opt/homebrew/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages WARNING: Value for scheme.purelib does not match. Please report this to distutils: /opt/homebrew/lib/python3.9/site-packages sysconfig: /opt/homebrew/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages WARNING: Value for scheme.headers does not match. Please report this to distutils: /opt/homebrew/include/python3.9/UNKNOWN sysconfig: /opt/homebrew/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/include/python3.9 WARNING: Value for scheme.scripts does not match. Please report this to distutils: /opt/homebrew/bin sysconfig: /opt/homebrew/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/bin WARNING: Value for scheme.data does not match. Please report this to distutils: /opt/homebrew sysconfig: /opt/homebrew/opt/python@3.9/Frameworks/Python.framework/Versions/3.9 WARNING: Additional context: user = False home = None root = None prefix = None user@mac01 ~ $ Please advise.", "output": "downgrading to an earlier version of pip fixed it for me: python -m pip install pip==21.0.1"} +{"question_id": 64729944, "score": 85, "creation_date": 1604766613, "tags": ["python", "numpy", "matplotlib", "pip", "virtualenv"], "instruction": "RuntimeError: The current NumPy installation fails to pass a sanity check due to a bug in the windows runtime\n\nI am using Python 3.9 on Windows 10 version 2004 x64. PowerShell as Administrator. Python version: Python 3.9.0 (tags/v3.9.0:9cf6752, Oct 5 2020, 15:34:40) [MSC v.1927 64 bit (AMD64)] on win32 Install matplotlib error. pip install virtualenv virtualenv foo cd .\\foo .\\Scripts\\active pip install numpy pip install matplotlib Error Windows PowerShell Copyright (C) Microsoft Corporation. All rights reserved. Try the new cross-platform PowerShell https://aka.ms/pscore6 PS C:\\WINDOWS\\system32> Set-ExecutionPolicy Unrestricted -Force PS C:\\WINDOWS\\system32> cd /d C:\\Windows\\System32\\cmd.exe Set-Location : A positional parameter cannot be found that accepts argument 'C:\\Windows\\System32\\cmd.exe'. At line:1 char:1 + cd /d C:\\Windows\\System32\\cmd.exe + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + CategoryInfo : InvalidArgument: (:) [Set-Location], ParameterBindingException + FullyQualifiedErrorId : PositionalParameterNotFound,Microsoft.PowerShell.Commands.SetLocationCommand PS C:\\WINDOWS\\system32> cd C:\\Windows\\System32\\cmd.exe cd : Cannot find path 'C:\\Windows\\System32\\cmd.exe' because it does not exist. 
At line:1 char:1 + cd C:\\Windows\\System32\\cmd.exe + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + CategoryInfo : ObjectNotFound: (C:\\Windows\\System32\\cmd.exe:String) [Set-Location], ItemNotFoundExcepti on + FullyQualifiedErrorId : PathNotFound,Microsoft.PowerShell.Commands.SetLocationCommand PS C:\\WINDOWS\\system32> cd D:\\ PS D:\\> cd .\\Users\\donhuvy\\ PS D:\\Users\\donhuvy> ls Directory: D:\\Users\\donhuvy Mode LastWriteTime Length Name ---- ------------- ------ ---- d----- 10/26/2020 3:35 PM AppData d----- 11/7/2020 9:33 AM PycharmProjects PS D:\\Users\\donhuvy> cd .\\PycharmProjects\\pythonProject\\ PS D:\\Users\\donhuvy\\PycharmProjects\\pythonProject> virtualenv foo virtualenv : The term 'virtualenv' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. At line:1 char:1 + virtualenv foo + ~~~~~~~~~~ + CategoryInfo : ObjectNotFound: (virtualenv:String) [], CommandNotFoundException + FullyQualifiedErrorId : CommandNotFoundException PS D:\\Users\\donhuvy\\PycharmProjects\\pythonProject> pip install virtualenv Collecting virtualenv Downloading virtualenv-20.1.0-py2.py3-none-any.whl (4.9 MB) |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 4.9 MB 1.1 MB/s Collecting distlib<1,>=0.3.1 Downloading distlib-0.3.1-py2.py3-none-any.whl (335 kB) |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 335 kB 6.4 MB/s Requirement already satisfied: six<2,>=1.9.0 in c:\\users\\donhuvy\\appdata\\roaming\\python\\python39\\site-packages (from virtualenv) (1.15.0) Collecting filelock<4,>=3.0.0 Downloading filelock-3.0.12-py3-none-any.whl (7.6 kB) Collecting appdirs<2,>=1.4.3 Downloading appdirs-1.4.4-py2.py3-none-any.whl (9.6 kB) Installing collected packages: distlib, filelock, appdirs, virtualenv Successfully installed appdirs-1.4.4 distlib-0.3.1 filelock-3.0.12 virtualenv-20.1.0 PS D:\\Users\\donhuvy\\PycharmProjects\\pythonProject> virtualenv foo created virtual environment CPython3.9.0.final.0-64 in 1312ms creator CPython3Windows(dest=D:\\Users\\donhuvy\\PycharmProjects\\pythonProject\\foo, clear=False, global=False) seeder FromAppData(download=False, pip=bundle, setuptools=bundle, wheel=bundle, via=copy, app_data_dir=C:\\Users\\donhuvy\\AppData\\Local\\pypa\\virtualenv) added seed packages: pip==20.2.4, setuptools==50.3.2, wheel==0.35.1 activators BashActivator,BatchActivator,FishActivator,PowerShellActivator,PythonActivator,XonshActivator PS D:\\Users\\donhuvy\\PycharmProjects\\pythonProject> cd .\\foo PS D:\\Users\\donhuvy\\PycharmProjects\\pythonProject\\foo> .\\Scripts\\activate (foo) PS D:\\Users\\donhuvy\\PycharmProjects\\pythonProject\\foo> pip install numpy Collecting numpy Using cached numpy-1.19.4-cp39-cp39-win_amd64.whl (13.0 MB) Installing collected packages: numpy Successfully installed numpy-1.19.4 (foo) PS D:\\Users\\donhuvy\\PycharmProjects\\pythonProject\\foo> pip install matplotlib Collecting matplotlib Using cached matplotlib-3.3.2.tar.gz (37.9 MB) ** On entry to DGEBAL parameter number 3 had an illegal value ** On entry to DGEHRD parameter number 2 had an illegal value ** On entry to DORGHR DORGQR parameter number 2 had an illegal value ** On entry to DHSEQR 
parameter number 4 had an illegal value ERROR: Command errored out with exit status 1: command: 'D:\\Users\\donhuvy\\PycharmProjects\\pythonProject\\foo\\Scripts\\python.exe' -c 'import sys, setuptools, tokenize; sys.argv[0] = '\"'\"'C:\\\\Users\\\\donhuvy\\\\AppData\\\\Local\\\\Temp\\\\pip-install-8bn40qg7\\\\matplotlib\\\\setup.py'\"'\"'; __file__='\"'\"'C:\\\\Users\\\\donhuvy\\\\AppData\\\\Local\\\\Temp\\\\pip-install-8bn40qg7\\\\matplotlib\\\\setup.py'\"'\"';f=getattr(tokenize, '\"'\"'open'\"'\"', open)(__file__);code=f.read().replace('\"'\"'\\r\\n'\"'\"', '\"'\"'\\n'\"'\"');f.close();exec(compile(code, __file__, '\"'\"'exec'\"'\"'))' egg_info --egg-base 'C:\\Users\\donhuvy\\AppData\\Local\\Temp\\pip-pip-egg-info-39nmc0pe' cwd: C:\\Users\\donhuvy\\AppData\\Local\\Temp\\pip-install-8bn40qg7\\matplotlib\\ Complete output (61 lines): Edit setup.cfg to change the build options; suppress output with --quiet. BUILDING MATPLOTLIB matplotlib: yes [3.3.2] python: yes [3.9.0 (tags/v3.9.0:9cf6752, Oct 5 2020, 15:34:40) [MSC v.1927 64 bit (AMD64)]] platform: yes [win32] sample_data: yes [installing] tests: no [skipping due to configuration] macosx: no [Mac OS-X only] running egg_info creating C:\\Users\\donhuvy\\AppData\\Local\\Temp\\pip-pip-egg-info-39nmc0pe\\matplotlib.egg-info writing C:\\Users\\donhuvy\\AppData\\Local\\Temp\\pip-pip-egg-info-39nmc0pe\\matplotlib.egg-info\\PKG-INFO writing dependency_links to C:\\Users\\donhuvy\\AppData\\Local\\Temp\\pip-pip-egg-info-39nmc0pe\\matplotlib.egg-info\\dependency_links.txt writing namespace_packages to C:\\Users\\donhuvy\\AppData\\Local\\Temp\\pip-pip-egg-info-39nmc0pe\\matplotlib.egg-info\\namespace_packages.txt writing requirements to C:\\Users\\donhuvy\\AppData\\Local\\Temp\\pip-pip-egg-info-39nmc0pe\\matplotlib.egg-info\\requires.txt writing top-level names to C:\\Users\\donhuvy\\AppData\\Local\\Temp\\pip-pip-egg-info-39nmc0pe\\matplotlib.egg-info\\top_level.txt writing manifest file 'C:\\Users\\donhuvy\\AppData\\Local\\Temp\\pip-pip-egg-info-39nmc0pe\\matplotlib.egg-info\\SOURCES.txt' Traceback (most recent call last): File \"\", line 1, in File \"C:\\Users\\donhuvy\\AppData\\Local\\Temp\\pip-install-8bn40qg7\\matplotlib\\setup.py\", line 242, in setup( # Finally, pass this all along to distutils to do the heavy lifting. 
File \"D:\\Users\\donhuvy\\PycharmProjects\\pythonProject\\foo\\lib\\site-packages\\setuptools\\__init__.py\", line 153, in setup return distutils.core.setup(**attrs) File \"d:\\users\\donhuvy\\appdata\\local\\programs\\python\\python39\\lib\\distutils\\core.py\", line 148, in setup dist.run_commands() File \"d:\\users\\donhuvy\\appdata\\local\\programs\\python\\python39\\lib\\distutils\\dist.py\", line 966, in run_commands self.run_command(cmd) File \"d:\\users\\donhuvy\\appdata\\local\\programs\\python\\python39\\lib\\distutils\\dist.py\", line 985, in run_command cmd_obj.run() File \"D:\\Users\\donhuvy\\PycharmProjects\\pythonProject\\foo\\lib\\site-packages\\setuptools\\command\\egg_info.py\", line 298, in run self.find_sources() File \"D:\\Users\\donhuvy\\PycharmProjects\\pythonProject\\foo\\lib\\site-packages\\setuptools\\command\\egg_info.py\", line 305, in find_sources mm.run() File \"D:\\Users\\donhuvy\\PycharmProjects\\pythonProject\\foo\\lib\\site-packages\\setuptools\\command\\egg_info.py\", line 536, in run self.add_defaults() File \"D:\\Users\\donhuvy\\PycharmProjects\\pythonProject\\foo\\lib\\site-packages\\setuptools\\command\\egg_info.py\", line 572, in add_defaults sdist.add_defaults(self) File \"d:\\users\\donhuvy\\appdata\\local\\programs\\python\\python39\\lib\\distutils\\command\\sdist.py\", line 228, in add_defaults self._add_defaults_ext() File \"d:\\users\\donhuvy\\appdata\\local\\programs\\python\\python39\\lib\\distutils\\command\\sdist.py\", line 311, in _add_defaults_ext build_ext = self.get_finalized_command('build_ext') File \"d:\\users\\donhuvy\\appdata\\local\\programs\\python\\python39\\lib\\distutils\\cmd.py\", line 299, in get_finalized_command cmd_obj.ensure_finalized() File \"d:\\users\\donhuvy\\appdata\\local\\programs\\python\\python39\\lib\\distutils\\cmd.py\", line 107, in ensure_finalized self.finalize_options() File \"C:\\Users\\donhuvy\\AppData\\Local\\Temp\\pip-install-8bn40qg7\\matplotlib\\setup.py\", line 88, in finalize_options self.distribution.ext_modules[:] = [ File \"C:\\Users\\donhuvy\\AppData\\Local\\Temp\\pip-install-8bn40qg7\\matplotlib\\setup.py\", line 91, in for ext in package.get_extensions() File \"C:\\Users\\donhuvy\\AppData\\Local\\Temp\\pip-install-8bn40qg7\\matplotlib\\setupext.py\", line 345, in get_extensions add_numpy_flags(ext) File \"C:\\Users\\donhuvy\\AppData\\Local\\Temp\\pip-install-8bn40qg7\\matplotlib\\setupext.py\", line 469, in add_numpy_flags import numpy as np File \"D:\\Users\\donhuvy\\PycharmProjects\\pythonProject\\foo\\lib\\site-packages\\numpy\\__init__.py\", line 305, in _win_os_check() File \"D:\\Users\\donhuvy\\PycharmProjects\\pythonProject\\foo\\lib\\site-packages\\numpy\\__init__.py\", line 302, in _win_os_check raise RuntimeError(msg.format(__file__)) from None RuntimeError: The current Numpy installation ('D:\\\\Users\\\\donhuvy\\\\PycharmProjects\\\\pythonProject\\\\foo\\\\lib\\\\site-packages\\\\numpy\\\\__init__.py') fails to pass a sanity check due to a bug in the windows runtime. See this issue for more information: https://tinyurl.com/ y3dm3h86 ---------------------------------------- ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output. (foo) PS D:\\Users\\donhuvy\\PycharmProjects\\pythonProject\\foo> A screenshot of some of the above text Error information link to fmod(), after an update to windows 2004, is causing a strange interaction with other code I use PyCharm 2020.2 Ultimate, and it also catches the error. 
How can I fix it?", "output": "The temporary solution is to use NumPy 1.19.3. pip install numpy==1.19.3 From a Microsoft thread, a fix was promised to be available around January 2021. It was fixed in the KB4598291 update."} +{"question_id": 1551382, "score": 85, "creation_date": 1255285725, "tags": ["python", "datetime", "date", "time", "formatting"], "instruction": "User-friendly time format in Python?\n\nPython: I need to show file modification times in the \"1 day ago\", \"two hours ago\", format. Is there something ready to do that? It should be in English.", "output": "The code was originally published on a blog post \"Python Pretty Date function\" (http://evaisse.com/post/93417709/python-pretty-date-function) It is reproduced here as the blog account has been suspended and the page is no longer available. def pretty_date(time=False): \"\"\" Get a datetime object or a int() Epoch timestamp and return a pretty string like 'an hour ago', 'Yesterday', '3 months ago', 'just now', etc \"\"\" from datetime import datetime now = datetime.now() if type(time) is int: diff = now - datetime.fromtimestamp(time) elif isinstance(time, datetime): diff = now - time elif not time: diff = 0 second_diff = diff.seconds day_diff = diff.days if day_diff < 0: return '' if day_diff == 0: if second_diff < 10: return \"just now\" if second_diff < 60: return str(second_diff) + \" seconds ago\" if second_diff < 120: return \"a minute ago\" if second_diff < 3600: return str(second_diff // 60) + \" minutes ago\" if second_diff < 7200: return \"an hour ago\" if second_diff < 86400: return str(second_diff // 3600) + \" hours ago\" if day_diff == 1: return \"Yesterday\" if day_diff < 7: return str(day_diff) + \" days ago\" if day_diff < 31: return str(day_diff // 7) + \" weeks ago\" if day_diff < 365: return str(day_diff // 30) + \" months ago\" return str(day_diff // 365) + \" years ago\""} +{"question_id": 739260, "score": 85, "creation_date": 1239410638, "tags": ["python", "django", "django-models", "django-forms"], "instruction": "Customize/remove Django select box blank option\n\nI'm using Django 1.0.2. I've written a ModelForm backed by a Model. This model has a ForeignKey where blank=False. When Django generates HTML for this form it creates a select box with one option for each row in the table referenced by the ForeignKey. It also creates an option at the top of the list that has no value and displays as a series of dashes: What I'd like to know is: What is the cleanest way to remove this auto-generated option from the select box? What is the cleanest way to customize it so that it shows as: In searching for a solution I came across Django ticket 4653 which gave me the impression that others had the same question and that the default behavior of Django may have been modified. This ticket is over a year old so I was hoping there might be a cleaner way to accomplish these things. Thanks for any help, Jeff Edit: I've configured the ForeignKey field as such: verb = models.ForeignKey(Verb, blank=False, default=get_default_verb) This does set the default so that it's no longer the empty/dashes option but unfortunately it doesn't seem to resolve either of my questions. 
That is, the empty/dashes option still appears in the list.", "output": "Haven't tested this, but based on reading Django's code here and here I believe it should work: class ThingForm(forms.ModelForm): class Meta: model = Thing def __init__(self, *args, **kwargs): super(ThingForm, self).__init__(*args, **kwargs) self.fields['verb'].empty_label = None EDIT: This is documented, though you wouldn't necessarily know to look for ModelChoiceField if you're working with an auto-generated ModelForm. EDIT: As jlpp notes in his answer, this isn't complete - you have to re-assign the choices to the widgets after changing the empty_label attribute. Since that's a bit hacky, the other option that might be easier to understand is just overriding the entire ModelChoiceField: class ThingForm(forms.ModelForm): verb = ModelChoiceField(Verb.objects.all(), empty_label=None) class Meta: model = Thing"} +{"question_id": 16329946, "score": 85, "creation_date": 1367465826, "tags": ["python", "django", "django-models", "django-orm"], "instruction": "Django model method - create_or_update\n\nSimilar to get_or_create, I would like to be able to update_or_create in Django. Until now, I have using an approaching similar to how @Daniel Roseman does it here. However, I'd like to do this more succinctly as a model method. This snippet is quite old and I was wondering if there is a better way to do this in more recent version of Django.", "output": "See QuerySet.update_or_create (new in Django 1.7dev)"} +{"question_id": 64596394, "score": 85, "creation_date": 1603992852, "tags": ["python", "amazon-web-services", "docker", "aws-codebuild"], "instruction": "ImportError: cannot import name 'docevents' from 'botocore.docs.bcdoc' in AWS CodeBuild\n\nImportError: cannot import name 'docevents' from 'botocore.docs.bcdoc' (/python3.7/site-packages/botocore/docs/bcdoc/init.py) Traceback (most recent call last): File \"/root/.pyenv/versions/3.7.6/bin/aws\", line 19, in import awscli.clidriver File \"/root/.pyenv/versions/3.7.6/lib/python3.7/site-packages/awscli/clidriver.py\", line 36, in from awscli.help import ProviderHelpCommand File \"/root/.pyenv/versions/3.7.6/lib/python3.7/site-packages/awscli/help.py\", line 23, in from botocore.docs.bcdoc import docevents ImportError: cannot import name 'docevents' from 'botocore.docs.bcdoc' (/root/.pyenv/versions/3.7.6/lib/python3.7/site-packages/botocore/docs/bcdoc/__init__.py) [Container] 2020/10/29 16:48:39 Command did not exit successfully aws --version exit status 1 The failure occurs in the PRE_BUILD. And this is my spec build file: buildspec-cd.yml pre_build: commands: - AWS_REGION=${AWS_DEFAULT_REGION} - COMMIT_HASH=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7) - IMAGE_VERSION=${COMMIT_HASH} - REPOSITORY_URI=${CONTAINER_REGISTRY}/${APPLICATION_NAME} - aws --version - echo Logging in to Amazon ECR... - $(aws ecr get-login --region $AWS_DEFAULT_REGION --no-include-email) The codebuild was working correctly and nothing has been changed. Only stopped working.", "output": "Reading this GitHub issue #2596. i fixed my error. 
Just before the PRE_BUILD section, I added this line to my buildspec-cd.yml file: pip3 install --upgrade awscli install: commands: - pip3 install awsebcli --upgrade - eb --version - pip3 install --upgrade awscli pre_build: commands: - AWS_REGION=${AWS_DEFAULT_REGION} - COMMIT_HASH=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7) - IMAGE_VERSION=${COMMIT_HASH} ..."} +{"question_id": 2774361, "score": 85, "creation_date": 1273071705, "tags": ["python", "json", "sorting"], "instruction": "JSON output sorting in Python\n\nI've a problem with JSON in python. In fact, if I try to execute this code, python gives me a sorted JSON string! For example: values = { 'profile': 'testprofile', 'format': 'RSA_RC4_Sealed', 'enc_key': base64.b64encode(chiave_da_inviare), 'request': base64.b64encode(data) } values_json = json.dumps(values, sort_keys = False, separators = (',', ':')) And this is the output: { \"profile\": \"testprofile\", \"enc_key\": \"GBWo[...]NV6w==\", \"request\": \"TFl[...]uYw==\", \"format\": \"RSA_RC4_Sealed\" } As you can see, I tried to use \"sort_keys=False\" but nothing changed. How can I stop Python sorting my JSON strings?", "output": "You are storing your values into a Python dict which has no inherent notion of ordering at all, it's just a key-to-value map. So your items lose all ordering when you place them into the values variable. In fact the only way to get a deterministic ordering would be to use sort_keys=True, which I assume places them in alphanumeric ordering. Why is the order so important?"} +{"question_id": 17725927, "score": 85, "creation_date": 1374156771, "tags": ["python", "matplotlib", "statistics", "boxplot"], "instruction": "Boxplots in matplotlib: Markers and outliers\n\nI have some questions about boxplots in matplotlib: Question A. What do the markers that I highlighted below with Q1, Q2, and Q3 represent? I believe Q1 is maximum and Q3 are outliers, but what is Q2? Question B How does matplotlib identify outliers? (i.e. how does it know that they are not the true max and min values?)", "output": "Here's a graphic that illustrates the components of the box from a stats.stackexchange answer. Note that k=1.5 if you don't supply the whis keyword in Pandas. The boxplot function in Pandas is a wrapper for matplotlib.pyplot.boxplot. The matplotlib docs explain the components of the boxes in detail: Question A: The box extends from the lower to upper quartile values of the data, with a line at the median. i.e. a quarter of the input data values is below the box, a quarter of the data lies in each part of the box, and the remaining quarter lies above the box. Question B: whis : float, sequence, or string (default = 1.5) As a float, determines the reach of the whiskers to the beyond the first and third quartiles. In other words, where IQR is the interquartile range (Q3-Q1), the upper whisker will extend to last datum less than Q3 + whis*IQR). Similarly, the lower whisker will extend to the first datum greater than Q1 - whis*IQR. Beyond the whiskers, data are considered outliers and are plotted as individual points. Matplotlib (and Pandas) also gives you a lot of options to change this default definition of the whiskers: Set this to an unreasonably high value to force the whiskers to show the min and max values. Alternatively, set this to an ascending sequence of percentile (e.g., [5, 95]) to set the whiskers at specific percentiles of the data. 
Finally, whis can be the string 'range' to force the whiskers to the min and max of the data."} +{"question_id": 3924117, "score": 85, "creation_date": 1286976187, "tags": ["python", "file-io", "temporary-files"], "instruction": "How to use tempfile.NamedTemporaryFile() in Python\n\nI want to use tempfile.NamedTemporaryFile() to write some contents into it and then open that file. I have written following code: tf = tempfile.NamedTemporaryFile() tfName = tf.name tf.seek(0) tf.write(contents) tf.flush() but I am unable to open this file and see its contents in Notepad or similar application. Is there any way to achieve this? Why can't I do something like: os.system('start notepad.exe ' + tfName) at the end. I don't want to save the file permanently on my system. I just want the contents to be opened as a text in Notepad or similar application and delete the file when I close that application.", "output": "This could be one of two reasons: Firstly, by default the temporary file is deleted as soon as it is closed. To fix this use: tf = tempfile.NamedTemporaryFile(delete=False) and then delete the file manually once you've finished viewing it in the other application. Alternatively, it could be that because the file is still open in Python Windows won't let you open it using another application. Edit: to answer some questions from the comments: As of the docs from 2 when using delete=False the file can be removed by using: tf.close() os.unlink(tf.name)"} +{"question_id": 19309287, "score": 85, "creation_date": 1381458350, "tags": ["python", "ipython", "jupyter-notebook", "ipython-magic"], "instruction": "How to (intermittently) skip certain cells when running IPython notebook?\n\nI usually have to rerun (most parts of) a notebook when reopen it, in order to get access to previously defined variables and go on working. However, sometimes I'd like to skip some of the cells, which have no influence to subsequent cells (e.g., they might comprise a branch of analysis that is finished) and could take very long time to run. These cells can be scattered throughout the notebook, so that something like \"Run All Below\" won't help much. Is there a way to achieve this? Ideally, those cells could be tagged with some special flags, so that they could be \"Run\" manually, but would be skipped when \"Run All\". EDIT %%cache (ipycache extension) as suggested by @Jakob solves the problem to some extent. Actually, I don't even need to load any variables (which can be large but unnecessary for following cells) when re-run, only the stored output matters as analyzing results. As a work-around, put %%cache folder/unique_identifier to the beginning of the cell. The code will be executed only once and no variables will be loaded when re-run unless you delete the unique_identifier file. Unfortunately, all the output results are lost when re-run with %%cache... EDIT II (Oct 14, 2013) The master version of ipython+ipycache now pickles (and re-displays) the codecell output as well. For rich display outputs including Latex, HTML(pandas DataFrame output), remember to use IPython's display() method, e.g., display(Latex(r'$\\alpha_1$'))", "output": "Currently, there is no such feature included in the IPython notebook. 
Nevertheless, there are some possibilities to make your life easier, like: use the %store or maybe better the %%cache magic (extension) to store the results of these intermittently cells, so they don't have to be recomputed (see https://github.com/rossant/ipycache) add a if==0: before the cells you don't want to execute convert these cells to raw cells (but you will loose the already stored output!) (see discussion at https://github.com/ipython/ipython/issues/2125)"} +{"question_id": 40088496, "score": 85, "creation_date": 1476712951, "tags": ["python", "logging", "python-logging"], "instruction": "How to use Python's RotatingFileHandler\n\nI'm trying to do a test run of the logging module's RotatingFileHandler as follows: import logging from logging.handlers import RotatingFileHandler # logging.basicConfig(filename=\"example.log\", level=logging.DEBUG) logger = logging.getLogger('my_logger') handler = RotatingFileHandler(\"my_log.log\", maxBytes=2000, backupCount=10) logger.addHandler(handler) for _ in range(10000): logger.debug(\"Hello, world!\") However, with logging.basicConfig line commented out, the resulting my_log.log file contains no data: If I comment in the line with logging.basicConfig(filename=\"example.log\", level=logging.DEBUG), I get the expected my_log.log files with numbered suffixes. However, there is also the example.log which is a (relatively) large file: How can I set up the logging so that it only generates the my_log.log files, and not the large example.log file?", "output": "Python provides 5 logging levels out of the box (in increasing order of severity): DEBUG, INFO, WARNING, ERROR and CRITICAL. The default one is WARNING. The docs says, that Logging messages which are less severe than lvl will be ignored. So if you use .debug with the default settings, you won't see anything in your logs. The easiest fix would be to use logger.warning function rather than logger.debug: import logging from logging.handlers import RotatingFileHandler logger = logging.getLogger('my_logger') handler = RotatingFileHandler('my_log.log', maxBytes=2000, backupCount=10) logger.addHandler(handler) for _ in range(10000): logger.warning('Hello, world!') And if you want to change logger level you can use .setLevel method: import logging from logging.handlers import RotatingFileHandler logger = logging.getLogger('my_logger') logger.setLevel(logging.DEBUG) handler = RotatingFileHandler('my_log.log', maxBytes=2000, backupCount=10) logger.addHandler(handler) for _ in range(10000): logger.debug('Hello, world!')"} +{"question_id": 2239737, "score": 85, "creation_date": 1265830450, "tags": ["python"], "instruction": "Is it better to use \"is\" or \"==\" for number comparison in Python?\n\nIs it better to use the \"is\" operator or the \"==\" operator to compare two numbers in Python? Examples: >>> a = 1 >>> a is 1 True >>> a == 1 True >>> a is 0 False >>> a == 0 False", "output": "Use ==. Sometimes, on some python implementations, by coincidence, integers from -5 to 256 will work with is (in CPython implementations for instance). 
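A quick illustration of that coincidence, run at a CPython interactive prompt (a sketch only; the exact results are an implementation detail and can differ between Python builds, and between the REPL and a script, because of constant folding):

x = 256
y = 255 + 1
print(x is y)   # usually True in CPython: ints from -5 to 256 are cached as singletons
a = 1000
b = 999 + 1
print(a is b)   # may well be False: 1000 is outside the cached range
print(a == b)   # True: == compares values, which is what you almost always want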
But don't rely on this or use it in real programs."} +{"question_id": 7255885, "score": 85, "creation_date": 1314787804, "tags": ["python", "yaml", "pyyaml"], "instruction": "Save/dump a YAML file with comments in PyYAML\n\nI have a yaml file that looks like this: # The following key opens a door key: value Is there a way I can load and dump this data while maintaining the comment?", "output": "PyYAML throws away comments at a very low level (in Scanner.scan_to_next_token). While you could adapt or extend it to handle comments in its whole stack, this would be a major modification. Dumping (=emitting) comments seems to be easier and was discussed in ticket 114 on the old PyYAML bug tracker. As of 2023, the feature request about adding support for loading comments is still stalling."} +{"question_id": 31432843, "score": 85, "creation_date": 1436969499, "tags": ["python", "apache-spark", "median", "rdd", "pyspark"], "instruction": "How to find median and quantiles using Spark\n\nHow can I find median of an RDD of integers using a distributed method, IPython, and Spark? The RDD is approximately 700,000 elements and therefore too large to collect and find the median. This question is similar to this question: How can I calculate exact median with Apache Spark?. However, the answer to the question is in Scala, which I do not know. Using the thinking for the Scala answer, I am trying to write a similar solution in Python. I know I first want to sort the RDD. I do not know how. I see the sortBy (Sorts this RDD by the given keyfunc) and sortByKey (Sorts this RDD, which is assumed to consist of (key, value) pairs.) methods. I think both use key value and my RDD only has integer elements. First, I was thinking of doing myrdd.sortBy(lambda x: x)? Next I will find the length of the rdd (rdd.count()). Finally, I want to find the element or 2 elements at the center of the RDD. I need help with this method too. EDIT: I had an idea. Maybe I can index my RDD and then key = index and value = element. And then I can try to sort by value? I don't know if this is possible because there is only a sortByKey method.", "output": "Ongoing work SPARK-30569 - Add DSL functions invoking percentile_approx Spark 2.0+: You can use approxQuantile method which implements Greenwald-Khanna algorithm: Python: df.approxQuantile(\"x\", [0.5], 0.25) Scala: df.stat.approxQuantile(\"x\", Array(0.5), 0.25) where the last parameter is a relative error. The lower the number the more accurate results and more expensive computation. Since Spark 2.2 (SPARK-14352) it supports estimation on multiple columns: df.approxQuantile([\"x\", \"y\", \"z\"], [0.5], 0.25) and df.approxQuantile(Array(\"x\", \"y\", \"z\"), Array(0.5), 0.25) Underlying methods can be also used in SQL aggregation (both global and groped) using approx_percentile function: > SELECT approx_percentile(10.0, array(0.5, 0.4, 0.1), 100); [10.0,10.0,10.0] > SELECT approx_percentile(10.0, 0.5, 100); 10.0 Spark < 2.0 Python As I've mentioned in the comments it is most likely not worth all the fuss. If data is relatively small like in your case then simply collect and compute median locally: import numpy as np np.random.seed(323) rdd = sc.parallelize(np.random.randint(1000000, size=700000)) %time np.median(rdd.collect()) np.array(rdd.collect()).nbytes It takes around 0.01 second on my few years old computer and around 5.5MB of memory. 
If data is much larger sorting will be a limiting factor so instead of getting an exact value it is probably better to sample, collect, and compute locally. But if you really want a to use Spark something like this should do the trick (if I didn't mess up anything): from numpy import floor import time def quantile(rdd, p, sample=None, seed=None): \"\"\"Compute a quantile of order p \u2208 [0, 1] :rdd a numeric rdd :p quantile(between 0 and 1) :sample fraction of and rdd to use. If not provided we use a whole dataset :seed random number generator seed to be used with sample \"\"\" assert 0 <= p <= 1 assert sample is None or 0 < sample <= 1 seed = seed if seed is not None else time.time() rdd = rdd if sample is None else rdd.sample(False, sample, seed) rddSortedWithIndex = (rdd. sortBy(lambda x: x). zipWithIndex(). map(lambda (x, i): (i, x)). cache()) n = rddSortedWithIndex.count() h = (n - 1) * p rddX, rddXPlusOne = ( rddSortedWithIndex.lookup(x)[0] for x in int(floor(h)) + np.array([0L, 1L])) return rddX + (h - floor(h)) * (rddXPlusOne - rddX) And some tests: np.median(rdd.collect()), quantile(rdd, 0.5) ## (500184.5, 500184.5) np.percentile(rdd.collect(), 25), quantile(rdd, 0.25) ## (250506.75, 250506.75) np.percentile(rdd.collect(), 75), quantile(rdd, 0.75) (750069.25, 750069.25) Finally lets define median: from functools import partial median = partial(quantile, p=0.5) So far so good but it takes 4.66 s in a local mode without any network communication. There is probably way to improve this, but why even bother? Language independent (Hive UDAF): If you use HiveContext you can also use Hive UDAFs. With integral values: rdd.map(lambda x: (float(x), )).toDF([\"x\"]).registerTempTable(\"df\") sqlContext.sql(\"SELECT percentile_approx(x, 0.5) FROM df\") With continuous values: sqlContext.sql(\"SELECT percentile(x, 0.5) FROM df\") In percentile_approx you can pass an additional argument which determines a number of records to use."} +{"question_id": 72604922, "score": 85, "creation_date": 1655132088, "tags": ["python", "json", "dictionary", "python-dataclasses"], "instruction": "How to convert Python dataclass to dictionary of string literal?\n\nGiven a dataclass like below: class MessageHeader(BaseModel): message_id: uuid.UUID def dict(self, **kwargs): return json.loads(self.json()) I would like to get a dictionary of string literal when I call dict on MessageHeader The desired outcome of dictionary is like below: {'message_id': '383b0bfc-743e-4738-8361-27e6a0753b5a'} I want to avoid using 3rd party library like pydantic & I do not want to use json.loads(self.json()) as there are extra round trips Is there any better way to convert a dataclass to a dictionary with string literal like above?", "output": "You can use dataclasses.asdict: from dataclasses import dataclass, asdict class MessageHeader(BaseModel): message_id: uuid.UUID def dict(self): return {k: str(v) for k, v in asdict(self).items()} If you're sure that your class only has string values, you can skip the dictionary comprehension entirely: class MessageHeader(BaseModel): message_id: uuid.UUID dict = asdict"} +{"question_id": 50558458, "score": 85, "creation_date": 1527475377, "tags": ["python", "pandas", "dataframe", "group-by", "percentage"], "instruction": "Get frequency of item occurrences in a column as percentage\n\nI want to get a percentage of a particular value in a df column. Say I have a df with (col1, col2 , col3, gender) gender column has values of M, F, or Other. 
I want to get the percentage of M, F, Other values in the df. I have tried this, which gives me the number M, F, Other instances, but I want these as a percentage of the total number of values in the df. df.groupby('gender').size() Can someone help?", "output": "Use value_counts with normalize=True: df['gender'].value_counts(normalize=True) * 100 The result is a fraction in range (0, 1]. We multiply by 100 here in order to get the %."} +{"question_id": 47262955, "score": 85, "creation_date": 1510571471, "tags": ["python", "tensorflow", "keras", "deep-learning", "tf.keras"], "instruction": "How to import keras from tf.keras in Tensorflow?\n\nimport tensorflow as tf import tensorflow from tensorflow import keras from keras.layers import Dense I am getting the below error from keras.layers import Input, Dense Traceback (most recent call last): File \"\", line 1, in from keras.layers import Input, Dense ModuleNotFoundError: No module named 'keras' How do I solve this? Note: I am using Tensorflow version 1.4", "output": "Use the keras module from tensorflow like this: import tensorflow as tf Import classes from tensorflow.python.keras.layers import Input, Dense or use directly dense = tf.keras.layers.Dense(...) EDIT Tensorflow 2 from tensorflow.keras.layers import Input, Dense and the rest stays the same."} +{"question_id": 3911483, "score": 85, "creation_date": 1286854030, "tags": ["python", "slice"], "instruction": "How can I use Python built-in slice object?\n\nI know Pythonic slicing : l1[start:stop:step]. What's the use of built-in function slice? How can I use it ?", "output": "You create a slice by calling slice with the same fields you would use if doing [start:end:step] notation: sl = slice(0,4) To use the slice, just pass it as if it were the index into a list or string: >>> s = \"ABCDEFGHIJKL\" >>> sl = slice(0,4) >>> print(s[sl]) 'ABCD' Let's say you have a file of fixed-length text fields. You could define a list of slices to easily extract the values from each \"record\" in this file. data = \"\"\"\\ 0010GEORGE JETSON 12345 SPACESHIP ST HOUSTON TX 0020WILE E COYOTE 312 ACME BLVD TUCSON AZ 0030FRED FLINTSTONE 246 GRANITE LANE BEDROCK CA 0040JONNY QUEST 31416 SCIENCE AVE PALO ALTO CA\"\"\".splitlines() fieldslices = [slice(*fielddef) for fielddef in [ (0,4), (4, 21), (21,42), (42,56), (56,58), ]] fields = \"id name address city state\".split() for rec in data: for field,sl in zip(fields, fieldslices): print(\"{} : {}\".format(field, rec[sl])) print('') # or this same code using itemgetter, to make a function that # extracts all slices from a string into a tuple of values import operator rec_reader = operator.itemgetter(*fieldslices) for rec in data: for field, field_value in zip(fields, rec_reader(rec)): print(\"{} : {}\".format(field, field_value)) print('') Prints: id : 0010 name : GEORGE JETSON address : 12345 SPACESHIP ST city : HOUSTON state : TX id : 0020 name : WILE E COYOTE address : 312 ACME BLVD city : TUCSON state : AZ id : 0030 name : FRED FLINTSTONE address : 246 GRANITE LANE city : BEDROCK state : CA id : 0040 name : JONNY QUEST address : 31416 SCIENCE AVE city : PALO ALTO state : CA"} +{"question_id": 54899320, "score": 85, "creation_date": 1551248841, "tags": ["python", "django", "celery"], "instruction": "What is the meaning of bind = True keyword in celery?\n\nWhat is the meaning of bind=True in below celery code? When to use it and when not? 
@app.task(bind=True) def send_twitter_status(self, oauth, tweet): try: twitter = Twitter(oauth) twitter.update_status(tweet) except (Twitter.FailWhaleError, Twitter.LoginError) as exc: raise self.retry(exc=exc)", "output": "Just a small addition to other answers. As already stated, bound tasks have access to the task instance. One use case when this is needed are retries: @celery.task(bind=True, max_retries=5) def retrying(self): try: return 1/0 except Exception: self.retry(countdown=5) Another use case is when you want to define custom states for your tasks and be able to set it during task execution: @celery.task(bind=True) def show_progress(self, n): for i in range(n): self.update_state(state='PROGRESS', meta={'current': i, 'total': n})"} +{"question_id": 42983569, "score": 85, "creation_date": 1490291811, "tags": ["python", "generator", "fibonacci"], "instruction": "How to write a generator class?\n\nI see lot of examples of generator functions, but I want to know how to write generators for classes. Lets say, I wanted to write Fibonacci series as a class. class Fib: def __init__(self): self.a, self.b = 0, 1 def __next__(self): yield self.a self.a, self.b = self.b, self.a+self.b f = Fib() for i in range(3): print(next(f)) Output: Why is the value self.a not getting printed? Also, how do I write unittest for generators?", "output": "How to write a generator class? You're almost there, writing an Iterator class (I show a Generator at the end of the answer), but __next__ gets called every time you call the object with next, returning a generator object. Instead, to make your code work with the least changes, and the fewest lines of code, use __iter__, which makes your class instantiate an iterable (which isn't technically a generator): class Fib: def __init__(self): self.a, self.b = 0, 1 def __iter__(self): while True: yield self.a self.a, self.b = self.b, self.a+self.b When we pass an iterable to iter(), it gives us an iterator: >>> f = iter(Fib()) >>> for i in range(3): ... print(next(f)) ... 0 1 1 To make the class itself an iterator, it does require a __next__: class Fib: def __init__(self): self.a, self.b = 0, 1 def __next__(self): return_value = self.a self.a, self.b = self.b, self.a+self.b return return_value def __iter__(self): return self And now, since iter just returns the instance itself, we don't need to call it: >>> f = Fib() >>> for i in range(3): ... print(next(f)) ... 0 1 1 Why is the value self.a not getting printed? Here's your original code with my comments: class Fib: def __init__(self): self.a, self.b = 0, 1 def __next__(self): yield self.a # yield makes .__next__() return a generator! self.a, self.b = self.b, self.a+self.b f = Fib() for i in range(3): print(next(f)) So every time you called next(f) you got the generator object that __next__ returns: Also, how do I write unittest for generators? You still need to implement a send and throw method for a Generator from collections.abc import Iterator, Generator import unittest class Test(unittest.TestCase): def test_Fib(self): f = Fib() self.assertEqual(next(f), 0) self.assertEqual(next(f), 1) self.assertEqual(next(f), 1) self.assertEqual(next(f), 2) #etc... 
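        # ...a few more terms, just to show the pattern continues (illustrative only;
        # this Fib is expected to yield 0, 1, 1, 2, 3, 5, ... in that order):
        self.assertEqual(next(f), 3)
        self.assertEqual(next(f), 5)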
def test_Fib_is_iterator(self): f = Fib() self.assertIsInstance(f, Iterator) def test_Fib_is_generator(self): f = Fib() self.assertIsInstance(f, Generator) And now: >>> unittest.main(exit=False) ..F ====================================================================== FAIL: test_Fib_is_generator (__main__.Test) ---------------------------------------------------------------------- Traceback (most recent call last): File \"\", line 7, in test_Fib_is_generator AssertionError: <__main__.Fib object at 0x00000000031A6320> is not an instance of ---------------------------------------------------------------------- Ran 3 tests in 0.001s FAILED (failures=1) So let's implement a generator object, and leverage the Generator abstract base class from the collections module (see the source for its implementation), which means we only need to implement send and throw - giving us close, __iter__ (returns self), and __next__ (same as .send(None)) for free (see the Python data model on coroutines): class Fib(Generator): def __init__(self): self.a, self.b = 0, 1 def send(self, ignored_arg): return_value = self.a self.a, self.b = self.b, self.a+self.b return return_value def throw(self, type=None, value=None, traceback=None): raise StopIteration and using the same tests above: >>> unittest.main(exit=False) ... ---------------------------------------------------------------------- Ran 3 tests in 0.002s OK Python 2 The ABC Generator is only in Python 3. To do this without Generator, we need to write at least close, __iter__, and __next__ in addition to the methods we defined above. class Fib(object): def __init__(self): self.a, self.b = 0, 1 def send(self, ignored_arg): return_value = self.a self.a, self.b = self.b, self.a+self.b return return_value def throw(self, type=None, value=None, traceback=None): raise StopIteration def __iter__(self): return self def next(self): return self.send(None) def close(self): \"\"\"Raise GeneratorExit inside generator. 
\"\"\" try: self.throw(GeneratorExit) except (GeneratorExit, StopIteration): pass else: raise RuntimeError(\"generator ignored GeneratorExit\") Note that I copied close directly from the Python 3 standard library, without modification."} +{"question_id": 51700960, "score": 85, "creation_date": 1533532758, "tags": ["python", "python-3.x", "iterator", "python-3.7", "stopiteration"], "instruction": "\"RuntimeError: generator raised StopIteration\" every time I try to run app\n\nI am trying to run this code in Python 3.7: import web urls = ('/', 'index') if __name__ == \"__main__\": app = web.application(urls, globals()) app.run() But it gives me this error everytime: C:\\Users\\aidke\\Desktop>python app.py Traceback (most recent call last): File \"C:\\Users\\aidke\\AppData\\Local\\Programs\\Python\\Python37-32\\lib\\site-packages\\web\\utils.py\", line 526, in take yield next(seq) StopIteration The above exception was the direct cause of the following exception: Traceback (most recent call last): File \"app.py\", line 14, in app = web.application(urls, globals()) File \"C:\\Users\\aidke\\AppData\\Local\\Programs\\Python\\Python37-32\\lib\\site-packages\\web\\application.py\", line 62, in __init__ self.init_mapping(mapping) File \"C:\\Users\\aidke\\AppData\\Local\\Programs\\Python\\Python37-32\\lib\\site-packages\\web\\application.py\", line 130, in init_mapping self.mapping = list(utils.group(mapping, 2)) File \"C:\\Users\\aidke\\AppData\\Local\\Programs\\Python\\Python37-32\\lib\\site-packages\\web\\utils.py\", line 531, in group x = list(take(seq, size)) RuntimeError: generator raised StopIteration I tried someone else's code and the exact same thing happened. Additionally I tried reinstalling web.py(experimental) but it still didn't work.", "output": "To judge from the file paths, it looks like you're running Python 3.7. If so, you're getting caught by new-in-3.7 behavior: PEP 479 is enabled for all code in Python 3.7, meaning that StopIteration exceptions raised directly or indirectly in coroutines and generators are transformed into RuntimeError exceptions. (Contributed by Yury Selivanov in bpo-32670.) Before this change, a StopIteration raised by, or passing through, a generator simply ended the generator's useful life (the exception was silently swallowed). The module you're using will have to be recoded to work as intended with 3.7. Chances are they'll need to change: yield next(seq) to: try: yield next(seq) except StopIteration: return"} +{"question_id": 31751464, "score": 85, "creation_date": 1438362027, "tags": ["python", "python-imaging-library"], "instruction": "How do I close an image opened in Pillow?\n\nI have a python file with the Pillow library imported. I can open an image with Image.open(test.png) But how do I close that image? I'm not using Pillow to edit the image, just to show the image and allow the user to choose to save it or delete it.", "output": "With Image.close() You can also do it in a with block: with Image.open('test.png') as test_image: do_things(test_image) An example of using Image.close(): test = Image.open('test.png') test.close()"} +{"question_id": 2572916, "score": 85, "creation_date": 1270334396, "tags": ["python", "matrix", "numpy"], "instruction": "Numpy \u2018smart\u2019 symmetric matrix\n\nIs there a smart and space-efficient symmetric matrix in numpy which automatically (and transparently) fills the position at [j][i] when [i][j] is written to? 
import numpy a = numpy.symmetric((3, 3)) a[0][1] = 1 a[1][0] == a[0][1] # True print(a) # [[0 1 0], [1 0 0], [0 0 0]] assert numpy.all(a == a.T) # for any symmetric matrix An automatic Hermitian would also be nice, although I won\u2019t need that at the time of writing.", "output": "If you can afford to symmetrize the matrix just before doing calculations, the following should be reasonably fast: def symmetrize(a): \"\"\" Return a symmetrized version of NumPy array a. Values 0 are replaced by the array value at the symmetric position (with respect to the diagonal), i.e. if a_ij = 0, then the returned array a' is such that a'_ij = a_ji. Diagonal values are left untouched. a -- square NumPy array, such that a_ij = 0 or a_ji = 0, for i != j. \"\"\" return a + a.T - numpy.diag(a.diagonal()) This works under reasonable assumptions (such as not doing both a[0, 1] = 42 and the contradictory a[1, 0] = 123 before running symmetrize). If you really need a transparent symmetrization, you might consider subclassing numpy.ndarray and simply redefining __setitem__: class SymNDArray(numpy.ndarray): \"\"\" NumPy array subclass for symmetric matrices. A SymNDArray arr is such that doing arr[i,j] = value automatically does arr[j,i] = value, so that array updates remain symmetrical. \"\"\" def __setitem__(self, (i, j), value): super(SymNDArray, self).__setitem__((i, j), value) super(SymNDArray, self).__setitem__((j, i), value) def symarray(input_array): \"\"\" Return a symmetrized version of the array-like input_array. The returned array has class SymNDArray. Further assignments to the array are thus automatically symmetrized. \"\"\" return symmetrize(numpy.asarray(input_array)).view(SymNDArray) # Example: a = symarray(numpy.zeros((3, 3))) a[0, 1] = 42 print a # a[1, 0] == 42 too! (or the equivalent with matrices instead of arrays, depending on your needs). This approach even handles more complicated assignments, like a[:, 1] = -1, which correctly sets a[1, :] elements. Note that Python 3 removed the possibility of writing def \u2026(\u2026, (i, j),\u2026), so the code has to be slightly adapted before running with Python 3: def __setitem__(self, indexes, value): (i, j) = indexes\u2026"} +{"question_id": 62106028, "score": 85, "creation_date": 1590858791, "tags": ["python", "numpy", "range", "linspace"], "instruction": "What is the difference between np.linspace and np.arange?\n\nI have always used np.arange. I recently came across np.linspace. I am wondering what exactly is the difference between them... Looking at their documentation: np.arange: Return evenly spaced values within a given interval. np.linspace: Return evenly spaced numbers over a specified interval. The only difference I can see is linspace having more options... Like choosing to include the last element. Which one of these two would you recommend and why? And in which cases is np.linspace superior?", "output": "np.linspace allows you to define how many values you get including the specified min and max value. It infers the stepsize: >>> np.linspace(0,1,11) array([0. , 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1. ]) np.arange allows you to define the stepsize and infers the number of steps(the number of values you get). >>> np.arange(0,1,.1) array([0. , 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]) contributions from user2357112: np.arange excludes the maximum value unless rounding error makes it do otherwise. For example, the following results occur due to rounding error: >>> numpy.arange(1, 1.3, 0.1) array([1. 
, 1.1, 1.2, 1.3]) You can exclude the stop value (in our case 1.3) using endpoint=False: >>> numpy.linspace(1, 1.3, 3, endpoint=False) array([1. , 1.1, 1.2])"} +{"question_id": 42756537, "score": 85, "creation_date": 1489378601, "tags": ["python", "python-3.x", "string-formatting", "python-3.6", "f-string"], "instruction": "f-string syntax for unpacking a list with brace suppression\n\nI have been examining some of my string format options using the new f-string format. I routinely need to unpack lists and other iterables of unknown length. Currently I use the following... >>> a = [1, 'a', 3, 'b'] >>> (\"unpack a list: \" + \" {} \"*len(a)).format(*a) 'unpack a list: 1 a 3 b ' This, albeit a bit cumbersome, does the job using pre-3.6 .format notation. The new f-string format option is interesting given runtime string concatenation. It is the replication of the number of {} that I am having problems with. In my previous example, I simply created the necessary structure and unpacked within the .format() section. Attempts to do this yielded one variant that worked, however: 1) Both curly brackets together doesn't unpack... >>> 'unpack a list' f' {{*a}}' 'unpack a list {*a}' 2) Adding spaces around the interior {} pair: This works but leaves opening and closing braces {, } present: >>> 'unpack a list' f' { {*a} }' \"unpack a list {1, 3, 'a', 'b'}\" 2b) Concatenating the variants into one f-string This made the look and syntax better, since the evaluation, apparently, is from left to right. This, however, still left the enclosing curly brackets present: >>> f'unpack a list { {*a} }' \"unpack a list {1, 3, 'a', 'b'}\" 3) Tried automatic unpacking with just {a} Perhaps, I was overthinking the whole procedure and hoping for some form of automatic unpacking. This simply yielded the list representation with the curly brackets being replaced with [] : >>> f'unpack a list {a}' \"unpack a list [1, 'a', 3, 'b']\" What is required to suppress the curly brackets in variant (2) above, or must I keep using the existing .format() method? I want to keep it simple and use the new capabilities offered by the f-string and not revert back beyond the python versions which pre-date what I am currently comfortable with. I am beginning to suspect that f'strings' do not offer a complete coverage of what is offered by its .format() sibling. I will leave it at that for now, since I haven't even ventured into the escape encoding and the inability to use \\ in an f-string. I have read the PEP and search widely, however, I feel I am missing the obvious or what I wish for is currently not possible. EDIT several hours later: 4) Use subscripting to manually slice off the brackets: str(a)[1:-2] I did find this variant which will serve for some cases that I need f'unpack a list: {str(a)[1:-2]}' \"unpack a list: 1, 'a', 3, 'b\" But the slicing is little more than a convenience and still leaves the string quotes around the resultant. 5) and the final solution from @SenhorLucas a = np.arange(10) print(f\"{*a,}\") (0, 1, 2, 3, 4, 5, 6, 7, 8, 9) Unpacking with trailing comma.", "output": "Shortest solution Just add a comma after the unpacked list. >>> a = [1, 2, 3] >>> print(f\"Unpacked list: {*a,}\") Unpacked list: (1, 2, 3) There is a longer explanation to this syntax in this thread. Caveat With this solution is that we do not have much control over the output formatting. We are stuck with whatever format returns, which is actually (and suprisingly) the result from tuple.__repr__. 
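A small check of that claim (illustrative only, with a small list):

a = [1, 2, 3]
print(f"{*a,}")       # (1, 2, 3)  -- `*a,` builds a tuple inside the replacement field
print(repr((*a,)))    # '(1, 2, 3)' -- the same text that tuple.__repr__ produces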
So the parenthesis that we get might be misleading, since we actually had a list, and not a tuple. If this is too bad to put up with, I would recommend using the approach suggested by Zero Piraeus: >>> a = [1, 2, 3] >>> print(f'Unpacked list: {\" \".join(str(i) for i in a)}') This gives us the flexibility to format the list as we wish."} +{"question_id": 49841324, "score": 85, "creation_date": 1523791187, "tags": ["python", "machine-learning", "scikit-learn"], "instruction": "What does calling fit() multiple times on the same model do?\n\nAfter I instantiate a scikit model (e.g. LinearRegression), if I call its fit() method multiple times (with different X and y data), what happens? Does it fit the model on the data like if I just re-instantiated the model (i.e. from scratch), or does it keep into accounts data already fitted from the previous call to fit()? Trying with LinearRegression (also looking at its source code) it seems to me that every time I call fit(), it fits from scratch, ignoring the result of any previous call to the same method. I wonder if this true in general, and I can rely on this behavior for all models/pipelines of scikit learn.", "output": "If you will execute model.fit(X_train, y_train) for a second time - it'll overwrite all previously fitted coefficients, weights, intercept (bias), etc. If you want to fit just a portion of your data set and then to improve your model by fitting a new data, then you can use estimators, supporting \"Incremental learning\" (those, that implement partial_fit() method)"} +{"question_id": 5658622, "score": 85, "creation_date": 1302755989, "tags": ["python", "path", "subprocess", "environment", "popen"], "instruction": "python subprocess Popen environment PATH?\n\nSuppose there's an executable and a Python script to launch it, and they're located in 'sibling' subdirectories, e.g. /tmp/subdir1/myexecutable /tmp/subdir2/myscript.py If in /tmp and running python subdir2/myscript.py with a relative path to executable # myscript.py from subprocess import Popen proc = Popen([\"../subdir1/myexecutable\"]) It makes OSError: [Errno 2] No such file or directory. How does the Python search for the executable? Does it use the current working directory and/or location of the script? Does it use PATH and/or PYTHONPATH? Can you change where and how subprocess.Popen searches for the executable? Are commands, absolute and relative paths for executables treated differently? Are there differences between Linux and Windows? What does shell=True or shell=False influence?", "output": "Relative paths (paths containing slashes) never get checked in any PATH, no matter what you do. They are relative to the current working directory only. If you need to resolve relative paths, you will have to search through the PATH manually. If you want to run a program relative to the location of the Python script, use __file__ and go from there to find the absolute path of the program, and then use the absolute path in Popen. Searching in the current process' environment variable PATH There is an issue in the Python bug tracker about how Python deals with bare commands (no slashes). Basically, on Unix/Mac Popen behaves like os.execvp when the argument env=None (some unexpected behavior has been observed and noted at the end): On POSIX, the class uses os.execvp()-like behavior to execute the child program. This is actually true for both shell=False and shell=True, provided env=None. 
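A quick, hedged illustration of this execvp-like lookup on POSIX (what the behavior means is spelled out just below; the commented-out call is only there to show what is not searched for):

import subprocess

# A bare name (no slash) is looked up on PATH, execvp-style, using this process' environment:
subprocess.Popen(["echo", "hello"])

# A name containing a slash is never looked up on PATH; it is resolved against the
# current working directory only, so this would raise FileNotFoundError unless ./echo exists:
# subprocess.Popen(["./echo", "hello"])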
What this behavior means is explained in the documentation of the function os.execvp: The variants which include a \u201cp\u201d near the end (execlp(), execlpe(), execvp(), and execvpe()) will use the PATH environment variable to locate the program file. When the environment is being replaced (using one of the exec*e variants, discussed in the next paragraph), the new environment is used as the source of the PATH variable. For execle(), execlpe(), execve(), and execvpe() (note that these all end in \u201ce\u201d), the env parameter must be a mapping which is used to define the environment variables for the new process (these are used instead of the current process\u2019 environment); the functions execl(), execlp(), execv(), and execvp() all cause the new process to inherit the environment of the current process. The second quoted paragraph implies that execvp will use the current process' environment variables. Combined with the first quoted paragraph, we deduce that execvp will use the value of the environment variable PATH from the environment of the current process. This means that Popen looks at the value of PATH as it was when Python launched (the Python that runs the Popen instantiation) and no amount of changing os.environ will help you fix that. Also, on Windows with shell=False, Popen pays no attention to PATH at all, and will only look in relative to the current working directory. What shell=True does What happens if we pass shell=True to Popen? In that case, Popen simply calls the shell: The shell argument (which defaults to False) specifies whether to use the shell as the program to execute. That is to say, Popen does the equivalent of: Popen(['/bin/sh', '-c', args[0], args[1], ...]) In other words, with shell=True Python will directly execute /bin/sh, without any searching (passing the argument executable to Popen can change this, and it seems that if it is a string without slashes, then it will be interpreted by Python as the shell program's name to search for in the value of PATH from the environment of the current process, i.e., as it searches for programs in the case shell=False described above). In turn, /bin/sh (or our shell executable) will look for the program we want to run in its own environment's PATH, which is the same as the PATH of the Python (current process), as deduced from the code after the phrase \"That is to say...\" above (because that call has shell=False, so it is the case already discussed earlier). Therefore, the execvp-like behavior is what we get with both shell=True and shell=False, as long as env=None. Passing env to Popen So what happens if we pass env=dict(PATH=...) to Popen (thus defining an environment variable PATH in the environment of the program that will be run by Popen)? In this case, the new environment is used to search for the program to execute. Quoting the documentation of Popen: If env is not None, it must be a mapping that defines the environment variables for the new process; these are used instead of the default behavior of inheriting the current process\u2019 environment. Combined with the above observations, and from experiments using Popen, this means that Popen in this case behaves like the function os.execvpe. If shell=False, Python searches for the given program in the newly defined PATH. As already discussed above for shell=True, in that case the program is either /bin/sh, or, if a program name is given with the argument executable, then this alternative (shell) program is searched for in the newly defined PATH. 
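A minimal sketch of the shell=False case just described; myprog and /opt/tools/bin are hypothetical names used only for illustration:

import subprocess

# With env given, 'myprog' is searched for in /opt/tools/bin (the PATH inside env),
# not in the PATH of the parent Python process.
subprocess.Popen(["myprog"], env={"PATH": "/opt/tools/bin"})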
In addition, if shell=True, then inside the shell the search path that the shell will use to find the program given in args is the value of PATH passed to Popen via env. So with env != None, Popen searches in the value of the key PATH of env (if a key PATH is present in env). Propagating environment variables other than PATH as arguments There is a caveat about environment variables other than PATH: if the values of those variables are needed in the command (e.g., as command-line arguments to the program being run), then even if these are present in the env given to Popen, they will not get interpreted without shell=True. This is easily avoided without changing shell=True: insert those value directly in the list argument args that is given to Popen. (Also, if these values come from Python's own environment, the method os.environ.get can be used to get their values). Using /usr/bin/env If you JUST need path evaluation and don't really want to run your command line through a shell, and are on UNIX, I advise using env instead of shell=True, as in path = '/dir1:/dir2' subprocess.Popen(['/usr/bin/env', '-P', path, 'progtorun', other, args], ...) This lets you pass a different PATH to the env process (using the option -P), which will use it to find the program. It also avoids issues with shell metacharacters and potential security issues with passing arguments through the shell. Obviously, on Windows (pretty much the only platform without a /usr/bin/env) you will need to do something different. About shell=True Quoting the Popen documentation: If shell is True, it is recommended to pass args as a string rather than as a sequence. Note: Read the Security Considerations section before using shell=True. Unexpected observations The following behavior was observed: This call raises FileNotFoundError, as expected: subprocess.call(['sh'], shell=False, env=dict(PATH='')) This call finds sh, which is unexpected: subprocess.call(['sh'], shell=False, env=dict(FOO='')) Typing echo $PATH inside the shell that this opens reveals that the PATH value is not empty, and also different from the value of PATH in the environment of Python. So it seems that PATH was indeed not inherited from Python (as expected in the presence of env != None), but still, it the PATH is nonempty. Unknown why this is the case. This call raises FileNotFoundError, as expected: subprocess.call(['tree'], shell=False, env=dict(FOO='')) This finds tree, as expected: subprocess.call(['tree'], shell=False, env=None)"} +{"question_id": 53765366, "score": 85, "creation_date": 1544715681, "tags": ["python", "connection", "pool", "urllib3"], "instruction": "urllib3 connectionpool - Connection pool is full, discarding connection\n\nDoes seeing the urllib3.connectionpool WARNING - Connection pool is full, discarding connection mean that I am effectively loosing data (because of lost connection) OR Does it mean that connection is dropped (because pool is full); however, the same connection will be re-tried later on when connection pool becomes available?", "output": "No data is being lost! The connection is being discarded after the request is completed (because the pool is full, as mentioned). This means that this particular connection is not going to be re-used in the future. Because a urllib3 PoolManager reuses connections, it will limit how many connections are retained per host to avoid accumulating too many unused sockets. 
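As a rough sketch of why the warning appears in the first place (parameter names are urllib3's documented ones; the exact defaults may vary by version):

import urllib3

# Each per-host pool keeps at most `maxsize` idle connections (1 by default for pools
# created by PoolManager). Extra concurrent requests still succeed, but their connections
# are discarded afterwards, which is what logs "Connection pool is full, discarding connection".
http = urllib3.PoolManager(num_pools=10, maxsize=1, block=False)
r = http.request("GET", "http://example.com/")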
The PoolManager can be configured to avoid creating excess sockets when the pool doesn't have any idle sockets available with PoolManager(..., block=True). If you're relying on concurrency, it could be a good idea to increase the size of the pool (maxsize) to be at least as large as the number of threads you're using, so that each thread effectively gets its own connection. More details here: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#customizing-pool-behavior"} +{"question_id": 1744203, "score": 85, "creation_date": 1258398626, "tags": ["python", "python-3.x", "django", "django-admin", "one-to-one"], "instruction": "Django Admin: OneToOne Relation as an Inline?\n\nI'm putting together the admin for a satchmo application. Satchmo uses OneToOne relations to extend the base Product model, and I'd like to edit it all on one page. Is it possible to have a OneToOne relation as an Inline? If not, what is the best way to add a few fields to a given page of my admin that will eventually be saved into the OneToOne relation? for example: class Product(models.Model): name = models.CharField(max_length=100) ... class MyProduct(models.Model): product = models.OneToOne(Product) ... I tried this for my admin but it does not work, and seems to expect a Foreign Key: class ProductInline(admin.StackedInline): model = Product fields = ('name',) class MyProductAdmin(admin.ModelAdmin): inlines = (AlbumProductInline,) admin.site.register(MyProduct, MyProductAdmin) Which throws this error: has no ForeignKey to Is the only way to do this a Custom Form? edit: Just tried the following code to add the fields directly... also does not work: class AlbumAdmin(admin.ModelAdmin): fields = ('product__name',)", "output": "It's perfectly possible to use an inline for a OneToOne relationship. However, the actual field defining the relationship has to be on the inline model, not the parent one - in just the same way as for a ForeignKey. Switch it over and it will work. Edit after comment: you say the parent model is already registered with the admin: then unregister it and re-register. from original.satchmo.admin import ProductAdmin class MyProductInline(admin.StackedInline): model = MyProduct class ExtendedProductAdmin(ProductAdmin): inlines = ProductAdmin.inlines + (MyProductInline,) admin.site.unregister(Product) admin.site.register(Product, ExtendedProductAdmin) Update 2020 (Django 3.1.1) This method is still working but some types has changed in new Django version since inlines in ExtendedProductAdmin should now be added as list and not tuple, like this: class ExtendedProductAdmin(ProductAdmin): inlines = ProductAdmin.inlines + [MyProductInline] Or you will get this error: inlines = ProductAdmin.inlines + (MyProductInline,) TypeError: can only concatenate list (not \"tuple\") to list"} +{"question_id": 29627341, "score": 85, "creation_date": 1429014482, "tags": ["python", "pytest"], "instruction": "Pytest where to store expected data\n\nTesting function I need to pass parameters and see the output matches the expected output. It is easy when function's response is just a small array or a one-line string which can be defined inside the test function, but suppose function I test modifies a config file which can be huge. Or the resulting array is something 4 lines long if I define it explicitly. Where do I store that so my tests remain clean and easy to maintain? 
Right now if that is string I just put a file near the .py test and do open() it inside the test: def test_if_it_works(): with open('expected_answer_from_some_function.txt') as res_file: expected_data = res_file.read() input_data = ... # Maybe loaded from a file as well assert expected_data == if_it_works(input_data) I see many problems with such approach, like the problem of maintaining this file up to date. It looks bad as well. I can make things probably better moving this to a fixture: @pytest.fixture def expected_data() with open('expected_answer_from_some_function.txt') as res_file: expected_data = res_file.read() return expected_data @pytest.fixture def input_data() return '1,2,3,4' def test_if_it_works(input_data, expected_data): assert expected_data == if_it_works(input_data) That just moves the problem to another place and usually I need to test if function works in case of empty input, input with a single item or multiple items, so I should create one big fixture including all three cases or multiple fixtures. In the end code gets quite messy. If a function expects a complicated dictionary as an input or gives back the dictionary of the same huge size test code becomes ugly: @pytest.fixture def input_data(): # It's just an example return {['one_value': 3, 'one_value': 3, 'one_value': 3, 'anotherky': 3, 'somedata': 'somestring'], ['login': 3, 'ip_address': 32, 'value': 53, 'one_value': 3], ['one_vae': 3, 'password': 13, 'lue': 3]} It's quite hard to read tests with such fixtures and keep them up to date. Update After searching a while I found a library which solved a part of a problem when instead of big config files I had large HTML responses. It's betamax. For easier usage I created a fixture: from betamax import Betamax @pytest.fixture def session(request): session = requests.Session() recorder = Betamax(session) recorder.use_cassette(os.path.join(os.path.dirname(__file__), 'fixtures', request.function.__name__) recorder.start() request.addfinalizer(recorder.stop) return session So now in my tests I just use the session fixture and every request I make is being serialized automatically to the fixtures/test_name.json file so the next time I execute the test instead of doing a real HTTP request library loads it from the filesystem: def test_if_response_is_ok(session): r = session.get(\"http://google.com\") It's quite handy because in order to keep these fixtures up to date I just need to clean the fixtures folder and rerun my tests.", "output": "I had a similar problem once, where I have to test configuration file against an expected file. That's how I fixed it: Create a folder with the same name of your test module and at the same location. Put all your expected files inside that folder. test_foo/ expected_config_1.ini expected_config_2.ini test_foo.py Create a fixture responsible for moving the contents of this folder to a temporary file. I did use of tmpdir fixture for this matter. from __future__ import unicode_literals from distutils import dir_util from pytest import fixture import os @fixture def datadir(tmpdir, request): ''' Fixture responsible for searching a folder with the same name of test module and, if available, moving all contents to a temporary directory so tests can use them freely. 
''' filename = request.module.__file__ test_dir, _ = os.path.splitext(filename) if os.path.isdir(test_dir): dir_util.copy_tree(test_dir, bytes(tmpdir)) return tmpdir Important: If you are using Python 3, replace dir_util.copy_tree(test_dir, bytes(tmpdir)) with dir_util.copy_tree(test_dir, str(tmpdir)). Use your new fixture. def test_foo(datadir): expected_config_1 = datadir.join('expected_config_1.ini') expected_config_2 = datadir.join('expected_config_2.ini') Remember: datadir is just the same as tmpdir fixture, plus the ability of working with your expected files placed into the a folder with the very name of test module."} +{"question_id": 1848464, "score": 85, "creation_date": 1259947709, "tags": ["python", "pdf", "parsing", "text-extraction", "information-extraction"], "instruction": "PDF Parsing Using Python - extracting formatted and plain texts\n\nI'm looking for a PDF library which will allow me to extract the text from a PDF document. I've looked at PyPDF, and this can extract the text from a PDF document very nicely. The problem with this is that if there are tables in the document, the text in the tables is extracted in-line with the rest of the document text. This can be problematic because it produces sections of text that aren't useful and look garbled (for instance, lots of numbers mashed together). I'd like to extract the text from a PDF document, excluding any tables and special formatting. Is there a library out there that does this?", "output": "You can also take a look at PDFMiner (or for older versions of Python see PDFMiner and PDFMiner). A particular feature of interest in PDFMiner is that you can control how it regroups text parts when extracting them. You do this by specifying the space between lines, words, characters, etc. So, maybe by tweaking this you can achieve what you want (that depends of the variability of your documents). PDFMiner can also give you the location of the text in the page, it can extract data by Object ID and other stuff. So dig in PDFMiner and be creative! But your problem is really not an easy one to solve because, in a PDF, the text is not continuous, but made from a lot of small groups of characters positioned absolutely in the page. The focus of PDF is to keep the layout intact. It's not content oriented but presentation oriented."} +{"question_id": 6764909, "score": 84, "creation_date": 1311177796, "tags": ["python", "list"], "instruction": "How to remove all duplicate items from a list\n\nHow would I use python to check a list and delete all duplicates? I don't want to have to specify what the duplicate item is - I want the code to figure out if there are any and remove them if so, keeping only one instance of each. It also must work if there are multiple duplicates in a list. For example, in my code below, the list lseparatedOrbList has 12 items - one is repeated six times, one is repeated five times, and there is only one instance of one. I want it to change the list so there are only three items - one of each, and in the same order they appeared before. 
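For reference, a compact order-preserving approach is the dict.fromkeys trick, a sketch that assumes Python 3.7+ where plain dicts keep insertion order:

values = [5, 2, 5, 5, 2, 9]
deduped = list(dict.fromkeys(values))   # [5, 2, 9] - keeps the first occurrence of each item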
I tried this: for i in lseparatedOrbList: for j in lseparatedOrblist: if lseparatedOrbList[i] == lseparatedOrbList[j]: lseparatedOrbList.remove(lseparatedOrbList[j]) But I get the error: Traceback (most recent call last): File \"qchemOutputSearch.py\", line 123, in for j in lseparatedOrblist: NameError: name 'lseparatedOrblist' is not defined I'm guessing because it's because I'm trying to loop through lseparatedOrbList while I loop through it, but I can't think of another way to do it.", "output": "Just make a new list to populate, if the item for your list is not yet in the new list input it, else just move on to the next item in your original list. for i in mylist: if i not in newlist: newlist.append(i)"} +{"question_id": 12402561, "score": 84, "creation_date": 1347525838, "tags": ["python", "matplotlib", "font-size", "legend"], "instruction": "How to set font size of Matplotlib axis Legend?\n\nI have a code like this: import matplotlib.pyplot as plt from matplotlib.pyplot import * from matplotlib.font_manager import FontProperties fontP = FontProperties() fontP.set_size('xx-small') fig=plt.figure() ax1=fig.add_subplot(111) plot([1,2,3], label=\"test1\") ax1.legend(loc=0, ncol=1, bbox_to_anchor=(0, 0, 1, 1), prop = fontP,fancybox=True,shadow=False,title='LEGEND') plt.show() It can be seen in the plot that the setting in Fontsize does not affect the Legend Title font size. How to set the font size of the legend title to a smaller size?", "output": "This is definitely an old question, but was frustrating me too and none of the other answers changed the legend title fontsize at all, but instead just changed the rest of the text. So after banging my head against the matplotlib documentation for awhile I came up with this. legend = ax1.legend(loc=0, ncol=1, bbox_to_anchor=(0, 0, 1, 1), prop = fontP,fancybox=True,shadow=False,title='LEGEND') plt.setp(legend.get_title(),fontsize='xx-small') As of Matplotlib 3.0.3, you can also set it globally with plt.rcParams['legend.title_fontsize'] = 'xx-small'"} +{"question_id": 16778435, "score": 84, "creation_date": 1369678100, "tags": ["python", "html", "urlopen"], "instruction": "Python check if website exists\n\nI wanted to check if a certain website exists, this is what I'm doing: user_agent = 'Mozilla/20.0.1 (compatible; MSIE 5.5; Windows NT)' headers = { 'User-Agent':user_agent } link = \"http://www.abc.com\" req = urllib2.Request(link, headers = headers) page = urllib2.urlopen(req).read() - ERROR 402 generated here! If the page doesn't exist (error 402, or whatever other errors), what can I do in the page = ... line to make sure that the page I'm reading does exit?", "output": "You can use HEAD request instead of GET. It will only download the header, but not the content. Then you can check the response status from the headers. 
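For current Python 3 code, a sketch of the same HEAD idea with the requests library; the URL and the 5-second timeout are illustrative:

import requests

try:
    resp = requests.head('http://www.example.com', timeout=5, allow_redirects=True)
    print('web site exists' if resp.status_code < 400 else 'web site does not exist')
except requests.RequestException:
    print('web site does not exist')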
For python 2.7.x, you can use httplib: import httplib c = httplib.HTTPConnection('www.example.com') c.request(\"HEAD\", '') if c.getresponse().status == 200: print('web site exists') or urllib2: import urllib2 try: urllib2.urlopen('http://www.example.com/some_page') except urllib2.HTTPError, e: print(e.code) except urllib2.URLError, e: print(e.args) or for 2.7 and 3.x, you can install requests import requests response = requests.get('http://www.example.com') if response.status_code == 200: print('Web site exists') else: print('Web site does not exist')"} +{"question_id": 28089942, "score": 84, "creation_date": 1421933185, "tags": ["python", "tkinter"], "instruction": "Difference between \"fill\" and \"expand\" options for tkinter pack method\n\nWhat's the difference between the \"fill\" and \"expand\" options for Tkinter's pack method? I have actually looked up about it everywhere, and I am unable to find the satisfactory answer. I found the following: fill option: it determines whether to use up more space or keep \"one's own\" dimensions. expand option: it deals with the expansion of parent widget. The problem is that these two sound more or less the same. I even tried out a few examples by toggling between the 4 values of fill and 2 values of expand but received more or less the same output in 2 or 3 cases, because of which I have this query.", "output": "I'm done with trial and error. Here is an overview: import tkinter as tk root = tk.Tk() root.geometry() for e, expand in enumerate([False, True]): for f, fill in enumerate([None, tk.X, tk.Y, tk.BOTH]): for s, side in enumerate([tk.TOP, tk.LEFT, tk.BOTTOM, tk.RIGHT]): position = '+{}+{}'.format(s * 205 + 100 + e * 820, f * 235 + 100) win = tk.Toplevel(root) win.geometry('200x200'+position) text = str(\"side='{}'\\nfill='{}'\\nexpand={}\".format(side, fill, str(expand))) tk.Label(win, text=text, bg=['#FF5555', '#55FF55'][e]).pack(side=side, fill=fill, expand=expand) root.mainloop()"} +{"question_id": 65644782, "score": 84, "creation_date": 1610208659, "tags": ["python", "ubuntu", "pip", "ubuntu-20.04"], "instruction": "How to install pip for Python 3.9 on Ubuntu 20.04\n\nUbuntu 20.04 comes with Python 3.8. I cannot uninstall Python 3.8 but I need Python 3.9 I went ahead and installed Python 3.9 from: sudo add-apt-repository ppa:deadsnakes/ppa sudo apt install python3.9 How do I install pip for python 3.9? 
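Referring back to the tkinter fill/expand answer above, a smaller sketch of the practical difference; the widgets and colour are arbitrary choices:

import tkinter as tk

root = tk.Tk()
# fill=X: stretch horizontally within the slice of space the packer hands out
tk.Label(root, text='header', bg='#ddd').pack(side=tk.TOP, fill=tk.X)
# expand=True: claim any leftover space in the window; fill=BOTH: grow into it
tk.Text(root).pack(side=tk.TOP, fill=tk.BOTH, expand=True)
root.mainloop()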
Installing pip using sudo apt-get install python3-pip does not work for me as it installs pip for python 3.8 Installing pip using python3.9 get-pip.py gives an error: ~/python_tools$ python3.9 get-pip.py Traceback (most recent call last): File \"/home/ubuntu/python_tools/get-pip.py\", line 23704, in main() File \"/home/ubuntu/python_tools/get-pip.py\", line 198, in main bootstrap(tmpdir=tmpdir) File \"/home/ubuntu/python_tools/get-pip.py\", line 82, in bootstrap from pip._internal.cli.main import main as pip_entry_point File \"\", line 259, in load_module File \"/tmp/tmpkwyc8h7j/pip.zip/pip/_internal/cli/main.py\", line 10, in File \"\", line 259, in load_module File \"/tmp/tmpkwyc8h7j/pip.zip/pip/_internal/cli/autocompletion.py\", line 9, in File \"\", line 259, in load_module File \"/tmp/tmpkwyc8h7j/pip.zip/pip/_internal/cli/main_parser.py\", line 7, in File \"\", line 259, in load_module File \"/tmp/tmpkwyc8h7j/pip.zip/pip/_internal/cli/cmdoptions.py\", line 18, in ModuleNotFoundError: No module named 'distutils.util'", "output": "You can install pip for python 3.9 the following way: curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py python3.9 get-pip.py It is important you use python3.9 instead of just python3, to ensure pip is installed for python 3.9. If you see any permissions errors, you may need to use python3.9 get-pip.py --user If you get an error like No module named 'distutils.util' when you run python3.9 get-pip.py, and you are on a Debian-based Linux distribution, run sudo apt install python3.9-distutils and then rerun your get-pip.py command. If you are not on a Debian-based distribution, use the equivalent command for your distribution's package manager. These instructions are based in part on the official installation instructions provided by the pip maintainers. This portion of my answer is a bit out of the scope of the question, since the question is specifically for python 3.9. However, for anyone trying to install pip on python 3.6 or older, at the time of writing the file at https://bootstrap.pypa.io/get-pip.py only supports python 3.7 or newer. The workaround is to instead download from https://bootstrap.pypa.io/pip//get-pip.py instead. For example, if you want to install pip for python 3.6, then you can download from https://bootstrap.pypa.io/pip/3.6/get-pip.py, and then follow all of the steps above as usual."} +{"question_id": 11195140, "score": 84, "creation_date": 1340649017, "tags": ["python", "with-statement"], "instruction": "Break or exit out of \"with\" statement?\n\nI'd just like to exit out of a with statement under certain conditions: with open(path) as f: print 'before condition' if : break #syntax error! print 'after condition' Of course, the above doesn't work. Is there a way to do this? (I know that I can invert the condition: if not : print 'after condition' -- any way that is like above?)", "output": "This question was asked before Python 3.4 existed but with 3.4 you can use contextlib.supress, suppressing your own personal exception. See that this (runnable as is) code from contextlib import suppress class InterruptWithBlock(UserWarning): \"\"\"To be used to interrupt the march of a with\"\"\" condition = True with suppress(InterruptWithBlock): print('before condition') if condition: raise InterruptWithBlock() print('after condition') # Will not print 'after condition` if condition is True. 
So with the code in the question, you'd do: with suppress(InterruptWithBlock) as _, open(path) as f: print('before condition') if : raise InterruptWithBlock() print('after condition') Note: If you're (still) before 3.4, you can still make your own suppress context manager easily."} +{"question_id": 33159106, "score": 84, "creation_date": 1444945819, "tags": ["python", "sorting"], "instruction": "Sort filenames in directory in ascending order\n\nI have a directory with jpgs and other files in it, the jpgs all have filenames with numbers in them. Some may have additional strings in the filename. For example. 01.jpg Or it could be Picture 03.jpg In Python I need a list of all the jpgs in ascending order. Here is the code snippet for this import os import numpy as np myimages = [] #list of image filenames dirFiles = os.listdir('.') #list of directory files dirFiles.sort() #good initial sort but doesnt sort numerically very well sorted(dirFiles) #sort numerically in ascending order for files in dirFiles: #filter out all non jpgs if '.jpg' in files: myimages.append(files) print len(myimages) print myimages What I get is this ['0.jpg', '1.jpg', '10.jpg', '11.jpg', '12.jpg', '13.jpg', '14.jpg', '15.jpg', '16.jpg', '17.jpg', '18.jpg', '19.jpg', '2.jpg', '20.jpg', '21.jpg', '22.jpg', '23.jpg', '24.jpg', '25.jpg', '26.jpg', '27.jpg', '28.jpg', '29.jpg', '3.jpg', '30.jpg', '31.jpg', '32.jpg', '33.jpg', '34.jpg', '35.jpg', '36.jpg', '37.jpg', '4.jpg', '5.jpg', '6.jpg', '7.jpg', '8.jpg', '9.jpg'] Clearly it sorts blindly the most significant number first. I tried using sorted() as you can see hoping that it would fix it but it makes no difference.", "output": "Assuming there's just one number in each file name: For Python 2X: >>> dirFiles = ['Picture 03.jpg', '02.jpg', '1.jpg'] >>> dirFiles.sort(key=lambda f: int(filter(str.isdigit, f))) >>> dirFiles ['1.jpg', '02.jpg', 'Picture 03.jpg'] For Python 3X: >>> dirFiles = ['Picture 03.jpg', '02.jpg', '1.jpg'] >>> dirFiles.sort(key=lambda f: int(''.join(filter(str.isdigit, f)))) >>> dirFiles ['1.jpg', '02.jpg', 'Picture 03.jpg'] A version that also works in Python 3: >>> import re >>> dirFiles.sort(key=lambda f: int(re.sub('\\D', '', f)))"} +{"question_id": 10663720, "score": 84, "creation_date": 1337417037, "tags": ["python", "time", "python-datetime"], "instruction": "How to convert a time string to seconds?\n\nI need to convert time value strings given in the following format to seconds, for example: 1.'00:00:00,000' -> 0 seconds 2.'00:00:10,000' -> 10 seconds 3.'00:01:04,000' -> 64 seconds 4.'01:01:09,000' -> 3669 seconds Do I need to use regex to do this? I tried to use the time module, but time.strptime('00:00:00,000','%I:%M:%S') throws: ValueError: time data '00:00:00,000' does not match format '%I:%M:%S' Edit: Looks like this: from datetime import datetime pt = datetime.strptime(timestring,'%H:%M:%S,%f') total_seconds = pt.second + pt.minute*60 + pt.hour*3600 gives the correct result. I was just using the wrong module.", "output": "import datetime import time x = time.strptime('00:01:00,000'.split(',')[0],'%H:%M:%S') datetime.timedelta(hours=x.tm_hour,minutes=x.tm_min,seconds=x.tm_sec).total_seconds() 60.0"} +{"question_id": 41826868, "score": 84, "creation_date": 1485256718, "tags": ["python", "directory", "move"], "instruction": "Moving all files from one directory to another using Python\n\nI want to move all text files from one folder to another folder using Python. 
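Back on the time-string question just above, a plain-arithmetic sketch that also keeps the milliseconds part, assuming the 'HH:MM:SS,mmm' format from the question:

def to_seconds(ts):
    hms, ms = ts.split(',')
    h, m, s = (int(part) for part in hms.split(':'))
    return h * 3600 + m * 60 + s + int(ms) / 1000.0

to_seconds('01:01:09,250')   # 3669.25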
I found this code: import os, shutil, glob dst = '/path/to/dir/Caches/com.apple.Safari/WebKitCache/Version\\ 4/Blobs ' try: os.makedirs(/path/to/dir/Tumblr/Uploads) # create destination directory, if needed (similar to mkdir -p) except OSError: # The directory already existed, nothing to do pass for txt_file in glob.iglob('*.txt'): shutil.copy2(txt_file, dst) I would want it to move all the files in the Blob folder. I am not getting an error, but it is also not moving the files.", "output": "Try this: import shutil import os source_dir = '/path/to/source_folder' target_dir = '/path/to/dest_folder' file_names = os.listdir(source_dir) for file_name in file_names: shutil.move(os.path.join(source_dir, file_name), target_dir)"} +{"question_id": 27732354, "score": 84, "creation_date": 1420128300, "tags": ["python", "multithreading", "authentication", "pyqt4", "pickle"], "instruction": "Unable to load files using pickle and multiple modules\n\nI'm trying to create a user system, which uses a setting and Gui module, and when the GUI module requests for the file to load up using pickle, I keep getting an attribute error. this is from the settings module: import pickle import hashlib class User(object): def __init__(self, fname, lname, dob, gender): self.firstname = fname self.lastname = lname self._dob = dob self.gender = gender self.type = 'General' self._username = '' self._hashkey = '' def Report(self): print(\"Full Name: {0} {1}\\nDate of Birth: {2}\\nGender: {3}\\nAccess Level: {4}\".format(self.firstname,self.lastname, self._dob, self.gender, self.type)) print(self._username) def Genusername(self): self._username = str(str(self._dob)[:2] + self.firstname[:2] + self.lastname[:2]) saveUsers(users) def Genhashkey(self, password): encoded = password.encode('utf-8','strict') return hashlib.sha256(encoded).hexdigest() def Verifypassword(self, password): if self._hashkey == self.Genhashkey(password): return True else: return False class SAdmin(User): def __init__(self, fname, lname, dob, gender): super().__init__(fname, lname, dob, gender) self.type = 'Stock Admin' class Manager(User): def __init__(self, fname, lname, dob, gender): super().__init__(fname, lname, dob, gender) self.type = 'Manager' def saveUsers(users): with open('user_data.pkl', 'wb') as file: pickle.dump(users, file, -1) # PICKLE HIGHEST LEVEL PROTOCOL def loadUsers(users): try: with open('user_data.pkl', 'rb') as file: temp = pickle.load(file) for item in temp: users.append(item) except IOError: saveUsers([]) def userReport(users): for user in users: print(user.firstname, user.lastname) def addUser(users): fname = input('What is your First Name?\\n > ') lname = input('What is your Last Name?\\n > ') dob = int(input('Please enter your date of birth in the following format, example 12211996\\n> ')) gender = input(\"What is your gender? 
'M' or 'F'\\n >\") level = input(\"Enter the access level given to this user 'G', 'A', 'M'\\n > \") password = input(\"Enter a password:\\n > \") if level == 'G': usertype = User if level == 'A': usertype = SAdmin if level == 'M': usertype = Manager users.append(usertype(fname, lname, dob, gender)) user = users[len(users)-1] user.Genusername() user._hashkey = user.Genhashkey(password) saveUsers(users) def deleteUser(users): userReport(users) delete = input('Please type in the First Name of the user do you wish to delete:\\n > ') for user in users: if user.firstname == delete: users.remove(user) saveUsers(users) def changePass(users): userReport(users) change = input('Please type in the First Name of the user you wish to change the password for :\\n > ') for user in users: if user.firstname == change: oldpass = input('Please type in your old password:\\n > ') newpass = input('Please type in your new password:\\n > ') if user.Verifypassword(oldpass): user._hashkey = user.Genhashkey(newpass) saveUsers(users) else: print('Your old password does not match!') def verifyUser(username, password): for user in users: if user._username == username and user.Verifypassword(password): return True else: return False if __name__ == '__main__': users = [] loadUsers(users) and this is the GUI module: from PyQt4 import QtGui, QtCore import Settings class loginWindow(QtGui.QDialog): def __init__(self): super().__init__() self.initUI() def initUI(self): self.lbl1 = QtGui.QLabel('Username') self.lbl2 = QtGui.QLabel('Password') self.username = QtGui.QLineEdit() self.password = QtGui.QLineEdit() self.okButton = QtGui.QPushButton(\"OK\") self.okButton.clicked.connect(self.tryLogin) self.cancelButton = QtGui.QPushButton(\"Cancel\") grid = QtGui.QGridLayout() grid.setSpacing(10) grid.addWidget(self.lbl1, 1, 0) grid.addWidget(self.username, 1, 1) grid.addWidget(self.lbl2, 2, 0) grid.addWidget(self.password, 2, 1) grid.addWidget(self.okButton, 3, 1) grid.addWidget(self.cancelButton, 3, 0) self.setLayout(grid) self.setGeometry(300, 300, 2950, 150) self.setWindowTitle('Login') self.show() def tryLogin(self): print(self.username.text(), self.password.text()) if Settings.verifyUser(self.username.text(),self.password.text()): print('it Woks') else: QtGui.QMessageBox.warning( self, 'Error', 'Incorrect Username or Password') class Window(QtGui.QMainWindow): def __init__(self): super().__init__() if __name__ == '__main__': app = QtGui.QApplication(sys.argv) users = [] Settings.loadUsers(users) if loginWindow().exec_() == QtGui.QDialog.Accepted: window = Window() window.show() sys.exit(app.exec_()) each user is a class and are put into a list and then the list is saved using pickle when I load up just the settings file and verify the login everything works fine but when I open up the GUI module and try to verify it doesn't let me, the error I'm getting: Traceback (most recent call last): File \"C:\\Users`Program\\LoginGUI.py\", line 53, in Settings.loadUsers(users) File \"C:\\Users\\Program\\Settings.py\", line 51, in loadUsers temp = pickle.load(file) AttributeError: Can't get attribute 'Manager' on ", "output": "The issue is that you're pickling objects defined in Settings by actually running the 'Settings' module, then you're trying to unpickle the objects from the GUI module. Remember that pickle doesn't actually store information about how a class/object is constructed, and needs access to the class when unpickling. See wiki on using Pickle for more details. 
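A quick standalone sketch of that point (not from the original answer): when a script is executed directly, its classes are recorded in the pickle under __main__.

import pickle

class Foo:
    pass

print(Foo.__module__)          # '__main__' when this file is run directly
blob = pickle.dumps(Foo())     # the pickle refers to __main__.Foo, not to a package path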
In the pkl data, you see that the object being referenced is __main__.Manager, as the 'Settings' module was main when you created the pickle file (i.e. you ran the 'Settings' module as the main script to invoke the addUser function). Then, you try unpickling in 'Gui' - so that module has the name __main__, and you're importing Setting within that module. So of course the Manager class will actually be Settings.Manager. But the pkl file doesn't know this, and looks for the Manager class within __main__, and throws an AttributeError because it doesn't exist (Settings.Manager does, but __main__.Manager doesn't). Here's a minimal code set to demonstrate. The class_def.py module: import pickle class Foo(object): def __init__(self, name): self.name = name def main(): foo = Foo('a') with open('test_data.pkl', 'wb') as f: pickle.dump(foo, f, -1) if __name__=='__main__': main() You run the above to generate the pickle data. The main_module.py module: import pickle import class_def if __name__=='__main__': with open('test_data.pkl', 'rb') as f: users = pickle.load(f) You run the above to attempt to open the pickle file, and this throws roughly the same error that you were seeing. (Slightly different, but I'm guessing that's because I'm on Python 2.7) The solution is either: You make the class available within the namespace of the top-level module (i.e. GUI or main_module) through an explicit import, or You create the pickle file from the same top-level module as the one that you will open it in (i.e. call Settings.addUser from GUI, or class_def.main from main_module). This means that the pkl file will save the objects as Settings.Manager or class_def.Foo, which can then be found in the GUI`main_module` namespace. Option 1 example: import pickle import class_def from class_def import Foo # Import Foo into main_module's namespace explicitly if __name__=='__main__': with open('test_data.pkl', 'rb') as f: users = pickle.load(f) Option 2 example: import pickle import class_def if __name__=='__main__': class_def.main() # Objects are being pickled with main_module as the top-level with open('test_data.pkl', 'rb') as f: users = pickle.load(f)"} +{"question_id": 48342098, "score": 84, "creation_date": 1516368086, "tags": ["python", "anaconda"], "instruction": "How to check python anaconda version installed on Windows 10 PC?\n\nI have a Windows 10 PC with python anaconda installed. The latest anaconda version is v5.0.1 I would like to find out whether the PC has the latest version v5.0.1 installed and whether it is 32-bit/64bit or python 2.7/3.6. How do I do that? https://www.anaconda.com/download/", "output": "On the anaconda prompt, do a conda -V or conda --version to get the conda version. python -V or python --version to get the python version. conda list anaconda$ to get the Anaconda version. conda list to get the Name, Version, Build & Channel details of all the packages installed (in the current environment). conda info to get all the current environment details. conda info --envs To see a list of all your environments Detailed description here, download cheat sheet from here"} +{"question_id": 41883254, "score": 84, "creation_date": 1485465920, "tags": ["python", "python-3.x", "django", "django-views", "django-urls"], "instruction": "Django - is not a registered namespace\n\nI am trying to process a form in django/python using the following code. 
home.html: views.py: def submit(request): a = request.POST(['initial']) return render(request, 'home/home.html', { 'error_message': \"returned\" }) urls.py: from django.conf.urls import url from . import views urlpatterns = [ url(r'^submit/$', views.submit, name='submit') ] when I try to run it in a browser I get the error: NoReverseMatch at /home/ u'home' is not a registered namespace and another error message indicating a problem with the form.", "output": "You should just change you action url in your template: On the note of url namespaces... In order to be able to call urls using home namespace you should have in your main urls.py file line something like: for django 1.x: url(r'^', include('home.urls', namespace='home')), for django 2.x and 3.x path('', include(('home.urls', 'home'), namespace='home'))"} +{"question_id": 41668813, "score": 84, "creation_date": 1484535406, "tags": ["python", "machine-learning", "keras", "keras-layer"], "instruction": "How to add and remove new layers in keras after loading weights?\n\nI am trying to do a transfer learning; for that purpose I want to remove the last two layers of the neural network and add another two layers. This is an example code which also output the same error. from keras.models import Sequential from keras.layers import Input,Flatten from keras.layers.convolutional import Convolution2D, MaxPooling2D from keras.layers.core import Dropout, Activation from keras.layers.pooling import GlobalAveragePooling2D from keras.models import Model in_img = Input(shape=(3, 32, 32)) x = Convolution2D(12, 3, 3, subsample=(2, 2), border_mode='valid', name='conv1')(in_img) x = Activation('relu', name='relu_conv1')(x) x = MaxPooling2D(pool_size=(3, 3), strides=(2, 2), name='pool1')(x) x = Convolution2D(3, 1, 1, border_mode='valid', name='conv2')(x) x = Activation('relu', name='relu_conv2')(x) x = GlobalAveragePooling2D()(x) o = Activation('softmax', name='loss')(x) model = Model(input=in_img, output=[o]) model.compile(loss=\"categorical_crossentropy\", optimizer=\"adam\") #model.load_weights('model_weights.h5', by_name=True) model.summary() model.layers.pop() model.layers.pop() model.summary() model.add(MaxPooling2D()) model.add(Activation('sigmoid', name='loss')) I removed the layer using pop() but when I tried to add its outputting this error AttributeError: 'Model' object has no attribute 'add' I know the most probable reason for the error is improper use of model.add(). what other syntax should I use? EDIT: I tried to remove/add layers in keras but its not allowing it to be added after loading external weights. 
from keras.models import Sequential from keras.layers import Input,Flatten from keras.layers.convolutional import Convolution2D, MaxPooling2D from keras.layers.core import Dropout, Activation from keras.layers.pooling import GlobalAveragePooling2D from keras.models import Model in_img = Input(shape=(3, 32, 32)) def gen_model(): in_img = Input(shape=(3, 32, 32)) x = Convolution2D(12, 3, 3, subsample=(2, 2), border_mode='valid', name='conv1')(in_img) x = Activation('relu', name='relu_conv1')(x) x = MaxPooling2D(pool_size=(3, 3), strides=(2, 2), name='pool1')(x) x = Convolution2D(3, 1, 1, border_mode='valid', name='conv2')(x) x = Activation('relu', name='relu_conv2')(x) x = GlobalAveragePooling2D()(x) o = Activation('softmax', name='loss')(x) model = Model(input=in_img, output=[o]) return model #parent model model=gen_model() model.compile(loss=\"categorical_crossentropy\", optimizer=\"adam\") model.summary() #saving model weights model.save('model_weights.h5') #loading weights to second model model2=gen_model() model2.compile(loss=\"categorical_crossentropy\", optimizer=\"adam\") model2.load_weights('model_weights.h5', by_name=True) model2.layers.pop() model2.layers.pop() model2.summary() #editing layers in the second model and saving as third model x = MaxPooling2D()(model2.layers[-1].output) o = Activation('sigmoid', name='loss')(x) model3 = Model(input=in_img, output=[o]) its showing this error RuntimeError: Graph disconnected: cannot obtain value for tensor input_4 at layer \"input_4\". The following previous layers were accessed without issue: []", "output": "You can take the output of the last model and create a new model. The lower layers remains the same. model.summary() model.layers.pop() model.layers.pop() model.summary() x = MaxPooling2D()(model.layers[-1].output) o = Activation('sigmoid', name='loss')(x) model2 = Model(inputs=in_img, outputs=[o]) model2.summary() Check How to use models from keras.applications for transfer learnig? Update on Edit: The new error is because you are trying to create the new model on global in_img which is actually not used in the previous model creation.. there you are actually defining a local in_img. So the global in_img is obviously not connected to the upper layers in the symbolic graph. And it has nothing to do with loading weights. To better resolve this problem you should instead use model.input to reference to the input. model3 = Model(input=model2.input, output=[o])"} +{"question_id": 38082602, "score": 84, "creation_date": 1467134694, "tags": ["python", "matplotlib", "seaborn"], "instruction": "Plotting multiple different plots in one figure using Seaborn\n\nI am attempting to recreate the following plot from the book Introduction to Statistical learning using seaborn I specifically want to recreate this using seaborn's lmplot to create the first two plots and boxplot to create the second. The main problem is that lmplot creates a FacetGrid according to this answer which forces me to hackily add another matplotlib Axes for the boxplot. I was wondering if there was an easier way to achieve this. Below, I have to do quite a bit of manual manipulation to get the desired plot. 
seaborn_grid = sns.lmplot('value', 'wage', col='variable', hue='education', data=df_melt, sharex=False) seaborn_grid.fig.set_figwidth(8) left, bottom, width, height = seaborn_grid.fig.axes[0]._position.bounds left2, bottom2, width2, height2 = seaborn_grid.fig.axes[1]._position.bounds left_diff = left2 - left seaborn_grid.fig.add_axes((left2 + left_diff, bottom, width, height)) sns.boxplot('education', 'wage', data=df_wage, ax = seaborn_grid.fig.axes[2]) ax2 = seaborn_grid.fig.axes[2] ax2.set_yticklabels([]) ax2.set_xticklabels(ax2.get_xmajorticklabels(), rotation=30) ax2.set_ylabel('') ax2.set_xlabel(''); leg = seaborn_grid.fig.legends[0] leg.set_bbox_to_anchor([0, .1, 1.5,1]) Which yields Sample data for DataFrames: df_melt = { 'education': ['1. < HS Grad', '4. College Grad', '3. Some College', '4. College Grad', '2. HS Grad'], 'value': [18, 24, 45, 43, 50], 'variable': ['age', 'age', 'age', 'age', 'age'], 'wage': [75.0431540173515, 70.47601964694451, 130.982177377461, 154.68529299563, 75.0431540173515]} df_wage = { 'education': ['1. < HS Grad', '4. College Grad', '3. Some College', '4. College Grad', '2. HS Grad'], 'wage': [75.0431540173515, 70.47601964694451, 130.982177377461, 154.68529299563, 75.0431540173515]}", "output": "One possibility would be to NOT use lmplot(), but directly use regplot() instead. regplot() plots on the axes you pass as an argument with ax=. You lose the ability to automatically split your dataset according to a certain variable, but if you know beforehand the plots you want to generate, it shouldn't be a problem. Something like this: import matplotlib.pyplot as plt import seaborn as sns fig, axs = plt.subplots(ncols=3) sns.regplot(x='value', y='wage', data=df_melt, ax=axs[0]) sns.regplot(x='value', y='wage', data=df_melt, ax=axs[1]) sns.boxplot(x='education',y='wage', data=df_melt, ax=axs[2])"} +{"question_id": 301068, "score": 84, "creation_date": 1227072167, "tags": ["python", "string"], "instruction": "Quoting backslashes in Python string literals\n\nI have a string that contains both double-quotes and backslashes that I want to set to a variable in Python. However, whenever I try to set it, the quotes or slashes are either removed or escaped. Here's an example: >>> foo = 'baz \"\\\"' >>> foo 'baz \"\"' So instead of baz \"\\\" like I want I'm getting baz \"\". If I then try to escape the backslash, it doesn't help either: >>> foo = 'baz \"\\\\\"' >>> foo 'baz \"\\\\\"' Which now matches what I put in but wasn't what I originally wanted. How do you get around this problem?", "output": "You're being mislead by output -- the second approach you're taking actually does what you want, you just aren't believing it. :) >>> foo = 'baz \"\\\\\"' >>> foo 'baz \"\\\\\"' >>> print(foo) baz \"\\\" Incidentally, there's another string form which might be a bit clearer: >>> print(r'baz \"\\\"') baz \"\\\""} +{"question_id": 12507206, "score": 84, "creation_date": 1348123007, "tags": ["python", "json", "dictionary", "python-2.7"], "instruction": "How to completely traverse a complex dictionary of unknown depth?\n\nImporting from JSON can get very complex and nested structures. 
For example: {u'body': [{u'declarations': [{u'id': {u'name': u'i', u'type': u'Identifier'}, u'init': {u'type': u'Literal', u'value': 2}, u'type': u'VariableDeclarator'}], u'kind': u'var', u'type': u'VariableDeclaration'}, {u'declarations': [{u'id': {u'name': u'j', u'type': u'Identifier'}, u'init': {u'type': u'Literal', u'value': 4}, u'type': u'VariableDeclarator'}], u'kind': u'var', u'type': u'VariableDeclaration'}, {u'declarations': [{u'id': {u'name': u'answer', u'type': u'Identifier'}, u'init': {u'left': {u'name': u'i', u'type': u'Identifier'}, u'operator': u'*', u'right': {u'name': u'j', u'type': u'Identifier'}, u'type': u'BinaryExpression'}, u'type': u'VariableDeclarator'}], u'kind': u'var', u'type': u'VariableDeclaration'}], u'type': u'Program'} What is the recommended way to walk complex structures like the above? Apart of a few list there are mostly dictionaries, the structure can become even more imbricated so I need a general solution.", "output": "You can use a recursive generator for converting your dictionary to flat lists. def dict_generator(indict, pre=None): pre = pre[:] if pre else [] if isinstance(indict, dict): for key, value in indict.items(): if isinstance(value, dict): for d in dict_generator(value, pre + [key]): yield d elif isinstance(value, list) or isinstance(value, tuple): for v in value: for d in dict_generator(v, pre + [key]): yield d else: yield pre + [key, value] else: yield pre + [indict] It returns [u'body', u'kind', u'var'] [u'init', u'declarations', u'body', u'type', u'Literal'] [u'init', u'declarations', u'body', u'value', 2] [u'declarations', u'body', u'type', u'VariableDeclarator'] [u'id', u'declarations', u'body', u'type', u'Identifier'] [u'id', u'declarations', u'body', u'name', u'i'] [u'body', u'type', u'VariableDeclaration'] [u'body', u'kind', u'var'] [u'init', u'declarations', u'body', u'type', u'Literal'] [u'init', u'declarations', u'body', u'value', 4] [u'declarations', u'body', u'type', u'VariableDeclarator'] [u'id', u'declarations', u'body', u'type', u'Identifier'] [u'id', u'declarations', u'body', u'name', u'j'] [u'body', u'type', u'VariableDeclaration'] [u'body', u'kind', u'var'] [u'init', u'declarations', u'body', u'operator', u'*'] [u'right', u'init', u'declarations', u'body', u'type', u'Identifier'] [u'right', u'init', u'declarations', u'body', u'name', u'j'] [u'init', u'declarations', u'body', u'type', u'BinaryExpression'] [u'left', u'init', u'declarations', u'body', u'type', u'Identifier'] [u'left', u'init', u'declarations', u'body', u'name', u'i'] [u'declarations', u'body', u'type', u'VariableDeclarator'] [u'id', u'declarations', u'body', u'type', u'Identifier'] [u'id', u'declarations', u'body', u'name', u'answer'] [u'body', u'type', u'VariableDeclaration'] [u'type', u'Program']"} +{"question_id": 23417941, "score": 84, "creation_date": 1398984286, "tags": ["python", "python-import"], "instruction": "import error: 'No module named' *does* exist\n\nI am getting this stack trace when I start pyramid pserve: % python $(which pserve) ../etc/development.ini Traceback (most recent call last): File \"/home/hughdbrown/.local/bin/pserve\", line 9, in load_entry_point('pyramid==1.5', 'console_scripts', 'pserve')() File \"/home/hughdbrown/.virtualenvs/ponder/local/lib/python2.7/site-packages/pyramid-1.5-py2.7.egg/pyramid/scripts/pserve.py\", line 51, in main return command.run() File \"/home/hughdbrown/.virtualenvs/ponder/local/lib/python2.7/site-packages/pyramid-1.5-py2.7.egg/pyramid/scripts/pserve.py\", line 316, in run global_conf=vars) File 
\"/home/hughdbrown/.virtualenvs/ponder/local/lib/python2.7/site-packages/pyramid-1.5-py2.7.egg/pyramid/scripts/pserve.py\", line 340, in loadapp return loadapp(app_spec, name=name, relative_to=relative_to, **kw) File \"/home/hughdbrown/.virtualenvs/ponder/lib/python2.7/site-packages/PasteDeploy-1.5.2-py2.7.egg/paste/deploy/loadwsgi.py\", line 247, in loadapp return loadobj(APP, uri, name=name, **kw) File \"/home/hughdbrown/.virtualenvs/ponder/lib/python2.7/site-packages/PasteDeploy-1.5.2-py2.7.egg/paste/deploy/loadwsgi.py\", line 271, in loadobj global_conf=global_conf) File \"/home/hughdbrown/.virtualenvs/ponder/lib/python2.7/site-packages/PasteDeploy-1.5.2-py2.7.egg/paste/deploy/loadwsgi.py\", line 296, in loadcontext global_conf=global_conf) File \"/home/hughdbrown/.virtualenvs/ponder/lib/python2.7/site-packages/PasteDeploy-1.5.2-py2.7.egg/paste/deploy/loadwsgi.py\", line 320, in _loadconfig return loader.get_context(object_type, name, global_conf) File \"/home/hughdbrown/.virtualenvs/ponder/lib/python2.7/site-packages/PasteDeploy-1.5.2-py2.7.egg/paste/deploy/loadwsgi.py\", line 454, in get_context section) File \"/home/hughdbrown/.virtualenvs/ponder/lib/python2.7/site-packages/PasteDeploy-1.5.2-py2.7.egg/paste/deploy/loadwsgi.py\", line 476, in _context_from_use object_type, name=use, global_conf=global_conf) File \"/home/hughdbrown/.virtualenvs/ponder/lib/python2.7/site-packages/PasteDeploy-1.5.2-py2.7.egg/paste/deploy/loadwsgi.py\", line 406, in get_context global_conf=global_conf) File \"/home/hughdbrown/.virtualenvs/ponder/lib/python2.7/site-packages/PasteDeploy-1.5.2-py2.7.egg/paste/deploy/loadwsgi.py\", line 296, in loadcontext global_conf=global_conf) File \"/home/hughdbrown/.virtualenvs/ponder/lib/python2.7/site-packages/PasteDeploy-1.5.2-py2.7.egg/paste/deploy/loadwsgi.py\", line 337, in _loadfunc return loader.get_context(object_type, name, global_conf) File \"/home/hughdbrown/.virtualenvs/ponder/lib/python2.7/site-packages/PasteDeploy-1.5.2-py2.7.egg/paste/deploy/loadwsgi.py\", line 681, in get_context obj = lookup_object(self.spec) File \"/home/hughdbrown/.virtualenvs/ponder/lib/python2.7/site-packages/PasteDeploy-1.5.2-py2.7.egg/paste/deploy/util.py\", line 68, in lookup_object module = __import__(parts) File \"/home/hughdbrown/.virtualenvs/ponder/local/lib/python2.7/site-packages/ponder-0.0.40-py2.7.egg/ponder/server/__init__.py\", line 10, in from ponder.server.views import Endpoints, route ImportError: No module named views This works fine from a python REPL: % python Python 2.7.5+ (default, Feb 27 2014, 19:37:08) [GCC 4.8.1] on linux2 Type \"help\", \"copyright\", \"credits\" or \"license\" for more information. >>> from ponder.server.views import Endpoints, route >>> and from a command line import: % python -c \"from ponder.server.views import Endpoints, route\" An abridged tree output shows what I am working with: % tree \u251c\u2500\u2500 __init__.py \u251c\u2500\u2500 ponder \u2502 \u251c\u2500\u2500 __init__.py \u2502 \u251c\u2500\u2500 server \u2502 \u2502 \u251c\u2500\u2500 __init__.py \u2502 \u2502 \u2514\u2500\u2500 views \u2502 \u2502 \u251c\u2500\u2500 environment_templates.py \u2502 \u2502 \u251c\u2500\u2500 groups.py \u2502 \u2502 \u251c\u2500\u2500 __init__.py \u2502 \u2502 \u251c\u2500\u2500 instances.py \u2502 \u2502 \u251c\u2500\u2500 tasks.py \u2502 \u2502 \u2514\u2500\u2500 users.py My PYTHONPATH is set to the root of this tree: % echo $PYTHONPATH /home/hughdbrown/workspace/ept/ponder/lib I am running this in a virtualenv that uses python 2.7. 
I have had this working off and on today but I can't figure out where the problem is. For one thing, the __init__.py seems to be okay with some imports that come just before: from .database import get_db from .config import parser from .views import Endpoints, route (I changed the last line to an absolute import. No luck.) Things that I have tried: Rebuilding virtualenv Setting PYTHONPATH Using absolute paths in code Looking for circular imports I am open to further suggestions in how to debug this error. So the mistake I made was to look only at the source tree. The problem was really in the runtime environment, in my virtualenv. And when I looked there, I found that the desired files were not being installed. The problem, at root, was the setup.py.", "output": "My usual trick is to simply print sys.path in the actual context where the import problem happens. In your case it'd seem that the place for the print is in /home/hughdbrown/.local/bin/pserve . Then check dirs & files in the places that path shows.. You do that by first having: import sys in python 3 with the print function: print(sys.path) or in python 2 with print expression: print sys.path"} +{"question_id": 13070461, "score": 84, "creation_date": 1351174105, "tags": ["python", "list"], "instruction": "Get indices of the top N values of a list\n\nI have a list say a = [5,3,1,4,10]. I need to get indices of the top two values of the list, that is for 5 and 10 I would get [0, 4]. Is there a one-liner that Python offers for such a case?", "output": "sorted(range(len(a)), key=lambda i: a[i])[-2:] or sorted(range(len(a)), key=lambda i: a[i], reverse=True)[:2] or import operator zip(*sorted(enumerate(a), key=operator.itemgetter(1)))[0][-2:] or (for long lists), consider using heapq.nlargest zip(*heapq.nlargest(2, enumerate(a), key=operator.itemgetter(1)))[0]"} +{"question_id": 34484066, "score": 84, "creation_date": 1451244894, "tags": ["python", "postgresql", "psycopg2"], "instruction": "Create a Postgres database using python\n\nI want to create Postgres database using Python. con = psql.connect(dbname='postgres', user=self.user_name, host='', password=self.password) cur = con.cursor() cur.execute(\"CREATE DATABASE %s ;\" % self.db_name) I am getting the following error: InternalError: CREATE DATABASE cannot run inside a transaction block I am using psycopg2 to connect. I don't understand what's the problem. What am I trying to do is to connect to database (Postgres): psql -postgres -U UserName And then create another database: create database test; This is what I usually do and I want to automate this by creating Python script.", "output": "Use ISOLATION_LEVEL_AUTOCOMMIT, a psycopg2 extensions: No transaction is started when command are issued and no commit() or rollback() is required. import psycopg2 from psycopg2 import sql from psycopg2.extensions import ISOLATION_LEVEL_AUTOCOMMIT # <-- ADD THIS LINE con = psycopg2.connect(dbname='postgres', user=self.user_name, host='', password=self.password) con.set_isolation_level(ISOLATION_LEVEL_AUTOCOMMIT) # <-- ADD THIS LINE cur = con.cursor() # Use the psycopg2.sql module instead of string concatenation # in order to avoid sql injection attacks. 
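# Side note, not part of the original answer: recent psycopg2 versions also let you
# write con.autocommit = True, which has the same effect as the
# set_isolation_level(ISOLATION_LEVEL_AUTOCOMMIT) call above.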
cur.execute(sql.SQL(\"CREATE DATABASE {}\").format( sql.Identifier(self.db_name)) )"} +{"question_id": 19686533, "score": 84, "creation_date": 1383145783, "tags": ["python", "list"], "instruction": "How to zip two differently sized lists, repeating the shorter list?\n\nI want to zip two list with different length for example A = [1,2,3,4,5,6,7,8,9] B = [\"A\",\"B\",\"C\"] and I expect this [(1, 'A'), (2, 'B'), (3, 'C'), (4, 'A'), (5, 'B'), (6, 'C'), (7, 'A'), (8, 'B'), (9, 'C')] But the built-in zip won't repeat to pair with the list with larger size. Does there exist any built-in way can achieve this? Here is my code: idx = 0 zip_list = [] for value in larger: zip_list.append((value,smaller[idx])) idx += 1 if idx == len(smaller): idx = 0", "output": "You can use itertools.cycle: Make an iterator returning elements from the iterable and saving a copy of each. When the iterable is exhausted, return elements from the saved copy. Repeats indefinitely. Example: A = [1,2,3,4,5,6,7,8,9] B = [\"A\",\"B\",\"C\"] from itertools import cycle zip_list = zip(A, cycle(B)) if len(A) > len(B) else zip(cycle(A), B)"} +{"question_id": 14802128, "score": 84, "creation_date": 1360527235, "tags": ["python", "tuples", "min"], "instruction": "Tuple pairs, finding minimum using python\n\nI want to find the minimum of a list of tuples sorting by a given column. I have some data arranged as a list of 2-tuples for example. data = [ (1, 7.57), (2, 2.1), (3, 1.2), (4, 2.1), (5, 0.01), (6, 0.5), (7, 0.2), (8, 0.6)] How may I find the min of the dataset by the comparison of the second number in the tuples only? i.e. data[0][1] = 7.57 data[1][1] = 2.1 min( data ) = (5, 0.01) min( data ) returns (1, 7.57), which I accept is correct for the minimum of index 0, but I want minimum of index 1.", "output": "min_tuple = min([(y, x) for x, y in data])[::-1] This reverses all the pairs, uses min() as normal (comparing the new first number before comparing the second), and then un-reverses that resulting tuple back to the original format."} +{"question_id": 25440008, "score": 84, "creation_date": 1408684677, "tags": ["python", "list", "numpy", "pandas", "dataframe"], "instruction": "python pandas flatten a dataframe to a list\n\nI have a df like so: import pandas a=[['1/2/2014', 'a', '6', 'z1'], ['1/2/2014', 'a', '3', 'z1'], ['1/3/2014', 'c', '1', 'x3'], ] df = pandas.DataFrame.from_records(a[1:],columns=a[0]) I want to flatten the df so it is one continuous list like so: ['1/2/2014', 'a', '6', 'z1', '1/2/2014', 'a', '3', 'z1','1/3/2014', 'c', '1', 'x3'] I can loop through the rows and extend to a list, but is a much easier way to do it?", "output": "You can use .flatten() on the DataFrame converted to a NumPy array: df.to_numpy().flatten() and you can also add .tolist() if you want the result to be a Python list. Edit In previous versions of Pandas, the values attributed was used instead of the .to_numpy() method, as mentioned in the comments below."} +{"question_id": 58189683, "score": 84, "creation_date": 1569950847, "tags": ["python", "pytest"], "instruction": "TypeError: attrib() got an unexpected keyword argument 'convert'\n\nThis error occurred during automated testing of a python project on the CI server using pytest. I'm using pytest==4.0.2. This error only just started to occur, previous pipelines seem to work fine. 
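A tiny usage sketch of the DataFrame-flattening answer above; the sample frame is made up:

import pandas as pd

df = pd.DataFrame({'x': [1, 2], 'y': [3, 4]})
flat = df.to_numpy().flatten().tolist()   # [1, 3, 2, 4] - flattened row by row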
The full error: File \"/usr/local/lib/python3.7/site-packages/_pytest/tmpdir.py\", line 35, in TempPathFactory lambda p: Path(os.path.abspath(six.text_type(p))) TypeError: attrib() got an unexpected keyword argument 'convert'", "output": "pytest seems to have the package attrs as a dependency. attrs==19.2.0 was released around 2019-10-01 17:00 UTC. This seems to cause the problem above. Switching back to attrs==19.1.0 fixes the problem. Just do the following: pip install attrs==19.1.0 NOTE: I expect that the issue will be resolved either by attrs or pytest soon by releasing a new version. So this fix should only be temporary. From the comments: This error does not occur on the newer versions of pytest i.e. pytest==5.2.0"} +{"question_id": 50796024, "score": 84, "creation_date": 1528714187, "tags": ["python", "machine-learning", "scikit-learn", "pca", "feature-selection"], "instruction": "Feature/Variable importance after a PCA analysis\n\nI have performed a PCA analysis over my original dataset and from the compressed dataset transformed by the PCA I have also selected the number of PC I want to keep (they explain almost the 94% of the variance). Now I am struggling with the identification of the original features that are important in the reduced dataset. How do I find out which feature is important and which is not among the remaining Principal Components after the dimension reduction? Here is my code: from sklearn.decomposition import PCA pca = PCA(n_components=8) pca.fit(scaledDataset) projection = pca.transform(scaledDataset) Furthermore, I tried also to perform a clustering algorithm on the reduced dataset but surprisingly for me, the score is lower than on the original dataset. How is it possible?", "output": "First of all, I assume that you call features the variables and not the samples/observations. In this case, you could do something like the following by creating a biplot function that shows everything in one plot. In this example, I am using the iris data. Before the example, please note that the basic idea when using PCA as a tool for feature selection is to select variables according to the magnitude (from largest to smallest in absolute values) of their coefficients (loadings). See my last paragraph after the plot for more details. Overview: PART1: I explain how to check the importance of the features and how to plot a biplot. PART2: I explain how to check the importance of the features and how to save them into a pandas dataframe using the feature names. 
PART 1: import numpy as np import matplotlib.pyplot as plt from sklearn import datasets from sklearn.decomposition import PCA import pandas as pd from sklearn.preprocessing import StandardScaler iris = datasets.load_iris() X = iris.data y = iris.target #In general a good idea is to scale the data scaler = StandardScaler() scaler.fit(X) X=scaler.transform(X) pca = PCA() x_new = pca.fit_transform(X) def myplot(score,coeff,labels=None): xs = score[:,0] ys = score[:,1] n = coeff.shape[0] scalex = 1.0/(xs.max() - xs.min()) scaley = 1.0/(ys.max() - ys.min()) plt.scatter(xs * scalex,ys * scaley, c = y) for i in range(n): plt.arrow(0, 0, coeff[i,0], coeff[i,1],color = 'r',alpha = 0.5) if labels is None: plt.text(coeff[i,0]* 1.15, coeff[i,1] * 1.15, \"Var\"+str(i+1), color = 'g', ha = 'center', va = 'center') else: plt.text(coeff[i,0]* 1.15, coeff[i,1] * 1.15, labels[i], color = 'g', ha = 'center', va = 'center') plt.xlim(-1,1) plt.ylim(-1,1) plt.xlabel(\"PC{}\".format(1)) plt.ylabel(\"PC{}\".format(2)) plt.grid() #Call the function. Use only the 2 PCs. myplot(x_new[:,0:2],np.transpose(pca.components_[0:2, :])) plt.show() Visualize what's going on using the biplot Now, the importance of each feature is reflected by the magnitude of the corresponding values in the eigenvectors (higher magnitude - higher importance) Let's see first what amount of variance does each PC explain. pca.explained_variance_ratio_ [0.72770452, 0.23030523, 0.03683832, 0.00515193] PC1 explains 72% and PC2 23%. Together, if we keep PC1 and PC2 only, they explain 95%. Now, let's find the most important features. print(abs( pca.components_ )) [[0.52237162 0.26335492 0.58125401 0.56561105] [0.37231836 0.92555649 0.02109478 0.06541577] [0.72101681 0.24203288 0.14089226 0.6338014 ] [0.26199559 0.12413481 0.80115427 0.52354627]] Here, pca.components_ has shape [n_components, n_features]. Thus, by looking at the PC1 (First Principal Component) which is the first row: [0.52237162 0.26335492 0.58125401 0.56561105]] we can conclude that feature 1, 3 and 4 (or Var 1, 3 and 4 in the biplot) are the most important. This is also clearly visible from the biplot (that's why we often use this plot to summarize the information in a visual way). To sum up, look at the absolute values of the Eigenvectors' components corresponding to the k largest Eigenvalues. In sklearn the components are sorted by explained_variance_. The larger they are these absolute values, the more a specific feature contributes to that principal component. PART 2: The important features are the ones that influence more the components and thus, have a large absolute value/score on the component. 
To get the most important features on the PCs with names and save them into a pandas dataframe use this: from sklearn.decomposition import PCA import pandas as pd import numpy as np np.random.seed(0) # 10 samples with 5 features train_features = np.random.rand(10,5) model = PCA(n_components=2).fit(train_features) X_pc = model.transform(train_features) # number of components n_pcs= model.components_.shape[0] # get the index of the most important feature on EACH component # LIST COMPREHENSION HERE most_important = [np.abs(model.components_[i]).argmax() for i in range(n_pcs)] initial_feature_names = ['a','b','c','d','e'] # get the names most_important_names = [initial_feature_names[most_important[i]] for i in range(n_pcs)] # LIST COMPREHENSION HERE AGAIN dic = {'PC{}'.format(i): most_important_names[i] for i in range(n_pcs)} # build the dataframe df = pd.DataFrame(dic.items()) This prints: 0 1 0 PC0 e 1 PC1 d So on the PC1 the feature named e is the most important and on PC2 the d. Nice article as well here: https://towardsdatascience.com/pca-clearly-explained-how-when-why-to-use-it-and-feature-importance-a-guide-in-python-7c274582c37e?source=friends_link&sk=65bf5440e444c24aff192fedf9f8b64f"} +{"question_id": 51143458, "score": 84, "creation_date": 1530564125, "tags": ["python", "opencv"], "instruction": "Difference in output with waitKey(0) and waitKey(1)\n\nI've just begun using the OpenCV library for Python and came across something I didn't understand. cap = cv2.VideoCapture(0) while True: ret, frame = cap.read() #returns ret and the frame cv2.imshow('frame',frame) if cv2.waitKey(1) & 0xFF == ord('q'): break When I use cv2.waitKey(1), I get a continuous live video feed from my laptops webcam. However when I use cv2.waitKey(0), I get still images. Every time I close the window, another one pops up with another picture taken at the time. Why does it not show as a continuous feed?", "output": "From the doc: 1.waitKey(0) will display the window infinitely until any keypress (it is suitable for image display). 2.waitKey(1) will display a frame for 1 ms, after which display will be automatically closed. Since the OS has a minimum time between switching threads, the function will not wait exactly 1 ms, it will wait at least 1 ms, depending on what else is running on your computer at that time. So, if you use waitKey(0) you see a still image until you actually press something while for waitKey(1) the function will show a frame for at least 1 ms only."} +{"question_id": 54785148, "score": 84, "creation_date": 1550661730, "tags": ["python"], "instruction": "Destructuring dicts and objects in Python\n\nIn Javascript, I can use destructuring to extract properties I want from a javascript objects in one liner. For example: currentUser = { \"id\": 24, \"name\": \"John Doe\", \"website\": \"http://mywebsite.com\", \"description\": \"I am an actor\", \"email\": \"example@example.com\", \"gender\": \"M\", \"phone_number\": \"+12345678\", \"username\": \"johndoe\", \"birth_date\": \"1991-02-23\", \"followers\": 46263, \"following\": 345, \"like\": 204, \"comments\": 9 } let { id, username } = this.currentUser; console.log(id) // 24 console.log(username) //johndoe Do we have something similar in Python for Python dicts and Python objects? 
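As a follow-up sketch to the PCA answer above, reusing model, n_pcs and initial_feature_names from its PART 2 code, the full loading matrix can also be inspected in one go:

loadings = pd.DataFrame(model.components_.T,
                        columns=['PC{}'.format(i + 1) for i in range(n_pcs)],
                        index=initial_feature_names)
print(loadings.abs().idxmax())   # most influential feature per component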
Example of Python way of doing for python objects: class User: def __init__(self, id, name, website, description, email, gender, phone_number, username): self.id = id self.name = name self.website = website self.description = description self.email = email self.gender = gender self.phone_number = phone_number self.username = username current_user = User(24, \"Jon Doe\", \"http://mywebsite.com\", \"I am an actor\", \"example@example.com\", \"M\", \"+12345678\", \"johndoe\") # This is a pain id = current_user.id email = current_user.email gender = current_user.gender username = current_user.username print(id, email, gender, username) Writing those 4 lines (as mentioned in example above) vs writing a single line (as mentioned below) to fetch values I need from an object is a real pain point. (id, email, gender, username) = current_user", "output": "You can use operator module from standard library as follows: from operator import attrgetter id, email, gender, username = attrgetter('id', 'email', 'gender', 'username')(current_user) print(id, email, gender, username) In case you have a dict like from your example currentUser = { \"id\": 24, \"name\": \"John Doe\", \"website\": \"http://mywebsite.com\", \"description\": \"I am an actor\", \"email\": \"example@example.com\", \"gender\": \"M\", \"phone_number\": \"+12345678\", \"username\": \"johndoe\", \"birth_date\": \"1991-02-23\", \"followers\": 46263, \"following\": 345, \"like\": 204, \"comments\": 9 } just use itemgetter instead of attrgetter: from operator import itemgetter id, email, gender, username = itemgetter('id', 'email', 'gender', 'username')(currentUser) print(id, email, gender, username)"} +{"question_id": 25799576, "score": 84, "creation_date": 1410486909, "tags": ["python", "python-asyncio"], "instruction": "what's Python asyncio.Lock() for?\n\nIs it because coroutines may be preempted in the future? Or it allows people to use yield from in critical section (which IMO shouldn't be encouraged)?", "output": "You use it for the same reason you'd use a lock in threaded code: to protect a critical section. asyncio is primarily meant for use in single-threaded code, but there is still concurrent execution happening (any time you hit a yield from or await), which means sometimes you need synchronization. For example, consider a function that fetches some data from a web server, and then caches the results: async def get_stuff(url): if url in cache: return cache[url] stuff = await aiohttp.request('GET', url) cache[url] = stuff return stuff Now assume that you've got multiple co-routines running concurrently that might potentially need to use the return value of get_stuff: async def parse_stuff(): stuff = await get_stuff(\"www.example.com/data\") # do some parsing async def use_stuff(): stuff = await get_stuff(\"www.example.com/data\") # use stuff to do something interesting async def do_work(): out = await aiohttp.request(\"www.awebsite.com\") # do some work with out loop = asyncio.get_event_loop() loop.run_until_complete(asyncio.gather( parse_stuff(), use_stuff(), do_work(), )) Now, pretend that fetching data from url is slow. If both parse_stuff and use_stuff run concurrently, each will be hit with the full cost of going over the network to fetch stuff. 
If you protect the method with a lock, you avoid this: stuff_lock = asyncio.Lock() async def get_stuff(url): async with stuff_lock: if url in cache: return cache[url] stuff = await aiohttp.request('GET', url) cache[url] = stuff return stuff One other thing to note is that while one coroutine is inside get_stuff, making the aiohttp call, and another waits on stuff_lock, a third coroutine that doesn't need to call get_stuff at all can also be running, without being affected by the coroutine blocking on the Lock. Obviously this example is a little bit contrived, but hopefully it gives you an idea of why asyncio.Lock can be useful; it allows you to protect a critical section, without blocking other coroutines from running which don't need access to that critical section."} +{"question_id": 17057191, "score": 84, "creation_date": 1371007284, "tags": ["python", "flask"], "instruction": "redirect while passing arguments\n\nIn flask, I can do this: render_template(\"foo.html\", messages={'main':'hello'}) And if foo.html contains {{ messages['main'] }}, the page will show hello. But what if there's a route that leads to foo: @app.route(\"/foo\") def do_foo(): # do some logic here return render_template(\"foo.html\") In this case, the only way to get to foo.html, if I want that logic to happen anyway, is through a redirect: @app.route(\"/baz\") def do_baz(): if some_condition: return render_template(\"baz.html\") else: return redirect(\"/foo\", messages={\"main\":\"Condition failed on page baz\"}) # above produces TypeError: redirect() got an unexpected keyword argument 'messages' So, how can I get that messages variable to be passed to the foo route, so that I don't have to just rewrite the same logic code that that route computes before loading it up?", "output": "You could pass the messages as an explicit URL parameter (appropriately encoded), or store the messages in a session (cookie) variable before redirecting and then get the variable before rendering the template. 
For example: from flask import session, url_for def do_baz(): messages = json.dumps({\"main\":\"Condition failed on page baz\"}) session['messages'] = messages return redirect(url_for('.do_foo', messages=messages)) @app.route('/foo') def do_foo(): messages = request.args['messages'] # counterpart for url_for() messages = session['messages'] # counterpart for session return render_template(\"foo.html\", messages=json.loads(messages)) (encoding the session variable might not be necessary, flask may be handling it for you, but can't recall the details) Or you could probably just use Flask Message Flashing if you just need to show simple messages."} +{"question_id": 5091993, "score": 84, "creation_date": 1298470225, "tags": ["python", "matplotlib"], "instruction": "List of all available matplotlib backends\n\nThe current backend name is accessible via >>> import matplotlib.pyplot as plt >>> plt.get_backend() 'GTKAgg' Is there a way to get a list of all backends that can be used on a particular machine?", "output": "Matplotlib 3.9 and above has a backend registry that you can use to query backends in various ways: from matplotlib.backends import backend_registry # interactive backends backend_registry.list_builtin(matplotlib.backends.BackendFilter.INTERACTIVE) # noninteractive backends backend_registry.list_builtin(matplotlib.backends.BackendFilter.NON_INTERACTIVE) # all backends built into Matplotlib backend_registry.list_builtin() # all registered backends backend_registry.list_all() In older versions of Matplotlib you can use the lists matplotlib.rcsetup.interactive_bk matplotlib.rcsetup.non_interactive_bk matplotlib.rcsetup.all_backends the third being the concatenation of the former two. These will be removed in version 3.11."} +{"question_id": 29428894, "score": 84, "creation_date": 1428050238, "tags": ["python", "django", "unit-testing"], "instruction": "Django setUpTestData() vs. setUp()\n\nDjango 1.8 shipped with a refactored TestCase which allows for data initialization at the class level using transactions and savepoints via the setUpTestData() method. This is in contrast to unittest's setUp() which runs before every single test method. Question: What is the use case for setUp() in Django now that setUpTestData() exists? I'm looking for objective, high-level answers only, as otherwise this question would be too broad for Stack Overflow.", "output": "It's not uncommon for there to be set-up code that can't run as a class method. One notable example is the Django test client: you might not want to reuse the same client instance across tests that otherwise share much of the same data, and indeed, the client instances automatically included in subclasses of Django's SimpleTestCase are created per test method rather than for the entire class. Suppose you had a test from the pre-Django 1.8 world with a setUp method like this: def setUp(self): self.the_user = f.UserFactory.create() self.the_post = f.PostFactory.create(author=self.the_user) self.client.login( username=self.the_user.username, password=TEST_PASSWORD ) # ... &c. You might be tempted to modernize the test case by changing setUp to setUpTestData, slapping a @classmethod decorator on top, and changing all the selfs to cls. But that will fail with an AttributeError: type object 'MyTestCase' has no attribute 'client'! 
Instead, you should use setUpTestData for the shared data and setUp for the per-test-method client: @classmethod def setUpTestData(cls): cls.the_user = f.UserFactory.create() cls.the_post = f.PostFactory.create(author=cls.the_user) # ... &c. def setUp(self): self.client.login( username=self.the_user.username, password=TEST_PASSWORD ) Note: if you are wondering what that variable f is doing in the example code, it comes from factoryboy - a useful fixtures library for creating objects for your tests."} +{"question_id": 57527131, "score": 84, "creation_date": 1565968532, "tags": ["python", "anaconda", "conda"], "instruction": "conda environment has no name visible in conda env list - how do I activate it at the shell?\n\nI have created an environment called B3 inside anaconda-navigator. It works fine if launched from within navigator. However, when I want to activate it at the shell, I get 'could not find environment B3.' If I use conda env list, the environment is visible but its name is blank. If I try using the file path instead, I get 'Not a conda environment.' Why is the name missing, and how can I activate it from the shell?", "output": "Name-based reference of Conda environments only works for environments located in one of the directories listed in the envs_dirs configuration option (see conda config --describe envs_dirs). By default this corresponds to the envs/ subdirectory in the Conda installation. If you create an env outside of one of these directories, then you cannot use a name to reference it. Instead, one must activate it by its path: Option 0: Activate by Path (Fix OP\u2019s Typo) conda activate /home/julianhatwell/anaconda3/envs/B3 Note that OP originally had a typo (anaconda2 should have been anaconda3). After pointing this out (see comments to question), the questioner instead requested an answer to: \u201cHow to convert a nameless environment to named one?\u201d Converting to Named Environment The following are possible ways to enable name-based activation. Option 1: Clone Into Directory One option to use conda activate B3 is to recreate your B3 env in the default directory. You can use the --clone flag to accomplish this. conda create --clone path/to/the/nameless_env -n named_env Option 2: Add Parent Directory Alternatively, you can add the parent directory of the environment in question to the envs_dirs configuration option. conda config --append envs_dirs /path/to/the/parent_dir Option 3: Symbolic Link Another possibility is to create a symbolic link in one of the envs_dirs folders to the environment folder. It seems to work, but it is not a common practice, so it may have downsides that are unreported."} +{"question_id": 3588361, "score": 84, "creation_date": 1282945824, "tags": ["python", "csv"], "instruction": "How do you replace all the occurrences of a certain character in a string?\n\nI am reading a csv into a: import csv import collections import pdb import math import urllib def do_work(): a=get_file('c:/pythonwork/cds/cds.csv') a=remove_chars(a) print a[0:10] def get_file(start_file): #opens original file, reads it to array with open(start_file,'rb') as f: data=list(csv.reader(f)) return (data) def remove_chars(a): badchars=['a','b','c','d'] for row in a: for letter in badchars: row[8].replace(letter,'') return a I would like to replace all occurrences of ['a','b','c','d'] in the 8th element of the line with an empty string. The remove_chars function is not working. 
Is there a better way to do this?", "output": "The problem is that you are not doing anything with the result of replace. In Python strings are immutable so anything that manipulates a string returns a new string instead of modifying the original string. line[8] = line[8].replace(letter, \"\")"} +{"question_id": 24036911, "score": 84, "creation_date": 1401883346, "tags": ["python", "pandas"], "instruction": "How to update values in a specific row in a Python Pandas DataFrame?\n\nWith the nice indexing methods in Pandas I have no problems extracting data in various ways. On the other hand I am still confused about how to change data in an existing DataFrame. In the following code I have two DataFrames and my goal is to update values in a specific row in the first df from values of the second df. How can I achieve this? import pandas as pd df = pd.DataFrame({'filename' : ['test0.dat', 'test2.dat'], 'm': [12, 13], 'n' : [None, None]}) df2 = pd.DataFrame({'filename' : 'test2.dat', 'n':16}, index=[0]) # this overwrites the first row but we want to update the second # df.update(df2) # this does not update anything df.loc[df.filename == 'test2.dat'].update(df2) print(df) gives filename m n 0 test0.dat 12 None 1 test2.dat 13 None [2 rows x 3 columns] but how can I achieve this: filename m n 0 test0.dat 12 None 1 test2.dat 13 16 [2 rows x 3 columns]", "output": "So first of all, pandas updates using the index. When an update command does not update anything, check both left-hand side and right-hand side. If you don't update the indices to follow your identification logic, you can do something along the lines of >>> df.loc[df.filename == 'test2.dat', 'n'] = df2[df2.filename == 'test2.dat'].loc[0]['n'] >>> df Out[331]: filename m n 0 test0.dat 12 None 1 test2.dat 13 16 If you want to do this for the whole table, I suggest a method I believe is superior to the previously mentioned ones: since your identifier is filename, set filename as your index, and then use update() as you wanted to. Both merge and the apply() approach contain unnecessary overhead: >>> df.set_index('filename', inplace=True) >>> df2.set_index('filename', inplace=True) >>> df.update(df2) >>> df Out[292]: m n filename test0.dat 12 None test2.dat 13 16"} +{"question_id": 30361824, "score": 84, "creation_date": 1432163086, "tags": ["python", "python-3.x", "python-asyncio"], "instruction": "Asynchronous exception handling in Python\n\nI've the following code using asyncio and aiohttp to make asynchronous HTTP requests. 
import sys import asyncio import aiohttp @asyncio.coroutine def get(url): try: print('GET %s' % url) resp = yield from aiohttp.request('GET', url) except Exception as e: raise Exception(\"%s has error '%s'\" % (url, e)) else: if resp.status >= 400: raise Exception(\"%s has error '%s: %s'\" % (url, resp.status, resp.reason)) return (yield from resp.text()) @asyncio.coroutine def fill_data(run): url = 'http://www.google.com/%s' % run['name'] run['data'] = yield from get(url) def get_runs(): runs = [ {'name': 'one'}, {'name': 'two'} ] loop = asyncio.get_event_loop() task = asyncio.wait([fill_data(r) for r in runs]) loop.run_until_complete(task) return runs try: get_runs() except Exception as e: print(repr(e)) sys.exit(1) For some reason, exceptions raised inside the get function are not caught: Future/Task exception was never retrieved Traceback (most recent call last): File \"site-packages/asyncio/tasks.py\", line 236, in _step result = coro.send(value) File \"mwe.py\", line 25, in fill_data run['data'] = yield from get(url) File \"mwe.py\", line 17, in get raise Exception(\"%s has error '%s: %s'\" % (url, resp.status, resp.reason)) Exception: http://www.google.com/two has error '404: Not Found' So, what is correct way to handle exceptions raised by coroutines?", "output": "asyncio.wait doesn't actually consume the Futures passed to it, it just waits for them to complete, and then returns the Future objects: coroutine asyncio.wait(futures, *, loop=None, timeout=None, return_when=ALL_COMPLETED) Wait for the Futures and coroutine objects given by the sequence futures to complete. Coroutines will be wrapped in Tasks. Returns two sets of Future: (done, pending). Until you actually yield from/await the items in the done list, they'll remain unconsumed. Since your program exits without consuming the futures, you see the \"exception was never retrieved\" messages. For your use-case, it probably makes more sense to use asyncio.gather, which will actually consume each Future, and then return a single Future that aggregates all their results (or raises the first Exception thrown by a future in the input list). def get_runs(): runs = [ {'name': 'one'}, {'name': 'two'} ] loop = asyncio.get_event_loop() tasks = asyncio.gather(*[fill_data(r) for r in runs]) loop.run_until_complete(tasks) return runs Output: GET http://www.google.com/two GET http://www.google.com/one Exception(\"http://www.google.com/one has error '404: Not Found'\",) Note that asyncio.gather actually lets you customize its behavior when one of the futures raises an exception; the default behavior is to raise the first exception it hits, but it can also just return each exception object in the output list: asyncio.gather(*coros_or_futures, loop=None, return_exceptions=False) Return a future aggregating results from the given coroutine objects or futures. All futures must share the same event loop. If all the tasks are done successfully, the returned future\u2019s result is the list of results (in the order of the original sequence, not necessarily the order of results arrival). 
If return_exceptions is True, exceptions in the tasks are treated the same as successful results, and gathered in the result list; otherwise, the first raised exception will be immediately propagated to the returned future."} +{"question_id": 37203970, "score": 84, "creation_date": 1463124923, "tags": ["python", "image", "opencv", "grayscale", "imread"], "instruction": "Opencv - Grayscale mode Vs gray color conversion\n\nI am working in opencv(2.4.11) python(2.7) and was playing around with gray images. I found an unusual behavior when loading image in gray scale mode and converting image from BGR to GRAY. Following is my experimental code: import cv2 path = 'some/path/to/color/image.jpg' # Load color image (BGR) and convert to gray img = cv2.imread(path) img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) # Load in grayscale mode img_gray_mode = cv2.imread(path, 0) # diff = img_gray_mode - img_gray diff = cv2.bitwise_xor(img_gray,img_gray_mode) cv2.imshow('diff', diff) cv2.waitKey() When I viewed the difference image, I can see left-out pixels instead of a jet black image. Can you suggest any reason? What is the correct way of working with gray images? P.S. When I use both the images in SIFT, keypoints are different, which may lead to a different outcome, especially when working with bad quality images.", "output": "To illustrate, I've opened up this same color JPEG image: once using the conversion img = cv2.imread(path) img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) and another by loading it in gray scale mode img_gray_mode = cv2.imread(path, cv2.IMREAD_GRAYSCALE) Like you've documented, the diff between the two images is not perfectly 0, I can see diff pixels towards the left and the bottom I've summed up the diff too to see import numpy as np np.sum(diff) # I got 6143, on a 494 x 750 image I tried all cv2.imread() modes Among all the IMREAD_ modes for cv2.imread(), only IMREAD_COLOR and IMREAD_ANYCOLOR can be converted using COLOR_BGR2GRAY, and both of them gave me the same diff against the image opened in IMREAD_GRAYSCALE The difference doesn't seem that big. My guess is it comes from the differences in the numeric calculations in the two methods (loading grayscale vs conversion to grayscale) Naturally what you want to avoid is fine tuning your code on a particular version of the image just to find out it was suboptimal for images coming from a different source. In brief, let's not mix the versions and types in the processing pipeline. So I'd keep the image sources homogeneous, e.g. if you are capturing the image from a video camera in BGR, then I'd use BGR as the source, and do the BGR to grayscale conversion cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) Vice versa, if my ultimate source is grayscale, then I'd open the files and the video capture in gray scale cv2.imread(path, cv2.IMREAD_GRAYSCALE)"} +{"question_id": 71603314, "score": 83, "creation_date": 1648128152, "tags": ["python", "ssl", "openssl", "runtime-error"], "instruction": "SSL error unsafe legacy renegotiation disabled\n\nI am running a Python code where I have to get some data from HTTPSConnectionPool(host='ssd.jpl.nasa.gov', port=443). But each time I try to run the code I get the following error. 
I am on MAC OS 12.1 raise SSLError(e, request=request) requests.exceptions.SSLError: HTTPSConnectionPool(host='ssd.jpl.nasa.gov', port=443): Max retries exceeded with url: /api/horizons.api?format=text&EPHEM_TYPE=OBSERVER&QUANTITIES_[...]_ (Caused by SSLError(SSLError(1, '[SSL: UNSAFE_LEGACY_RENEGOTIATION_DISABLED] unsafe legacy renegotiation disabled (_ssl.c:997)'))) I really don't know how to bypass this issue.", "output": "This error comes up when using OpenSSL 3 to connect to a server which does not support it. The solution is to downgrade the cryptography package in python: run pip install cryptography==36.0.2 in the used enviroment. source: https://github.com/scrapy/scrapy/issues/5491 EDIT: Refer to Harry Mallon and ahmkara's answer for a fix without downgrading cryptography"} +{"question_id": 46288847, "score": 83, "creation_date": 1505772375, "tags": ["python", "pip"], "instruction": "How to suppress pip upgrade warning?\n\nMy pip version was off -- every pip command was saying: You are using pip version 6.0.8, however version 8.1.1 is available. You should consider upgrading via the 'pip install --upgrade pip' command. And I didn't like the answers given here: How can I get rid of this warning to upgrade from pip? because they all want to get pip out of sync with the RH version. So I tried a clean system install with this VagrantFile: Vagrant.configure(\"2\") do |config| config.ssh.username = 'root' config.ssh.password = 'vagrant' config.ssh.insert_key = 'true' config.vm.box = \"bento/centos-7.3\" config.vm.provider \"virtualbox\" do |vb| vb.cpus = \"4\" vb.memory = \"2048\" end config.vm.synced_folder \"..\", \"/vagrant\" config.vm.network \"public_network\", bridge: \"eth0\", ip: \"192.168.1.31\" config.vm.provision \"shell\", inline: <<-SHELL set -x # Install pip yum install -y epel-release yum install -y python-pip pip freeze # See if pip prints version warning on fresh OS install. SHELL end But then I got: ==> default: ++ pip freeze ==> default: You are using pip version 8.1.2, however version 9.0.1 is available. ==> default: You should consider upgrading via the 'pip install --upgrade pip' command. So it seems that I'm using the wrong commands to install pip. What are correct commands to use?", "output": "There are many options (2021 update)... Use a command line flag pip --disable-pip-version-check [options] Configure pip from the command line pip config set global.disable-pip-version-check true Set an environment variable export PIP_DISABLE_PIP_VERSION_CHECK=1 Use a config file Create a pip configuration file and set disable-pip-version-check to true [global] disable-pip-version-check = True On many linux the default location for the pip configuration file is $HOME/.config/pip/pip.conf. Locations for Windows, macOS, and virtualenvs are too various to detail here. Refer to the docs: https://pip.pypa.io/en/stable/user_guide/#config-file"} +{"question_id": 59286983, "score": 83, "creation_date": 1576071215, "tags": ["python", "python-packaging", "python-poetry"], "instruction": "How to run a script using pyproject.toml settings and poetry?\n\nI am using poetry to create .whl files. I have an FTP sever running on a remote host. I wrote a Python script (log_revision.py) which save in a database the git commit, few more parameters and in the end sends the .whl (that poetry created) to the remote server (each .whl in a different path in the server, the path is saved in the DB). At the moment I run the script manually after each time I run the poetry build command. 
I know the pyproject.toml has the [tool.poetry.scripts] but I don't know how I can use it to run a Python script. I tried: [tool.poetry.scripts] my-script = \"my_package_name:log_revision.py and then poetry run my-script but I always get an error: AttributeError: module 'my_package_namen' has no attribute 'log_revision' How can I run this command? As a short term option (without git and params) I tried to use poetry publish -r http://192.168.1.xxx/home/whl -u hello -p world but I get the following error: [RuntimeError] Repository http://192.168.1.xxx/home/whl is not defined What am I doing wrong and how can I fix it?", "output": "At the moment the [tool.poetry.scripts] sections is equivalent to setuptools console_scripts. So the argument must be a valid module and method name. Let's imagine within your package my_package, you have log_revision.py, which has a method start(). Then you have to write: [tool.poetry.scripts] my-script = \"my_package.log_revision:start\" Here's a complete example: You should have this folder structure: my_package \u251c\u2500\u2500 my_package \u2502 \u251c\u2500\u2500 __init__.py \u2502 \u2514\u2500\u2500 log_revision.py \u2514\u2500\u2500 pyproject.toml The complete content of a functional pyproject.toml is: [tool.poetry] name = \"my_package\" version = \"0.1.0\" description = \"\" authors = [\"Your Name \"] [tool.poetry.dependencies] python = \"^3.8\" [tool.poetry.scripts] my-script = \"my_package.log_revision:start\" [build-system] requires = [\"poetry_core>=1.0.0\"] build-backend = \"poetry.core.masonry.api\" and in my_package/log_revision.py: def start(): print(\"Hello\") After you have run poetry install once you should be able to do this: $ poetry run my-script Hello You cannot pass something to the start() method directly. Instead you can use command line arguments and parse them, e.g. with python's argparse."} +{"question_id": 19821425, "score": 83, "creation_date": 1383767413, "tags": ["python", "numpy", "scipy", "nearest-neighbor"], "instruction": "How to filter numpy array by list of indices?\n\nI have a numpy array, filtered__rows, comprised of LAS data [x, y, z, intensity, classification]. I have created a cKDTree of points and have found nearest neighbors, query_ball_point, which is a list of indices for the point and its neighbors. Is there a way to filter filtered__rows to create an array of only points whose index is in the list returned by query_ball_point?", "output": "It looks like you just need a basic integer array indexing: filter_indices = [1,3,5] np.array([11,13,155,22,0xff,32,56,88])[filter_indices]"} +{"question_id": 3382352, "score": 83, "creation_date": 1280672699, "tags": ["python", "numpy"], "instruction": "Equivalent of Numpy.argsort() in basic python?\n\nis there a builtin function of Python that does on python.array what argsort() does on a numpy.array?", "output": "I timed the suggestions above and here are my results. 
import timeit import random import numpy as np def f(seq): # http://stackoverflow.com/questions/3382352/equivalent-of-numpy-argsort-in-basic-python/3383106#3383106 #non-lambda version by Tony Veijalainen return [i for (v, i) in sorted((v, i) for (i, v) in enumerate(seq))] def g(seq): # http://stackoverflow.com/questions/3382352/equivalent-of-numpy-argsort-in-basic-python/3383106#3383106 #lambda version by Tony Veijalainen return [x for x,y in sorted(enumerate(seq), key = lambda x: x[1])] def h(seq): #http://stackoverflow.com/questions/3382352/equivalent-of-numpy-argsort-in-basic-python/3382369#3382369 #by unutbu return sorted(range(len(seq)), key=seq.__getitem__) seq = list(range(10000)) random.shuffle(seq) n_trials = 100 for cmd in [ 'f(seq)', 'g(seq)', 'h(seq)', 'np.argsort(seq)', 'np.argsort(seq).tolist()' ]: t = timeit.Timer(cmd, globals={**globals(), **locals()}) print('time for {:d}x {:}: {:.6f}'.format(n_trials, cmd, t.timeit(n_trials))) output time for 100x f(seq): 0.323915 time for 100x g(seq): 0.235183 time for 100x h(seq): 0.132787 time for 100x np.argsort(seq): 0.091086 time for 100x np.argsort(seq).tolist(): 0.104226 A problem size dependent analysis is given here."} +{"question_id": 15738700, "score": 83, "creation_date": 1364797765, "tags": ["python"], "instruction": "A quick way to return list without a specific element in Python\n\nIf I have a list of card suits in arbitrary order like so: suits = [\"h\", \"c\", \"d\", \"s\"] and I want to return a list without the 'c' noclubs = [\"h\", \"d\", \"s\"] is there a simple way to do this?", "output": ">>> suits = [\"h\", \"c\", \"d\", \"s\"] >>> noclubs = list(suits) >>> noclubs.remove(\"c\") >>> noclubs ['h', 'd', 's'] If you don't need a seperate noclubs >>> suits = [\"h\", \"c\", \"d\", \"s\"] >>> suits.remove(\"c\")"} +{"question_id": 49028611, "score": 83, "creation_date": 1519817646, "tags": ["python", "python-3.x", "pytest"], "instruction": "pytest cannot find module\n\nI am following the pytest good practices or at least I think I am. However, pytest cannot find my module. It seems not to include the current directory in its PYTHONPATH. The source file: def add(x, y): return x + y The test file: import pytest from junk.ook import add def test_add_true(): assert add(1, 1) == 2 And the shell output with a Python 3 virtual environment called \"p3\". p3; pwd /home/usr/tmp/junk p3; ls total 0 0 junk/ 0 tests/ p3; ls junk total 4.0K 4.0K ook.py 0 __init__.py p3; ls tests total 4.0K 4.0K test_ook.py 0 __pycache__/ p3; pytest ============================= test session starts ============================== platform linux -- Python 3.4.5, pytest-3.4.1, py-1.5.2, pluggy-0.6.0 rootdir: /home/usr/tmp/junk, inifile: collected 0 items / 1 errors ==================================== ERRORS ==================================== ______________________ ERROR collecting tests/test_ook.py ______________________ ImportError while importing test module '/home/usr/tmp/junk/tests/test_ook.py'. Hint: make sure your test modules/packages have valid Python names. Traceback: tests/test_ook.py:2: in from junk.ook import add E ImportError: No module named 'junk' !!!!!!!!!!!!!!!!!!! Interrupted: 1 errors during collection !!!!!!!!!!!!!!!!!!!! =========================== 1 error in 0.08 seconds ============================ def test_add_true(): assert add(1, 1) == 2 However, running the following does work fine. 
p3; python -m pytest tests/ ============================= test session starts ============================== platform linux -- Python 3.4.5, pytest-3.4.1, py-1.5.2, pluggy-0.6.0 rootdir: /home/usr/tmp/junk, inifile: collected 1 item tests/test_ook.py . [100%] =========================== 1 passed in 0.02 seconds =========================== What am I doing wrong?", "output": "Update for pytest 7 and newer: use the pythonpath setting Recently, pytest has added a new core plugin that supports sys.path modifications via the pythonpath configuration value. The solution is thus much simpler now and doesn't require any workarounds anymore: pyproject.toml example: [tool.pytest.ini_options] pythonpath = [ \".\" ] pytest.ini example: [pytest] pythonpath = . The path entries are calculated relative to the rootdir, thus . adds junk directory to sys.path in this case. Multiple path entries are also allowed: for a layout junk/ \u251c\u2500\u2500 src/ | \u2514\u2500\u2500 lib.py \u251c\u2500\u2500 junk/ \u2502 \u251c\u2500\u2500 __init__.py \u2502 \u2514\u2500\u2500 ook.py \u2514\u2500\u2500 tests \u251c\u2500\u2500 test_app.py \u2514\u2500\u2500 test_lib.py the configuration [tool.pytest.ini_options] pythonpath = [ \".\", \"src\", ] or [pytest] pythonpath = . src will add both lib module and junk package to sys.path, so import junk import lib will both work. Original answer Just put an empty conftest.py file in the project root directory: $ pwd /home/usr/tmp/junk $ touch conftest.py Your project structure should become: junk \u251c\u2500\u2500 conftest.py \u251c\u2500\u2500 junk \u2502 \u251c\u2500\u2500 __init__.py \u2502 \u2514\u2500\u2500 ook.py \u2514\u2500\u2500 tests \u2514\u2500\u2500 test_ook.py What happens here: when pytest discovers a conftest.py, it modifies sys.path so it can import stuff from the conftest module. So, since now an empty conftest.py is found in rootdir, pytest will be forced to append it to sys.path. The side effect of this is that your junk module becomes importable."} +{"question_id": 39400115, "score": 83, "creation_date": 1473368393, "tags": ["python", "pandas", "dataframe", "datetime", "group-by"], "instruction": "Python Pandas Group by date using datetime data\n\nI have a datetime column Date_Time that I wish to groupby without creating a new column. Is this possible? I tried the following and it does not work. df = pd.groupby(df,by=[df['Date_Time'].date()])", "output": "resample df.resample('D', on='Date_Time').mean() B Date_Time 2001-10-01 4.5 2001-10-02 6.0 Grouper As suggested by @JosephCottam df.set_index('Date_Time').groupby(pd.Grouper(freq='D')).mean() B Date_Time 2001-10-01 4.5 2001-10-02 6.0 Deprecated uses of TimeGrouper You can set the index to be 'Date_Time' and use pd.TimeGrouper df.set_index('Date_Time').groupby(pd.TimeGrouper('D')).mean().dropna() B Date_Time 2001-10-01 4.5 2001-10-02 6.0"} +{"question_id": 46623583, "score": 83, "creation_date": 1507400354, "tags": ["python", "pandas", "seaborn", "bar-chart", "countplot"], "instruction": "Order categories by count in a countplot\n\nI know that seaborn.countplot has the attribute order which can be set to determine the order of the categories. But what I would like to do is have the categories be in order of descending count. I know that I can accomplish this by computing the count manually (using a groupby operation on the original dataframe, etc.) 
but I am wondering if this functionality exists with seaborn.countplot.", "output": "This functionality is not built into seaborn.countplot as far as I know - the order parameter only accepts a list of strings for the categories, and leaves the ordering logic to the user. This is not hard to do with value_counts() provided you have a DataFrame though. For example, import pandas as pd import seaborn as sns import matplotlib.pyplot as plt sns.set(style='darkgrid') titanic = sns.load_dataset('titanic') sns.countplot(x = 'class', data = titanic, order = titanic['class'].value_counts().index) plt.show()"} +{"question_id": 59156895, "score": 83, "creation_date": 1575375232, "tags": ["python", "importerror"], "instruction": "cannot import name 'mydb' from partially initialized module 'connection' in Python\n\nPython 3.8 error ImportError: cannot import name 'mydb' from partially initialized module 'connection' (most likely due to a circular import) (C:\\U sers\\Mark04\\Documents\\Python tutorial\\databasing\\connection.py) When I tried to execute child module select.py import bcrypt; from connection import mydb That has an imported module connection.py import mysql.connector mydb = \"Success\"; I don't know what is the problem. The error doesn't appear when I remove import mysql.connector from my module connection.py, but it does not solve my problem. > python -m select", "output": "To answer the above question, we need to understand the problem of circular dependency. To understand the circular dependency, I want to layout a simple example, in front of you. I think every app needs to have the few basic blocks as follows: +----------------+-------------------------------------------------------------------------------------------+ | Filename | Description | +----------------+-------------------------------------------------------------------------------------------+ | app.py | Creates the app and starts the server. | | models.py | Define what the entity will look like (e.g, UserModel has username, email, password etc.) | | controllers.py | Fetches Data from database, generates HTML and sends the response to the user browser. | +----------------+-------------------------------------------------------------------------------------------+ Our simple example will also have three files project/ - app.py ( Creates and starts the server) - models.py ( Class to model a user) - controllers.py ( We will fetch data from database, and return html to user.) The contents of the app.py file will look as follows: # ============= # app.py # ============= # Define the application app = Flask() # Define the Database db = SQLAlchemy(app) # Register the Controller from .controllers import auth_controller app.register_blueprint(auth_controller) The contents of the models.py file will look as follows: # ============= # models.py # ============= from .app import db # We will not focus on implementation class User(db.Model): pass The contents of the controllers.py file will look as follows: # ============= # controllers.py # ============= from flask import Blueprint from .models import User # Create the auth app auth = Blueprint('auth', __name__) # Define the Rotues @auth.route('/login') def login(): return \"I will fetch some data and allow the user to login\" I think now, I have laid out a diagram of our app, now let's proceed to understanding how the app will work. The app starts from app.py app variable inside the app.py file gets created in memory. db variable inside the app.py gets created in memory. 
Now, to import auth from the controllers.py file, we switch to the controllers.py file. We import Blueprint from flask. To import User, we switch to the models.py file. Now, inside the models.py file, we import db (we are able to import it because it was created in step 3). And the program continues, so on and so on.... The most important import step in the above sequence is step 7, because it will cause the problem of circular dependency in our app, in just a moment. Now we will try to change the app.py file to introduce the problem of circular dependency. Now, as a developer, we might think that all our imports should be at the top of the file; doesn't it make your code cleaner? Yes, of course it does make the code cleaner. # ============================ # Refactored app.py file # ============================ from .controllers import auth_controller # ...... # Rest of the file is the same, we only shifted this import to the top Now, we have a problem of circular dependency in our app. Let me show you how: Our app starts from the app.py file. First, we need to import auth_controller from the controllers.py file. Let's visit the controllers.py file, and process it. From flask, we import Blueprint. Let's switch to the models.py file to import User. Inside the models.py file, we import db from app (but db does not exist yet). Now, I think you got the point. The problem we just saw was an example of circular dependency. The same problem is causing the ImportError in your case. The solution would be to inspect the import statements and put them in the right place. Sometimes we use a code formatter, and it moves all the import statements to the top. This might be causing the problem for you. I hope this answers your question!"} +{"question_id": 6787233, "score": 83, "creation_date": 1311322200, "tags": ["python"], "instruction": "Python, how to read bytes from file and save it?\n\nI want to read bytes from a file and then write those bytes to another file, and save that file. How do I do this?", "output": "Here's how to do it with the basic file operations in Python. This opens one file, reads the data into memory, then opens the second file and writes it out. in_file = open(\"in-file\", \"rb\") # opening for [r]eading as [b]inary data = in_file.read() # if you only wanted to read 512 bytes, do .read(512) in_file.close() out_file = open(\"out-file\", \"wb\") # open for [w]riting as [b]inary out_file.write(data) out_file.close() We can do this more succinctly by using the with keyword to handle closing the file. with open(\"in-file\", \"rb\") as in_file, open(\"out-file\", \"wb\") as out_file: out_file.write(in_file.read()) If you don't want to store the entire file in memory, you can transfer it in pieces. chunk_size = 4096 # 4 KiB with open(\"in-file\", \"rb\") as in_file, open(\"out-file\", \"wb\") as out_file: while True: chunk = in_file.read(chunk_size) if chunk == b\"\": break # end of file out_file.write(chunk)"} +{"question_id": 2333872, "score": 83, "creation_date": 1267100501, "tags": ["python", "file-io", "atomic"], "instruction": "How to make file creation an atomic operation?\n\nI am using Python to write chunks of text to files in a single operation: open(file, 'w').write(text) If the script is interrupted so a file write does not complete I want to have no file rather than a partially complete file. 
Can this be done?", "output": "Write data to a temporary file and when data has been successfully written, rename the file to the correct destination file e.g with open(tmpFile, 'w') as f: f.write(text) # make sure that all data is on disk # see http://stackoverflow.com/questions/7433057/is-rename-without-fsync-safe f.flush() os.fsync(f.fileno()) os.replace(tmpFile, myFile) # os.rename pre-3.3, but os.rename won't work on Windows According to doc http://docs.python.org/library/os.html#os.replace Rename the file or directory src to dst. If dst is a non-empty directory, OSError will be raised. If dst exists and is a file, it will be replaced silently if the user has permission. The operation may fail if src and dst are on different filesystems. If successful, the renaming will be an atomic operation (this is a POSIX requirement). Note: It may not be atomic operation if src and dest locations are not on same filesystem os.fsync step may be skipped if performance/responsiveness is more important than the data integrity in cases like power failure, system crash etc"} +{"question_id": 5079609, "score": 83, "creation_date": 1298386116, "tags": ["python", "overloading"], "instruction": "Methods with the same name in one class in Python\n\nHow can I declare a few methods with the same name, but with different numbers of parameters or different types in one class? What must I change in the following class? class MyClass: \"\"\"\"\"\" #---------------------------------------------------------------------- def __init__(self): \"\"\"Constructor\"\"\" def my_method(self,parameter_A_that_Must_Be_String): print parameter_A_that_Must_Be_String def my_method(self,parameter_A_that_Must_Be_String,parameter_B_that_Must_Be_String): print parameter_A_that_Must_Be_String print parameter_B_that_Must_Be_String def my_method(self,parameter_A_that_Must_Be_String,parameter_A_that_Must_Be_Int): print parameter_A_that_Must_Be_String * parameter_A_that_Must_Be_Int", "output": "You can have a function that takes in a variable number of arguments. def my_method(*args, **kwds): # Do something # When you call the method my_method(a1, a2, k1=a3, k2=a4) # You get: args = (a1, a2) kwds = {'k1':a3, 'k2':a4} So you can modify your function as follows: def my_method(*args): if len(args) == 1 and isinstance(args[0], str): # Case 1 elif len(args) == 2 and isinstance(args[1], int): # Case 2 elif len(args) == 2 and isinstance(args[1], str): # Case 3"} +{"question_id": 52016336, "score": 83, "creation_date": 1535192803, "tags": ["python", "python-3.x", "package", "pypi", "twine"], "instruction": "How to upload new versions of project to PyPI with twine?\n\nI've uploaded my Python package to PyPI. But now I made new version of my package and need to upload it. I tried to make same progress which I did when upload the package first time but I got the error below: HTTPError: 400 Client Error: File already exists. See https://pypi.org/help/#file-name-reuse for url: https://upload.pypi.org/legacy Now, how to upload new version without any error?!", "output": "PyPI does not allow for the reuse of distribution filenames (project name + version number + distribution type). This ensures that a given distribution for a given release for a given project will always resolve to the same file, and cannot be surreptitiously changed one day by the projects maintainer or a malicious party (it can only be removed). You will need to change the version number to one that you haven't previously uploaded to PyPI. 
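For example, assuming a setuptools-based project whose version is declared in setup.py (the name and version below are placeholders, not values taken from your project), the fix is simply to bump the version and rebuild before uploading again:

# setup.py
from setuptools import setup, find_packages

setup(
    name='my_package',        # placeholder project name
    version='0.1.1',          # bump to a version that is not already on PyPI
    packages=find_packages(),
)

After bumping, rebuild the distributions (e.g. python -m build or python setup.py sdist bdist_wheel) so that the files in dist/ carry the new version number.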
You didn't mention how you're uploading the distribution, but if you're using twine, it's also possible you're attempting to re-upload a previously uploaded distribution. To resolve this, you can do: $ twine upload --skip-existing dist/*"} +{"question_id": 32938475, "score": 83, "creation_date": 1443993276, "tags": ["python", "flask", "sqlalchemy", "flask-sqlalchemy"], "instruction": "Flask-SQLAlchemy check if row exists in table\n\nI have a Flask application which uses Flask-SQLAlchemy to connect to a MySQL database. I would like to be able to check whether a row is present in a table. How would I modify a query like so to check the row exists: db.session.query(User).filter_by(name='John Smith') I found a solution on this question which uses SQLAlchemy but does not seem to fit with the way Flask-SQLAlchemy works: from sqlalchemy.sql import exists print session.query(exists().where(User.email == '...')).scalar() Thanks.", "output": "Since you only want to see if the user exists, you don't want to query the entire object. Only query the id, it exists if the scalar return is not None. exists = db.session.query(User.id).filter_by(name='davidism').first() is not None SELECT user.id AS user_id FROM user WHERE user.name = ? If you know name (or whatever field you're querying) is unique, you can use scalar instead of first. The second query you showed also works fine, Flask-SQLAlchemy does nothing to prevent any type of query that SQLAlchemy can make. This returns False or True instead of None or an id like above, but it is slightly more expensive because it uses a subquery. exists = db.session.query(db.exists().where(User.name == 'davidism')).scalar() SELECT EXISTS (SELECT * FROM user WHERE user.name = ?) AS anon_1"} +{"question_id": 8305518, "score": 83, "creation_date": 1322537807, "tags": ["python", "dictionary", "key"], "instruction": "switching keys and values in a dictionary in python\n\nSay I have a dictionary like so: my_dict = {2:3, 5:6, 8:9} Is there a way that I can switch the keys and values to get: {3:2, 6:5, 9:8}", "output": "For Python 3: my_dict2 = {y: x for x, y in my_dict.items()} For Python 2, you can use my_dict2 = dict((y, x) for x, y in my_dict.iteritems())"} +{"question_id": 7006238, "score": 83, "creation_date": 1312953670, "tags": ["python", "windows", "console", "subprocess"], "instruction": "How do I hide the console when I use os.system() or subprocess.call()?\n\nI wrote some statements like below: os.system(cmd) #do something subprocess.call('taskkill /F /IM exename.exe') both will pop up a console. How can I stop it from popping up the console?", "output": "The process STARTUPINFO can hide the console window: si = subprocess.STARTUPINFO() si.dwFlags |= subprocess.STARTF_USESHOWWINDOW #si.wShowWindow = subprocess.SW_HIDE # default subprocess.call('taskkill /F /IM exename.exe', startupinfo=si) Or set the creation flags to disable creating the window: CREATE_NO_WINDOW = 0x08000000 subprocess.call('taskkill /F /IM exename.exe', creationflags=CREATE_NO_WINDOW) The above is still a console process with valid handles for console I/O (verified by calling GetFileType on the handles returned by GetStdHandle). It just has no window and doesn't inherit the parent's console, if any. You can go a step farther by forcing the child to have no console at all: DETACHED_PROCESS = 0x00000008 subprocess.call('taskkill /F /IM exename.exe', creationflags=DETACHED_PROCESS) In this case the child's standard handles (i.e. 
GetStdHandle) are 0, but you can set them to an open disk file or pipe such as subprocess.DEVNULL (3.3) or subprocess.PIPE."} +{"question_id": 6313308, "score": 83, "creation_date": 1307752733, "tags": ["python", "matrix", "diagonal"], "instruction": "Get all the diagonals in a matrix/list of lists in Python\n\nI'm looking for a Pythonic way to get all the diagonals of a (square) matrix, represented as a list of lists. Suppose I have the following matrix: matrix = [[-2, 5, 3, 2], [ 9, -6, 5, 1], [ 3, 2, 7, 3], [-1, 8, -4, 8]] Then the large diagonals are easy: l = len(matrix[0]) print([matrix[i][i] for i in range(l)]) # [-2, -6, 7, 8] print([matrix[l-1-i][i] for i in range(l-1,-1,-1)]) # [ 2, 5, 2, -1] But I have trouble coming up with a way to generate all the diagonals. The output I'm looking for is: [[-2], [9, 5], [3,-6, 3], [-1, 2, 5, 2], [8, 7, 1], [-4, 3], [8], [2], [3,1], [5, 5, 3], [-2, -6, 7, 8], [9, 2, -4], [3, 8], [-1]]", "output": "I came across another interesting solution to this issue. The row, column, forward, and backward diagonal can all be immediately discovered by looking at a combination of x and y. Column = x Row = y F-Diag = x+y B-Diag = x-y B-Diag` = x-y-MIN | 0 1 2 | 0 1 2 | 0 1 2 | 0 1 2 | 0 1 2 --|--------- --|--------- --|--------- --|--------- --|--------- 0 | 0 1 2 0 | 0 0 0 0 | 0 1 2 0 | 0 1 2 0 | 2 3 4 1 | 0 1 2 1 | 1 1 1 1 | 1 2 3 1 |-1 0 1 1 | 1 2 3 2 | 0 1 2 2 | 2 2 2 2 | 2 3 4 2 |-2 -1 0 2 | 0 1 2 From the diagram you can see that each diagonal and axis is uniquely identifiable using these equations. Take each unique number from each table and create a container for that identifier. Note that the backward diagonals have been offset to start at a zero index, and that the length of forward diagonals is always equal to the length of backward diagonals. test = [[1,2,3],[4,5,6],[7,8,9],[10,11,12]] max_col = len(test[0]) max_row = len(test) cols = [[] for _ in range(max_col)] rows = [[] for _ in range(max_row)] fdiag = [[] for _ in range(max_row + max_col - 1)] bdiag = [[] for _ in range(len(fdiag))] min_bdiag = -max_row + 1 for x in range(max_col): for y in range(max_row): cols[x].append(test[y][x]) rows[y].append(test[y][x]) fdiag[x+y].append(test[y][x]) bdiag[x-y-min_bdiag].append(test[y][x]) print(cols) print(rows) print(fdiag) print(bdiag) Which will print [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]] [[1, 4, 7, 10], [2, 5, 8, 11], [3, 6, 9, 12]] [[1], [2, 4], [3, 5, 7], [6, 8, 10], [9, 11], [12]] [[10], [7, 11], [4, 8, 12], [1, 5, 9], [2, 6], [3]] Using a defaultdict and a lambda, this can be generalized further: from collections import defaultdict def groups(data, func): grouping = defaultdict(list) for y in range(len(data)): for x in range(len(data[y])): grouping[func(x, y)].append(data[y][x]) return list(map(grouping.get, sorted(grouping))) test = [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]] cols = groups(test, lambda x, y: x) rows = groups(test, lambda x, y: y) fdiag = groups(test, lambda x, y: x + y) bdiag = groups(test, lambda x, y: x - y)"} +{"question_id": 19957194, "score": 83, "creation_date": 1384355192, "tags": ["python", "python-2.7", "beautifulsoup", "pip"], "instruction": "Install Beautiful Soup using pip\n\nI am trying to install Beautiful Soup using pip in Python 2.7. I keep getting an error message and can't understand why. I followed the instructions to install pip, which was installed to the following directory: c:\\Python27\\Scripts\\pip.exe. Then I tried adding it to the path, and running the pip install package command. 
I tried it two different ways: import sys sys.path.append('C:\\\\Python27\\\\Scripts\\\\pip.exe') pip install beautifulsoup4 import sys sys.path.append('C:\\\\Python27\\\\Scripts') pip install beautifulsoup4 Both give me this error message: >>> pip install beautifulsoup4 SyntaxError: invalid syntax The shell is highlighting the word \"install\" and saying that it's invalid syntax. What's going on?", "output": "pip is a command line tool, not Python syntax. In other words, run the command in your console, not in the Python interpreter: pip install beautifulsoup4 You may have to use the full path: C:\\Python27\\Scripts\\pip install beautifulsoup4 or even C:\\Python27\\Scripts\\pip.exe install beautifulsoup4 Windows will then execute the pip program and that will use Python to install the package. Another option is to use the Python -m command-line switch to run the pip module, which then operates exactly like the pip command: python -m pip install beautifulsoup4 or python.exe -m pip install beautifulsoup4"} +{"question_id": 22104256, "score": 83, "creation_date": 1393615980, "tags": ["python", "numpy", "matplotlib"], "instruction": "Does matplotlib have a function for drawing diagonal lines in axis coordinates?\n\nMatplotlib Axes have the functions axhline and axvline for drawing horizontal or vertical lines at a given y or x coordinate (respectively) independently of the data scale on an Axes. Is there a similar function for plotting a constant diagonal? For example, if I have a scatterplot of variables with a similar domain, it is often useful to know whether they fall above or below the line of y = x: mean, cov = [0, 0], [(1, .6), (.6, 1)] x, y = np.random.multivariate_normal(mean, cov, 100).T y += x + 1 f, ax = plt.subplots(figsize=(6, 6)) ax.scatter(x, y, c=\".3\") ax.plot([-3, 3], [-3, 3], ls=\"--\", c=\".3\") ax.set(xlim=(-3, 3), ylim=(-3, 3)) This can of course be done programmatically by grabbing the axis limits, (ax.get_xlim(), etc.), but that a) takes a few extra steps and b) is brittle in cases where more data might end up on the plot and shift the limits. (Actually in some cases just adding the constant line itself stretches the axes). It would be preferable to just do, e.g., ax.axdline(ls=\"--\", c=\".3\"), but it's not clear if something like this exists in the matplotlib codebase. All you would need to do would be modify the axhline code to plot from [0, 1] in axes coordinates for both x and y, I think.", "output": "Starting from matplotlib 3.3.0, it will: https://matplotlib.org/3.3.0/api/_as_gen/matplotlib.axes.Axes.axline.html Axes.axline(self, xy1, xy2=None, *, slope=None, **kwargs) Add an infinitely long straight line. The line can be defined either by two points xy1 and xy2, or by one point xy1 and a slope. This draws a straight line \"on the screen\", regardless of the x and y scales, and is thus also suitable for drawing exponential decays in semilog plots, power laws in loglog plots, etc. However, slope should only be used with linear scales; It has no clear meaning for all other scales, and thus the behavior is undefined. Please specify the line using the points xy1, xy2 for non-linear scales."} +{"question_id": 20905350, "score": 83, "creation_date": 1388757642, "tags": ["python", "pip", "setuptools", "python-wheel"], "instruction": "Latest 'pip' fails with \"requires setuptools >= 0.8 for dist-info\"\n\nUsing the recent (1.5) version of pip, I get an error when attempting to update several packages. 
For example, sudo pip install -U pytz results in failure with: Wheel installs require setuptools >= 0.8 for dist-info support. pip's wheel support requires setuptools >= 0.8 for dist-info support. I don't understand this message (I have setuptools 2.1) or what to do about it. Exception information from the log for this error: Exception information: Traceback (most recent call last): File \"/Library/Python/2.7/site-packages/pip/basecommand.py\", line 122, in main status = self.run(options, args) File \"/Library/Python/2.7/site-packages/pip/commands/install.py\", line 230, in run finder = self._build_package_finder(options, index_urls, session) File \"/Library/Python/2.7/site-packages/pip/commands/install.py\", line 185, in _build_package_finder session=session, File \"/Library/Python/2.7/site-packages/pip/index.py\", line 50, in __init__ self.use_wheel = use_wheel File \"/Library/Python/2.7/site-packages/pip/index.py\", line 89, in use_wheel raise InstallationError(\"pip's wheel support requires setuptools >= 0.8 for dist-info support.\") InstallationError: pip's wheel support requires setuptools >= 0.8 for dist-info support.", "output": "This worked for me: sudo pip install setuptools --no-use-wheel --upgrade Note it's usage of sudo UPDATE On Windows you just need to execute pip install setuptools --no-use-wheel --upgrade as an administrator. In Unix/Linux, the sudo command is for elevating permissions. UPDATE 2 This appears to have been fixed in 1.5.1."} +{"question_id": 21222506, "score": 83, "creation_date": 1390164759, "tags": ["python", "dictionary"], "instruction": "Multiple assignments into a python dictionary\n\nIs it possible to assign values to more than one keys of a dictionary in a more concise way than the one below? I mean, let d be a dictionary initialized as below: d={'a':1,'b':2,'c':3} To assign values to multiple keys I need to do this: d['a']=10 d['b']=200 d['c']=30 Can I achieve same with something like this: d['a','b','c']=10,200,30 Thanks.", "output": "You can use dict.update: d.update({'a': 10, 'b': 200, 'c': 30}) This will overwrite the values for existing keys and add new key-value-pairs for keys that do not already exist."} +{"question_id": 7162366, "score": 83, "creation_date": 1314108622, "tags": ["python"], "instruction": "Get location of the .py source file\n\nSay I have a python file in directory e like this: /a/b/c/d/e/file.py Under directory e I have a few folders I want to access but if file.py is executed from anywhere else rather than from folder e the relative path won't work for me. Also folder e could be located anywhere but always with the a set of sub folders so absolute path will not work. First, is there any function to get the absolute path in relation to the source files location? If not, any ideas how to sort this out? Grab the command line used and add the CWD together? My problem here is that this folder are being installed on 20 different machines and OS's and I want it to be as dynamic as possible with little configurations and \"rules\" where it has to be installed etc.", "output": "In Python +3.4 use of pathlib is more handy: from pathlib import Path source_path = Path(__file__).resolve() source_dir = source_path.parent"} +{"question_id": 39815771, "score": 83, "creation_date": 1475401883, "tags": ["python", "python-3.x", "asynchronous", "celery", "python-asyncio"], "instruction": "How to combine Celery with asyncio?\n\nHow can I create a wrapper that makes celery tasks look like asyncio.Task? 
Or is there a better way to integrate Celery with asyncio? @asksol, the creator of Celery, said this:: It's quite common to use Celery as a distributed layer on top of async I/O frameworks (top tip: routing CPU-bound tasks to a prefork worker means they will not block your event loop). But I could not find any code examples specifically for asyncio framework.", "output": "EDIT: 01/12/2021 previous answer (find it at the bottom) didn't age well therefore I added a combination of possible solutions that may satisfy those who still look on how to co-use asyncio and Celery Lets quickly break up the use cases first (more in-depth analysis here: asyncio and coroutines vs task queues): If the task is I/O bound then it tends to be better to use coroutines and asyncio. If the task is CPU bound then it tends to be better to use Celery or other similar task management systems. So it makes sense in the context of Python's \"Do one thing and do it well\" to not try and mix asyncio and celery together. BUT what happens in cases where we want to be able to run a method both asynchronously and as an async task? then we have some options to consider: The best example that I was able to find is the following: https://johnfraney.ca/posts/2018/12/20/writing-unit-tests-celery-tasks-async-functions/ (and I just found out that it is @Franey's response): Define your async method. Use asgiref's sync.async_to_sync module to wrap the async method and run it synchronously inside a celery task: # tasks.py import asyncio from asgiref.sync import async_to_sync from celery import Celery app = Celery('async_test', broker='a_broker_url_goes_here') async def return_hello(): await asyncio.sleep(1) return 'hello' @app.task(name=\"sync_task\") def sync_task(): async_to_sync(return_hello)() A use case that I came upon in a FastAPI application was the reverse of the previous example: An intense CPU bound process is hogging up the async endpoints. The solution is to refactor the async CPU bound process into a celery task and pass a task instance for execution from the Celery queue. A minimal example for visualization of that case: import asyncio import uvicorn from celery import Celery from fastapi import FastAPI app = FastAPI(title='Example') worker = Celery('worker', broker='a_broker_url_goes_here') @worker.task(name='cpu_boun') def cpu_bound_task(): # Does stuff but let's simplify it print([n for n in range(1000)]) @app.get('/calculate') async def calculate(): cpu_bound_task.delay() if __name__ == \"__main__\": uvicorn.run('main:app', host='0.0.0.0', port=8000) Another solution seems to be what @juanra and @danius are proposing in their answers, but we have to keep in mind that performance tends to take a hit when we intermix sync and async executions, thus those answers need monitoring before we can decide to use them in a prod environment. Finally, there are some ready-made solutions, that I cannot recommend (because I have not used them myself) but I will list them here: Celery Pool AsyncIO which seems to solve exactly what Celery 5.0 didn't, but keep in mind that it seems a bit experimental (version 0.2.0 today 01/12/2021) aiotasks claims to be \"a Celery like task manager that distributes Asyncio coroutines\" but seems a bit stale (latest commit around 2 years ago) Well that didn't age so well did it? Version 5.0 of Celery didn't implement asyncio compatibility thus we cannot know when and if this will ever be implemented... 
Leaving this here for response legacy reasons (as it was the answer at the time) and for comment continuation. That will be possible from Celery version 5.0 as stated on the official site: http://docs.celeryproject.org/en/4.0/whatsnew-4.0.html#preface The next major version of Celery will support Python 3.5 only, where we are planning to take advantage of the new asyncio library. Dropping support for Python 2 will enable us to remove massive amounts of compatibility code, and going with Python 3.5 allows us to take advantage of typing, async/await, asyncio, and similar concepts there\u2019s no alternative for in older versions. The above was quoted from the previous link. So the best thing to do is wait for version 5.0 to be distributed! In the meantime, happy coding :)"} +{"question_id": 2739552, "score": 83, "creation_date": 1272563294, "tags": ["python", "list", "nested-lists"], "instruction": "2D list has weird behavor when trying to modify a single value\n\nWhen I try this code: data = [[None]*5]*5 data[0][0] = 'Cell A1' The value of data ends up like: [['Cell A1', None, None, None, None], ['Cell A1', None, None, None, None], ['Cell A1', None, None, None, None], ['Cell A1', None, None, None, None], ['Cell A1', None, None, None, None]] Why does the 'Cell A1' value appear in every nested list?", "output": "This makes a list with five references to the same list: data = [[None]*5]*5 Use something like this instead which creates five separate lists: >>> data = [[None]*5 for _ in range(5)] Now it behaves as expected: >>> data[0][0] = 'Cell A1' >>> print(data) [['Cell A1', None, None, None, None], [None, None, None, None, None], [None, None, None, None, None], [None, None, None, None, None], [None, None, None, None, None]]"} +{"question_id": 11502113, "score": 83, "creation_date": 1342432889, "tags": ["python", "protocol-buffers"], "instruction": "How to get top-level protobuf enum value name by number in python?\n\nFor example, I have proto-file File.proto: enum Test { ONE = 1; TWO = 2; } I generate file File_pb2.py with protoc from File.proto. I want in a python-code get string \"ONE\" (that corresponds to the name of File_pb2.ONE) by value 1 (that corresponds to the value of File_pb2.ONE) from generated file File_pb2.py without defining my own dictionaries. How can I do that?", "output": "Assuming the generated python is located in File_pb2.py code Try this: file_pb2._TEST.values_by_number[1].name In your case, this should give 'ONE' The reverse is : file_pb2._TEST.values_by_name['ONE'].number will give 1. EDIT: As correctly pointed by @dyoo in the comments, a new method was later introduced in protobuf library: file_pb2.Test.Name(1) file_pb2.Test.Value('One') EDIT: This has changed again in proto3. Now the Name() and Value() methods belong to the EnumTypeWrapper class so they can be accessed like: file_pb2.Name(1) file_pb2.Value('One')"} +{"question_id": 2401628, "score": 83, "creation_date": 1268055544, "tags": ["python", "file-io"], "instruction": "open file in \"w\" mode: IOError: [Errno 2] No such file or directory\n\nWhen I try to open a file in write mode with the following code: packetFile = open(\"%s/%s/%s/%s.mol2\" % (\"dir\", \"dir2\", \"dir3\", \"some_file\"), \"w\") I get the following error: IOError: [Errno 2] No such file or directory: 'dir/dir2/dir3/some_file.mol2' The w mode should create the file if it doesn't exist, right? 
So how can this error ever occur?", "output": "You'll see this error if the directory containing the file you're trying to open does not exist, even when trying to open the file in w mode. Since you're opening the file with a relative path, it's possible that you're confused about exactly what that directory is. Try putting a quick print to check: import os curpath = os.path.abspath(os.curdir) packet_file = \"%s/%s/%s/%s.mol2\" % (\"dir\", \"dir2\", \"dir3\", \"some_file\") print \"Current path is: %s\" % (curpath) print \"Trying to open: %s\" % (os.path.join(curpath, packet_file)) packetFile = open(packet_file, \"w\")"} +{"question_id": 52725278, "score": 83, "creation_date": 1539101654, "tags": ["python", "python-3.x"], "instruction": "During handling of the above exception, another exception occurred\n\nI have below try-except to catch JSON parse errors: with open(json_file) as j: try: json_config = json.load(j) except ValueError as e: raise Exception('Invalid json: {}'.format(e)) Why is During handling of the above exception, another exception occurred printed out, and how do I resolve it? json.decoder.JSONDecodeError: Expecting ',' delimiter: line 103 column 9 (char 1093) During handling of the above exception, another exception occurred: Traceback (most recent call last): <....> raise Exception('Invalid json: {}'.format(e)) Exception: Invalid json: Expecting ',' delimiter: line 103 column 9 (char 1093)", "output": "Currently, you are having an issue with raising the ValueError exception inside another caught exception. The reasoning for this solution doesn't make much sense to me, but if you change this: raise Exception('Invalid json: {}'.format(e)) to this: raise Exception('Invalid json: {}'.format(e)) from None making your end code this: with open(json_file) as j: try: json_config = json.load(j) except ValueError as e: raise Exception('Invalid json: {}'.format(e)) from None You should get the desired result of catching an exception. e.g. >>> foo = {} >>> try: ... var = foo['bar'] ... except KeyError: ... raise KeyError('No key bar in dict foo') from None ... Traceback (most recent call last): File \"\", line 4, in KeyError: 'No key bar in dict foo' Sorry, I can't explain why this works precisely, but it seems to do the trick. UPDATE: It looks like there's a PEP doc explaining how to suppress exceptions inside exception warnings."} +{"question_id": 10542240, "score": 83, "creation_date": 1336684430, "tags": ["python", "numpy"], "instruction": "Easy way to test if each element in an numpy array lies between two values?\n\nI was wondering if there was a syntactically simple way of checking if each element in a numpy array lies between two numbers. In other words, just as numpy.array([1,2,3,4,5]) < 5 will return array([True, True, True, True, False]), I was wondering if it was possible to do something akin to this: 1 < numpy.array([1,2,3,4,5]) < 5 ... to obtain ... array([False, True, True, True, False]) I understand that I can obtain this through logical chaining of boolean tests, but I'm working through some rather complex code and I was looking for a syntactically clean solution. Any tips?", "output": "One solution would be: import numpy as np a = np.array([1, 2, 3, 4, 5]) (a > 1) & (a < 5) # 1 < element < 5? 
# array([False, True, True, True, False])"} +{"question_id": 10294014, "score": 83, "creation_date": 1335254578, "tags": ["python", "decorator", "syntactic-sugar"], "instruction": "Python decorator best practice, using a class vs a function\n\nAs I've understood it there are two ways to do a Python decorator, to either use the __call__ of a class or to define and call a function as the decorator. What's the advantages/disadvantages of these methods? Is there one preferred method? Example 1 class dec1(object): def __init__(self, f): self.f = f def __call__(self): print(\"Decorating\", self.f.__name__) self.f() @dec1 def func1(): print(\"inside func1()\") func1() # Decorating func1 # inside func1() Example 2 def dec2(f): def new_f(): print(\"Decorating\", f.__name__) f() return new_f @dec2 def func2(): print(\"inside func2()\") func2() # Decorating func2 # inside func2()", "output": "It is rather subjective to say whether there are \"advantages\" to each method. However, a good understanding of what goes under the hood would make it natural for one to pick the best choice for each occasion. A decorator (talking about function decorators), is simply a callable object that takes a function as its input parameter. Python has its rather interesting design that allows one to create other kinds of callable objects, besides functions - and one can put that to use to create more maintainable or shorter code on occasion. Decorators were added back in Python 2.3 as a \"syntactic shortcut\" for def a(x): ... a = my_decorator(a) Besides that, we usually call decorators some \"callables\" that would rather be \"decorator factories\" - when we use this kind: @my_decorator(param1, param2) def my_func(...): ... the call is made to \"my_decorator\" with param1 and param2 - it then returns an object that will be called again, this time having \"my_func\" as a parameter. So, in this case, technically the \"decorator\" is whatever is returned by the \"my_decorator\", making it a \"decorator factory\". Now, either decorators or \"decorator factories\" as described usually have to keep some internal state. In the first case, the only thing it does keep is a reference to the original function (the variable called f in your examples). A \"decorator factory\" may want to register extra state variables (\"param1\" and \"param2\" in the example above). This extra state, in the case of decorators written as functions is kept in variables within the enclosing functions, and accessed as \"nonlocal\" variables by the actual wrapper function. If one writes a proper class, they can be kept as instance variables in the decorator function (which will be seen as a \"callable object\", not a \"function\") - and access to them is more explicit and more readable. So, for most cases it is a matter of readability whether you will prefer one approach or the other: for short, simple decorators, the functional approach is often more readable than one written as a class - while sometimes a more elaborate one - especially one \"decorator factory\" will take full advantage of the \"flat is better than nested\" advice fore Python coding. Consider: def my_dec_factory(param1, param2): ... ... def real_decorator(func): ... def wraper_func(*args, **kwargs): ... 
#use param1 result = func(*args, **kwargs) #use param2 return result return wraper_func return real_decorator against this \"hybrid\" solution: class MyDecorator(object): \"\"\"Decorator example mixing class and function definitions.\"\"\" def __init__(self, func, param1, param2): self.func = func self.param1, self.param2 = param1, param2 def __call__(self, *args, **kwargs): ... #use self.param1 result = self.func(*args, **kwargs) #use self.param2 return result def my_dec_factory(param1, param2): def decorator(func): return MyDecorator(func, param1, param2) return decorator update: Missing \"pure class\" forms of decorators Now, note the \"hybrid\" method takes the \"best of both Worlds\" trying to keep the shortest and more readable code. A full \"decorator factory\" defined exclusively with classes would either need two classes, or a \"mode\" attribute to know if it was called to register the decorated function or to actually call the final function: class MyDecorator(object): \"\"\"Decorator example defined entirely as class.\"\"\" def __init__(self, p1, p2): self.p1 = p1 ... self.mode = \"decorating\" def __call__(self, *args, **kw): if self.mode == \"decorating\": self.func = args[0] self.mode = \"calling\" return self # code to run prior to function call result = self.func(*args, **kw) # code to run after function call return result @MyDecorator(p1, ...) def myfunc(): ... And finally a pure, \"white colar\" decorator defined with two classes - maybe keeping things more separated, but increasing the redundancy to a point one can't say it is more maintainable: class Stage2Decorator(object): def __init__(self, func, p1, p2, ...): self.func = func self.p1 = p1 ... def __call__(self, *args, **kw): # code to run prior to function call ... result = self.func(*args, **kw) # code to run after function call ... return result class Stage1Decorator(object): \"\"\"Decorator example defined as two classes. No \"hacks\" on the object model, most bureacratic. \"\"\" def __init__(self, p1, p2): self.p1 = p1 ... self.mode = \"decorating\" def __call__(self, func): return Stage2Decorator(func, self.p1, self.p2, ...) @Stage1Decorator(p1, p2, ...) def myfunc(): ... 2018 update I wrote the text above a couple years ago. I came up recently with a pattern I prefer due to creating code that is \"flatter\". The basic idea is to use a function, but return a partial object of itself if it is called with parameters before being used as a decorator: from functools import wraps, partial def decorator(func=None, parameter1=None, parameter2=None, ...): if not func: # The only drawback is that for functions there is no thing # like \"self\" - we have to rely on the decorator # function name on the module namespace return partial(decorator, parameter1=parameter1, parameter2=parameter2) @wraps(func) def wrapper(*args, **kwargs): # Decorator code- parameter1, etc... 
can be used # freely here return func(*args, **kwargs) return wrapper And that is it - decorators written using this pattern can decorate a function right away without being \"called\" first: @decorator def my_func(): pass Or customized with parameters: @decorator(parameter1=\"example.com\", ...): def my_func(): pass 2019 - With Python 3.8 and positional only parameters this last pattern will become even better, as the func argument can be declared as positional only, and require the parameters to be named; def decorator(func=None, /, *, parameter1=None, parameter2=None, ...):"} +{"question_id": 38806202, "score": 83, "creation_date": 1470499595, "tags": ["python", "heap"], "instruction": "What's the time complexity of functions in heapq library\n\nMy question is from the solution in leetcode below, I can't understand why it is O(k+(n-k)log(k)). Supplement: Maybe the complexity isn't that, in fact I don't know the time complexity of heappush() and heappop() # O(k+(n-k)lgk) time, min-heap def findKthLargest(self, nums, k): heap = [] for num in nums: heapq.heappush(heap, num) for _ in xrange(len(nums)-k): heapq.heappop(heap) return heapq.heappop(heap)", "output": "heapq is a binary heap, with O(log n) push and O(log n) pop. See the heapq source code. The algorithm you show takes O(n log n) to push all the items onto the heap, and then O((n-k) log n) to find the kth largest element. So the complexity would be O(n log n). It also requires O(n) extra space. You can do this in O(n log k), using O(k) extra space by modifying the algorithm slightly. I'm not a Python programmer, so you'll have to translate the pseudocode: # create a new min-heap # push the first k nums onto the heap for the rest of the nums: if num > heap.peek() heap.pop() heap.push(num) # at this point, the k largest items are on the heap. # The kth largest is the root: return heap.pop() The key here is that the heap contains just the largest items seen so far. If an item is smaller than the kth largest seen so far, it's never put onto the heap. The worst case is O(n log k). Actually, heapq has a heapreplace method, so you could replace this: if num > heap.peek() heap.pop() heap.push(num) with if num > heap.peek() heap.replace(num) Also, an alternative to pushing the first k items is to create a list of the first k items and call heapify. A more optimized (but still O(n log k)) algorithm is: # create array of first `k` items heap = heapify(array) for remaining nums if (num > heap.peek()) heap.replace(num) return heap.pop() You could also call heapify on the entire array, then pop the first n-k items, and then take the top: heapify(nums) for i = 0 to n-k heapq.heappop(nums) return heapq.heappop(nums) That's simpler. Not sure if it's faster than my previous suggestion, but it modifies the original array. The complexity is O(n) to build the heap, then O((n-k) log n) for the pops. So it's be O((n-k) log n). Worst case O(n log n)."} +{"question_id": 4761041, "score": 83, "creation_date": 1295626554, "tags": ["python", "unit-testing", "import"], "instruction": "Python import src modules when running tests\n\nMy source files are located under src and my test files are located under tests. When I want to run a test file, say python myTest.py, I get an import error: \"No module named ASourceModule.py\". How do I import all the modules from source needed to run my tests?", "output": "You could add that directory to the path: import sys sys.path.append('../src') Maybe put this into a module if you are using it a lot. 
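A slightly more robust variant (an illustrative sketch, not part of the original answer; the _SRC_DIR name is made up) resolves the src directory relative to the test file itself, so it works no matter which directory the tests are started from:

import os
import sys

# directory that contains this test file
_TESTS_DIR = os.path.dirname(os.path.abspath(__file__))
# ../src relative to the tests directory
_SRC_DIR = os.path.abspath(os.path.join(_TESTS_DIR, os.pardir, 'src'))

if _SRC_DIR not in sys.path:
    sys.path.insert(0, _SRC_DIR)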
Although this solution works for some situations, there are better solutions. See for example this answer."} +{"question_id": 10698858, "score": 83, "creation_date": 1337677325, "tags": ["python", "numpy", "python-3.x", "range"], "instruction": "built-in range or numpy.arange: which is more efficient?\n\nWhen iterating over a large array with a range expression, should I use Python's built-in range function, or numpy's arange to get the best performance? My reasoning so far: range probably resorts to a native implementation and might be faster therefore. On the other hand, arange returns a full array, which occupies memory, so there might be an overhead. Python 3's range expression is a generator, which does not hold all the values in memory.", "output": "For large arrays, a vectorised numpy operation is the fastest. If you must loop, prefer xrange/range and avoid using np.arange. In numpy you should use combinations of vectorized calculations, ufuncs and indexing to solve your problems as it runs at C speed. Looping over numpy arrays is inefficient compared to this. (Something like the worst thing you could do would be to iterate over the array with an index created with range or np.arange as the first sentence in your question suggests, but I'm not sure if you really mean that.) import numpy as np import sys sys.version # out: '2.7.3rc2 (default, Mar 22 2012, 04:35:15) \\n[GCC 4.6.3]' np.version.version # out: '1.6.2' size = int(1E6) %timeit for x in range(size): x ** 2 # out: 10 loops, best of 3: 136 ms per loop %timeit for x in xrange(size): x ** 2 # out: 10 loops, best of 3: 88.9 ms per loop # avoid this %timeit for x in np.arange(size): x ** 2 #out: 1 loops, best of 3: 1.16 s per loop # use this %timeit np.arange(size) ** 2 #out: 100 loops, best of 3: 19.5 ms per loop So for this case numpy is 4 times faster than using xrange if you do it right. Depending on your problem numpy can be much faster than a 4 or 5 times speed up. The answers to this question explain some more advantages of using numpy arrays instead of python lists for large data sets."} +{"question_id": 11621740, "score": 82, "creation_date": 1343084239, "tags": ["python"], "instruction": "How to determine whether a year is a leap year?\n\nI am trying to make a simple calculator to determine whether or not a certain year is a leap year. By definition, a leap year is divisible by four, but not by one hundred, unless it is divisible by four hundred. Here is my code: def leapyr(n): if n%4==0 and n%100!=0: if n%400==0: print(n, \"is a leap year.\") elif n%4!=0: print(n, \"is not a leap year.\") print(leapyr(1900)) When I try this inside the Python IDLE, the module returns None. I am pretty sure that I should get 1900 is a leap year.", "output": "Use calendar.isleap: import calendar print(calendar.isleap(1900))"} +{"question_id": 17667903, "score": 82, "creation_date": 1373948173, "tags": ["python", "sockets"], "instruction": "Python Socket Receive Large Amount of Data\n\nWhen I try to receive larger amounts of data it gets cut off and I have to press enter to get the rest of the data. At first I was able to increase it a little bit but it still won't receive all of it. As you can see I have increased the buffer on the conn.recv() but it still doesn't get all of the data. It cuts it off at a certain point. I have to press enter on my raw_input in order to receive the rest of the data. Is there anyway I can get all of the data at once? Here's the code. 
port = 7777 sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) sock.bind(('0.0.0.0', port)) sock.listen(1) print (\"Listening on port: \"+str(port)) while 1: conn, sock_addr = sock.accept() print \"accepted connection from\", sock_addr while 1: command = raw_input('shell> ') conn.send(command) data = conn.recv(8000) if not data: break print data, conn.close()", "output": "TCP/IP is a stream-based protocol, not a message-based protocol. There's no guarantee that every send() call by one peer results in a single recv() call by the other peer receiving the exact data sent\u2014it might receive the data piece-meal, split across multiple recv() calls, due to packet fragmentation. You need to define your own message-based protocol on top of TCP in order to differentiate message boundaries. Then, to read a message, you continue to call recv() until you've read an entire message or an error occurs. One simple way of sending a message is to prefix each message with its length. Then to read a message, you first read the length, then you read that many bytes. Here's how you might do that: def send_msg(sock, msg): # Prefix each message with a 4-byte length (network byte order) msg = struct.pack('>I', len(msg)) + msg sock.sendall(msg) def recv_msg(sock): # Read message length and unpack it into an integer raw_msglen = recvall(sock, 4) if not raw_msglen: return None msglen = struct.unpack('>I', raw_msglen)[0] # Read the message data return recvall(sock, msglen) def recvall(sock, n): # Helper function to recv n bytes or return None if EOF is hit data = bytearray() while len(data) < n: packet = sock.recv(n - len(data)) if not packet: return None data.extend(packet) return data Then you can use the send_msg and recv_msg functions to send and receive whole messages, and they won't have any problems with packets being split or coalesced on the network level."} +{"question_id": 58479556, "score": 82, "creation_date": 1571629696, "tags": ["python", "tensorflow", "keras", "loss-function"], "instruction": "NotImplementedError: Cannot convert a symbolic Tensor (2nd_target:0) to a numpy array\n\nI try to pass 2 loss functions to a model as Keras allows that. loss: String (name of objective function) or objective function or Loss instance. See losses. If the model has multiple outputs, you can use a different loss on each output by passing a dictionary or a list of losses. The loss value that will be minimized by the model will then be the sum of all individual losses. The two loss functions: def l_2nd(beta): def loss_2nd(y_true, y_pred): ... return K.mean(t) return loss_2nd and def l_1st(alpha): def loss_1st(y_true, y_pred): ... return alpha * 2 * tf.linalg.trace(tf.matmul(tf.matmul(Y, L, transpose_a=True), Y)) / batch_size return loss_1st Then I build the model: l2 = K.eval(l_2nd(self.beta)) l1 = K.eval(l_1st(self.alpha)) self.model.compile(opt, [l2, l1]) When I train, it produces the error: 1.15.0-rc3 WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/resource_variable_ops.py:1630: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version. Instructions for updating: If using Keras pass *_constraint arguments to layers. 
--------------------------------------------------------------------------- NotImplementedError Traceback (most recent call last) in () 47 create_using=nx.DiGraph(), nodetype=None, data=[('weight', int)]) 48 ---> 49 model = SDNE(G, hidden_size=[256, 128],) 50 model.train(batch_size=100, epochs=40, verbose=2) 51 embeddings = model.get_embeddings() 10 frames in __init__(self, graph, hidden_size, alpha, beta, nu1, nu2) 72 self.A, self.L = self._create_A_L( 73 self.graph, self.node2idx) # Adj Matrix,L Matrix ---> 74 self.reset_model() 75 self.inputs = [self.A, self.L] 76 self._embeddings = {} in reset_model(self, opt) ---> 84 self.model.compile(opt, loss=[l2, l1]) 85 self.get_embeddings() 86 /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/training/tracking/base.py in _method_wrapper(self, *args, **kwargs) 455 self._self_setattr_tracking = False # pylint: disable=protected-access 456 try: --> 457 result = method(self, *args, **kwargs) 458 finally: 459 self._self_setattr_tracking = previous_value # pylint: disable=protected-access NotImplementedError: Cannot convert a symbolic Tensor (2nd_target:0) to a numpy array. Please help, thanks!", "output": "I found the solution to this problem: It was because I mixed symbolic tensor with a non-symbolic type, such as a numpy. For example. It is NOT recommended to have something like this: def my_mse_loss_b(b): def mseb(y_true, y_pred): ... a = np.ones_like(y_true) #numpy array here is not recommended return K.mean(K.square(y_pred - y_true)) + a return mseb Instead, you should convert all to symbolic tensors like this: def my_mse_loss_b(b): def mseb(y_true, y_pred): ... a = K.ones_like(y_true) #use Keras instead so they are all symbolic return K.mean(K.square(y_pred - y_true)) + a return mseb Hope this help!"} +{"question_id": 9450446, "score": 82, "creation_date": 1330229184, "tags": ["python"], "instruction": "How do I use a C-style for loop in Python?\n\nI want to use the traditional C-style for loop in Python. I want to loop through characters of a string, but also know what it is, and be able to jump through characters (e.g. i =5 somewhere in the code). for with range doesn't give me the flexibility of an actual for loop.", "output": "There is no simple, precise equivalent of C's for statement in Python. Other answers cover using a Python for statement with a range, and that is absolutely what you should do when possible. If you want to be able to modify the loop variable in the loop (and have it affect subsequent iterations), you have to use a while loop: i = 0 while i < 7: if someCondition(i): i = 5 i += 1 But in that loop, a continue statement will not have the same effect that a continue statement would have in a C for loop. If you want continue to work the way it does in C, you have to throw in a try/finally statement: i = 0 while i < 7: try: if someCondition(i): i = 5 elif otherCondition(i): continue print 'i = %d' % i finally: i += 1 As you can see, this is pretty ugly. You should look for a more Pythonic way to write your loop. UPDATE This just occurred to me... there is a complicated answer that lets you use a normal Python for loop like a C-style loop, and allows updating the loop variable, by writing a custom iterator. I wouldn't recommend this solution for any real programs, but it's a fun exercise. Example \u201cC-style\u201d for loop: for i in forrange(10): print(i) if i == 3: i.update(7) Output: 0 1 2 3 8 9 The trick is forrange uses a subclass of int that adds an update method. 
Implementation of forrange: class forrange: def __init__(self, startOrStop, stop=None, step=1): if step == 0: raise ValueError('forrange step argument must not be zero') if not isinstance(startOrStop, int): raise TypeError('forrange startOrStop argument must be an int') if stop is not None and not isinstance(stop, int): raise TypeError('forrange stop argument must be an int') if stop is None: self.start = 0 self.stop = startOrStop self.step = step else: self.start = startOrStop self.stop = stop self.step = step def __iter__(self): return self.foriterator(self.start, self.stop, self.step) class foriterator: def __init__(self, start, stop, step): self.currentValue = None self.nextValue = start self.stop = stop self.step = step def __iter__(self): return self def next(self): if self.step > 0 and self.nextValue >= self.stop: raise StopIteration if self.step < 0 and self.nextValue <= self.stop: raise StopIteration self.currentValue = forrange.forvalue(self.nextValue, self) self.nextValue += self.step return self.currentValue class forvalue(int): def __new__(cls, value, iterator): value = super(forrange.forvalue, cls).__new__(cls, value) value.iterator = iterator return value def update(self, value): if not isinstance(self, int): raise TypeError('forvalue.update value must be an int') if self == self.iterator.currentValue: self.iterator.nextValue = value + self.iterator.step"} +{"question_id": 35139108, "score": 82, "creation_date": 1454356353, "tags": ["python", "windows", "anaconda", "xgboost"], "instruction": "How to install xgboost in Anaconda Python (Windows platform)?\n\nI am a new Python user. I downloaded the latest Anaconda 3 2.4.1 (Python 3.5) from the below link: https://www.continuum.io/downloads My PC Configurations are: Windows 10, 64 bit, 4GB RAM I have spent hours trying to find the right way to download the package after the 'pip install xgboost' failed in the Anaconda command prompt but couldn't find any specific instructions for Anaconda. Can anyone help on how to install xgboost from Anaconda?", "output": "The easiest way (Worked for me) is to do the following: anaconda search -t conda xgboost You will get a list of install-able features like this: for example if you want to install the first one on the list mndrake/xgboost (FOR WINDOWS-64bits): conda install -c mndrake xgboost If you're in a Unix system you can choose any other package with \"linux-64\" on the right. Update on 22/10/2020: Without searching in conda list of channels, you can install it using (source: https://anaconda.org/anaconda/py-xgboost) : conda install -c anaconda py-xgboost"} +{"question_id": 44619077, "score": 82, "creation_date": 1497816436, "tags": ["python", "ocr", "tesseract"], "instruction": "Pytesseract OCR multiple config options\n\nI am having some problems with pytesseract. I need to configure Tesseract to that it is configured to accept single digits while also only being able to accept numbers as the number zero is often confused with an 'O'. Like this: target = pytesseract.image_to_string(im,config='-psm 7',config='outputbase digits')", "output": "tesseract-4.0.0a supports below psm. If you want to have single character recognition, set psm = 10. And if your text consists of numbers only, you can set tessedit_char_whitelist=0123456789. Page segmentation modes: 0 Orientation and script detection (OSD) only. 1 Automatic page segmentation with OSD. 2 Automatic page segmentation, but no OSD, or OCR. 3 Fully automatic page segmentation, but no OSD. 
(Default) 4 Assume a single column of text of variable sizes. 5 Assume a single uniform block of vertically aligned text. 6 Assume a single uniform block of text. 7 Treat the image as a single text line. 8 Treat the image as a single word. 9 Treat the image as a single word in a circle. 10 Treat the image as a single character. 11 Sparse text. Find as much text as possible in no particular order. 12 Sparse text with OSD. 13 Raw line. Treat the image as a single text line, bypassing hacks that are Tesseract-specific. Here is a sample usage of image_to_string with multiple parameters. target = pytesseract.image_to_string(image, lang='eng', boxes=False, \\ config='--psm 10 --oem 3 -c tessedit_char_whitelist=0123456789')"} +{"question_id": 12232304, "score": 82, "creation_date": 1346542955, "tags": ["python", "redis", "flask", "juggernaut"], "instruction": "How to implement server push in Flask framework?\n\nI am trying to build a small site with the server push functionality on Flask micro-web framework, but I did not know if there is a framework to work with directly. I used Juggernaut, but it seems to be not working with redis-py in current version, and Juggernaut has been deprecated recently. Does anyone has a suggestion with my case?", "output": "Have a look at Server-Sent Events. Server-Sent Events is a browser API that lets you keep open a socket to your server, subscribing to a stream of updates. For more Information read Alex MacCaw (Author of Juggernaut) post on why he kills juggernaut and why the simpler Server-Sent Events are in manny cases the better tool for the job than Websockets. The protocol is really easy. Just add the mimetype text/event-stream to your response. The browser will keep the connection open and listen for updates. An Event sent from the server is a line of text starting with data: and a following newline. data: this is a simple message If you want to exchange structured data, just dump your data as json and send the json over the wire. An advantage is that you can use SSE in Flask without the need for an extra Server. There is a simple chat application example on github which uses redis as a pub/sub backend. def event_stream(): pubsub = red.pubsub() pubsub.subscribe('chat') for message in pubsub.listen(): print message yield 'data: %s\\n\\n' % message['data'] @app.route('/post', methods=['POST']) def post(): message = flask.request.form['message'] user = flask.session.get('user', 'anonymous') now = datetime.datetime.now().replace(microsecond=0).time() red.publish('chat', u'[%s] %s: %s' % (now.isoformat(), user, message)) @app.route('/stream') def stream(): return flask.Response(event_stream(), mimetype=\"text/event-stream\") You do not need to use gunicron to run the example app. Just make sure to use threading when running the app, because otherwise the SSE connection will block your development server: if __name__ == '__main__': app.debug = True app.run(threaded=True) On the client side you just need a Javascript handler function which will be called when a new message is pushed from the server. var source = new EventSource('/stream'); source.onmessage = function (event) { alert(event.data); }; Server-Sent Events are supported by recent Firefox, Chrome and Safari browsers. Internet Explorer does not yet support Server-Sent Events, but is expected to support them in Version 10. 
There are two recommended Polyfills to support older browsers EventSource.js jquery.eventsource"} +{"question_id": 2017381, "score": 82, "creation_date": 1262824053, "tags": ["python", "memory-leaks"], "instruction": "Is it possible to have an actual memory leak in Python because of your code?\n\nI don't have a code example, but I'm curious whether it's possible to write Python code that results in essentially a memory leak.", "output": "It is possible, yes. It depends on what kind of memory leak you are talking about. Within pure Python code, it's not possible to \"forget to free\" memory such as in C, but it is possible to leave a reference hanging somewhere. Some examples of such: An unhandled traceback object that is keeping an entire stack frame alive, even though the function is no longer running while game.running(): try: key_press = handle_input() except SomeException: etype, evalue, tb = sys.exc_info() # Do something with tb like inspecting or printing the traceback In this silly example of a game loop maybe, we assigned tb to a local. We had good intentions, but this tb contains frame information about the stack of whatever was happening in our handle_input all the way down to potentially very deep calls, and anything in those stacks. Presuming your game continues, this 'tb' is kept alive even in your next call to handle_input, and maybe forever. The docs for exc_info now talk about this potential circular reference issue and recommend simply not assigning tb if you don't absolutely need it. If you only need to get a traceback consider e.g. traceback.format_exc instead. Storing values in a class or global scope instead of instance scope, and not realizing it. This one can happen in insidious ways, but often happens when you define mutable types in your class scope. class Money: name = '' symbols = [] # This is the dangerous line here def set_name(self, name): self.name = name def add_symbol(self, symbol): self.symbols.append(symbol) In the above example, say you did m = Money() m.set_name('Dollar') m.add_symbol('$') You'll probably find this particular bug quickly. What happened is in this case you put a mutable value at class scope and even though you correctly access it at instance scope, it's actually \"falling through\" to the class object's __dict__. This used in certain contexts could potentially cause your application's heap to grow forever, and would cause issues in say, a production web application that didn't restart its processes occasionally. Cyclic references in classes which also have a __del__ method. Authors Note - As of Python 3.4, this issue is mostly solved by PEP-0442 Ironically, the existence of a __del__ made it impossible (in Python 2 & early versions of Python 3) for the cyclic garbage collector to clean an instance up. Say you had something where you wanted to do a destructor for finalization purposes: class ClientConnection: def __del__(self): if self.socket is not None: self.socket.close() self.socket = None Now this works fine on its own, and you may be led to believe it's being a good steward of OS resources to ensure the socket is 'disposed' of. However, if ClientConnection kept a reference to say, User and User kept a reference to the connection, you might be tempted to say that on cleanup, let's have user de-reference the connection. This is actually the flaw, however: the cyclic GC doesn't know the correct order of operations and cannot clean it up. 
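The effect is easy to observe with the gc module (a small illustrative sketch, not from the original answer; as noted above, PEP 442 in Python 3.4+ makes such cycles collectable, so gc.garbage stays empty there):

import gc

class Connection:
    def __del__(self):
        pass  # pretend this closes a socket

class Owner:
    pass

# build a reference cycle in which one member defines __del__
conn, owner = Connection(), Owner()
conn.owner, owner.connection = owner, conn
del conn, owner

gc.collect()
# on Python < 3.4 the cycle is reported in gc.garbage instead of being freed
print(gc.garbage)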
The solution to this is to ensure you do cleanup on say, disconnect events by calling some sort of close, but name that method something other than __del__. Poorly implemented C extensions, or not properly using C libraries as they are designed. In Python, you trust in the garbage collector to throw away things you aren't using. But if you use an extension that wraps a C library, the majority of the time you are responsible for making sure you explicitly close or de-allocate resources. Mostly this is documented, but a Python programmer who is used to not having to do this explicit de-allocation might throw away the handle to that library or an object within without knowing that resources are being held. Scopes which contain closures that contain a whole lot more than you could've anticipated class User: def set_profile(self, profile): def on_completed(result): if result.success: self.profile = profile self._db.execute( change={'profile': profile}, on_complete=on_completed ) In this contrived example, we appear to be using some sort of 'async' call that will call us back at on_completed when the DB call is done (the implementation could've been promises, it ends up with the same outcome). What you may not realize is that the on_completed closure binds a reference to self in order to execute the self.profile assignment. Now, perhaps the DB client keeps track of active queries and pointers to the closures to call when they're done (since it's async) and say it crashes for whatever reason. If the DB client doesn't correctly cleanup callbacks etc, in this case, the DB client now has a reference to on_completed which has a reference to User which keeps a _db - you've now created a circular reference that may never get collected. (Even without a circular reference, the fact that closures bind locals and even instances sometimes may cause values you thought were collected to be living for a long time, which could include sockets, clients, large buffers, and entire trees of things) Default parameters which are mutable types def foo(a=[]): a.append(time.time()) return a This is a contrived example, but one could be led to believe that the default value of a being an empty list means append to it, when it is in fact a reference to the same list. This again similar to the earlier Money example might cause unbounded growth without knowing that you did that. (Note from August 2023 Update: This post was originally written in 2010 and the information within is still largely valid today, I just did some minor updates to the URL references and made sure the code examples are valid in both Python 2 & Python 3)"} +{"question_id": 15390807, "score": 82, "creation_date": 1363191595, "tags": ["python", "math", "integer", "sqrt"], "instruction": "Integer square root in python\n\nIs there an integer square root somewhere in python, or in standard libraries? I want it to be exact (i.e. return an integer), and raise an exception if the input isn't a perfect square. I tried using this code: def isqrt(n): i = int(math.sqrt(n) + 0.5) if i**2 == n: return i raise ValueError('input was not a perfect square') But it's ugly and I don't really trust it for large integers. I could iterate through the squares and give up if I've exceeded the value, but I assume it would be kinda slow to do something like that. Also, surely this is already implemented somewhere? See also: Check if a number is a perfect square.", "output": "Note: There is now math.isqrt in stdlib, available since Python 3.8. 
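On 3.8+ the exact check the question asks for can therefore be a thin wrapper around it (a short sketch added for illustration, assuming a non-negative integer input):

import math

def exact_isqrt(n):
    # math.isqrt returns floor(sqrt(n)) for non-negative integers
    r = math.isqrt(n)
    if r * r != n:
        raise ValueError('input was not a perfect square')
    return r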
Newton's method works perfectly well on integers: def isqrt(n): x = n y = (x + 1) // 2 while y < x: x = y y = (x + n // x) // 2 return x This returns the largest integer x for which x * x does not exceed n. If you want to check if the result is exactly the square root, simply perform the multiplication to check if n is a perfect square. I discuss this algorithm, and three other algorithms for calculating square roots, at my blog."} +{"question_id": 22947427, "score": 82, "creation_date": 1396988718, "tags": ["python", "python-3.4"], "instruction": "Getting Home Directory with pathlib\n\nLooking through the new pathlib module in Python 3.4, I notice that there isn't any simple way to get the user's home directory. The only way I can come up with for getting a user's home directory is to use the older os.path lib like so: import pathlib from os import path p = pathlib.Path(path.expanduser(\"~\")) This seems clunky. Is there a better way?", "output": "It seems that this method was brought up in a bug report here. Some code was written (given here) but unfortunately it doesn't seem that it made it into the final Python 3.4 release. Incidentally the code that was proposed was extremely similar to the code you have in your question: # As a method of a Path object def expanduser(self): \"\"\" Return a new path with expanded ~ and ~user constructs (as returned by os.path.expanduser) \"\"\" return self.__class__(os.path.expanduser(str(self))) Here is a rudimentary subclassed version PathTest which subclasses WindowsPath (I'm on a Windows box but you could replace it with PosixPath). It adds a classmethod based on the code that was submitted in the bug report. from pathlib import WindowsPath import os.path class PathTest(WindowsPath): def __new__(cls, *args, **kwargs): return super(PathTest, cls).__new__(cls, *args, **kwargs) @classmethod def expanduser(cls): \"\"\" Return a new path with expanded ~ and ~user constructs (as returned by os.path.expanduser) \"\"\" return cls(os.path.expanduser('~')) p = PathTest('C:/') print(p) # 'C:/' q = PathTest.expanduser() print(q) # C:\\Users\\Username"} +{"question_id": 45969390, "score": 82, "creation_date": 1504125809, "tags": ["python", "machine-learning", "scikit-learn", "data-science", "cross-validation"], "instruction": "difference between StratifiedKFold and StratifiedShuffleSplit in sklearn\n\nAs from the title I am wondering what is the difference between StratifiedKFold with the parameter shuffle=True StratifiedKFold(n_splits=10, shuffle=True, random_state=0) and StratifiedShuffleSplit StratifiedShuffleSplit(n_splits=10, test_size=\u2019default\u2019, train_size=None, random_state=0) and what is the advantage of using StratifiedShuffleSplit", "output": "In stratKFolds, each test set should not overlap, even when shuffle is included. With stratKFolds and shuffle=True, the data is shuffled once at the start, and then divided into the number of desired splits. The test data is always one of the splits, the train data is the rest. In ShuffleSplit, the data is shuffled every time, and then split. This means the test sets may overlap between the splits. See this block for an example of the difference. Note the overlap of the elements in the test sets for ShuffleSplit. 
splits = 5 tx = range(10) ty = [0] * 5 + [1] * 5 from sklearn.model_selection import StratifiedShuffleSplit, StratifiedKFold from sklearn import datasets stratKfold = StratifiedKFold(n_splits=splits, shuffle=True, random_state=42) shufflesplit = StratifiedShuffleSplit(n_splits=splits, random_state=42, test_size=2) print(\"stratKFold\") for train_index, test_index in stratKfold.split(tx, ty): print(\"TRAIN:\", train_index, \"TEST:\", test_index) print(\"Shuffle Split\") for train_index, test_index in shufflesplit.split(tx, ty): print(\"TRAIN:\", train_index, \"TEST:\", test_index) Output: stratKFold TRAIN: [0 2 3 4 5 6 7 9] TEST: [1 8] TRAIN: [0 1 2 3 5 7 8 9] TEST: [4 6] TRAIN: [0 1 3 4 5 6 8 9] TEST: [2 7] TRAIN: [1 2 3 4 6 7 8 9] TEST: [0 5] TRAIN: [0 1 2 4 5 6 7 8] TEST: [3 9] Shuffle Split TRAIN: [8 4 1 0 6 5 7 2] TEST: [3 9] TRAIN: [7 0 3 9 4 5 1 6] TEST: [8 2] TRAIN: [1 2 5 6 4 8 9 0] TEST: [3 7] TRAIN: [4 6 7 8 3 5 1 2] TEST: [9 0] TRAIN: [7 2 6 5 4 3 0 9] TEST: [1 8] As for when to use them, I tend to use stratKFolds for any cross validation, and I use ShuffleSplit with a split of 2 for my train/test set splits. But I'm sure there are other use cases for both."} +{"question_id": 42128467, "score": 82, "creation_date": 1486615744, "tags": ["python", "pandas", "bar-chart", "stacked-bar-chart", "grouped-bar-chart"], "instruction": "Plot multiple columns of pandas DataFrame on the bar chart\n\nI am using the following code to plot a bar-chart: import matplotlib.pyplot as pls my_df.plot(x='my_timestampe', y='col_A', kind='bar') plt.show() The plot works fine. However, I want to improve the graph by having 3 columns: 'col_A', 'col_B', and 'col_C' all on the plot. Like in the example figure below: I would like the col_A displayed in blue above x-axis, col_B in red below x-axis, and col_C in green above x-axis. Is this something possible in matplotlib? How do I make changes to plot all the three columns? Thanks!", "output": "Tested in python 3.11, pandas 1.5.1, matplotlib 3.6.2 Sample Data and Imports import pandas as pd import matplotlib.pyplot as plt import numpy as np np.random.seed(2022) # creates a consistent sample y = np.random.rand(10,4) y[:,0]= np.arange(10) df = pd.DataFrame(y, columns=[\"X\", \"A\", \"B\", \"C\"]) X A B C 0 0.0 0.499058 0.113384 0.049974 1 1.0 0.486988 0.897657 0.647452 2 2.0 0.721135 0.831353 0.827568 3 3.0 0.957044 0.368044 0.494838 4 4.0 0.619429 0.977530 0.096433 5 5.0 0.292499 0.298675 0.752473 6 6.0 0.523737 0.864436 0.388843 7 7.0 0.475181 0.564672 0.349429 8 8.0 0.037820 0.794270 0.357883 9 9.0 0.914509 0.372662 0.964883 Several columns can be plotted at once by supplying a list of column names to the y= parameter in pandas.DataFrame.plot ax = df.plot(x=\"X\", y=[\"A\", \"B\", \"C\"], kind=\"bar\", rot=0) This will produce a graph where bars are grouped. ax = df.plot(x=\"X\", y=[\"A\", \"B\", \"C\"], kind=\"bar\", rot=0, stacked=True) _ = ax.legend(bbox_to_anchor=(1, 1.02), loc='upper left') This will produce a graph where bars are stacked. In order to have them overlapping, you would need to call .plot several times, and supply the first returned axes to the ax= parameter of the subsequent plots. 
ax = df.plot(x=\"X\", y=\"A\", kind=\"bar\", rot=0) df.plot(x=\"X\", y=\"B\", kind=\"bar\", ax=ax, color=\"C2\", rot=0) df.plot(x=\"X\", y=\"C\", kind=\"bar\", ax=ax, color=\"C3\", rot=0) plt.show() This will produce a graph where bars are layered, which is neither a standard or recommended implementation because larger values plotted in a later group will cover smaller values, as can be seen at x=9.0, where C=0.964883 covers, A=0.914509 and B=0.372662. Data plotted in this way is likely to be misinterpreted. This plot only makes sense if the highest values are those from the first column plotted for all bars. This seems to be the case in the desired output from the question. Otherwise I would not recommend using this kind of plot and instead either use a stacked plot or the grouped bars from the first solution here. One could experiment with transparency (alpha) and see if the latter solution gives an appealing result."} +{"question_id": 19200497, "score": 82, "creation_date": 1380993413, "tags": ["python", "exception", "selenium", "selenium-webdriver"], "instruction": "python selenium webscraping \"NoSuchElementException\" not recognized\n\nSometimes on a page I'll be looking for an element which may or may not be there. I wanted to try/catch this case with a NoSuchElementException, which selenium was throwing when certain HTML elements didn't exist. Original exception: NoSuchElementException: Message: u'Unable to locate element: {\"method\":\"css selector\",\"selector\":\"#one\"}' ; Stacktrace: at FirefoxDriver.prototype.findElementInternal_ (file:///var/folders/6q/7xcjtgyj32nfc2yp_y5tr9pm0000gn/T/tmp63Mz2a/extensions/fxdriver@googlecode.com/components/driver_component.js:8899) at FirefoxDriver.prototype.findChildElement (file:///var/folders/6q/7xcjtgyj32nfc2yp_y5tr9pm0000gn/T/tmp63Mz2a/extensions/fxdriver@googlecode.com/components/driver_component.js:8911) at DelayedCommand.prototype.executeInternal_/h (file:///var/folders/6q/7xcjtgyj32nfc2yp_y5tr9pm0000gn/T/tmp63Mz2a/extensions/fxdriver@googlecode.com/components/command_processor.js:10840) at DelayedCommand.prototype.executeInternal_ (file:///var/folders/6q/7xcjtgyj32nfc2yp_y5tr9pm0000gn/T/tmp63Mz2a/extensions/fxdriver@googlecode.com/components/command_processor.js:10845) at DelayedCommand.prototype.execute/< (file:///var/folders/6q/7xcjtgyj32nfc2yp_y5tr9pm0000gn/T/tmp63Mz2a/extensions/fxdriver@googlecode.com/components/command_processor.js:10787) Ironically, it won't let me catch this exception which it was throwing before? 
Code here: elt = driver.find_element_by_css_selector('.information') try: dat1 = elt.find_element_by_css_selector('#one').text dat2 = elt.find_elements_by_css_selector('#two')[1].text text = dat1 + dat2 except NoSuchElementException: text = elt.find_element_by_css_selector('#all').text item.set_description(text) Error here: NameError: name 'NoSuchElementException' is not defined Googling/documentation came up with nothing...and it strikes me as strange that selenium is fine throwing an exception but can't catch it.", "output": "You need to import the exception first from selenium.common.exceptions import NoSuchElementException and then you can reference it except NoSuchElementException: # handle the element not existing If you would like to give details of the exception in your output then you can use: except NoSuchElementException as exc: print(exc) # and/or other actions to recover"} +{"question_id": 41889974, "score": 82, "creation_date": 1485506980, "tags": ["python", "apache-spark", "pyspark", "apache-spark-sql"], "instruction": "Filter df when values matches part of a string in pyspark\n\nI have a large pyspark.sql.dataframe.DataFrame and I want to keep (so filter) all rows where the URL saved in the location column contains a pre-determined string, e.g. 'google.com'. I have tried: import pyspark.sql.functions as sf df.filter(sf.col('location').contains('google.com')).show(5) But this throws: TypeError: _TypeError: 'Column' object is not callable' How do I go around and filter my df properly?", "output": "Spark 2.2 onwards df.filter(df.location.contains('google.com')) Spark 2.2 documentation link Spark 2.1 and before You can use plain SQL in filter df.filter(\"location like '%google.com%'\") or with DataFrame column methods df.filter(df.location.like('%google.com%')) Spark 2.1 documentation link"} +{"question_id": 42478591, "score": 82, "creation_date": 1488174397, "tags": ["python", "selenium", "selenium-chromedriver"], "instruction": "Python Selenium Chrome Webdriver\n\nI'm beginning the automate the boring stuff book and I'm trying to open a chrome web browser through python. 
I have already installed selenium and I have tried to run this file: from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.common.keys import Keys browser = webdriver.Chrome() browser.get('https://automatetheboringstuff.com') But because of that I get this Error: Traceback (most recent call last): File \"C:\\Program Files (x86)\\Python36-32\\lib\\site-packages\\selenium\\webdriver\\common\\service.py\", line 74, in start stdout=self.log_file, stderr=self.log_file) File \"C:\\Program Files (x86)\\Python36-32\\lib\\subprocess.py\", line 707, in __init__ restore_signals, start_new_session) File \"C:\\Program Files (x86)\\Python36-32\\lib\\subprocess.py\", line 990, in _execute_child startupinfo) FileNotFoundError: [WinError 2] The system cannot find the file specified During handling of the above exception, another exception occurred: Traceback (most recent call last): File \"C:/Program Files (x86)/Python36-32/test.py\", line 5, in browser = webdriver.Chrome() File \"C:\\Program Files (x86)\\Python36-32\\lib\\site-packages\\selenium\\webdriver\\chrome\\webdriver.py\", line 62, in __init__ self.service.start() File \"C:\\Program Files (x86)\\Python36-32\\lib\\site-packages\\selenium\\webdriver\\common\\service.py\", line 81, in start os.path.basename(self.path), self.start_error_message) selenium.common.exceptions.WebDriverException: Message: 'chromedriver' executable needs to be in PATH. Please see https://sites.google.com/a/chromium.org/chromedriver/home", "output": "You need to specify the path where your chromedriver is located. Download chromedriver for your desired platform from here. Place chromedriver on your system path, or where your code is. If not using a system path, link your chromedriver.exe (For non-Windows users, it's just called chromedriver): browser = webdriver.Chrome(executable_path=r\"C:\\path\\to\\chromedriver.exe\") (Set executable_path to the location where your chromedriver is located.) If you've placed chromedriver on your System Path, you can shortcut by just doing the following: browser = webdriver.Chrome() If you're running on a Unix-based operating system, you may need to update the permissions of chromedriver after downloading it in order to make it executable: chmod +x chromedriver That's all. If you're still experiencing issues, more info can be found on this other StackOverflow article: Can't use chrome driver for Selenium"} +{"question_id": 10526579, "score": 82, "creation_date": 1336615171, "tags": ["python", "classification", "scikit-learn"], "instruction": "Use scikit-learn to classify into multiple categories\n\nI'm trying to use one of scikit-learn's supervised learning methods to classify pieces of text into one or more categories. The predict function of all the algorithms I tried just returns one match. For example I have a piece of text: \"Theaters in New York compared to those in London\" And I have trained the algorithm to pick a place for every text snippet I feed it. In the above example I would want it to return New York and London, but it only returns New York. Is it possible to use scikit-learn to return multiple results? Or even return the label with the next highest probability? Thanks for your help. ---Update I tried using OneVsRestClassifier but I still only get one option back per piece of text. 
Below is the sample code I am using y_train = ('New York','London') train_set = (\"new york nyc big apple\", \"london uk great britain\") vocab = {'new york' :0,'nyc':1,'big apple':2,'london' : 3, 'uk': 4, 'great britain' : 5} count = CountVectorizer(analyzer=WordNGramAnalyzer(min_n=1, max_n=2),vocabulary=vocab) test_set = ('nice day in nyc','london town','hello welcome to the big apple. enjoy it here and london too') X_vectorized = count.transform(train_set).todense() smatrix2 = count.transform(test_set).todense() base_clf = MultinomialNB(alpha=1) clf = OneVsRestClassifier(base_clf).fit(X_vectorized, y_train) Y_pred = clf.predict(smatrix2) print Y_pred Result: ['New York' 'London' 'London']", "output": "What you want is called multi-label classification. Scikits-learn can do that. See here: http://scikit-learn.org/dev/modules/multiclass.html. I'm not sure what's going wrong in your example, my version of sklearn apparently doesn't have WordNGramAnalyzer. Perhaps it's a question of using more training examples or trying a different classifier? Though note that the multi-label classifier expects the target to be a list of tuples/lists of labels. The following works for me: import numpy as np from sklearn.pipeline import Pipeline from sklearn.feature_extraction.text import CountVectorizer from sklearn.svm import LinearSVC from sklearn.feature_extraction.text import TfidfTransformer from sklearn.multiclass import OneVsRestClassifier X_train = np.array([\"new york is a hell of a town\", \"new york was originally dutch\", \"the big apple is great\", \"new york is also called the big apple\", \"nyc is nice\", \"people abbreviate new york city as nyc\", \"the capital of great britain is london\", \"london is in the uk\", \"london is in england\", \"london is in great britain\", \"it rains a lot in london\", \"london hosts the british museum\", \"new york is great and so is london\", \"i like london better than new york\"]) y_train = [[0],[0],[0],[0],[0],[0],[1],[1],[1],[1],[1],[1],[0,1],[0,1]] X_test = np.array(['nice day in nyc', 'welcome to london', 'hello welcome to new york. enjoy it here and london too']) target_names = ['New York', 'London'] classifier = Pipeline([ ('vectorizer', CountVectorizer(min_n=1,max_n=2)), ('tfidf', TfidfTransformer()), ('clf', OneVsRestClassifier(LinearSVC()))]) classifier.fit(X_train, y_train) predicted = classifier.predict(X_test) for item, labels in zip(X_test, predicted): print '%s => %s' % (item, ', '.join(target_names[x] for x in labels)) For me, this produces the output: nice day in nyc => New York welcome to london => London hello welcome to new york. enjoy it here and london too => New York, London"} +{"question_id": 19255120, "score": 82, "creation_date": 1381255725, "tags": ["python", "math", "operators", "square-root"], "instruction": "Is there a short-hand for nth root of x in Python?\n\nIn maths, if I wish to calculate 3 to the power of 2 then no symbol is required, but I write the 2 small: 3\u00b2. In Python this operation seems to be represented by the ** syntax. >>> 3**2 9 If I want to go the other direction and calculate the 2nd root of 9 then in maths I need to use a symbol: 2\u221a9 = 3 Is there a short-hand symbol in Python, similar to ** that achieves this i.e. 29? Or do I need to use the math module?", "output": "nth root of x is x^(1/n), so you can do 9**(1/2) to find the 2nd root of 9, for example. 
In general, you can compute the nth root of x as: x**(1/n) Note: In Python 2, you had to do 1/float(n) or 1.0/n so that the result would be a float rather than an int. For more details, see Why does Python give the \"wrong\" answer for square root?"} +{"question_id": 13648813, "score": 82, "creation_date": 1354291792, "tags": ["python"], "instruction": "What is the pythonic way to count the leading spaces in a string?\n\nI know I can count the leading spaces in a string with this: >>> a = \" foo bar baz qua \\n\" >>> print \"Leading spaces\", len(a) - len(a.lstrip()) Leading spaces 3 >>> But is there a more pythonic way?", "output": "Your way is pythonic but incorrect, it will also count other whitespace chars, to count only spaces be explicit a.lstrip(' '). Compare a = \" \\r\\t\\n\\tfoo bar baz qua \\n\" print(\"Leading spaces\", len(a) - len(a.lstrip())) >>> Leading spaces 7 and print(\"Leading spaces\", len(a) - len(a.lstrip(' '))) >>> Leading spaces 3"} +{"question_id": 75269700, "score": 82, "creation_date": 1674928449, "tags": ["python", "pre-commit", "pre-commit.com", "isort"], "instruction": "pre-commit fails to install isort 5.11.4 with error \"RuntimeError: The Poetry configuration is invalid\"\n\npre-commit suddenly started to fail installing the isort hook in our builds today with the following error [INFO] Installing environment for https://github.com/pycqa/isort. [INFO] Once installed this environment will be reused. [INFO] This may take a few minutes... An unexpected error has occurred: CalledProcessError: command: ('/builds/.../.cache/pre-commit/repo0_h0f938/py_env-python3.8/bin/python', '-mpip', 'install', '.') return code: 1 expected return code: 0 [...] stderr: ERROR: Command errored out with exit status 1: [...] File \"/tmp/pip-build-env-_3j1398p/overlay/lib/python3.8/site-packages/poetry/core/masonry/api.py\", line 40, in prepare_metadata_for_build_wheel poetry = Factory().create_poetry(Path(\".\").resolve(), with_groups=False) File \"/tmp/pip-build-env-_3j1398p/overlay/lib/python3.8/site-packages/poetry/core/factory.py\", line 57, in create_poetry raise RuntimeError(\"The Poetry configuration is invalid:\\n\" + message) RuntimeError: The Poetry configuration is invalid: - [extras.pipfile_deprecated_finder.2] 'pip-shims<=0.3.4' does not match '^[a-zA-Z-_.0-9]+$' It seems to be related with poetry configuration..", "output": "Upgrading the hook to the freshly released isort 5.12.0 seems to be fixing the issue. Looking at the commit stack from isort repo, it sounds like recent version of Poetry had a breaking change incompatible with isort <= 5.11.4 (commit)"} +{"question_id": 17178483, "score": 82, "creation_date": 1371587995, "tags": ["python", "http", "streamreader", "webrequest"], "instruction": "How do you send an HTTP Get Web Request in Python?\n\nI am having trouble sending data to a website and getting a response in Python. I have seen similar questions, but none of them seem to accomplish what I am aiming for. 
This is my C# code I'm trying to port to Python: static void Request(Uri selectedUri) { HttpWebRequest request = (HttpWebRequest)WebRequest.Create(selectedUri); request.ServicePoint.BindIPEndPointDelegate = BindIPEndPointCallback; request.Method = \"GET\"; request.Timeout = (int)Timeout.TotalMilliseconds; request.ReadWriteTimeout = (int)Timeout.TotalMilliseconds; request.CachePolicy = CachePolicy; request.UserAgent = UserAgent; using (HttpWebResponse response = (HttpWebResponse)request.GetResponse()) { using (StreamReader responseReader = new StreamReader(response.GetResponseStream())) { string responseText = responseReader.ReadToEnd(); File.WriteAllText(UrlFileName, responseText.Trim(), Encoding.ASCII); } } } Here is my attempt in Python: def request(): web = httplib.HTTPConnection('https://someurl.com'); headers = {\"Content-type\": \"application/x-www-form-urlencoded\", \"Accept\": \"text/plain\"} web.request(\"GET\", \"/heartbeat.jsp\", headers); response = web.getresponse(); stream = \"\"; #something is wrong here Any help would be appreciated!", "output": "You can use urllib2 import urllib2 content = urllib2.urlopen(some_url).read() print content Also you can use httplib import httplib conn = httplib.HTTPConnection(\"www.python.org\") conn.request(\"HEAD\",\"/index.html\") res = conn.getresponse() print res.status, res.reason # Result: 200 OK or the requests library import requests r = requests.get('https://api.github.com/user', auth=('user', 'pass')) r.status_code # Result: 200"} +{"question_id": 17533024, "score": 82, "creation_date": 1373306253, "tags": ["python", "firefox", "selenium", "timeout", "selenium-webdriver"], "instruction": "How to set Selenium Python WebDriver default timeout?\n\nTrying to find a good way to set a maximum time limit for command execution latency in Selenium Python WebDriver. Ideally, something like: my_driver = get_my_driver() my_driver.set_timeout(30) # seconds my_driver.get('http://www.example.com') # stops / throws exception when time is over 30 seconds would work. I have found .implicitly_wait(30), but I'm not sure if it results in the desired behavior. In case it is useful, we are specifically using the WebDriver for Firefox. EDIT As per @amey's answer, this might be useful: ff = webdriver.Firefox() ff.implicitly_wait(10) # seconds ff.get(\"http://somedomain/url_that_delays_loading\") myDynamicElement = ff.find_element_by_id(\"myDynamicElement\") However, it is not clear to me whether the implicit wait applies both to get (which is the desired functionality) and to find_element_by_id. Thanks very much!", "output": "In python, the method to create a timeout for a page to load is: Firefox, Chromedriver and undetected_chromedriver: driver.set_page_load_timeout(30) Other: driver.implicitly_wait(30) This will throw a TimeoutException whenever the page load takes more than 30 seconds."} +{"question_id": 53779986, "score": 82, "creation_date": 1544791161, "tags": ["python", "pandas", "dataframe", "formula", "eval"], "instruction": "Dynamically evaluate an expression from a formula in Pandas\n\nI would like to perform arithmetic on one or more dataframes columns using pd.eval. Specifically, I would like to port the following code that evaluates a formula: x = 5 df2['D'] = df1['A'] + (df1['B'] * x) ...to code using pd.eval. The reason for using pd.eval is that I would like to automate many workflows, so creating them dynamically will be useful to me. 
My two input DataFrames are: import pandas as pd import numpy as np np.random.seed(0) df1 = pd.DataFrame(np.random.choice(10, (5, 4)), columns=list('ABCD')) df2 = pd.DataFrame(np.random.choice(10, (5, 4)), columns=list('ABCD')) df1 A B C D 0 5 0 3 3 1 7 9 3 5 2 2 4 7 6 3 8 8 1 6 4 7 7 8 1 df2 A B C D 0 5 9 8 9 1 4 3 0 3 2 5 0 2 3 3 8 1 3 3 4 3 7 0 1 I am trying to better understand pd.eval's engine and parser arguments to determine how best to solve my problem. I have gone through the documentation, but the difference was not made clear to me. What arguments should be used to ensure my code is working at the maximum performance? Is there a way to assign the result of the expression back to df2? Also, to make things more complicated, how do I pass x as an argument inside the string expression?", "output": "You can use 1) pd.eval(), 2) df.query(), or 3) df.eval(). Their various features and functionality are discussed below. Examples will involve these dataframes (unless otherwise specified). np.random.seed(0) df1 = pd.DataFrame(np.random.choice(10, (5, 4)), columns=list('ABCD')) df2 = pd.DataFrame(np.random.choice(10, (5, 4)), columns=list('ABCD')) df3 = pd.DataFrame(np.random.choice(10, (5, 4)), columns=list('ABCD')) df4 = pd.DataFrame(np.random.choice(10, (5, 4)), columns=list('ABCD')) 1) pandas.eval This is the \"Missing Manual\" that pandas doc should contain. Note: of the three functions being discussed, pd.eval is the most important. df.eval and df.query call pd.eval under the hood. Behaviour and usage is more or less consistent across the three functions, with some minor semantic variations which will be highlighted later. This section will introduce functionality that is common across all the three functions - this includes, (but not limited to) allowed syntax, precedence rules, and keyword arguments. pd.eval can evaluate arithmetic expressions which can consist of variables and/or literals. These expressions must be passed as strings. So, to answer the question as stated, you can do x = 5 pd.eval(\"df1.A + (df1.B * x)\") Some things to note here: The entire expression is a string df1, df2, and x refer to variables in the global namespace, these are picked up by eval when parsing the expression Specific columns are accessed using the attribute accessor index. You can also use \"df1['A'] + (df1['B'] * x)\" to the same effect. I will be addressing the specific issue of reassignment in the section explaining the target=... attribute below. But for now, here are more simple examples of valid operations with pd.eval: pd.eval(\"df1.A + df2.A\") # Valid, returns a pd.Series object pd.eval(\"abs(df1) ** .5\") # Valid, returns a pd.DataFrame object ...and so on. Conditional expressions are also supported in the same way. The statements below are all valid expressions and will be evaluated by the engine. pd.eval(\"df1 > df2\") pd.eval(\"df1 > 5\") pd.eval(\"df1 < df2 and df3 < df4\") pd.eval(\"df1 in [1, 2, 3]\") pd.eval(\"1 < 2 < 3\") A list detailing all the supported features and syntax can be found in the documentation. 
In summary, Arithmetic operations except for the left shift (<<) and right shift (>>) operators, e.g., df + 2 * pi / s ** 4 % 42 - the_golden_ratio Comparison operations, including chained comparisons, e.g., 2 < df < df2 Boolean operations, e.g., df < df2 and df3 < df4 or not df_bool list and tuple literals, e.g., [1, 2] or (1, 2) Attribute access, e.g., df.a Subscript expressions, e.g., df[0] Simple variable evaluation, e.g., pd.eval('df') (this is not very useful) Math functions: sin, cos, exp, log, expm1, log1p, sqrt, sinh, cosh, tanh, arcsin, arccos, arctan, arccosh, arcsinh, arctanh, abs and arctan2. This section of the documentation also specifies syntax rules that are not supported, including set/dict literals, if-else statements, loops, and comprehensions, and generator expressions. From the list, it is obvious you can also pass expressions involving the index, such as pd.eval('df1.A * (df1.index > 1)') 1a) Parser Selection: The parser=... argument pd.eval supports two different parser options when parsing the expression string to generate the syntax tree: pandas and python. The main difference between the two is highlighted by slightly differing precedence rules. Using the default parser pandas, the overloaded bitwise operators & and | which implement vectorized AND and OR operations with pandas objects will have the same operator precedence as and and or. So, pd.eval(\"(df1 > df2) & (df3 < df4)\") Will be the same as pd.eval(\"df1 > df2 & df3 < df4\") # pd.eval(\"df1 > df2 & df3 < df4\", parser='pandas') And also the same as pd.eval(\"df1 > df2 and df3 < df4\") Here, the parentheses are necessary. To do this conventionally, the parentheses would be required to override the higher precedence of bitwise operators: (df1 > df2) & (df3 < df4) Without that, we end up with df1 > df2 & df3 < df4 ValueError: The truth value of a DataFrame is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all(). Use parser='python' if you want to maintain consistency with python's actual operator precedence rules while evaluating the string. pd.eval(\"(df1 > df2) & (df3 < df4)\", parser='python') The other difference between the two types of parsers are the semantics of the == and != operators with list and tuple nodes, which have the similar semantics as in and not in respectively, when using the 'pandas' parser. For example, pd.eval(\"df1 == [1, 2, 3]\") Is valid, and will run with the same semantics as pd.eval(\"df1 in [1, 2, 3]\") OTOH, pd.eval(\"df1 == [1, 2, 3]\", parser='python') will throw a NotImplementedError error. 1b) Backend Selection: The engine=... argument There are two options - numexpr (the default) and python. The numexpr option uses the numexpr backend which is optimized for performance. With Python backend, your expression is evaluated similar to just passing the expression to Python's eval function. You have the flexibility of doing more inside expressions, such as string operations, for instance. df = pd.DataFrame({'A': ['abc', 'def', 'abacus']}) pd.eval('df.A.str.contains(\"ab\")', engine='python') 0 True 1 False 2 True Name: A, dtype: bool Unfortunately, this method offers no performance benefits over the numexpr engine, and there are very few security measures to ensure that dangerous expressions are not evaluated, so use at your own risk! It is generally not recommended to change this option to 'python' unless you know what you're doing. 
1c) local_dict and global_dict arguments Sometimes, it is useful to supply values for variables used inside expressions, but not currently defined in your namespace. You can pass a dictionary to local_dict For example: pd.eval(\"df1 > thresh\") UndefinedVariableError: name 'thresh' is not defined This fails because thresh is not defined. However, this works: pd.eval(\"df1 > thresh\", local_dict={'thresh': 10}) This is useful when you have variables to supply from a dictionary. Alternatively, with the Python engine, you could simply do this: mydict = {'thresh': 5} # Dictionary values with *string* keys cannot be accessed without # using the 'python' engine. pd.eval('df1 > mydict[\"thresh\"]', engine='python') But this is going to possibly be much slower than using the 'numexpr' engine and passing a dictionary to local_dict or global_dict. Hopefully, this should make a convincing argument for the use of these parameters. 1d) The target (+ inplace) argument, and Assignment Expressions This is not often a requirement because there are usually simpler ways of doing this, but you can assign the result of pd.eval to an object that implements __getitem__ such as dicts, and (you guessed it) DataFrames. Consider the example in the question x = 5 df2['D'] = df1['A'] + (df1['B'] * x) To assign a column \"D\" to df2, we do pd.eval('D = df1.A + (df1.B * x)', target=df2) A B C D 0 5 9 8 5 1 4 3 0 52 2 5 0 2 22 3 8 1 3 48 4 3 7 0 42 This is not an in-place modification of df2 (but it can be... read on). Consider another example: pd.eval('df1.A + df2.A') 0 10 1 11 2 7 3 16 4 10 dtype: int32 If you wanted to (for example) assign this back to a DataFrame, you could use the target argument as follows: df = pd.DataFrame(columns=list('FBGH'), index=df1.index) df F B G H 0 NaN NaN NaN NaN 1 NaN NaN NaN NaN 2 NaN NaN NaN NaN 3 NaN NaN NaN NaN 4 NaN NaN NaN NaN df = pd.eval('B = df1.A + df2.A', target=df) # Similar to # df = df.assign(B=pd.eval('df1.A + df2.A')) df F B G H 0 NaN 10 NaN NaN 1 NaN 11 NaN NaN 2 NaN 7 NaN NaN 3 NaN 16 NaN NaN 4 NaN 10 NaN NaN If you wanted to perform an in-place mutation on df, set inplace=True. pd.eval('B = df1.A + df2.A', target=df, inplace=True) # Similar to # df['B'] = pd.eval('df1.A + df2.A') df F B G H 0 NaN 10 NaN NaN 1 NaN 11 NaN NaN 2 NaN 7 NaN NaN 3 NaN 16 NaN NaN 4 NaN 10 NaN NaN If inplace is set without a target, a ValueError is raised. While the target argument is fun to play around with, you will seldom need to use it. If you wanted to do this with df.eval, you would use an expression involving an assignment: df = df.eval(\"B = @df1.A + @df2.A\") # df.eval(\"B = @df1.A + @df2.A\", inplace=True) df F B G H 0 NaN 10 NaN NaN 1 NaN 11 NaN NaN 2 NaN 7 NaN NaN 3 NaN 16 NaN NaN 4 NaN 10 NaN NaN Note One of pd.eval's unintended uses is parsing literal strings in a manner very similar to ast.literal_eval: pd.eval(\"[1, 2, 3]\") array([1, 2, 3], dtype=object) It can also parse nested lists with the 'python' engine: pd.eval(\"[[1, 2, 3], [4, 5], [10]]\", engine='python') [[1, 2, 3], [4, 5], [10]] And lists of strings: pd.eval([\"[1, 2, 3]\", \"[4, 5]\", \"[10]\"], engine='python') [[1, 2, 3], [4, 5], [10]] The problem, however, is for lists with length larger than 100: pd.eval([\"[1]\"] * 100, engine='python') # Works pd.eval([\"[1]\"] * 101, engine='python') AttributeError: 'PandasExprVisitor' object has no attribute 'visit_Ellipsis' More information can this error, causes, fixes, and workarounds can be found here. 
2) DataFrame.eval: As mentioned above, df.eval calls pd.eval under the hood, with a bit of juxtaposition of arguments. The v0.23 source code shows this: def eval(self, expr, inplace=False, **kwargs): from pandas.core.computation.eval import eval as _eval inplace = validate_bool_kwarg(inplace, 'inplace') resolvers = kwargs.pop('resolvers', None) kwargs['level'] = kwargs.pop('level', 0) + 1 if resolvers is None: index_resolvers = self._get_index_resolvers() resolvers = dict(self.iteritems()), index_resolvers if 'target' not in kwargs: kwargs['target'] = self kwargs['resolvers'] = kwargs.get('resolvers', ()) + tuple(resolvers) return _eval(expr, inplace=inplace, **kwargs) eval creates arguments, does a little validation, and passes the arguments on to pd.eval. For more, you can read on: When to use DataFrame.eval() versus pandas.eval() or Python eval() 2a) Usage Differences 2a1) Expressions with DataFrames vs. Series Expressions For dynamic queries associated with entire DataFrames, you should prefer pd.eval. For example, there is no simple way to specify the equivalent of pd.eval(\"df1 + df2\") when you call df1.eval or df2.eval. 2a2) Specifying Column Names Another other major difference is how columns are accessed. For example, to add two columns \"A\" and \"B\" in df1, you would call pd.eval with the following expression: pd.eval(\"df1.A + df1.B\") With df.eval, you need only supply the column names: df1.eval(\"A + B\") Since, within the context of df1, it is clear that \"A\" and \"B\" refer to column names. You can also refer to the index and columns using index (unless the index is named, in which case you would use the name). df1.eval(\"A + index\") Or, more generally, for any DataFrame with an index having 1 or more levels, you can refer to the kth level of the index in an expression using the variable \"ilevel_k\" which stands for \"index at level k\". IOW, the expression above can be written as df1.eval(\"A + ilevel_0\"). These rules also apply to df.query. 2a3) Accessing Variables in Local/Global Namespace Variables supplied inside expressions must be preceded by the \"@\" symbol, to avoid confusion with column names. A = 5 df1.eval(\"A > @A\") The same goes for query. It goes without saying that your column names must follow the rules for valid identifier naming in Python to be accessible inside eval. See here for a list of rules on naming identifiers. 2a4) Multiline Queries and Assignment A little known fact is that eval supports multiline expressions that deal with assignment (whereas query doesn't). For example, to create two new columns \"E\" and \"F\" in df1 based on some arithmetic operations on some columns, and a third column \"G\" based on the previously created \"E\" and \"F\", we can do df1.eval(\"\"\" E = A + B F = @df2.A + @df2.B G = E >= F \"\"\") A B C D E F G 0 5 0 3 3 5 14 False 1 7 9 3 5 16 7 True 2 2 4 7 6 6 5 True 3 8 8 1 6 16 9 True 4 7 7 8 1 14 10 True 3) eval vs query It helps to think of df.query as a function that uses pd.eval as a subroutine. Typically, query (as the name suggests) is used to evaluate conditional expressions (i.e., expressions that result in True/False values) and return the rows corresponding to the True result. The result of the expression is then passed to loc (in most cases) to return the rows that satisfy the expression. 
According to the documentation, The result of the evaluation of this expression is first passed to DataFrame.loc and if that fails because of a multidimensional key (e.g., a DataFrame) then the result will be passed to DataFrame.__getitem__(). This method uses the top-level pandas.eval() function to evaluate the passed query. In terms of similarity, query and df.eval are both alike in how they access column names and variables. This key difference between the two, as mentioned above is how they handle the expression result. This becomes obvious when you actually run an expression through these two functions. For example, consider df1.A 0 5 1 7 2 2 3 8 4 7 Name: A, dtype: int32 df1.B 0 9 1 3 2 0 3 1 4 7 Name: B, dtype: int32 To get all rows where \"A\" >= \"B\" in df1, we would use eval like this: m = df1.eval(\"A >= B\") m 0 True 1 False 2 False 3 True 4 True dtype: bool m represents the intermediate result generated by evaluating the expression \"A >= B\". We then use the mask to filter df1: df1[m] # df1.loc[m] A B C D 0 5 0 3 3 3 8 8 1 6 4 7 7 8 1 However, with query, the intermediate result \"m\" is directly passed to loc, so with query, you would simply need to do df1.query(\"A >= B\") A B C D 0 5 0 3 3 3 8 8 1 6 4 7 7 8 1 Performance wise, it is exactly the same. df1_big = pd.concat([df1] * 100000, ignore_index=True) %timeit df1_big[df1_big.eval(\"A >= B\")] %timeit df1_big.query(\"A >= B\") 14.7 ms \u00b1 33.9 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 100 loops each) 14.7 ms \u00b1 24.3 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 100 loops each) But the latter is more concise, and expresses the same operation in a single step. Note that you can also do weird stuff with query like this (to, say, return all rows indexed by df1.index) df1.query(\"index\") # Same as df1.loc[df1.index] # Pointless,... I know A B C D 0 5 0 3 3 1 7 9 3 5 2 2 4 7 6 3 8 8 1 6 4 7 7 8 1 But don't. Bottom line: Please use query when querying or filtering rows based on a conditional expression."} +{"question_id": 51930339, "score": 82, "creation_date": 1534766573, "tags": ["python", "multiple-inheritance", "mypy", "python-typing"], "instruction": "How do I correctly add type-hints to Mixin classes?\n\nConsider the following example. The example is contrived but illustrates the point in a runnable example: class MultiplicatorMixin: def multiply(self, m: int) -> int: return self.value * m class AdditionMixin: def add(self, b: int) -> int: return self.value + b class MyClass(MultiplicatorMixin, AdditionMixin): def __init__(self, value: int) -> None: self.value = value instance = MyClass(10) print(instance.add(2)) print(instance.multiply(2)) When executed this will give the following output: 12 20 The code works. But running mypy on it, yields the following errors: example.py:4: error: \"MultiplicatorMixin\" has no attribute \"value\" example.py:10: error: \"AdditionMixin\" has no attribute \"value\" I understand why mypy gives this result. But the mixin classes are never used by themselves. They are always used as additional superclasses. For context, this is a pattern which has been used in an existing application and I am in the process of adding type-hints. And in this case, the errors are false-positives. I am thinking about rewriting the part using the mixins as I don't particularly like it and the same could probably be done with reorganising the class hierarchy. 
But I still would like to know how something like this could be properly hinted.", "output": "In addition to Campi's answer about the mypy's recommendation of typing mixins with Protocol: An alternative to typing the methods' selfs is just inheriting the protocol. from typing import Protocol class HasValueProtocol(Protocol): @property def value(self) -> int: ... class MultiplicationMixin(HasValueProtocol): def multiply(self, m: int) -> int: return self.value * m class AdditionMixin(HasValueProtocol): def add(self, b: int) -> int: return self.value + b class MyClass(MultiplicationMixin, AdditionMixin): def __init__(self, value: int) -> None: self.value = value Additionally, if you are TYPE_CHECKING a Protocol, and given that you cannot forward reference a parent class (i.e. passing the parent class as a string literal), a workaround would be: from typing import Protocol, TYPE_CHECKING if TYPE_CHECKING: class HasValueProtocol(Protocol): @property def value(self) -> int: ... else: class HasValueProtocol: ... class MultiplicationMixin(HasValueProtocol): def multiply(self, m: int) -> int: return self.value * m ..."} +{"question_id": 67699451, "score": 82, "creation_date": 1622009818, "tags": ["python", "python-3.x", "fastapi", "pydantic"], "instruction": "Make every field as optional with Pydantic\n\nI'm making an API with FastAPI and Pydantic. I would like to have some PATCH endpoints, where 1 or N fields of a record could be edited at once. Moreover, I would like the client to only pass the necessary fields in the payload. Example: class Item(BaseModel): name: str description: str price: float tax: float @app.post(\"/items\", response_model=Item) async def post_item(item: Item): ... @app.patch(\"/items/{item_id}\", response_model=Item) async def update_item(item_id: str, item: Item): ... In this example, for the POST request, I want every field to be required. However, in the PATCH endpoint, I don't mind if the payload only contains, for example, the description field. That's why I wish to have all fields as optional. Naive approach: class UpdateItem(BaseModel): name: Optional[str] = None description: Optional[str] = None price: Optional[float] = None tax: Optional[float] But that would be terrible in terms of code repetition. Any better option?", "output": "This method prevents data validation Read this by @Anime Bk: https://stackoverflow.com/a/75011200 Solution with metaclasses I've just come up with the following: class AllOptional(pydantic.main.ModelMetaclass): def __new__(cls, name, bases, namespaces, **kwargs): annotations = namespaces.get('__annotations__', {}) for base in bases: annotations.update(base.__annotations__) for field in annotations: if not field.startswith('__'): annotations[field] = Optional[annotations[field]] namespaces['__annotations__'] = annotations return super().__new__(cls, name, bases, namespaces, **kwargs) Use it as: class UpdatedItem(Item, metaclass=AllOptional): pass So basically it replace all non optional fields with Optional Any edits are welcome! 
With your example: from typing import Optional from fastapi import FastAPI from pydantic import BaseModel import pydantic app = FastAPI() class Item(BaseModel): name: str description: str price: float tax: float class AllOptional(pydantic.main.ModelMetaclass): def __new__(self, name, bases, namespaces, **kwargs): annotations = namespaces.get('__annotations__', {}) for base in bases: annotations.update(base.__annotations__) for field in annotations: if not field.startswith('__'): annotations[field] = Optional[annotations[field]] namespaces['__annotations__'] = annotations return super().__new__(self, name, bases, namespaces, **kwargs) class UpdatedItem(Item, metaclass=AllOptional): pass # This continues to work correctly @app.get(\"/items/{item_id}\", response_model=Item) async def get_item(item_id: int): return { 'name': 'Uzbek Palov', 'description': 'Palov is my traditional meal', 'price': 15.0, 'tax': 0.5, } @app.patch(\"/items/{item_id}\") # does using response_model=UpdatedItem makes mypy sad? idk, i did not check async def update_item(item_id: str, item: UpdatedItem): return item"} +{"question_id": 27796294, "score": 82, "creation_date": 1420538791, "tags": ["python", "python-asyncio"], "instruction": "When using asyncio, how do you allow all running tasks to finish before shutting down the event loop\n\nI have the following code: @asyncio.coroutine def do_something_periodically(): while True: asyncio.async(my_expensive_operation()) yield from asyncio.sleep(my_interval) if shutdown_flag_is_set: print(\"Shutting down\") break I run this function until complete. The problem occurs when shutdown is set - the function completes and any pending tasks are never run. This is the error: task: wait_for=> How do I schedule a shutdown correctly? To give some context, I'm writing a system monitor which reads from /proc/stat every 5 seconds, computes the cpu usage in that period, and then sends the result to a server. I want to keep scheduling these monitoring jobs until I receive sigterm, when I stop scheduling, wait for all current jobs to finish, and exit gracefully.", "output": "You can retrieve unfinished tasks and run the loop again until they finished, then close the loop or exit your program. pending = asyncio.all_tasks() loop.run_until_complete(asyncio.gather(*pending)) pending is a list of pending tasks. asyncio.gather() allows to wait on several tasks at once. If you want to ensure all the tasks are completed inside a coroutine (maybe you have a \"main\" coroutine), you can do it this way, for instance: async def do_something_periodically(): while True: asyncio.create_task(my_expensive_operation()) await asyncio.sleep(my_interval) if shutdown_flag_is_set: print(\"Shutting down\") break await asyncio.gather(*asyncio.all_tasks()) Also, in this case, since all the tasks are created in the same coroutine, you already have access to the tasks: async def do_something_periodically(): tasks = [] while True: tasks.append(asyncio.create_task(my_expensive_operation())) await asyncio.sleep(my_interval) if shutdown_flag_is_set: print(\"Shutting down\") break await asyncio.gather(*tasks)"} +{"question_id": 13152252, "score": 82, "creation_date": 1351664176, "tags": ["c++", "python"], "instruction": "Is there a compact equivalent to Python range() in C++/STL\n\nHow can I do the equivalent of the following using C++/STL? I want to fill a std::vector with a range of values [min, max). 
# Python >>> x = range(0, 10) >>> x [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] I suppose I could use std::generate_n and provide a functor to generate the sequence, but I was wondering if there is a more succinct way of doing this using STL?", "output": "In C++11, there's std::iota: #include <vector> #include <numeric> //std::iota int main() { std::vector<int> x(10); std::iota(std::begin(x), std::end(x), 0); //0 is the starting number } C++20 introduced a lazy version (just like Python) as part of the ranges library: #include <ranges> #include <iostream> namespace views = std::views; int main() { for (int x : views::iota(0, 10)) { std::cout << x << ' '; // 0 1 2 3 4 5 6 7 8 9 } }"} +{"question_id": 1171584, "score": 82, "creation_date": 1248354560, "tags": ["python", "json", "google-app-engine"], "instruction": "How can I parse JSON in Google App Engine?\n\nI'd like to parse a JSON string into an object under Google App Engine (python). What do you recommend? Something to encode/stringify would be nice too. Is what you recommend built in, or a library that I have to include in my app? Is it secure? Thanks.", "output": "Consider using Django's json lib, which is included with GAE. from django.utils import simplejson as json # load the object from a string obj = json.loads( string ) The link above has examples of Django's serializer, and here's the link for simplejson's documentation. If you're looking at storing Python class instances or objects (as opposed to compositions of lists, strings, numbers, and dictionaries), you probably want to look at pickle. Incidentally, to get Django 1.0 (instead of Django 0.96) running on GAE, you can use the following call in your main.py, per this article: from google.appengine.dist import use_library use_library('django', '1.0') Edit: Native JSON support in Google App Engine 1.6.0 with Python 2.7 As of Google App Engine 1.6.0, you can use the Python 2.7 runtime by adding runtime: python27 in app.yaml, and then you can import the native JSON library with import json."} +{"question_id": 35646908, "score": 82, "creation_date": 1456475427, "tags": ["python", "arrays", "numpy", "shuffle"], "instruction": "Numpy shuffle multidimensional array by row only, keep column order unchanged\n\nHow can I shuffle a multidimensional array by row only in Python (so do not shuffle the columns). I am looking for the most efficient solution, because my matrix is very huge. Is it also possible to do this highly efficient on the original array (to save memory)? Example: import numpy as np X = np.random.random((6, 2)) print(X) Y = ???shuffle by row only not colls??? print(Y) What I expect now is original matrix: [[ 0.48252164 0.12013048] [ 0.77254355 0.74382174] [ 0.45174186 0.8782033 ] [ 0.75623083 0.71763107] [ 0.26809253 0.75144034] [ 0.23442518 0.39031414]] Output shuffle the rows not cols e.g.: [[ 0.45174186 0.8782033 ] [ 0.48252164 0.12013048] [ 0.77254355 0.74382174] [ 0.75623083 0.71763107] [ 0.23442518 0.39031414] [ 0.26809253 0.75144034]]", "output": "You can use numpy.random.shuffle(). This function only shuffles the array along the first axis of a multi-dimensional array. The order of sub-arrays is changed but their contents remains the same. 
In [2]: import numpy as np In [3]: In [3]: X = np.random.random((6, 2)) In [4]: X Out[4]: array([[0.71935047, 0.25796155], [0.4621708 , 0.55140423], [0.22605866, 0.61581771], [0.47264172, 0.79307633], [0.22701656, 0.11927993], [0.20117207, 0.2754544 ]]) In [5]: np.random.shuffle(X) In [6]: X Out[6]: array([[0.71935047, 0.25796155], [0.47264172, 0.79307633], [0.4621708 , 0.55140423], [0.22701656, 0.11927993], [0.20117207, 0.2754544 ], [0.22605866, 0.61581771]]) For other functionalities you can also check out the following functions: random.Generator.shuffle random.Generator.permutation random.Generator.permuted The function random.Generator.permuted is introduced in Numpy's 1.20.0 Release. The new function differs from shuffle and permutation in that the subarrays indexed by an axis are permuted rather than the axis being treated as a separate 1-D array for every combination of the other indexes. For example, it is now possible to permute the rows or columns of a 2-D array."} +{"question_id": 30983982, "score": 82, "creation_date": 1434987015, "tags": ["python", "scala", "apache-spark", "apache-spark-sql", "pyspark"], "instruction": "How to use JDBC source to write and read data in (Py)Spark?\n\nThe goal of this question is to document: steps required to read and write data using JDBC connections in PySpark possible issues with JDBC sources and know solutions With small changes these methods should work with other supported languages including Scala and R.", "output": "Writing data Include applicable JDBC driver when you submit the application or start shell. You can use for example --packages: bin/pyspark --packages group:name:version or combining driver-class-path and jars bin/pyspark --driver-class-path $PATH_TO_DRIVER_JAR --jars $PATH_TO_DRIVER_JAR These properties can be also set using PYSPARK_SUBMIT_ARGS environment variable before JVM instance has been started or using conf/spark-defaults.conf to set spark.jars.packages or spark.jars / spark.driver.extraClassPath. Choose desired mode. Spark JDBC writer supports following modes: append: Append contents of this :class:DataFrame to existing data. overwrite: Overwrite existing data. ignore: Silently ignore this operation if data already exists. error (default case): Throw an exception if data already exists. Upserts or other fine-grained modifications are not supported mode = ... Prepare JDBC URI, for example: # You can encode credentials in URI or pass # separately using properties argument # of jdbc method or options url = \"jdbc:postgresql://localhost/foobar\" (Optional) Create a dictionary of JDBC arguments. properties = { \"user\": \"foo\", \"password\": \"bar\" } properties / options can be also used to set supported JDBC connection properties. Use DataFrame.write.jdbc df.write.jdbc(url=url, table=\"baz\", mode=mode, properties=properties) to save the data (see pyspark.sql.DataFrameWriter for details). Known issues: Suitable driver cannot be found when driver has been included using --packages (java.sql.SQLException: No suitable driver found for jdbc: ...) Assuming there is no driver version mismatch to solve this you can add driver class to the properties. For example: properties = { ... \"driver\": \"org.postgresql.Driver\" } using df.write.format(\"jdbc\").options(...).save() may result in: java.lang.RuntimeException: org.apache.spark.sql.execution.datasources.jdbc.DefaultSource does not allow create table as select. Solution unknown. 
in Pyspark 1.3 you can try calling Java method directly: df._jdf.insertIntoJDBC(url, \"baz\", True) Reading data Follow steps 1-4 from Writing data Use sqlContext.read.jdbc: sqlContext.read.jdbc(url=url, table=\"baz\", properties=properties) or sqlContext.read.format(\"jdbc\"): (sqlContext.read.format(\"jdbc\") .options(url=url, dbtable=\"baz\", **properties) .load()) Known issues and gotchas: Suitable driver cannot be found - see: Writing data Spark SQL supports predicate pushdown with JDBC sources although not all predicates can pushed down. It also doesn't delegate limits nor aggregations. Possible workaround is to replace dbtable / table argument with a valid subquery. See for example: Does spark predicate pushdown work with JDBC? More than one hour to execute pyspark.sql.DataFrame.take(4) How to use SQL query to define table in dbtable? By default JDBC data sources loads data sequentially using a single executor thread. To ensure distributed data loading you can: Provide partitioning column (must be IntegerType), lowerBound, upperBound, numPartitions. Provide a list of mutually exclusive predicates predicates, one for each desired partition. See: Partitioning in spark while reading from RDBMS via JDBC, How to optimize partitioning when migrating data from JDBC source?, How to improve performance for slow Spark jobs using DataFrame and JDBC connection? How to partition Spark RDD when importing Postgres using JDBC? In a distributed mode (with partitioning column or predicates) each executor operates in its own transaction. If the source database is modified at the same time there is no guarantee that the final view will be consistent. Where to find suitable drivers: Maven Repository (to obtain required coordinates for --packages select desired version and copy data from a Gradle tab in a form compile-group:name:version substituting respective fields) or Maven Central Repository: PostgreSQL MySQL Other options Depending on the database specialized source might exist, and be preferred in some cases: Greenplum - Pivotal Greenplum-Spark Connector Apache Phoenix - Apache Spark Plugin Microsoft SQL Server - Spark connector for Azure SQL Databases and SQL Server Amazon Redshift - Databricks Redshift connector (current versions available only in a proprietary Databricks Runtime. Discontinued open source version, available on GitHub)."} +{"question_id": 4152376, "score": 82, "creation_date": 1289462473, "tags": ["python", "list", "generator"], "instruction": "How to get the n next values of a generator into a list\n\nI have made a generator to read a file word by word and it works nicely. def word_reader(file): for line in open(file): for p in line.split(): yield p reader = word_reader('txtfile') next(reader) What is the easiest way of getting the n next values into a list?", "output": "Use itertools.islice: list(itertools.islice(it, n))"} +{"question_id": 40385782, "score": 82, "creation_date": 1478106591, "tags": ["python", "pandas", "geopandas", "shapely"], "instruction": "Make a union of polygons in GeoPandas, or Shapely (into a single geometry)\n\nI am trying to find the union of two polygons in GeoPandas and output a single geometry that encompasses points from both polygons as its vertices. The geopandas.overlay function gives me polygons for each individual union but I would like a single polygon. For context, I'm using this to combine two administrative areas together into a single area (i.e. include a town district within a country). 
The following example is from the geopandas website and illustrates what I'd like: from matplotlib import pyplot as plt import geopandas as gpd from shapely.geometry import Polygon polys1 = gpd.GeoSeries([Polygon([(0,0), (2,0), (2,2), (0,2)]), Polygon([(2,2), (4,2), (4,4), (2,4)])]) polys2 = gpd.GeoSeries([Polygon([(1,1), (3,1), (3,3), (1,3)]), Polygon([(3,3), (5,3), (5,5), (3,5)])]) df1 = gpd.GeoDataFrame({'geometry': polys1, 'df1':[1,2]}) df2 = gpd.GeoDataFrame({'geometry': polys2, 'df2':[1,2]}) res_union = gpd.overlay(df1, df2, how='union') res_union.plot() None of the output geometries are what I was expected, which is the following: poly_union = gpd.GeoSeries([Polygon([(0,0), (0,2), (1,2), (1,3), \\ (2,3), (2,4), (3, 4), (3, 5), (5, 5), (5, 3), (4, 3), (4, 2), \\ (3,2), (3,1), (2, 1), (2, 0), (0, 0)])]) poly_union.plot(color = 'red') plt.show() Firstly, how do I output the above polygon (poly_union) from the input polygons (df1, df2) using GeoPandas or shapely? Secondly, what is the correct nomenclature associated with the geometry (poly_union) that I'm trying to find? I would call it a 'union' but every example I find that refers to 'unions' does not output this geometry. Note: This example does not seem to output a single polygon either: poly1 = df1['geometry']; poly2 = df2['geometry'] mergedpoly = poly1.union(poly2) mergedpoly.plot()", "output": "Note: cascaded_union mentioned in the answer below is superceded by unary_union if GEOS 3.2+ is used - this allows unions on different geometry types, not only polygons. To check your version, >>> import shapely.geos >>> shapely.geos.geos_version (3, 5, 1) From the question/answer here, it seems this is called a cascaded_union within shapely: from shapely.ops import cascaded_union polygons = [poly1[0], poly1[1], poly2[0], poly2[1]] boundary = gpd.GeoSeries(cascaded_union(polygons)) boundary.plot(color = 'red') plt.show()"} +{"question_id": 38028384, "score": 82, "creation_date": 1466856568, "tags": ["python", "python-3.x", "beautifulsoup"], "instruction": "Beautifulsoup : Difference between .find() and .select()\n\nWhen you use BeautifulSoup to scrape a certain part of a website, you can use soup.find() and soup.findAll() or soup.select(). Is there a difference between the .find() and the .select() methods? (e.g. In performance or flexibility, etc.) Or are they the same?", "output": "To summarise the comments: select() finds multiple instances and always returns a list, while find() returns only the first or None, so they don't do the same thing. select_one() would be the parallel to find(), as both return a single element or None select() and find_all() always return a list (an empty list if they have no results) I almost always use css selectors when chaining tags or using tag.classname, if looking for a single element without a class I use find(). Essentially it comes down to the use case and personal preference. As far as flexibility goes I think you know the answer, soup.select(\"div[id=foo] > div > div > div[class=fee] > span > span > a\") would look pretty ugly using multiple chained find/find_all calls. The only issue with the css selectors in bs4 is the very limited support, nth-of-type is the only pseudo class implemented and chaining attributes like a[href][src] is also not supported as are many other parts of css selectors. But things like a[href*=..] , a[href^=], a[href$=] etc.. are I think much nicer than find(\"a\", href=re.compile(....)) but again that is personal preference. 
For performance we can run some tests, I modified the code from an answer here running on 800+ html files taken from here, it is not exhaustive but should give a clue to the readability of some of the options and the performance: The modified functions are: from bs4 import BeautifulSoup from glob import iglob def parse_find(soup): author = soup.find(\"h4\", class_=\"h12 talk-link__speaker\").text title = soup.find(\"h4\", class_=\"h9 m5\").text date = soup.find(\"span\", class_=\"meta__val\").text.strip() soup.find(\"footer\",class_=\"footer\").find_previous(\"data\", { \"class\": \"talk-transcript__para__time\"}).text.split(\":\") soup.find_all(\"span\",class_=\"talk-transcript__fragment\") def parse_select(soup): author = soup.select_one(\"h4.h12.talk-link__speaker\").text title = soup.select_one(\"h4.h9.m5\").text date = soup.select_one(\"span.meta__val\").text.strip() soup.select_one(\"footer.footer\").find_previous(\"data\", { \"class\": \"talk-transcript__para__time\"}).text soup.select(\"span.talk-transcript__fragment\") def test(patt, func): for html in iglob(patt): with open(html) as f: func(BeautifulSoup(f, \"lxml\")) Now for the timings: In [7]: from testing import test, parse_find, parse_select In [8]: timeit test(\"./talks/*.html\",parse_find) 1 loops, best of 3: 51.9 s per loop In [9]: timeit test(\"./talks/*.html\",parse_select) 1 loops, best of 3: 32.7 s per loop Like I said not exhaustive but I think we can safely say the css selectors are definitely more efficient."} +{"question_id": 51540404, "score": 82, "creation_date": 1532612750, "tags": ["python", "python-3.x", "pip", "dependencies", "pipenv"], "instruction": "How to resolve Python package dependencies with pipenv?\n\nI am using pipenv to handle Python package dependencies. The Python package is using two packages (named pckg1 and pckg2) that rely on the same package named pckg3, but from two different versions. Showing the dependency tree : $ pipenv graph pckg1==3.0.0 - pckg3 [required: >=4.1.0] pckg2==1.0.2 - pckg3 [required: ==4.0.11] An attempt to install dependencies : $ pipenv install Warning: Your dependencies could not be resolved. You likely have a mismatch in your sub-dependencies. You can use $ pipenv install --skip-lock to bypass this mechanism, then run $ pipenv graph to inspect the situation. Hint: try $ pipenv lock --pre if it is a pre-release dependency. Could not find a version that matches pckg3==4.0.11,==4.1.0,>=4.1.0 (from -r C:\\Users\\user\\AppData\\Local\\Temp\\pipenv-o7uxm080-requirements\\pipenv-hwekv7dc-constraints.txt (line 2)) Tried: 3.3.1, 3.3.2, 3.3.3, 3.4.0, 3.4.2, 4.0.0, 4.0.0, 4.0.1, 4.0.1, 4.0.2, 4.0.2, 4.0.3, 4.0.3, 4.0.4, 4.0.4, 4.0.6, 4.0.6, 4.0.8, 4.0.8, 4.0.9, 4.0.9, 4.0.10, 4.0.10, 4.0.11, 4.0.11, 4.1.0, 4.1.0, 4.1.1, 4.1.1, 4.1.2, 4.1.2, 4.2.1, 4.2.1, 4.3.0, 4.3.0 There are incompatible versions in the resolved dependencies. As suggested, pipenv install --skip-lock does the trick, but the dependency tree is still unresolved. I would love to tell Pipenv to override pckg2's requirement, and specify pckg3>=4.1.0. How can this be resolved?", "output": "I get that error constantly. Clearing the cache in the lock file works beautifully every time. $ pipenv lock --pre --clear"} +{"question_id": 11949808, "score": 82, "creation_date": 1344938003, "tags": ["python", "oop", "function", "methods"], "instruction": "What is the difference between a function, an unbound method and a bound method?\n\nI'm asking this question because of a discussion on the comment thread of this answer. 
I'm 90% of the way to getting my head round it. In [1]: class A(object): # class named 'A' ...: def f1(self): pass ...: In [2]: a = A() # an instance f1 exists in three different forms: In [3]: a.f1 # a bound method Out[3]: > In [4]: A.f1 # an unbound method Out[4]: In [5]: a.__dict__['f1'] # doesn't exist KeyError: 'f1' In [6]: A.__dict__['f1'] # a function Out[6]: What is the difference between the bound method, unbound method and function objects, all of which are described by f1? How does one call these three objects? How can they be transformed into each other? The documentation on this stuff is quite hard to understand.", "output": "A function is created by the def statement, or by lambda. Under Python 2, when a function appears within the body of a class statement (or is passed to a type class construction call), it is transformed into an unbound method. (Python 3 doesn't have unbound methods; see below.) When a function is accessed on a class instance, it is transformed into a bound method, that automatically supplies the instance to the method as the first self parameter. def f1(self): pass Here f1 is a function. class C(object): f1 = f1 Now C.f1 is an unbound method. >>> C.f1 >>> C.f1.im_func is f1 True We can also use the type class constructor: >>> C2 = type('C2', (object,), {'f1': f1}) >>> C2.f1 We can convert f1 to an unbound method manually: >>> import types >>> types.MethodType(f1, None, C) Unbound methods are bound by access on a class instance: >>> C().f1 > Access is translated into calling through the descriptor protocol: >>> C.f1.__get__(C(), C) > Combining these: >>> types.MethodType(f1, None, C).__get__(C(), C) > Or directly: >>> types.MethodType(f1, C(), C) > The main difference between a function and an unbound method is that the latter knows which class it is bound to; calling or binding an unbound method requires an instance of its class type: >>> f1(None) >>> C.f1(None) TypeError: unbound method f1() must be called with C instance as first argument (got NoneType instance instead) >>> class D(object): pass >>> f1.__get__(D(), D) > >>> C.f1.__get__(D(), D) Since the difference between a function and an unbound method is pretty minimal, Python 3 gets rid of the distinction; under Python 3 accessing a function on a class instance just gives you the function itself: >>> C.f1 >>> C.f1 is f1 True In both Python 2 and Python 3, then, these three are equivalent: f1(C()) C.f1(C()) C().f1() Binding a function to an instance has the effect of fixing its first parameter (conventionally called self) to the instance. Thus the bound method C().f1 is equivalent to either of: (lamdba *args, **kwargs: f1(C(), *args, **kwargs)) functools.partial(f1, C())"} +{"question_id": 34052115, "score": 82, "creation_date": 1449087081, "tags": ["python", "machine-learning", "scikit-learn", "logistic-regression", "feature-selection"], "instruction": "How to find the importance of the features for a logistic regression model?\n\nI have a binary prediction model trained by logistic regression algorithm. I want know which features (predictors) are more important for the decision of positive or negative class. I know there is coef_ parameter which comes from the scikit-learn package, but I don't know whether it is enough for the importance. Another thing is how I can evaluate the coef_ values in terms of the importance for negative and positive classes. I also read about standardized regression coefficients and I don't know what it is. 
Lets say there are features like size of tumor, weight of tumor, and etc to make a decision for a test case like malignant or not malignant. I want to know which of the features are more important for malignant and not malignant prediction.", "output": "One of the simplest options to get a feeling for the \"influence\" of a given parameter in a linear classification model (logistic being one of those), is to consider the magnitude of its coefficient times the standard deviation of the corresponding parameter in the data. Consider this example: import numpy as np from sklearn.linear_model import LogisticRegression x1 = np.random.randn(100) x2 = 4*np.random.randn(100) x3 = 0.5*np.random.randn(100) y = (3 + x1 + x2 + x3 + 0.2*np.random.randn()) > 0 X = np.column_stack([x1, x2, x3]) m = LogisticRegression() m.fit(X, y) # The estimated coefficients will all be around 1: print(m.coef_) # Those values, however, will show that the second parameter # is more influential print(np.array(np.std(X, 0))*m.coef_) An alternative way to get a similar result is to examine the coefficients of the model fit on standardized parameters: m.fit(X / np.std(X, 0), y) print(m.coef_) Note that this is the most basic approach and a number of other techniques for finding feature importance or parameter influence exist (using p-values, bootstrap scores, various \"discriminative indices\", etc). I am pretty sure you would get more interesting answers at https://stats.stackexchange.com/."} +{"question_id": 9809557, "score": 82, "creation_date": 1332350062, "tags": ["python", "pip"], "instruction": "use a relative path in requirements.txt to install a tar.gz file with pip\n\nWe're using a requirements.txt file to store all the external modules needed. Every module but one is gathered from internet. The other one is stored on a folder under the one holding the requirements.txt file. BTW, this module can be easily installed with pip install I've tried using this: file:folder/module or this: file:./folder/module or even this: folder/module but always throws me an error. Does anyone know which is the right way to do this? Thanks", "output": "The following line in requirements.txt: ./foo/bar/mymodule works when there is a setup.py at the top level of the mymodule directory. This path is not relative to the requirements.txt file, but rather to the current working directory. Therefore it is necessary to navigate into the same directory as the requirements.txt and then run the command: pip install -r requirements.txt"} +{"question_id": 25972979, "score": 82, "creation_date": 1411385195, "tags": ["python", "pdb"], "instruction": "Is it possible to step backwards in pdb?\n\nAfter I hit n to evaluate a line, I want to go back and then hit s to step into that function if it failed. Is this possible? The docs say: j(ump) lineno Set the next line that will be executed. Only available in the bottom-most frame. This lets you jump back and execute code again, or jump forward to skip code that you don\u2019t want to run.", "output": "The GNU debugger, gdb: It is extremely slow, as it undoes single machine instruction at a time. The Python debugger, pdb: The jump command takes you backwards in the code, but does not reverse the state of the program. For Python, the extended python debugger prototype, epdb, was created for this reason. Here is the thesis and here is the program and the code. I used epdb as a starting point to create a live reverse debugger as part of my MSc degree. 
The thesis is available online: Combining reverse debugging and live programming towards visual thinking in computer programming. In chapter 1 and 2 I also cover most of the historical approaches to reverse debugging."} +{"question_id": 63528797, "score": 82, "creation_date": 1598037017, "tags": ["python", "counting", "letter"], "instruction": "How do I count the letters in Llanfairpwllgwyngyllgogerychwyrndrobwllllantysiliogogogoch?\n\nHow do I count the letters in Llanfairpwllgwyngyllgogerychwyrndrobwllllantysiliogogogoch? print(len('Llanfairpwllgwyngyllgogerychwyrndrobwllllantysiliogogogoch')) Says 58 Well if it was that easy I wouldn't be asking you, now would I?! Wikipedia says (https://en.wikipedia.org/wiki/Llanfairpwllgwyngyll#Placename_and_toponymy) The long form of the name is the longest place name in the United Kingdom and one of the longest in the world at 58 characters (51 \"letters\" since \"ch\" and \"ll\" are digraphs, and are treated as single letters in the Welsh language). So I want to count that and get the answer 51. Okey dokey. print(len(['Ll','a','n','f','a','i','r','p','w','ll','g','w','y','n','g','y','ll','g','o','g','e','r','y','ch','w','y','r','n','d','r','o','b','w','ll','ll','a','n','t','y','s','i','l','i','o','g','o','g','o','g','o','ch'])) 51 Yeh but that's cheating, obviously I want to use the word as input, not the list. Wikipedia also says that the digraphs in Welsh are ch, dd, ff, ng, ll, ph, rh, th https://en.wikipedia.org/wiki/Welsh_orthography#Digraphs So off we go. Let's add up the length and then take off the double counting. word='Llanfairpwllgwyngyllgogerychwyrndrobwllllantysiliogogogoch' count=len(word) print('starting with count of',count) for index in range(len(word)-1): substring=word[index]+word[index+1] if substring.lower() in ['ch','dd','ff','ng','ll','ph','rh','th']: print('taking off double counting of',substring) count=count-1 print(count) This gets me this far starting with count of 58 taking off double counting of Ll taking off double counting of ll taking off double counting of ng taking off double counting of ll taking off double counting of ch taking off double counting of ll taking off double counting of ll taking off double counting of ll taking off double counting of ch 49 It appears that I've subtracted too many then. I'm supposed to get 51. Now one problem is that with the llll it has found 3 lls and taken off three instead of two. So that's going to need to be fixed. (Must not overlap.) And then there's another problem. The ng. Wikipedia didn't say anything about there being a letter \"ng\" in the name, but it's listed as one of the digraphs on the page I quoted above. Wikipedia gives us some more clue here: \"additional information may be needed to distinguish a genuine digraph from a juxtaposition of letters\". And it gives the example of \"llongyfarch\" where the ng is just a \"juxtaposition of letters\", and \"llong\" where it is a digraph. So it seems that 'Llanfairpwllgwyngyllgogerychwyrndrobwllllantysiliogogogoch' is one of those words where the -ng- is bit just a \"juxtaposition of letters\". And obviously there's no way that the computer can know that. So I'm going to have to give it that \"additional information\" that Wikipedia talks about. 
So anyways, I decided to look in an online dictionary http://geiriadur.ac.uk/gpc/gpc.html and you can see that if you look up llongyfarch (the example from Wikipedia that has the \"juxtaposition of letters\") it displays it with a vertical line between the n and the g but if you look up \"llong\" then it doesn't do this. So I've decided okay what we need to do is provide the additional information by putting a | in the input string like it does in the dictionary, just so that the algorithm knows that the ng bit is really two letters. But obviously I don't want the | itself to be counted as a letter. So now I've got these inputs: word='llong' ANSWER NEEDS TO BE 3 (ll o ng) word='llon|gyfarch' ANSWER NEEDS TO BE 9 (ll o n g y f a r ch) word='Llanfairpwllgwyn|gyllgogerychwyrndrobwllllantysiliogogogoch' ANSWER NEEDS TO BE 51 (Ll a n f a i r p w ll g w y n g y ll g o g e r y ch w y r n d r o b w ll ll a n t y s i l i o g o g o g o ch) and still this list of digraphs: ['ch','dd','ff','ng','ll','ph','rh','th'] and the rules are going to be: ignore case if you see a digraph then count it as 1 work from left to right so that llll is ll + ll, not l + ll + l if you see a | don't count it, but you can't ignore it completely, it is there to stop ng being a digraph and I want it to count it as 51 and to do it for the right reasons, not just fluke it. Now I am getting 51 but it is fluking it because it is counting the | as a letter (1 too high), and then it is taking off one too many with the llll (1 too low) - ERRORS CANCEL OUT It is getting llong right (3). It is getting llon|gyfarch wrong (10) - counting the | again How can I fix it the right way?", "output": "Like many problems to do with strings, this can be done in a simple way with a regex. >>> word = 'Llanfairpwllgwyn|gyllgogerychwyrndrobwllllantysiliogogogoch' >>> import re >>> pattern = re.compile(r'ch|dd|ff|ng|ll|ph|rh|th|[^\\W\\d_]', flags=re.IGNORECASE) >>> len(pattern.findall(word)) 51 The character class [^\\W\\d_] (from here) matches word-characters that are not digits or underscores, i.e. letters, including those with diacritics."} +{"question_id": 3300944, "score": 81, "creation_date": 1279725875, "tags": ["python", "mysql", "django"], "instruction": "Can I use Django F() objects with string concatenation?\n\nI want to run a django update through the ORM that looks something like this: MyModel.objects.filter(**kwargs).update(my_field=F('my_other_field')+'a string') This causes MySQL to throw an exception. Is there anyway to do this without writing raw SQL?", "output": "What's happening is that Django is passing the '+' through to SQL - but SQL doesn't allow the use of '+' for concatenation, so it tries to add numerically. If you use an integer in place of 'a string', it does work in the sense that it adds the integer value of my_other_field to your variable. It's debatable whether this is a bug. The documentation for F() objects in lookup queries states: Django supports the use of addition, subtraction, multiplication, division and modulo arithmetic with F() objects so it could be argued that you shouldn't be trying to use it to update with strings. But that's certainly not documented, and the error message 'Incorrect DOUBLE value' is not very helpful. 
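For reference, Django 1.8 and later added database functions that express string concatenation portably, so the update in the question can be written without raw SQL. A short sketch reusing the question's model and field names:

from django.db.models import F, Value
from django.db.models.functions import Concat

# Concat builds the concatenation in SQL (CONCAT on MySQL, || on most other backends)
MyModel.objects.filter(**kwargs).update(
    my_field=Concat(F('my_other_field'), Value('a string'))
)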
I'll open a ticket."} +{"question_id": 29291113, "score": 81, "creation_date": 1427414737, "tags": ["python", "icons", "pyinstaller", "executable"], "instruction": "Pyinstaller setting icons don't change\n\nWhen I use the command: pyinstaller.exe --icon=test.ico -F --noconsole test.py All icons do not change to test.ico. Some icons remain as the pyinstaller's default icon. Why? All icon change in windows 7 32bit windows 7 64bit (make an exe file OS) Some remain default windows 7 64bit (other PC)", "output": "I know this is old and whatnot (and not exactly sure if it's a question), but after searching, I had success with this command for --onefile: pyinstaller.exe --onefile --windowed --icon=app.ico app.py Google led me to this page while I was searching for an answer on how to set an icon for my .exe, so maybe it will help someone else. The information here was found at this site: Creating an Executable from a Python Script | Matt Borgerson (Archive.org)"} +{"question_id": 59441794, "score": 81, "creation_date": 1576991979, "tags": ["python", "pip", "cryptography", "python-cryptography"], "instruction": "ERROR: Could not build wheels for cryptography which use PEP 517 and cannot be installed directly\n\nI get an error when pip builds wheels for the cryptography package. Error: LINK : fatal error LNK1181: cannot open input file 'libssl.lib' error: command 'C:\\\\Program Files (x86)\\\\Microsoft Visual Studio\\\\2017\\\\BuildTools\\\\VC\\\\Tools\\\\MSVC\\\\14.16.27023\\\\bin\\\\HostX86\\\\x86\\\\link.exe' failed with exit status 1181 ---------------------------------------- ERROR: Failed building wheel for cryptography Running setup.py clean for cryptography Failed to build cryptography ERROR: Could not build wheels for cryptography which use PEP 517 and cannot be installed directly I have already installed OpenSSL and set the environment variables as suggested in this post yet the problem persists. My setup details: System - Windows 10 Python - 3.8 Pip - 19.3.1", "output": "Setting cryptography to version 2.8 in requirements.txt fixed the issue."} +{"question_id": 29244286, "score": 81, "creation_date": 1427237072, "tags": ["python", "arrays", "list"], "instruction": "how to flatten a 2D list to 1D without using numpy?\n\nI have a list looks like this: [[1,2,3],[1,2],[1,4,5,6,7]] and I want to flatten it into [1,2,3,1,2,1,4,5,6,7] is there a light weight function to do this without using numpy?", "output": "Without numpy ( ndarray.flatten ) one way would be using chain.from_iterable which is an alternate constructor for itertools.chain : >>> list(chain.from_iterable([[1,2,3],[1,2],[1,4,5,6,7]])) [1, 2, 3, 1, 2, 1, 4, 5, 6, 7] Or as another yet Pythonic approach you can use a list comprehension : [j for sub in [[1,2,3],[1,2],[1,4,5,6,7]] for j in sub] Another functional approach very suitable for short lists could also be reduce in Python2 and functools.reduce in Python3 (don't use this for long lists): In [4]: from functools import reduce # Python3 In [5]: reduce(lambda x,y :x+y ,[[1,2,3],[1,2],[1,4,5,6,7]]) Out[5]: [1, 2, 3, 1, 2, 1, 4, 5, 6, 7] To make it slightly faster you can use operator.add, which is built-in, instead of lambda: In [6]: from operator import add In [7]: reduce(add ,[[1,2,3],[1,2],[1,4,5,6,7]]) Out[7]: [1, 2, 3, 1, 2, 1, 4, 5, 6, 7] In [8]: %timeit reduce(lambda x,y :x+y ,[[1,2,3],[1,2],[1,4,5,6,7]]) 789 ns \u00b1 7.3 ns per loop (mean \u00b1 std. dev. 
of 7 runs, 1000000 loops each) In [9]: %timeit reduce(add ,[[1,2,3],[1,2],[1,4,5,6,7]]) 635 ns \u00b1 4.38 ns per loop (mean \u00b1 std. dev. of 7 runs, 1000000 loops each) benchmark: :~$ python -m timeit \"from itertools import chain;chain.from_iterable([[1,2,3],[1,2],[1,4,5,6,7]])\" 1000000 loops, best of 3: 1.58 usec per loop :~$ python -m timeit \"reduce(lambda x,y :x+y ,[[1,2,3],[1,2],[1,4,5,6,7]])\" 1000000 loops, best of 3: 0.791 usec per loop :~$ python -m timeit \"[j for i in [[1,2,3],[1,2],[1,4,5,6,7]] for j in i]\" 1000000 loops, best of 3: 0.784 usec per loop A benchmark on @Will's answer that used sum (its fast for short list but not for long list) : :~$ python -m timeit \"sum([[1,2,3],[4,5,6],[7,8,9]], [])\" 1000000 loops, best of 3: 0.575 usec per loop :~$ python -m timeit \"sum([range(100),range(100)], [])\" 100000 loops, best of 3: 2.27 usec per loop :~$ python -m timeit \"reduce(lambda x,y :x+y ,[range(100),range(100)])\" 100000 loops, best of 3: 2.1 usec per loop"} +{"question_id": 17215400, "score": 81, "creation_date": 1371736218, "tags": ["python", "string", "string-formatting", "missing-data", "defaultdict"], "instruction": "Format string unused named arguments\n\nLet's say I have: action = '{bond}, {james} {bond}'.format(bond='bond', james='james') this wil output: 'bond, james bond' Next we have: action = '{bond}, {james} {bond}'.format(bond='bond') this will output: KeyError: 'james' Is there some workaround to prevent this error to happen, something like: if keyrror: ignore, leave it alone (but do parse others) compare format string with available named arguments, if missing then add", "output": "If you are using Python 3.2+, use can use str.format_map(). For bond, bond: from collections import defaultdict '{bond}, {james} {bond}'.format_map(defaultdict(str, bond='bond')) Result: 'bond, bond' For bond, {james} bond: class SafeDict(dict): def __missing__(self, key): return '{' + key + '}' '{bond}, {james} {bond}'.format_map(SafeDict(bond='bond')) Result: 'bond, {james} bond' In Python 2.6/2.7 For bond, bond: from collections import defaultdict import string string.Formatter().vformat('{bond}, {james} {bond}', (), defaultdict(str, bond='bond')) Result: 'bond, bond' For bond, {james} bond: from collections import defaultdict import string class SafeDict(dict): def __missing__(self, key): return '{' + key + '}' string.Formatter().vformat('{bond}, {james} {bond}', (), SafeDict(bond='bond')) Result: 'bond, {james} bond'"} +{"question_id": 77507580, "score": 81, "creation_date": 1700322011, "tags": ["python", "matplotlib", "pycharm"], "instruction": "UserWarning: FigureCanvasAgg is non-interactive, and thus cannot be shown plt.show()\n\nI am using Windows 10 PyCharm 2021.3.3 Professional Edition python 3.11.5 matplotlib 3.8.1 How can I permanently resolve this issue in my development environment? 
import numpy as np import matplotlib matplotlib.use('Agg') import matplotlib.pyplot as plt # Read data from file, skipping the first row (header) data = np.loadtxt('cm.dat', skiprows=1) # Initialize reference point x0, y0, z0 = data[0] # Compute squared displacement for each time step SD = [(x - x0)**2 + (y - y0)**2 + (z - z0)**2 for x, y, z in data] # Compute the cumulative average of SD to get MSD at each time step MSD = np.cumsum(SD) / np.arange(1, len(SD) + 1) # Generate time steps t = np.arange(1, len(SD) + 1) # Create a log-log plot of MSD versus t plt.figure(figsize=(8, 6)) plt.loglog(t, MSD, marker='o') plt.title('Mean Squared Displacement vs Time') plt.xlabel('Time step') plt.ylabel('MSD') plt.grid(True, which=\"both\", ls=\"--\") plt.show() C:\\Users\\pc\\AppData\\Local\\Programs\\Python\\Python311\\python.exe C:/git/RouseModel/tau_plot.py C:\\git\\RouseModel\\tau_plot.py:29: UserWarning: FigureCanvasAgg is non-interactive, and thus cannot be shown plt.show() Process finished with exit code 0", "output": "I have the same issue. In my case, I installed the PyQt5==5.15.10. After that, I run my code successfully. pip install PyQt5==5.15.10 or pip install PyQt5 with python==3.11 But from 2024, you guys should install version PyQt6 or the last version with python==3.12 or later."} +{"question_id": 33650974, "score": 81, "creation_date": 1447246410, "tags": ["python", "opencv"], "instruction": "OpenCV/Python: read specific frame using VideoCapture\n\nIs there a way to get a specific frame using VideoCapture() method? My current code is: import numpy as np import cv2 cap = cv2.VideoCapture('video.avi') This is my reference tutorial.", "output": "Thank you GPPK. The video parameters should be given as integers. Each flag has its own value. See here for the code. The correct solution is: import numpy as np import cv2 #Get video name from user #Ginen video name must be in quotes, e.g. \"pirkagia.avi\" or \"plaque.avi\" video_name = input(\"Please give the video name including its extension. E.g. \\\"pirkagia.avi\\\":\\n\") #Open the video file cap = cv2.VideoCapture(video_name) #Set frame_no in range 0.0-1.0 #In this example we have a video of 30 seconds having 25 frames per seconds, thus we have 750 frames. #The examined frame must get a value from 0 to 749. #For more info about the video flags see here: https://stackoverflow.com/questions/11420748/setting-camera-parameters-in-opencv-python #Here we select the last frame as frame sequence=749. In case you want to select other frame change value 749. #BE CAREFUL! Each video has different time length and frame rate. #So make sure that you have the right parameters for the right video! time_length = 30.0 fps=25 frame_seq = 749 frame_no = (frame_seq /(time_length*fps)) #The first argument of cap.set(), number 2 defines that parameter for setting the frame selection. #Number 2 defines flag CV_CAP_PROP_POS_FRAMES which is a 0-based index of the frame to be decoded/captured next. #The second argument defines the frame number in range 0.0-1.0 cap.set(2,frame_no); #Read the next frame from the video. If you set frame 749 above then the code will return the last frame. ret, frame = cap.read() #Set grayscale colorspace for the frame. 
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) #Cut the video extension to have the name of the video my_video_name = video_name.split(\".\")[0] #Display the resulting frame cv2.imshow(my_video_name+' frame '+ str(frame_seq),gray) #Set waitKey cv2.waitKey() #Store this frame to an image cv2.imwrite(my_video_name+'_frame_'+str(frame_seq)+'.jpg',gray) # When everything done, release the capture cap.release() cv2.destroyAllWindows()"} +{"question_id": 5181320, "score": 81, "creation_date": 1299158899, "tags": ["python", "operators"], "instruction": "Under what circumstances are __rmul__ called?\n\nSay I have a list l. Under what circumstance is l.__rmul__(self, other) called? I basically understood the documentation, but I would also like to see an example to clarify its usages beyond any doubt.", "output": "When Python attempts to multiply two objects, it first tries to call the left object's __mul__() method. If the left object doesn't have a __mul__() method (or the method returns NotImplemented, indicating it doesn't work with the right operand in question), then Python wants to know if the right object can do the multiplication. If the right operand is the same type as the left, Python knows it can't, because if the left object can't do it, another object of the same type certainly can't either. If the two objects are different types, though, Python figures it's worth a shot. However, it needs some way to tell the right object that it is the right object in the operation, in case the operation is not commutative. (Multiplication is, of course, but not all operators are, and in any case * is not always used for multiplication!) So it calls __rmul__() instead of __mul__(). As an example, consider the following two statements: print \"nom\" * 3 print 3 * \"nom\" In the first case, Python calls the string's __mul__() method. The string knows how to multiply itself by an integer, so all is well. In the second case, the integer does not know how to multiply itself by a string, so its __mul__() returns NotImplemented and the string's __rmul__() is called. It knows what to do, and you get the same result as the first case. Now we can see that __rmul__() allows all of the string's special multiplication behavior to be contained in the str class, such that other types (such as integers) do not need to know anything about strings to be able to multiply by them. A hundred years from now (assuming Python is still in use) you will be able to define a new type that can be multiplied by an integer in either order, even though the int class has known nothing of it for more than a century. By the way, the string class's __mul__() has a bug in some versions of Python. If it doesn't know how to multiply itself by an object, it raises a TypeError instead of returning NotImplemented. That means you can't multiply a string by a user-defined type even if the user-defined type has an __rmul__() method, because the string never lets it have a chance. The user-defined type has to go first (e.g. Foo() * 'bar' instead of 'bar' * Foo()) so its __mul__() is called. They seem to have fixed this in Python 2.7 (I tested it in Python 3.2 also), but Python 2.6.6 has the bug."} +{"question_id": 38733220, "score": 81, "creation_date": 1470190041, "tags": ["python", "python-3.x", "python-2.7", "scikit-learn"], "instruction": "Difference between scikit-learn and sklearn (now deprecated)\n\nOn OS X 10.11.6 and python 2.7.10 I need to import from sklearn manifold. 
I have numpy 1.8 Orc1, scipy .13 Ob1 and scikit-learn 0.17.1 installed. I used pip to install sklearn(0.0), but when I try to import from sklearn manifold I get the following: Traceback (most recent call last): File \"\", line 1, in File \"/Library/Python/2.7/site-packages/sklearn/init.py\", line 57, in from .base import clone File \"/Library/Python/2.7/site-packages/sklearn/base.py\", line 11, in from .utils.fixes import signature File \"/Library/Python/2.7/site-packages/sklearn/utils/init.py\", line 10, in from .murmurhash import murmurhash3_32 File \"numpy.pxd\", line 155, in init sklearn.utils.murmurhash (sklearn/utils/murmurhash.c:5029) ValueError: numpy.dtype has the wrong size, try recompiling. What is the difference between scikit-learn and sklearn? Also, I cant import scikit-learn because of a syntax error", "output": "You might need to reinstall numpy. It doesn't seem to have been installed correctly. sklearn is how you type the scikit-learn name in python (only the latter should be installed, the former is now deprecated). Also, try running the standard tests in scikit-learn and check the output. You will have detailed error information there. As a side note, do you have nosetests installed? Try: nosetests -v sklearn. You type this in bash, not in the python interpreter."} +{"question_id": 1598932, "score": 81, "creation_date": 1256104148, "tags": ["python", "django", "transactions", "race-condition"], "instruction": "Atomic increment of a counter in django\n\nI'm trying to atomically increment a simple counter in Django. My code looks like this: from models import Counter from django.db import transaction @transaction.commit_on_success def increment_counter(name): counter = Counter.objects.get_or_create(name = name)[0] counter.count += 1 counter.save() If I understand Django correctly, this should wrap the function in a transaction and make the increment atomic. But it doesn't work and there is a race condition in the counter update. How can this code be made thread-safe?", "output": "Use an F expression: from django.db.models import F either in update(): Counter.objects.get_or_create(name=name) Counter.objects.filter(name=name).update(count=F(\"count\") + 1) or on the object instance: counter, _ = Counter.objects.get_or_create(name=name) counter.count = F(\"count\") + 1 counter.save(update_fields=[\"count\"]) Remember to specify update_fields, or you might encounter race conditions on other fields of the model. A note on the race condition avoided by using F expressions has been added to the official documentation."} +{"question_id": 21920233, "score": 81, "creation_date": 1392932544, "tags": ["python", "numpy", "matplotlib", "graphing"], "instruction": "Matplotlib log scale tick label number formatting\n\nWith matplotlib when a log scale is specified for an axis, the default method of labeling that axis is with numbers that are 10 to a power eg. 10^6. Is there an easy way to change all of these labels to be their full numerical representation? eg. 1, 10, 100, etc. Note that I do not know what the range of powers will be and want to support an arbitrary range (negatives included).", "output": "Sure, just change the formatter. For example, if we have this plot: import matplotlib.pyplot as plt fig, ax = plt.subplots() ax.axis([1, 10000, 1, 100000]) ax.loglog() plt.show() You could set the tick labels manually, but then the tick locations and labels would be fixed when you zoom/pan/etc. Therefore, it's best to change the formatter. 
By default, a logarithmic scale uses a LogFormatter, which will format the values in scientific notation. To change the formatter to the default for linear axes (ScalarFormatter) use e.g. from matplotlib.ticker import ScalarFormatter for axis in [ax.xaxis, ax.yaxis]: axis.set_major_formatter(ScalarFormatter())"} +{"question_id": 20458011, "score": 81, "creation_date": 1386531320, "tags": ["python", "python-2.7", "python-3.3", "python-2to3"], "instruction": "How to use 2to3 properly for python?\n\nI have some code in python 2.7 and I want to convert it all into python 3.3 code. I know 2to3 can be used but I am not sure exactly how to use it.", "output": "Install the following module which adds the 2to3 command directly to entry_points. pip install 2to3 As it is written on 2to3 docs, to translate an entire project from one directory tree to another, use: 2to3 --output-dir=python3-version/mycode -W -n python2-version/mycode"} +{"question_id": 25129144, "score": 81, "creation_date": 1407195513, "tags": ["python", "datetime", "pandas"], "instruction": "Pandas: Return Hour from Datetime Column Directly\n\nAssume I have a DataFrame sales of timestamp values: timestamp sales_office 2014-01-01 09:01:00 Cincinnati 2014-01-01 09:11:00 San Francisco 2014-01-01 15:22:00 Chicago 2014-01-01 19:01:00 Chicago I would like to create a new column time_hour. I can create it by writing a short function as so and using apply() to apply it iteratively: def hr_func(ts): return ts.hour sales['time_hour'] = sales['timestamp'].apply(hr_func) I would then see this result: timestamp sales_office time_hour 2014-01-01 09:01:00 Cincinnati 9 2014-01-01 09:11:00 San Francisco 9 2014-01-01 15:22:00 Chicago 15 2014-01-01 19:01:00 Chicago 19 What I'd like to achieve is some shorter transformation like this (which I know is erroneous but gets at the spirit): sales['time_hour'] = sales['timestamp'].hour Obviously the column is of type Series and as such doesn't have those attributes, but it seems there's a simpler way to make use of matrix operations. Is there a more-direct approach?", "output": "Assuming timestamp is the index of the data frame, you can just do the following: hours = sales.index.hour If you want to add that to your sales data frame, just do: import pandas as pd pd.concat([sales, pd.DataFrame(hours, index=sales.index)], axis = 1) Edit: If you have several columns of datetime objects, it's the same process. If you have a column ['date'] in your data frame, and assuming that 'date' has datetime values, you can access the hour from the 'date' as: hours = sales['date'].hour Edit2: If you want to adjust a column in your data frame you have to include dt: sales['datehour'] = sales['date'].dt.hour"} +{"question_id": 52171593, "score": 81, "creation_date": 1536083341, "tags": ["python", "pip", "virtualenv", "pipenv"], "instruction": "How to install dependencies from a copied pipfile inside a virtual environment?\n\nThe problem originates when I start by cloning a git project that uses pipenv, so it has a Pipfile + Pipfile.lock. I want to use a virtual environment with the project so I run pipenv shell. I now have a virtual environment created and I am inside the virtual environment. The project obviously has a lot of dependencies (listed in the Pipfile). I don't want to have to go through the list in the Pipfile one by one and install them using pipenv install . Is there a pipenv/pip command that installs all the packages from a Pipfile I already have? 
Or maybe I need to set up the environment differently than running pipenv shell?", "output": "The proper answer to this question is that pipenv install or pipenv install --dev (if there are dev dependencies) should be ran. That will install all the dependencies in the Pipfile. Putting the dependencies into a requirements.txt and then using pip will work but is not really necessary. The whole point of using pipenv for most people is to avoid the need to manage a requirements.txt or to use pip. EDIT: if the virtualenv is already activated, you can also use pipenv sync or pipenv sync --dev for the same effect."} +{"question_id": 21981820, "score": 81, "creation_date": 1393228160, "tags": ["python", "pandas"], "instruction": "Creating multiple Excel worksheets using data from a pandas DataFrame\n\nI have a worksheet which I have read into a dataframe and the applied forward fill (ffill) method to. I would then like to create a single excel document with two worksheets in it. One worksheet would have the data in the dataframe before the ffill method is applied and the next would have the dataframe which has had the ffill method applied. Eventually I intend to create one worksheet for every unique instance of data in a certain column of the dataframe. I would then like to apply some VBA formatting to the results - but I'm not sure which DLL or addon or something I would need to call excel vba using python to format headings as bold and add color etc. I've had partial success in that xlsxwriter will create a new workbook and add sheets, but dataframe.to_excel operations don't seems to work on the workbooks it creates, the workbooks open but the sheets are blank. import os import time import pandas as pd import xlwt from xlwt.Workbook import * from pandas import ExcelWriter import xlsxwriter #set folder to import files from path = r'path to some file' #folder = os.listdir(path) #for loop goes here #get date date = time.strftime('%Y-%m-%d',time.gmtime(os.path.getmtime(path))) #import excel document original = pd.DataFrame() data = pd.DataFrame() original = pd.read_excel(path,sheetname='Leave',skiprows=26) data = pd.read_excel(path,sheetname='Leave',skiprows=26) print (data.shape) data.fillna(method='ffill',inplace=True) #the code for creating the workbook and worksheets wb= Workbook() ws1 = wb.add_sheet('original') ws2 = wb.add_sheet('result') original.to_excel(writer,'original') data.to_excel(writer,'result') writer.save('final.xls')", "output": "Your sample code is almost correct except you need to create the writer object and you don't need to use the add_sheet() methods. The following should work: # ... writer = pd.ExcelWriter('final.xlsx') data.to_excel(writer,'original') # data.fillna() or similar. data.to_excel(writer,'result') writer.close() # ... The correct syntax for this is shown at the end of the Pandas DataFrame.to_excel() docs. 
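On reasonably recent pandas versions you can also let a context manager handle the save step, which avoids forgetting to close the writer. A minimal self-contained sketch (it assumes an xlsx engine such as openpyxl or xlsxwriter is installed, and uses small stand-ins for the question's DataFrames):

import pandas as pd

# Stand-ins for the question's `original` and forward-filled `data` frames
original = pd.DataFrame({'a': [1, None, 3]})
data = original.ffill()

# The file is written automatically when the block exits
with pd.ExcelWriter('final.xlsx') as writer:
    original.to_excel(writer, sheet_name='original')
    data.to_excel(writer, sheet_name='result')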
See also Working with Python Pandas and XlsxWriter."} +{"question_id": 9694165, "score": 81, "creation_date": 1331684059, "tags": ["python", "css", "image", "colors", "tuples"], "instruction": "Convert RGB color to English color name, like 'green' with Python\n\nI want to convert a color tuple to a color name, like 'yellow' or 'blue' >>> im = Image.open(\"test.jpg\") >>> n, color = max(im.getcolors(im.size[0]*im.size[1])) >>> print color (119, 172, 152) Is there a simple way in python to do this?", "output": "It looks like webcolors will allow you to do this: rgb_to_name(rgb_triplet, spec='css3') Convert a 3-tuple of integers, suitable for use in an rgb() color triplet, to its corresponding normalized color name, if any such name exists; valid values are html4, css2, css21 and css3, and the default is css3. Example: >>> rgb_to_name((0, 0, 0)) 'black' it is vice-versa-able: >>> name_to_rgb('navy') (0, 0, 128) to find the closest colour name: However webcolors raises an exception if it can't find a match for the requested colour. I've written a little fix that delivers the closest matching name for the requested RGB colour. It matches by Euclidian distance in the RGB space. import webcolors def closest_colour(requested_colour): distances = {} for name in webcolors.names(): r_c, g_c, b_c = webcolors.name_to_rgb(name) rd = (r_c - requested_colour[0]) ** 2 gd = (g_c - requested_colour[1]) ** 2 bd = (b_c - requested_colour[2]) ** 2 distances[name] = rd + gd + bd return min(distances, key=distances.get) def get_colour_name(requested_colour): try: closest_name = actual_name = webcolors.rgb_to_name(requested_colour) except ValueError: closest_name = closest_colour(requested_colour) actual_name = None return actual_name, closest_name requested_colour = (119, 172, 152) actual_name, closest_name = get_colour_name(requested_colour) print(\"Actual colour name:\", actual_name, \", closest colour name:\", closest_name) Output: Actual colour name: None, closest colour name: cadetblue"} +{"question_id": 15140072, "score": 81, "creation_date": 1362067359, "tags": ["python", "matplotlib"], "instruction": "How to map number to color using matplotlib's colormap?\n\nConsider a variable x containing a floating point number. I want to use matplotlib's colormaps to map this number to a color, but not plot anything. Basically, I want to be able to choose the colormap with mpl.cm.autumn for example, use mpl.colors.Normalize(vmin = -20, vmax = 10) to set the range, and then map x to the corresponding color. But I really don't get the documentation of mpl.cm, so if anyone could give me a hint.", "output": "It's as simple as cm.hot(0.3): import matplotlib.cm as cm print(cm.hot(0.3)) (0.8240081481370484, 0.0, 0.0, 1.0) If you also want to have the normalizer, use import matplotlib as mpl import matplotlib.cm as cm norm = mpl.colors.Normalize(vmin=-20, vmax=10) cmap = cm.hot x = 0.3 m = cm.ScalarMappable(norm=norm, cmap=cmap) print(m.to_rgba(x)) (1.0, 0.8225486412996345, 0.0, 1.0)"} +{"question_id": 15510882, "score": 81, "creation_date": 1363728848, "tags": ["python", "selenium"], "instruction": "Selenium: get coordinates or dimensions of element with Python\n\nI see that there are methods for getting the screen position and dimensions of an element through various Java libraries for Selenium, such as org.openqa.selenium.Dimension, which offers .getSize(), and org.openqa.selenium.Point with getLocation(). 
Is there any way to get either the location or dimensions of an element with the Selenium Python bindings?", "output": "Got it! The clue was on selenium.webdriver.remote.webelement \u2014 Selenium 3.14 documentation. WebElements have the properties .size and .location. Both are of type dict. driver = webdriver.Firefox() e = driver.find_element_by_xpath(\"//someXpath\") location = e.location size = e.size w, h = size['width'], size['height'] print(location) print(size) print(w, h) Output: {'y': 202, 'x': 165} {'width': 77, 'height': 22} 77 22 They also have a property called rect which is itself a dict, and contains the element's size and location."} +{"question_id": 54678337, "score": 81, "creation_date": 1550087491, "tags": ["python", "python-3.x", "python-dataclasses"], "instruction": "How does one ignore extra arguments passed to a dataclass?\n\nI'd like to create a config dataclass in order to simplify whitelisting of and access to specific environment variables (typing os.environ['VAR_NAME'] is tedious relative to config.VAR_NAME). I therefore need to ignore unused environment variables in my dataclass's __init__ function, but I don't know how to extract the default __init__ in order to wrap it with, e.g., a function that also includes *_ as one of the arguments. import os from dataclasses import dataclass @dataclass class Config: VAR_NAME_1: str VAR_NAME_2: str config = Config(**os.environ) Running this gives me TypeError: __init__() got an unexpected keyword argument 'SOME_DEFAULT_ENV_VAR'.", "output": "I would just provide an explicit __init__ instead of using the autogenerated one. The body of the loop only sets recognized value, ignoring unexpected ones. Note that this won't complain about missing values without defaults until later, though. import dataclasses @dataclasses.dataclass(init=False) class Config: VAR_NAME_1: str VAR_NAME_2: str def __init__(self, **kwargs): names = set([f.name for f in dataclasses.fields(self)]) for k, v in kwargs.items(): if k in names: setattr(self, k, v) Alternatively, you can pass a filtered environment to the default Config.__init__. 
field_names = set(f.name for f in dataclasses.fields(Config)) c = Config(**{k:v for k,v in os.environ.items() if k in field_names})"} +{"question_id": 45747589, "score": 81, "creation_date": 1503023227, "tags": ["python", "pandas", "assign"], "instruction": "Copying a column from one DataFrame to another gives NaN values?\n\nI have two DataFrames with the same number of rows - df1 like so: date hour var1 a 2017-05-01 00:00:00 456585 b 2017-05-01 01:00:00 899875 c 2017-05-01 02:00:00 569566 d 2017-05-01 03:00:00 458756 e 2017-05-01 04:00:00 231458 f 2017-05-01 05:00:00 986545 and df2 like so: MyVar1 MyVar2 0 6169.719338 3688.045368 1 5861.148007 3152.238704 2 5797.053347 2700.469871 3 5779.102340 2730.471948 4 6708.219647 3181.298291 5 8550.380343 3793.580394 I want to merge the data from the date and hour columns of df1 into df2, to get a result like: MyVar1 MyVar2 date hour 0 6169.719338 3688.045368 2017-05-01 00:00:00 1 5861.148007 3152.238704 2017-05-01 01:00:00 2 5797.053347 2700.469871 2017-05-01 02:00:00 3 5779.102340 2730.471948 2017-05-01 03:00:00 4 6708.219647 3181.298291 2017-05-01 04:00:00 5 8550.380343 3793.580394 2017-05-01 05:00:00 I tried simply assigning the columns like so: df2['date'] = df1['date'] df2['hour'] = df1['hour'] but I get a result with NaN values in the date and hour columns instead: MyVar1 MyVar2 date hour 0 6169.719338 3688.045368 NaN NaN 1 5861.148007 3152.238704 NaN NaN 2 5797.053347 2700.469871 NaN NaN Why does this happen? How can I simply assign the values such that the data from the first row of df1 is shown in the first row of df2, etc.?", "output": "The culprit is unalignable indexes Your DataFrames' indexes are different (and correspondingly, the indexes for each columns), so when trying to assign a column of one DataFrame to another, pandas will try to align the indexes, and failing to do so, insert NaNs. Consider the following examples to understand what this means: # Setup A = pd.DataFrame(index=['a', 'b', 'c']) B = pd.DataFrame(index=['b', 'c', 'd', 'f']) C = pd.DataFrame(index=[1, 2, 3]) # Example of alignable indexes - A & B (complete or partial overlap of indexes) A.index B.index a b b (overlap) c c (overlap) d f # Example of unalignable indexes - A & C (no overlap at all) A.index C.index a b c 1 2 3 When there are no overlaps, pandas cannot match even a single value between the two DataFrames to put in the result of the assignment, so the output is a column full of NaNs. If you're working on an IPython notebook, you can check that this is indeed the root cause using, df1.index.equals(df2.index) # False df1.index.intersection(df2.index).empty # True You can use any of the following solutions to fix this issue. Solution 1: Assign the underlying NumPy arrays (bypassing pandas index alignment) This solution will only work if the lengths of the two DataFrames match (i.e. have same number of rows). df2['date'] = df1['date'].to_numpy() # (In pandas < 0.24, do df2['date'] = df1['date'].values ) To assign multiple columns easily, use, df2[['date', 'hour']] = df1[['date', 'hour']].to_numpy() Solution 2: Reset both DataFrames' indexes For when the two DataFrames have different lengths. Also maybe preferable if you didn't mean to have different indices in the first place, or if you don't particularly care about preserving the index. # Optional, if you want a RangeIndex => [0, 1, 2, ...] # df1.index = pd.RangeIndex(len(df)) # Homogenize the index values, df2.index = df1.index # Assign the columns. 
df2[['date', 'hour']] = df1[['date', 'hour']] If you want to keep the existing index, but as a column, you may use reset_index() instead."} +{"question_id": 13805939, "score": 81, "creation_date": 1355160308, "tags": ["python"], "instruction": "Testing equality of three values\n\nAre these equal? xs[i] == xs[i+1] == xs[i+2] xs[i] == xs[i+1] and xs[i+1] == xs[i+2]", "output": "Python chains such relational operators naturally (including in and is). a() == b() == c() is functionally equivalent to a() == b() and b() == c() whenever consecutive calls to b return the same value and have the same aggregate side effects as a single call to b. For instance, there is no difference between the two expressions whenever b is a pure function with no side-effects. The easiest way to show the slight difference: >>> print(1) == print(2) == print(3) 1 2 3 True >>> print(1) == print(2) and print(2) == print(3) 1 2 2 3 True print() always returns None, so all we are doing is comparing Nones here, so the result is always True, but note that in the second case, print(2) is called twice, so we get two 2s in the output, while in the first case, the result is used for both comparisons, so it is only executed once."} +{"question_id": 53699012, "score": 81, "creation_date": 1544411559, "tags": ["python", "pandas", "numpy", "dataframe", "merge"], "instruction": "Performant cartesian product (CROSS JOIN) with pandas\n\nThe contents of this post were originally meant to be a part of Pandas Merging 101, but due to the nature and size of the content required to fully do justice to this topic, it has been moved to its own QnA. Given two simple DataFrames; left = pd.DataFrame({'col1' : ['A', 'B', 'C'], 'col2' : [1, 2, 3]}) right = pd.DataFrame({'col1' : ['X', 'Y', 'Z'], 'col2' : [20, 30, 50]}) left col1 col2 0 A 1 1 B 2 2 C 3 right col1 col2 0 X 20 1 Y 30 2 Z 50 The cross product of these frames can be computed, and will look something like: A 1 X 20 A 1 Y 30 A 1 Z 50 B 2 X 20 B 2 Y 30 B 2 Z 50 C 3 X 20 C 3 Y 30 C 3 Z 50 What is the most performant method of computing this result?", "output": "Let's start by establishing a benchmark. The easiest method for solving this is using a temporary \"key\" column: pandas <= 1.1.X def cartesian_product_basic(left, right): return ( left.assign(key=1).merge(right.assign(key=1), on='key').drop('key', 1)) cartesian_product_basic(left, right) pandas >= 1.2 left.merge(right, how=\"cross\") # implements the technique above col1_x col2_x col1_y col2_y 0 A 1 X 20 1 A 1 Y 30 2 A 1 Z 50 3 B 2 X 20 4 B 2 Y 30 5 B 2 Z 50 6 C 3 X 20 7 C 3 Y 30 8 C 3 Z 50 How this works is that both DataFrames are assigned a temporary \"key\" column with the same value (say, 1). merge then performs a many-to-many JOIN on \"key\". While the many-to-many JOIN trick works for reasonably sized DataFrames, you will see relatively lower performance on larger data. A faster implementation will require NumPy. Here are some famous NumPy implementations of 1D cartesian product. We can build on some of these performant solutions to get our desired output. My favourite, however, is @senderle's first implementation. def cartesian_product(*arrays): la = len(arrays) dtype = np.result_type(*arrays) arr = np.empty([len(a) for a in arrays] + [la], dtype=dtype) for i, a in enumerate(np.ix_(*arrays)): arr[...,i] = a return arr.reshape(-1, la) Generalizing: CROSS JOIN on Unique or Non-Unique Indexed DataFrames Disclaimer These solutions are optimised for DataFrames with non-mixed scalar dtypes. 
If dealing with mixed dtypes, use at your own risk! This trick will work on any kind of DataFrame. We compute the cartesian product of the DataFrames' numeric indices using the aforementioned cartesian_product, use this to reindex the DataFrames, and def cartesian_product_generalized(left, right): la, lb = len(left), len(right) idx = cartesian_product(np.ogrid[:la], np.ogrid[:lb]) return pd.DataFrame( np.column_stack([left.values[idx[:,0]], right.values[idx[:,1]]])) cartesian_product_generalized(left, right) 0 1 2 3 0 A 1 X 20 1 A 1 Y 30 2 A 1 Z 50 3 B 2 X 20 4 B 2 Y 30 5 B 2 Z 50 6 C 3 X 20 7 C 3 Y 30 8 C 3 Z 50 np.array_equal(cartesian_product_generalized(left, right), cartesian_product_basic(left, right)) True And, along similar lines, left2 = left.copy() left2.index = ['s1', 's2', 's1'] right2 = right.copy() right2.index = ['x', 'y', 'y'] left2 col1 col2 s1 A 1 s2 B 2 s1 C 3 right2 col1 col2 x X 20 y Y 30 y Z 50 np.array_equal(cartesian_product_generalized(left, right), cartesian_product_basic(left2, right2)) True This solution can generalise to multiple DataFrames. For example, def cartesian_product_multi(*dfs): idx = cartesian_product(*[np.ogrid[:len(df)] for df in dfs]) return pd.DataFrame( np.column_stack([df.values[idx[:,i]] for i,df in enumerate(dfs)])) cartesian_product_multi(*[left, right, left]).head() 0 1 2 3 4 5 0 A 1 X 20 A 1 1 A 1 X 20 B 2 2 A 1 X 20 C 3 3 A 1 X 20 D 4 4 A 1 Y 30 A 1 Further Simplification A simpler solution not involving @senderle's cartesian_product is possible when dealing with just two DataFrames. Using np.broadcast_arrays, we can achieve almost the same level of performance. def cartesian_product_simplified(left, right): la, lb = len(left), len(right) ia2, ib2 = np.broadcast_arrays(*np.ogrid[:la,:lb]) return pd.DataFrame( np.column_stack([left.values[ia2.ravel()], right.values[ib2.ravel()]])) np.array_equal(cartesian_product_simplified(left, right), cartesian_product_basic(left2, right2)) True Performance Comparison Benchmarking these solutions on some contrived DataFrames with unique indices, we have Do note that timings may vary based on your setup, data, and choice of cartesian_product helper function as applicable. Performance Benchmarking Code This is the timing script. All functions called here are defined above. from timeit import timeit import pandas as pd import matplotlib.pyplot as plt res = pd.DataFrame( index=['cartesian_product_basic', 'cartesian_product_generalized', 'cartesian_product_multi', 'cartesian_product_simplified'], columns=[1, 10, 50, 100, 200, 300, 400, 500, 600, 800, 1000, 2000], dtype=float ) for f in res.index: for c in res.columns: # print(f,c) left2 = pd.concat([left] * c, ignore_index=True) right2 = pd.concat([right] * c, ignore_index=True) stmt = '{}(left2, right2)'.format(f) setp = 'from __main__ import left2, right2, {}'.format(f) res.at[f, c] = timeit(stmt, setp, number=5) ax = res.div(res.min()).T.plot(loglog=True) ax.set_xlabel(\"N\"); ax.set_ylabel(\"time (relative)\"); plt.show() Continue Reading Jump to other topics in Pandas Merging 101 to continue learning: Merging basics - basic types of joins Index-based joins Generalizing to multiple DataFrames Cross join * * you are here"} +{"question_id": 4754485, "score": 81, "creation_date": 1295572443, "tags": ["python", "django"], "instruction": "DRY way to add created/modified by and time\n\nHaving something like created_by created_date modified_by modified_date Would be a very common pattern for a lot of tables. 
1) You can set created date automatically (but not others) in model.py with created_date = models.DateTimeField(auto_now_add=True, editable=False) 2) You could do created/modified dates (but not by/user as don't have request context) in model.py with def save(self): if self.id: self.modified_date = datetime.now() else: self.created_date = datetime.now() super(MyModel,self).save() 3) You could set the created/modifed date and by in admin.py - but this doesn't deal with non admin updates def save_model(self, request, obj, form, change): if change: obj.modified_by = request.user obj.modified_date = datetime.now() else: obj.created_by = request.user obj.created_date = datetime.now() obj.save() 4) And the final place would be in the view.py which can do all 4, but doesn't cover admin updates. So realistically have to have logic spread out, at a minimum repeated in 3 & 4 (or a method on the model called from both, which will be missed) Whats a better way? (I've been working with python/django for a couple of days so could easily be missing something obvious) Can you do someting like @login_required e.g. @audit_changes Can you get access to the request and current user in the model and centralise logic there?", "output": "The create/modification dates can be handled by Django now, so they can be implemented like: class BaseModel(models.Model): created_date = models.DateTimeField(auto_now_add=True) modified_date = models.DateTimeField(auto_now=True) class Meta: abstract = True By adding this to a abstract model base class, it can be easily added to all models of the application, e.g.: class Pizza(BaseModel): .... class Topping(BaseModel): ... Storing the user is harder, since the request.user is not available. As SeanOC mentioned, this is a separation of concerns between the web request, and model layer. Either you pass this field all the time, or store request.user in a threadlocal. Django CMS does this for their permission system. from django.utils.deprecation import MiddlewareMixin class CurrentUserMiddleware(MiddlewareMixin): def process_request(self, request): set_current_user(getattr(request, 'user', None)) And the user tracking happens elsewhere: from threading import local _thread_locals = local() def set_current_user(user): _thread_locals.user=user def get_current_user(): return getattr(_thread_locals, 'user', None) For non-web environments (e.g. management commands), you'd have to call set_current_user at the start of the script."} +{"question_id": 48572831, "score": 81, "creation_date": 1517524294, "tags": ["python", "generics", "python-typing"], "instruction": "How to access the type arguments of typing.Generic?\n\nThe typing module provides a base class for generic type hints: The typing.Generic class. Subclasses of Generic accept type arguments in square brackets, for example: list_of_ints = typing.List[int] str_to_bool_dict = typing.Dict[str, bool] My question is, how can I access these type arguments? That is, given str_to_bool_dict as input, how can I get str and bool as output? Basically I'm looking for a function such that >>> magic_function(str_to_bool_dict) (, )", "output": "Python >= 3.8 As of Python3.8 there is typing.get_args: print( get_args( List[int] ) ) # (,) PEP-560 also provides __orig_bases__[n], which allows us the arguments of the nth generic base: from typing import TypeVar, Generic, get_args T = TypeVar( \"T\" ) class Base( Generic[T] ): pass class Derived( Base[int] ): pass print( get_args( Derived.__orig_bases__[0] ) ) # (,) Python >= 3.6 As of Python 3.6. 
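To tie this back to the str_to_bool_dict example from the question, a small self-contained sketch on Python 3.8+ looks like this (get_origin recovers the unsubscripted container, get_args the type arguments):

from typing import Dict, get_args, get_origin

str_to_bool_dict = Dict[str, bool]

print(get_origin(str_to_bool_dict))  # <class 'dict'>
print(get_args(str_to_bool_dict))    # (<class 'str'>, <class 'bool'>)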
there is a public __args__ and (__parameters__) field. For instance: print( typing.List[int].__args__ ) This contains the generic parameters (i.e. int), whilst __parameters__ contains the generic itself (i.e. ~T). Python < 3.6 Use typing_inspect.getargs Some considerations typing follows PEP8. Both PEP8 and typing are coauthored by Guido van Rossum. A double leading and trailing underscore is defined in as: \"\u201cmagic\u201d objects or attributes that live in user-controlled namespaces\". The dunders are also commented in-line; from the official repository for typing we can see: \"__args__ is a tuple of all arguments used in subscripting, e.g., Dict[T, int].__args__ == (T, int)\". However, the authors also note: \"The typing module has provisional status, so it is not covered by the high standards of backward compatibility (although we try to keep it as much as possible), this is especially true for (yet undocumented) dunder attributes like __union_params__. If you want to work with typing types in runtime context, then you may be interested in the typing_inspect project (part of which may end up in typing later).\" I general, whatever you do with typing will need to be kept up-to-date for the time being. If you need forward compatible changes, I'd recommend writing your own annotation classes."} +{"question_id": 44630676, "score": 81, "creation_date": 1497875574, "tags": ["python", "python-asyncio", "aiohttp"], "instruction": "How can I call an async function without await?\n\nI have a controller action in aiohttp application. async def handler_message(request): try: content = await request.json() perform_message(x,y,z) except (RuntimeError): print(\"error in perform fb message\") finally: return web.Response(text=\"Done\") perform_message is async function. Now, when I call action I want that my action return as soon as possible and perform_message put in event loop. In this way, perform_message isn't executed", "output": "One way would be to use create_task function: import asyncio async def handler_message(request): ... loop = asyncio.get_event_loop() loop.create_task(perform_message(x,y,z)) ... As per the loop documentation, starting Python 3.10, asyncio.get_event_loop() is deprecated. If you're trying to get a loop instance from a coroutine/callback, you should use asyncio.get_running_loop() instead. This method will not work if called from the main thread, in which case a new loop must be instantiated: loop = asyncio.new_event_loop() asyncio.set_event_loop(loop) loop.create_task(perform_message(x, y, z)) loop.run_forever() Furthermore, if the call is only made once throughout your program's runtime and no other loop needs to be is instantiated (unlikely), you may use: asyncio.run(perform_message(x, y, z)) This function creates an event loop and terminates it once the coroutine ends, therefore should only be used in the aforementioned scenario."} +{"question_id": 39268052, "score": 81, "creation_date": 1472722948, "tags": ["python", "python-3.x", "enums", "compare"], "instruction": "How to compare Enums in Python?\n\nSince Python 3.4, the Enum class exists. 
I am writing a program, where some constants have a specific order and I wonder which way is the most pythonic to compare them: class Information(Enum): ValueOnly = 0 FirstDerivative = 1 SecondDerivative = 2 Now there is a method, which needs to compare a given information of Information with the different enums: information = Information.FirstDerivative print(value) if information >= Information.FirstDerivative: print(jacobian) if information >= Information.SecondDerivative: print(hessian) The direct comparison does not work with Enums, so there are three approaches and I wonder which one is preferred: Approach 1: Use values: if information.value >= Information.FirstDerivative.value: ... Approach 2: Use IntEnum: class Information(IntEnum): ... Approach 3: Not using Enums at all: class Information: ValueOnly = 0 FirstDerivative = 1 SecondDerivative = 2 Each approach works, Approach 1 is a bit more verbose, while Approach 2 uses the not recommended IntEnum-class, while and Approach 3 seems to be the way one did this before Enum was added. I tend to use Approach 1, but I am not sure. Thanks for any advise!", "output": "I hadn't encountered Enum before so I scanned the doc (https://docs.python.org/3/library/enum.html) ... and found OrderedEnum (section 8.13.13.2) Isn't this what you want? From the doc: >>> class Grade(OrderedEnum): ... A = 5 ... B = 4 ... C = 3 ... D = 2 ... F = 1 ... >>> Grade.C < Grade.A True"} +{"question_id": 36517137, "score": 81, "creation_date": 1460206985, "tags": ["python", "unit-testing", "python-3.x", "coverage.py"], "instruction": "How to properly use coverage.py in Python?\n\nI've just started using Coverage.py module and so decided to make a simple test to check how it works. Sample.py def sum(num1, num2): return num1 + num2 def sum_only_positive(num1, num2): if num1 > 0 and num2 > 0: return num1 + num2 else: return None test.py from sample import sum, sum_only_positive def test_sum(): assert sum(5, 5) == 10 def test_sum_positive_ok(): assert sum_only_positive(2, 2) == 4 def test_sum_positive_fail(): assert sum_only_positive(-1, 2) is None As you see, all my code is covered with tests and py.test says all of them pass. I expect Coverage.py to show 100% coverage. Well, no. Well, Coverage.py may not see test.py file, so I copied test functions to sample.py file and ran Coverage again: Then I added this block of code: if __name__ == \"__main__\": print(sum(2, 4)) print(sum_only_positive(2, 4)) print(sum_only_positive(-1, 3)) and removed all test functions. After that, Coverage.py shows 100%: Why is it so? Shouldn't Coverage.py show code test coverage, not just execution coverage? I've read an official F.A.Q. for Coverage.py, but can't find the solution. Since many SO users are familiar with code testing and code coverage, I hope you can tell me, where am I mistaken. I have just one thought here: Coverage.py may simply watch which lines of code aren't executed so I should write tests for those lines. But there're lines which are executed already but aren't covered with tests so Coverage.py will fail here.", "output": "Coverage looks for a .coverage file to read and generate that report for you. Py.test on its own does not create one. You need py.test plugin for coverage: pip install pytest-cov If you already have it, then you can run both at once like this: py.test test.py --cov=sample.py Which means run test module test.py and record/display coverage report on sample.py. 
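If you prefer to drive the measurement from Python rather than the command line, the same recording can be done through the coverage API. A rough sketch, assuming a sample module like the one in the question is importable:

import coverage

cov = coverage.Coverage(source=['sample'])
cov.start()

import sample                      # imported and exercised under measurement
sample.sum(5, 5)
sample.sum_only_positive(-1, 2)

cov.stop()
cov.save()
cov.report(show_missing=True)      # same "Missing" column as `coverage report -m`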
If you need to have multiple test runs and accumulate their recorded coverage and then display a final report, you can run it like this: py.test test.py --cov=sample.py --cov-report= py.test test.py --cov=sample2.py --cov-report= py.test test.py --cov=sample3.py --cov-report= Which means run test module test.py and record (only) coverage on sample.py - don't display a report. Now you can run coverage command separately for a complete report: coverage report -m The command above simply displays a formatted coverage report based on the accumulated .coverage data file from previous test runs. -m means show lines missed i.e. lines not covered by tests: Name Stmts Miss Cover Missing ----------------------------------------- sample.py 6 0 100% Coverage supports more switches like --include and --omit to include/exclude files using path patterns. For more info check out their docs: https://coverage.readthedocs.io/en/6.0.2/source.html?highlight=reporting#reporting"} +{"question_id": 6519546, "score": 81, "creation_date": 1309345109, "tags": ["python", "django", "orm", "sqlalchemy", "flask-sqlalchemy"], "instruction": "scoped_session(sessionmaker()) or plain sessionmaker() in sqlalchemy?\n\nI am using SQlAlchemy in my web project. What should I use - scoped_session(sessionmaker()) or plain sessionmaker() - and why? Or should I use something else? ## model.py from sqlalchemy import * from sqlalchemy.orm import * engine = create_engine('mysql://dbUser:dbPassword@dbServer:dbPort/dbName', pool_recycle=3600, echo=False) metadata = MetaData(engine) Session = scoped_session(sessionmaker()) Session.configure(bind=engine) user = Table('user', metadata, autoload=True) class User(object): pass usermapper = mapper(User, user) ## some other python file called abc.py from models import * def getalluser(): session = Session() session.query(User).all() session.flush() session.close() ## onemore file defg.py from models import * def updateuser(): session = Session() session.query(User).filter(User.user_id == '4').update({User.user_lname: 'villkoo'}) session.commit() session.flush() session.close() I create a session = Session() object for each request and I close it. Am I doing the right thing or is there a better way to do it?", "output": "Reading the documentation is recommended: the scoped_session() function is provided which produces a thread-managed registry of Session objects. It is commonly used in web applications so that a single global variable can be used to safely represent transactional sessions with sets of objects, localized to a single thread. In short, use scoped_session() for thread safety."} +{"question_id": 9321741, "score": 80, "creation_date": 1329442506, "tags": ["python"], "instruction": "Printing to screen and writing to a file at the same time\n\nI found some code online that generally works, but I want to use it multiple times in the same program (write different things to different files, while still printing to the screen the whole time). That is to say, when it closes, I think sys.stdout closes, so printing at all, and using this class again fails. I tried reimporting sys, and other dumb stuff, but I can't get it to work. 
Here's the site, and the code groups.google.com/group/comp.lang.python/browse_thread/thread/d25a9f5608e473af/ import sys class MyWriter: def __init__(self, stdout, filename): self.stdout = stdout self.logfile = file(filename, 'a') def write(self, text): self.stdout.write(text) self.logfile.write(text) def close(self): # self.stdout.close() self.logfile.close() writer = MyWriter(sys.stdout, 'log.txt') sys.stdout = writer print 'test'", "output": "You are trying to reproduce poorly something that is done very well by the Python Standard Library; please check the logging module. With this module you can do exactly what you want, but in a much simpler, standard, and extensible manner. You can proceed as follows (this example is a copy/paste from the logging cookbook): Let\u2019s say you want to log to console and file with different message formats and in differing circumstances. Say you want to log messages with levels of DEBUG and higher to file, and those messages at level INFO and higher to the console. Let\u2019s also assume that the file should contain timestamps, but the console messages should not. Here\u2019s how you can achieve this: import logging # set up logging to file - see previous section for more details logging.basicConfig(level=logging.DEBUG, format='%(asctime)s %(name)-12s %(levelname)-8s %(message)s', datefmt='%m-%d %H:%M', filename='/temp/myapp.log', filemode='w') # define a Handler which writes INFO messages or higher to the sys.stderr console = logging.StreamHandler() console.setLevel(logging.INFO) # set a format which is simpler for console use formatter = logging.Formatter('%(name)-12s: %(levelname)-8s %(message)s') # tell the handler to use this format console.setFormatter(formatter) # add the handler to the root logger logging.getLogger().addHandler(console) # Now, we can log to the root logger, or any other logger. First the root... logging.info('Jackdaws love my big sphinx of quartz.') # Now, define a couple of other loggers which might represent areas in your # application: logger1 = logging.getLogger('myapp.area1') logger2 = logging.getLogger('myapp.area2') logger1.debug('Quick zephyrs blow, vexing daft Jim.') logger1.info('How quickly daft jumping zebras vex.') logger2.warning('Jail zesty vixen who grabbed pay from quack.') logger2.error('The five boxing wizards jump quickly.') When you run this, on the console you will see root : INFO Jackdaws love my big sphinx of quartz. myapp.area1 : INFO How quickly daft jumping zebras vex. myapp.area2 : WARNING Jail zesty vixen who grabbed pay from quack. myapp.area2 : ERROR The five boxing wizards jump quickly. and in the file you will see something like 10-22 22:19 root INFO Jackdaws love my big sphinx of quartz. 10-22 22:19 myapp.area1 DEBUG Quick zephyrs blow, vexing daft Jim. 10-22 22:19 myapp.area1 INFO How quickly daft jumping zebras vex. 10-22 22:19 myapp.area2 WARNING Jail zesty vixen who grabbed pay from quack. 10-22 22:19 myapp.area2 ERROR The five boxing wizards jump quickly. As you can see, the DEBUG message only shows up in the file. The other messages are sent to both destinations. This example uses console and file handlers, but you can use any number and combination of handlers you choose."} +{"question_id": 27292145, "score": 80, "creation_date": 1417690362, "tags": ["python", "amazon-s3", "boto"], "instruction": "Python boto, list contents of specific dir in bucket\n\nI have S3 access only to a specific directory in an S3 bucket. 
For example, with the s3cmd command if I try to list the whole bucket: $ s3cmd ls s3://bucket-name I get an error: Access to bucket 'my-bucket-url' was denied But if I try access a specific directory in the bucket, I can see the contents: $ s3cmd ls s3://bucket-name/dir-in-bucket Now I want to connect to the S3 bucket with python boto. Similary with: bucket = conn.get_bucket('bucket-name') I get an error: boto.exception.S3ResponseError: S3ResponseError: 403 Forbidden But if I try: bucket = conn.get_bucket('bucket-name/dir-in-bucket') The script stalls for about 10 seconds, and prints out an error afterwards. Bellow is the full trace. Any idea how to proceed with this? Note question is about the boto version 2 module, not boto3. Traceback (most recent call last): File \"test_s3.py\", line 7, in bucket = conn.get_bucket('bucket-name/dir-name') File \"/usr/local/lib/python2.7/dist-packages/boto/s3/connection.py\", line 471, in get_bucket return self.head_bucket(bucket_name, headers=headers) File \"/usr/local/lib/python2.7/dist-packages/boto/s3/connection.py\", line 490, in head_bucket response = self.make_request('HEAD', bucket_name, headers=headers) File \"/usr/local/lib/python2.7/dist-packages/boto/s3/connection.py\", line 633, in make_request retry_handler=retry_handler File \"/usr/local/lib/python2.7/dist-packages/boto/connection.py\", line 1046, in make_request retry_handler=retry_handler) File \"/usr/local/lib/python2.7/dist-packages/boto/connection.py\", line 922, in _mexe request.body, request.headers) File \"/usr/lib/python2.7/httplib.py\", line 958, in request self._send_request(method, url, body, headers) File \"/usr/lib/python2.7/httplib.py\", line 992, in _send_request self.endheaders(body) File \"/usr/lib/python2.7/httplib.py\", line 954, in endheaders self._send_output(message_body) File \"/usr/lib/python2.7/httplib.py\", line 814, in _send_output self.send(msg) File \"/usr/lib/python2.7/httplib.py\", line 776, in send self.connect() File \"/usr/lib/python2.7/httplib.py\", line 1157, in connect self.timeout, self.source_address) File \"/usr/lib/python2.7/socket.py\", line 553, in create_connection for res in getaddrinfo(host, port, 0, SOCK_STREAM): socket.gaierror: [Errno -2] Name or service not known", "output": "For boto3 import boto3 s3 = boto3.resource('s3') my_bucket = s3.Bucket('my_bucket_name') for object_summary in my_bucket.objects.filter(Prefix=\"dir_name/\"): print(object_summary.key)"} +{"question_id": 2265319, "score": 80, "creation_date": 1266230534, "tags": ["python", "matplotlib", "subplot"], "instruction": "How to make an axes occupy multiple subplots with pyplot\n\nI would like to have three plots in a single figure. The figure should have a subplot layout of two by two, where the first plot should occupy the first two subplot cells (i.e. the whole first row of plot cells) and the other plots should be positioned underneath the first one in cells 3 and 4. I know that MATLAB allows this by using the subplot command like so: subplot(2,2,[1,2]) % the plot will span subplots 1 and 2 Is it also possible in pyplot to have a single axes occupy more than one subplot? The docstring of pyplot.subplot doesn't talk about it. 
Anyone got an easy solution?", "output": "You can simply do: import numpy as np import matplotlib.pyplot as plt x = np.arange(0, 7, 0.01) plt.subplot(2, 1, 1) plt.plot(x, np.sin(x)) plt.subplot(2, 2, 3) plt.plot(x, np.cos(x)) plt.subplot(2, 2, 4) plt.plot(x, np.sin(x)*np.cos(x)) i.e., the first plot is really a plot in the upper half (the figure is only divided into 2x1 = 2 cells), and the following two smaller plots are done in a 2x2=4 cell grid. The third argument to subplot() is the position of the plot inside the grid (in the direction of reading in English, with cell 1 being in the top-left corner): for example in the second subplot (subplot(2, 2, 3)), the axes will go to the third section of the 2x2 matrix i.e, to the bottom-left corner."} +{"question_id": 32480108, "score": 80, "creation_date": 1441803251, "tags": ["python", "python-asyncio", "python-mock"], "instruction": "Mocking async call in python 3.5\n\nHow do I mock async call from one native coroutine to other one using unittest.mock.patch? I currently have quite an awkward solution: class CoroutineMock(MagicMock): def __await__(self, *args, **kwargs): future = Future() future.set_result(self) result = yield from future return result Then class TestCoroutines(TestCase): @patch('some.path', new_callable=CoroutineMock) def test(self, mock): some_action() mock.assert_called_with(1,2,3) This works but looks ugly. Is there more pythonic way to do this?", "output": "The solution was actually quite simple: I just needed to convert __call__ method of mock into coroutine: class AsyncMock(MagicMock): async def __call__(self, *args, **kwargs): return super(AsyncMock, self).__call__(*args, **kwargs) This works perfectly, when mock is called, code receives native coroutine Example usage: @mock.patch('my.path.asyncio.sleep', new_callable=AsyncMock) def test_stuff(sleep): # code"} +{"question_id": 1506010, "score": 80, "creation_date": 1254425975, "tags": ["python", "linux", "export"], "instruction": "How to use export with Python on Linux\n\nI need to make an export like this in Python : # export MY_DATA=\"my_export\" I've tried to do : # -*- python-mode -*- # -*- coding: utf-8 -*- import os os.system('export MY_DATA=\"my_export\"') But when I list export, \"MY_DATA\" not appear : # export How I can do an export with Python without saving \"my_export\" into a file ?", "output": "export is a command that you give directly to the shell (e.g. bash), to tell it to add or modify one of its environment variables. You can't change your shell's environment from a child process (such as Python), it's just not possible. Here's what's happening when you try os.system('export MY_DATA=\"my_export\"')... /bin/bash process, command `python yourscript.py` forks python subprocess |_ /usr/bin/python process, command `os.system()` forks /bin/sh subprocess |_ /bin/sh process, command `export ...` changes its local environment When the bottom-most /bin/sh subprocess finishes running your export ... command, then it's discarded, along with the environment that you have just changed."} +{"question_id": 39007571, "score": 80, "creation_date": 1471475172, "tags": ["python", "ipython", "jupyter", "jupyter-notebook"], "instruction": "Running Jupyter with multiple Python and IPython paths\n\nI'd like to work with Jupyter notebooks, but have had difficulty doing basic imports (such as import matplotlib). I think this was because I have several user-managed python installations. 
For instance: > which -a python /usr/bin/python /usr/local/bin/python > which -a ipython /Library/Frameworks/Python.framework/Versions/3.5/bin/ipython /usr/local/bin/ipython > which -a jupyter /Library/Frameworks/Python.framework/Versions/3.5/bin/jupyter /usr/local/bin/jupyter I used to have anaconda, but removed if from the ~/anaconda directory. Now, when I start a Jupyter Notebook, I get a Kernel Error: File \"/Library/Frameworks/Python.framework/Versions/3.5/lib/pytho\u200cn3.5/subprocess.py\", line 947, in init restore_signals, start_new_session) File \"/Library/Frameworks/Python.framework/Versions/3.5/lib/pytho\u200cn3.5/subprocess.py\", line 1551, in _execute_child raise child_exception_type(errno_num, err_msg) FileNotFoundError: [Errno 2] No such file or directory: '/Users/npr1/anaconda/envs/py27/bin/python' What should I do?!", "output": "This is fairly straightforward to fix, but it involves understanding three different concepts: How Unix/Linux/OSX use $PATH to find executables (%PATH% in Windows) How Python installs and finds packages How Jupyter knows what Python to use For the sake of completeness, I'll try to do a quick ELI5 on each of these, so you'll know how to solve this issue in the best way for you. 1. Unix/Linux/OSX $PATH When you type any command at the prompt (say, python), the system has a well-defined sequence of places that it looks for the executable. This sequence is defined in a system variable called PATH, which the user can specify. To see your PATH, you can type echo $PATH. The result is a list of directories on your computer, which will be searched in order for the desired executable. From your output above, I assume that it contains this: $ echo $PATH /usr/bin/:/Library/Frameworks/Python.framework/Versions/3.5/bin/:/usr/local/bin/ In windows echo %path% Probably with some other paths interspersed as well. What this means is that when you type python, the system will go to /usr/bin/python. When you type ipython, in this example, the system will go to /Library/Frameworks/Python.framework/Versions/3.5/bin/ipython, because there is no ipython in /usr/bin/. It's always important to know what executable you're using, particularly when you have so many installations of the same program on your system. Changing the path is not too complicated; see e.g. How to permanently set $PATH on Linux?. Windows - How to set environment variables in Windows 10 2. How Python finds packages When you run python and do something like import matplotlib, Python has to play a similar game to find the package you have in mind. Similar to $PATH in unix, Python has sys.path that specifies these: $ python >>> import sys >>> sys.path ['', '/Users/jakevdp/anaconda/lib/python3.5', '/Users/jakevdp/anaconda/lib/python3.5/site-packages', ...] Some important things: by default, the first entry in sys.path is the current directory. Also, unless you modify this (which you shouldn't do unless you know exactly what you're doing) you'll usually find something called site-packages in the path: this is the default place that Python puts packages when you install them using python setup.py install, or pip, or conda, or a similar means. The important thing to note is that each python installation has its own site-packages, where packages are installed for that specific Python version. In other words, if you install something for, e.g. /usr/bin/python, then ~/anaconda/bin/python can't use that package, because it was installed on a different Python! 
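A quick way to see which installation a given session is really using is to ask the interpreter itself; the following is only a diagnostic sketch (nothing in it is specific to any particular setup) that you can paste into plain python, into ipython, and into a notebook cell, and then compare the results:

import sys
print(sys.executable)   # full path of the interpreter actually running this code
print(sys.prefix)       # the installation that interpreter belongs to
for p in sys.path:
    print(p)            # the directories searched for packages, including site-packages

If the paths printed from ipython or from a notebook kernel differ from the ones printed by plain python, you are looking at two independent installations, which is exactly the situation described above.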
This is why in our twitter exchange I recommended you focus on one Python installation, and fix your$PATH so that you're only using the one you want to use. There's another component to this: some Python packages come bundled with stand-alone scripts that you can run from the command line (examples are pip, ipython, jupyter, pep8, etc.) By default, these executables will be put in the same directory path as the Python used to install them, and are designed to work only with that Python installation. That means that, as your system is set-up, when you run python, you get /usr/bin/python, but when you run ipython, you get /Library/Frameworks/Python.framework/Versions/3.5/bin/ipython which is associated with the Python version at /Library/Frameworks/Python.framework/Versions/3.5/bin/python! Further, this means that the packages you can import when running python are entirely separate from the packages you can import when running ipython or a Jupyter notebook: you're using two completely independent Python installations. So how to fix this? Well, first make sure your $PATH variable is doing what you want it to. You likely have a startup script called something like ~/.bash_profile or ~/.bashrc that sets this $PATH variable. On Windows, you can modify the user specific environment variables. You can manually modify that if you want your system to search things in a different order. When you first install anaconda/miniconda, there will be an option to do this automatically (add Python to the PATH): say yes to that, and then python will always point to ~/anaconda/python, which is probably what you want. 3. How Jupyter knows what Python to use We're not totally out of the water yet. You mentioned that in the Jupyter notebook, you're getting a kernel error: this indicates that Jupyter is looking for a non-existent Python version. Jupyter is set-up to be able to use a wide range of \"kernels\", or execution engines for the code. These can be Python 2, Python 3, R, Julia, Ruby... there are dozens of possible kernels to use. But in order for this to happen, Jupyter needs to know where to look for the associated executable: that is, it needs to know which path the python sits in. These paths are specified in jupyter's kernelspec, and it's possible for the user to adjust them to their desires. For example, here's the list of kernels that I have on my system: $ jupyter kernelspec list Available kernels: python2.7 /Users/jakevdp/.ipython/kernels/python2.7 python3.3 /Users/jakevdp/.ipython/kernels/python3.3 python3.4 /Users/jakevdp/.ipython/kernels/python3.4 python3.5 /Users/jakevdp/.ipython/kernels/python3.5 python2 /Users/jakevdp/Library/Jupyter/kernels/python2 python3 /Users/jakevdp/Library/Jupyter/kernels/python3 Each of these is a directory containing some metadata that specifies the kernel name, the path to the executable, and other relevant info. You can adjust kernels manually, editing the metadata inside the directories listed above. The command to install a kernel can change depending on the kernel. IPython relies on the ipykernel package which contains a command to install a python kernel: for example $ python -m ipykernel install It will create a kernelspec associated with the Python executable you use to run this command. You can then choose this kernel in the Jupyter notebook to run your code with that Python. 
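For example, to register the interpreter you are currently running as its own clearly-labelled kernel, something like the following works (the name and display name here are arbitrary placeholders, not required values):

$ python -m ipykernel install --user --name my-python --display-name 'My Python'

A kernel called 'My Python' then shows up in the notebook's kernel list and always runs whichever python executed that install command.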
You can see other options that ipykernel provides using the help command: $ python -m ipykernel install --help usage: ipython-kernel-install [-h] [--user] [--name NAME] [--display-name DISPLAY_NAME] [--prefix PREFIX] [--sys-prefix] Install the IPython kernel spec. optional arguments: -h, --help show this help message and exit --user Install for the current user instead of system-wide --name NAME Specify a name for the kernelspec. This is needed to have multiple IPython kernels at the same time. --display-name DISPLAY_NAME Specify the display name for the kernelspec. This is helpful when you have multiple IPython kernels. --prefix PREFIX Specify an install prefix for the kernelspec. This is needed to install into a non-default location, such as a conda/virtual-env. --sys-prefix Install to Python's sys.prefix. Shorthand for --prefix='/Users/bussonniermatthias/anaconda'. For use in conda/virtual-envs. Note: the recent version of anaconda ships with an extension for the notebook that should automatically detect your various conda environments if the ipykernel package is installed in it. Wrap-up: Fixing your Issue So with that background, your issue is quite easy to fix: Set your PATH so that the desired Python version is first. For example, you could run export PATH=\"/path/to/python/bin:$PATH\" to specify (one time) which Python you'd like to use. To do this permanently, add that line to your .bash_profile/.bashrc (note that anaconda can do this automatically for you when you install it). I'd recommend using the Python that comes with anaconda or miniconda: this will allow you to conda install all the tools you need. Make sure the packages you want to use are installed for that python. If you're using conda, you can type, e.g. conda install jupyter matplotlib scikit-learn to install those packages for anaconda/bin/python. Make sure that your Jupyter kernels point to the Python versions you want to use. When you conda install jupyter it should set this up for anaconda/bin/python automatically. Otherwise you can use the jupyter kernelspec command or python -m ipykernel install command to adjust existing kernels or install new ones. For installing modules into other Python Jupyter kernels not managed by Anaconda, you need to copy the path to the Python executable for the kernel and run /path/to/python -m pip install "} +{"question_id": 3850022, "score": 80, "creation_date": 1286114100, "tags": ["python", "performance", "sqlite"], "instruction": "How can I load an existing database file into memory in Python SQLite 3?\n\nI have an existing SQLite 3 database file, on which I need to make some extensive calculations. Doing the calculations from the file is painfully slow, and as the file is not large (approximately 10 MB), so there shouldn't be any problem to load it into memory. 
Is there a Pythonic way to load the existing file into memory in order to speed up the calculations?", "output": "Here is the snippet that I wrote for my Flask application: import sqlite3 from io import StringIO def init_sqlite_db(app): # Read database to tempfile con = sqlite3.connect(app.config['SQLITE_DATABASE']) tempfile = StringIO() for line in con.iterdump(): tempfile.write('%s\\n' % line) con.close() tempfile.seek(0) # Create a database in memory and import from tempfile app.sqlite = sqlite3.connect(\":memory:\") app.sqlite.cursor().executescript(tempfile.read()) app.sqlite.commit() app.sqlite.row_factory = sqlite3.Row"} +{"question_id": 5788891, "score": 80, "creation_date": 1303813091, "tags": ["python", "shell"], "instruction": "Execute a file with arguments in Python shell\n\nI would like to run a command in Python Shell to execute a file with an argument. For example: execfile(\"abc.py\") but how to add 2 arguments?", "output": "execfile runs a Python file, but by loading it, not as a script. You can only pass in variable bindings, not arguments. If you want to run a program from within Python, use subprocess.call. E.g. import subprocess subprocess.call(['./abc.py', arg1, arg2])"} +{"question_id": 534375, "score": 80, "creation_date": 1234302703, "tags": ["python", "pass-by-reference", "pass-by-value"], "instruction": "Passing values in Python\n\nWhen you pass a collection like list, array to another function in python, does it make a copy of it, or is it just a pointer?", "output": "Python passes references-to-objects by value. Python passes references-to-objects by value (like Java), and everything in Python is an object. This sounds simple, but then you will notice that some data types seem to exhibit pass-by-value characteristics, while others seem to act like pass-by-reference... what's the deal? It is important to understand mutable and immutable objects. Some objects, like strings, tuples, and numbers, are immutable. Altering them inside a function/method will create a new instance and the original instance outside the function/method is not changed. Other objects, like lists and dictionaries are mutable, which means you can change the object in-place. Therefore, altering an object inside a function/method will also change the original object outside."} +{"question_id": 13316397, "score": 80, "creation_date": 1352497111, "tags": ["python", "linux", "matplotlib", "runtime-error", "matplotlib-animation"], "instruction": "Matplotlib-Animation \"No MovieWriters Available\"\n\nUnder Linux, I've been checking out matplotlib's animation class, and it seems to work except that I cant initialise the movie writer to write out the movie. Using either of the examples: http://matplotlib.org/examples/animation/moviewriter.html http://matplotlib.org/examples/animation/basic_example_writer.html results in the error \"RuntimeError: No MovieWriters available!\" Im using matplotlib version 1.3.x and have installed (hopefully) all the codecs. Can someone please suggest as to why I get this error? If its a codecs issue, which codecs (+versions) should I install? If its something else that's broken, is there an alternative for creating animations in python?", "output": "Had the same problem....managed to get it to work after a little while. 
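One quick sanity check, both before and after installing FFmpeg as described next, is to ask matplotlib which movie writers it can currently see; this is only a diagnostic sketch and assumes a reasonably recent matplotlib:

import matplotlib.animation as animation
print(animation.writers.list())   # an empty list means no encoders were found, hence the RuntimeError

If the list stays empty after installation, the FFmpeg executable is most likely not on the PATH yet.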
Thing to do is follow instructions on installing FFmpeg - which is (at least on windows) a bundle of executables you need to set a path to in your environment variables http://www.wikihow.com/Install-FFmpeg-on-Windows Download from ffmpeg.org Hope this helps someone - even after a while after the question - good luck"} +{"question_id": 20084487, "score": 80, "creation_date": 1384904153, "tags": ["python", "pandas", "data-visualization"], "instruction": "Use index in pandas to plot data\n\nI have a pandas-Dataframe and use resample() to calculate means (e.g. daily or monthly means). Here is a small example. import pandas as pd import numpy as np dates = pd.date_range('1/1/2000', periods=100) df = pd.DataFrame(np.random.randn(100, 1), index=dates, columns=['A']) A 2000-01-01 -1.210683 2000-01-02 2.242549 2000-01-03 0.801811 2000-01-04 2.353149 2000-01-05 0.359541 monthly_mean = df.resample('M').mean() A 2000-01-31 -0.048088 2000-02-29 -0.094143 2000-03-31 0.126364 2000-04-30 -0.413753 How do I plot the monthly_mean now? How do I manage to use the index of my new created DataFrame monthly_mean as the x-axis?", "output": "You can use reset_index to turn the index back into a column: monthly_mean.reset_index().plot(x='index', y='A') Look at monthly_mean.reset_index() by itself- the date is no longer in the index, but is a column in the dataframe, which is now just indexed by integers. If you look at the documentation for reset_index, you can get a bit more control over the process, including assigning sensible names to the index."} +{"question_id": 4176613, "score": 80, "creation_date": 1289720196, "tags": ["python", "python-3.x", "django", "django-admin", "django-widget"], "instruction": "Django Admin: Using a custom widget for only one model field\n\nI have a DateTimeField field in my model. I wanted to display it as a checkbox widget in the Django admin site. To do this, I created a custom form widget. However, I do not know how to use my custom widget for only this one field. The Django documentation explains how to use a custom widget for all fields of a certain type: class StopAdmin(admin.ModelAdmin): formfield_overrides = { models.DateTimeField: {'widget': ApproveStopWidget } } This is not granular enough though. I want to change it for only one field.", "output": "Create a custom ModelForm for your ModelAdmin and add 'widgets' to its Meta class, like so: class StopAdminForm(forms.ModelForm): class Meta: model = Stop widgets = { 'field_name': ApproveStopWidget(), } fields = '__all__' class StopAdmin(admin.ModelAdmin): form = StopAdminForm Done! Documentation for this is sort of non-intuitively placed in the ModelForm docs, without any mention to it given in the admin docs. See: Creating forms from models"} +{"question_id": 2498875, "score": 80, "creation_date": 1269338065, "tags": ["python", "python-imaging-library"], "instruction": "How to invert colors of image with PIL (Python-Imaging)?\n\nI need to convert series of images drawn as white on black background letters to images where white and black are inverted (as negative). How can I achieve this using PIL?", "output": "Try the following from the docs: https://pillow.readthedocs.io/en/stable/reference/ImageOps.html from PIL import Image import PIL.ImageOps image = Image.open('your_image.png') inverted_image = PIL.ImageOps.invert(image) inverted_image.save('new_name.png') Note: \"The ImageOps module contains a number of 'ready-made' image processing operations. 
This module is somewhat experimental, and most operators only work on L and RGB images.\""} +{"question_id": 865911, "score": 80, "creation_date": 1242336782, "tags": ["python", "ruby", "object", "language-comparisons"], "instruction": "Is everything an object in Python like Ruby?\n\nI read on another Stack Overflow question that Python was just like Ruby, as it relates to \"everything's an object,\" and everything in Python was an object, just like Ruby. Is this true? Is everything an object in Python like Ruby? How are the two different in this respect or are they really the same? For example, can you take a number and do the Ruby stuff I've seen like: y = 5.plus 6 Can that be done the same way in Python?", "output": "DiveIntoPython - Everything Is an Object Everything in Python is an object, and almost everything has attributes and methods. All functions have a built-in attribute __doc__, which returns the doc string defined in the function's source code. The sys module is an object which has (among other things) an attribute called path. And so forth. Still, this begs the question. What is an object? Different programming languages define \u201cobject\u201d in different ways. In some, it means that all objects must have attributes and methods; in others, it means that all objects are subclassable. In Python, the definition is looser; some objects have neither attributes nor methods (more on this in Chapter 3), and not all objects are subclassable (more on this in Chapter 5). But everything is an object in the sense that it can be assigned to a variable or passed as an argument to a function (more in this in Chapter 4). Ruby Docs - To Ruby From Python As with Python, in Ruby,... Everything is an object So there you have it from Ruby's own website: in Python everything is an object."} +{"question_id": 8342549, "score": 80, "creation_date": 1322748134, "tags": ["python", "matplotlib", "colorbar"], "instruction": "add colorbar to a sequence of line plots\n\nI have a sequence of line plots for two variables (x,y) for a number of different values of a variable z. I would normally add the line plots with legends like this: import matplotlib.pyplot as plt fig = plt.figure() ax = fig.add_subplot(111) # suppose mydata is a list of tuples containing (xs, ys, z) # where xs and ys are lists of x's and y's and z is a number. legns = [] for(xs,ys,z) in mydata: pl = ax.plot(xs,ys,color = (z,0,0)) legns.append(\"z = %f\"%(z)) ax.legends(legns) plt.show() But I have too many graphs and the legends will cover the graph. I'd rather have a colorbar indicating the value of z corresponding to the color. I can't find anything like that in the galery and all my attempts do deal with the colorbar failed. Apparently I must create a collection of plots before trying to add a colorbar. Is there an easy way to do this? Thanks. EDIT (clarification): I wanted to do something like this: import matplotlib.pyplot as plt import matplotlib.cm as cm fig = plt.figure() ax = fig.add_subplot(111) mycmap = cm.hot # suppose mydata is a list of tuples containing (xs, ys, z) # where xs and ys are lists of x's and y's and z is a number between 0 and 1 plots = [] for(xs,ys,z) in mydata: pl = ax.plot(xs,ys,color = mycmap(z)) plots.append(pl) fig.colorbar(plots) plt.show() But this won't work according to the Matplotlib reference because a list of plots is not a \"mappable\", whatever this means. 
I've created an alternative plot function using LineCollection: def myplot(ax,xs,ys,zs, cmap): plot = lc([zip(x,y) for (x,y) in zip(xs,ys)], cmap = cmap) plot.set_array(array(zs)) x0,x1 = amin(xs),amax(xs) y0,y1 = amin(ys),amax(ys) ax.add_collection(plot) ax.set_xlim(x0,x1) ax.set_ylim(y0,y1) return plot xs and ys are lists of lists of x and y coordinates and zs is a list of the different conditions to colorize each line. It feels a bit like a cludge though... I thought that there would be a more neat way to do this. I like the flexibility of the plt.plot() function.", "output": "(I know this is an old question but...) Colorbars require a matplotlib.cm.ScalarMappable, plt.plot produces lines which are not scalar mappable, therefore, in order to make a colorbar, we are going to need to make a scalar mappable. Ok. So the constructor of a ScalarMappable takes a cmap and a norm instance. (norms scale data to the range 0-1, cmaps you have already worked with and take a number between 0-1 and returns a color). So in your case: import matplotlib.pyplot as plt sm = plt.cm.ScalarMappable(cmap=my_cmap, norm=plt.normalize(min=0, max=1)) plt.colorbar(sm) Because your data is in the range 0-1 already, you can simplify the sm creation to: sm = plt.cm.ScalarMappable(cmap=my_cmap) EDIT: For matplotlib v1.2 or greater the code becomes: import matplotlib.pyplot as plt sm = plt.cm.ScalarMappable(cmap=my_cmap, norm=plt.normalize(vmin=0, vmax=1)) # fake up the array of the scalar mappable. Urgh... sm._A = [] plt.colorbar(sm) EDIT: For matplotlib v1.3 or greater the code becomes: import matplotlib.pyplot as plt sm = plt.cm.ScalarMappable(cmap=my_cmap, norm=plt.Normalize(vmin=0, vmax=1)) # fake up the array of the scalar mappable. Urgh... sm._A = [] plt.colorbar(sm) EDIT: For matplotlib v3.1 or greater simplifies to: import matplotlib.pyplot as plt sm = plt.cm.ScalarMappable(cmap=my_cmap, norm=plt.Normalize(vmin=0, vmax=1)) plt.colorbar(sm)"} +{"question_id": 36351109, "score": 80, "creation_date": 1459497802, "tags": ["python", "jupyter-notebook", "ipywidgets"], "instruction": "IPython Notebook ipywidgets does not show\n\nI created a table with an interactive slider that allows me to toggle between different periods on my table. It was working for the past few days, until today, when I re-ran the notebook, the slide bar doesn't show anymore. No error messages appear. The code seem to run just fine, as the table appears, but the slide bar just doesn't appear. I didn't change my code either, as I was working on a separate part of my notebook. I tried many approaches, including searching in this space but was still unable to come up with a solution. I was wondering if anyone faced a similar issue, and have any solution to this. Below is my code snippet. Anything before was solely to get my data into the desired form for tabulation. 
from IPython.display import display from ipywidgets import widgets, interactive v = interactive(by_q, quarter=(['q1y2015', 'q2y2015', 'q3y2015', 'q4y2015']), days=(1, 180)) display(v)", "output": "With the new version of ipywidgets, I ran into this problem, but then I found on their github readme that you now need to follow pip install ipywidgets with jupyter nbextension enable --py widgetsnbextension Note: To those using virtual environments (including conda environments) the recommended way to activate the extension is to run jupyter nbextension enable --py --sys-prefix widgetsnbextension That cleared the problem up for me."} +{"question_id": 1060796, "score": 80, "creation_date": 1246312869, "tags": ["python", "module", "python-import"], "instruction": "Callable modules\n\nWhy doesn't Python allow modules to have a __call__ method? (Beyond the obvious that it wouldn't be easy to import directly.) Specifically, why doesn't using a(b) syntax find the __call__ attribute like it does for functions, classes, and objects? (Is lookup just incompatibly different for modules?) >>> print(open(\"mod_call.py\").read()) def __call__(): return 42 >>> import mod_call >>> mod_call() Traceback (most recent call last): File \"\", line 1, in TypeError: 'module' object is not callable >>> mod_call.__call__() 42", "output": "Special methods are only guaranteed to be called implicitly when they are defined on the type, not on the instance. (__call__ is an attribute of the module instance mod_call, not of .) You can't add methods to built-in types. https://docs.python.org/reference/datamodel.html#special-lookup"} +{"question_id": 30651271, "score": 80, "creation_date": 1433441927, "tags": ["python", "regex", "python-re"], "instruction": "Using more than one flag in python re.findall\n\nI would like to use more than one flag with the re.findall function. More specifically, I would like to use the IGNORECASE and DOTALL flags at the same time. x = re.findall(r'CAT.+?END', 'Cat \\n eND', (re.I, re.DOTALL)) Error : Traceback (most recent call last): File \"\", line 1, in x = re.findall(r'CAT.+?END','Cat \\n eND',(re.I,re.DOTALL)) File \"C:\\Python27\\lib\\re.py\", line 177, in findall return _compile(pattern, flags).findall(string) File \"C:\\Python27\\lib\\re.py\", line 243, in _compile p = sre_compile.compile(pattern, flags) File \"C:\\Python27\\lib\\sre_compile.py\", line 500, in compile p = sre_parse.parse(p, flags) File \"C:\\Python27\\lib\\sre_parse.py\", line 673, in parse p = _parse_sub(source, pattern, 0) File \"C:\\Python27\\lib\\sre_parse.py\", line 308, in _parse_sub itemsappend(_parse(source, state)) File \"C:\\Python27\\lib\\sre_parse.py\", line 401, in _parse if state.flags & SRE_FLAG_VERBOSE: TypeError: unsupported operand type(s) for &: 'tuple' and 'int' Is there a way to use more than one flag ?", "output": "Yes, but you have to OR them together: x = re.findall(pattern=r'CAT.+?END', string='Cat \\n eND', flags=re.I | re.DOTALL)"} +{"question_id": 54313463, "score": 80, "creation_date": 1548177832, "tags": ["python", "pandas", "datetime", "type-conversion", "timestamp"], "instruction": "pandas datetime to unix timestamp seconds\n\nFrom the official documentation of pandas.to_datetime we can say, unit : string, default \u2018ns\u2019 unit of the arg (D,s,ms,us,ns) denote the unit, which is an integer or float number. This will be based off the origin. Example, with unit=\u2019ms\u2019 and origin=\u2019unix\u2019 (the default), this would calculate the number of milliseconds to the unix epoch start. 
So when I try like this way, import pandas as pd df = pd.DataFrame({'time': [pd.to_datetime('2019-01-15 13:25:43')]}) df_unix_sec = pd.to_datetime(df['time'], unit='ms', origin='unix') print(df) print(df_unix_sec) time 0 2019-01-15 13:25:43 0 2019-01-15 13:25:43 Name: time, dtype: datetime64[ns] Output is not changing for the latter one. Every time it is showing the datetime value not number of milliseconds to the unix epoch start for the 2nd one. Why is that? Am I missing something?", "output": "I think you misunderstood what the argument is for. The purpose of origin='unix' is to convert an integer timestamp to datetime, not the other way. pd.to_datetime(1.547559e+09, unit='s', origin='unix') # Timestamp('2019-01-15 13:30:00') Here are some options: Option 1: integer division Conversely, you can get the timestamp by converting to integer (to get nanoseconds) and divide by 109. pd.to_datetime(['2019-01-15 13:30:00']).astype(int) / 10**9 # Float64Index([1547559000.0], dtype='float64') Pros: super fast Cons: makes assumptions about how pandas internally stores dates Option 2: recommended by pandas Pandas docs recommend using the following method: # create test data dates = pd.to_datetime(['2019-01-15 13:30:00']) # calculate unix datetime (dates - pd.Timestamp(\"1970-01-01\")) // pd.Timedelta('1s') [out]: Int64Index([1547559000], dtype='int64') Pros: \"idiomatic\", recommended by the library Cons: unweildy not as performant as integer division Option 3: pd.Timestamp If you have a single date string, you can use pd.Timestamp as shown in the other answer: pd.Timestamp('2019-01-15 13:30:00').timestamp() # 1547559000.0 If you have to cooerce multiple datetimes (where pd.to_datetime is your only option), you can initialize and map: pd.to_datetime(['2019-01-15 13:30:00']).map(pd.Timestamp.timestamp) # Float64Index([1547559000.0], dtype='float64') Pros: best method for a single datetime string easy to remember Cons: not as performant as integer division"} +{"question_id": 2365701, "score": 80, "creation_date": 1267555083, "tags": ["python", "python-decorators"], "instruction": "Decorating class methods - how to pass the instance to the decorator?\n\nThis is Python 2.5, and it's GAE too, not that it matters. I have the following code. I'm decorating the foo() method in bar, using the dec_check class as a decorator. class dec_check(object): def __init__(self, f): self.func = f def __call__(self): print 'In dec_check.__init__()' self.func() class bar(object): @dec_check def foo(self): print 'In bar.foo()' b = bar() b.foo() When executing this I was hoping to see: In dec_check.__init__() In bar.foo() But I'm getting TypeError: foo() takes exactly 1 argument (0 given) as .foo(), being an object method, takes self as an argument. I'm guessing problem is that the instance of bar doesn't actually exist when I'm executing the decorator code. So how do I pass an instance of bar to the decorator class?", "output": "You need to make the decorator into a descriptor -- either by ensuring its (meta)class has a __get__ method, or, way simpler, by using a decorator function instead of a decorator class (since functions are already descriptors). 
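For completeness, the first option (keeping dec_check as a class and making it a descriptor by adding a __get__ method) could look roughly like the sketch below; using functools.partial to bind the instance is just one way to do it, and the simpler function-based decorator shown next is usually preferable:

import functools

class dec_check(object):
    def __init__(self, f):
        self.func = f
    def __call__(self, instance):
        print 'In deco'
        self.func(instance)
    def __get__(self, obj, objtype=None):
        # attribute access on an instance returns a callable with
        # that instance already bound as the first argument
        return functools.partial(self.__call__, obj)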
E.g.: def dec_check(f): def deco(self): print 'In deco' f(self) return deco class bar(object): @dec_check def foo(self): print 'in bar.foo' b = bar() b.foo() this prints In deco in bar.foo as desired."} +{"question_id": 22355540, "score": 80, "creation_date": 1394636535, "tags": ["python", "ajax", "django", "cordova", "cors"], "instruction": "Access-Control-Allow-Origin in Django app\n\nI'm developing a Phonegap app for my Django based app, but when trying to make Ajax calls I get this error: XMLHttpRequest cannot load http://domain.herokuapp.com/getcsrf/?tags=jquery%2Cjavascript&tagmode=any&format=json. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'null' is therefore not allowed access. How can I make it so my Django app allows cross origin for some urls? Here's my Ajax code: get: function() { $.getJSON(\"http://domain.herokuapp.com/getcsrf/\", { tags: \"jquery,javascript\", tagmode: \"any\", format: \"json\" }, function(data) { $.each(data.items, function(item){ console.log(item); }); }); }", "output": "Django by default does not provide the headers necessary to provide cross origin. The easiest way would be to just use this Django app that handles it for you: https://github.com/adamchainz/django-cors-headers Add to installed apps Add to middleware Then stuff like... CORS_ALLOWED_ORIGINS = [ \"http://read.only.com\", \"http://change.allowed.com\", ] to support allowing all, just use the setting... CORS_ALLOW_ALL_ORIGINS = True and then do any filtering of the request in middleware or in the view."} +{"question_id": 6854080, "score": 80, "creation_date": 1311827221, "tags": ["python"], "instruction": "Is it necessary to include __init__ as the first function every time in a class in Python?\n\nIn Python, I want to know if it is necessary to include __init__ as the first method while creating a class, as in the example below: class ExampleClass: def __init__(self, some_message): self.message = some_message print \"New Class instance created, with message:\" print self.message Also, why do we use self to call methods? Can someone explain the use of \"self\" in detail? Also, why do we use pass statement in Python?", "output": "No, it isn't necessary. For example. class A(object): def f(): print 'foo' And you can of course use it, in this manner: a = A() a.f() In fact you can even define a class in this manner. class A: pass However, defining __init__ is a common practice because instances of a class usually store some sort of state information or data and the methods of the class offer a way to manipulate or do something with that state information or data. __init__ allows us to initialize this state information or data while creating an instance of the class. Here is a complete example. class BankAccount(object): def __init__(self, deposit): self.amount = deposit def withdraw(self, amount): self.amount -= amount def deposit(self, amount): self.amount += amount def balance(self): return self.amount # Let me create an instance of 'BankAccount' class with the initial # balance as $2000. myAccount = BankAccount(2000) # Let me check if the balance is right. print myAccount.balance() # Let me deposit my salary myAccount.deposit(10000) # Let me withdraw some money to buy dinner. myAccount.withdraw(15) # What's the balance left? print myAccount.balance() An instance of the class is always passed as the first argument to a method of the class. 
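A tiny sketch makes this concrete (the class and method names are arbitrary and simply match the description that follows):

class A(object):
    def foo(self, x, y):
        print self, x, y

a = A()
a.foo(1, 2)     # what you normally write
A.foo(a, 1, 2)  # what Python effectively does with it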
For example if there is class A and you have an instance a = A(), whenever you call a.foo(x, y), Python calls foo(a, x, y) of class A automatically. (Note the first argument.) By convention, we name this first argument as self."} +{"question_id": 13190392, "score": 80, "creation_date": 1351837631, "tags": ["python", "sqlalchemy"], "instruction": "How can I bind a list to a parameter in a custom query in SQLAlchemy?\n\nI am using this SQL for performance reasons: sql_tmpl = \"\"\"delete from Data where id_data in (:iddata) \"\"\" params = { 'iddata':[1, 2,3 4], } # 'session' is a session object from SQLAlchemy self.session.execute(text(sql_tmpl), params) However, I got an exception: NotSupportedError: (NotSupportedError) ('Python type list not supported. param=1', 'HY097') Is there a workaround that can allow me to bind a list to the parameter of the 'in' clause?", "output": "psycopg2 now supports type adaptation, which allows, among other things, the ability to pass a list into a single parameterized value in the query. This also works in SQLAlchemy, at the very least for raw-SQL-esque queries to a PostgreSQL database (I don't have access to other database types, so I don't know if sqlalchemy will respect this convention for other databases, but my inclinationcitation needed is that it will work). some_ids = [1, 2, 3, 4] query = \"SELECT * FROM my_table t WHERE t.id = ANY(:ids);\" conn.execute(sqlalchemy.text(query), ids=some_ids) ## runs just fine I found that without the wrapper call to sqlalchemy.text, it gave a ProgrammingError: syntax error at or near \":\"."} +{"question_id": 9397944, "score": 80, "creation_date": 1329925172, "tags": ["python", "matplotlib", "colors"], "instruction": "How to set the default color cycle for all subplots with matplotlib?\n\nHow can I set a default set of colors for plots made with matplotlib? I can set a particular color map like this import numpy as np import matplotlib.pyplot as plt fig=plt.figure(i) ax=plt.gca() colormap = plt.get_cmap('jet') ax.set_color_cycle([colormap(k) for k in np.linspace(0, 1, 10)]) but is there some way to set the same set of colors for all plots, including subplots?", "output": "Sure! Either specify axes.color_cycle in your .matplotlibrc file or set it at runtime using matplotlib.rcParams or matplotlib.rc. As an example of the latter: import matplotlib.pyplot as plt import matplotlib as mpl import numpy as np # Set the default color cycle mpl.rcParams['axes.prop_cycle'] = mpl.cycler(color=[\"r\", \"k\", \"c\"]) x = np.linspace(0, 20, 100) fig, axes = plt.subplots(nrows=2) for i in range(10): axes[0].plot(x, i * (x - 10)**2) for i in range(10): axes[1].plot(x, i * np.cos(x)) plt.show()"} +{"question_id": 14711552, "score": 80, "creation_date": 1360079801, "tags": ["javascript", "python", "flask", "external"], "instruction": "External JavaScript file is not getting added when running on Flask\n\nI have an HTML file named showMap.html: Map
And another JavaScript file map.js is placed in the js folder of the same directory. This code works fine when the HTML file is loaded, but this does not work when I run it in the server. I use Python's Flask framework for back-end programming and the funny thing is that the same thing will work well if I expand the JavaScript code inside the HTML file. The only problem is with the external file.", "output": "Serve the map.js file as a static resource: move the file to a static/ subdirectory of your package generate a static URL for it in a Jinja2 template like so: The filename parameter takes a relative path; you can use subdirectories was needed."} +{"question_id": 8073097, "score": 80, "creation_date": 1320880591, "tags": ["python", "pip", "virtualenv", "virtualenvwrapper", "virtual-environment"], "instruction": "How to freeze packages installed only in the virtual environment?\n\nHow to freeze packages installed only in the virtual environment, that is, without the global ones?", "output": "You need to use the -l or --local option to freeze only the local packages (and not the global ones) pip freeze -l > requirements.txt Make sure you are working in the virtualenv before doing pip freeze -l."} +{"question_id": 3671666, "score": 80, "creation_date": 1283978197, "tags": ["python", "process", "multiprocessing", "sharing"], "instruction": "Sharing a complex object between processes?\n\nI have a fairly complex Python object that I need to share between multiple processes. I launch these processes using multiprocessing.Process. When I share an object with multiprocessing.Queue and multiprocessing.Pipe in it, they are shared just fine. But when I try to share an object with other non-multiprocessing-module objects, it seems like Python forks these objects. Is that true? I tried using multiprocessing.Value. But I'm not sure what the type should be? My object class is called MyClass. But when I try multiprocess.Value(MyClass, instance), it fails with: TypeError: this type has no size Any idea what's going on?", "output": "You can do this using Python's multiprocessing \"Manager\" classes and a proxy class that you define. See Proxy Objects in the Python docs. What you want to do is define a proxy class for your custom object, and then share the object using a \"Remote Manager\" -- look at the examples in the same linked doc page in the \"Using a remote manager\" section where the docs show how to share a remote queue. You're going to be doing the same thing, but your call to your_manager_instance.register() will include your custom proxy class in its argument list. In this manner, you're setting up a server to share the custom object with a custom proxy. Your clients need access to the server (again, see the excellent documentation examples of how to setup client/server access to a remote queue, but instead of sharing a Queue, you are sharing access to your specific class)."} +{"question_id": 3155436, "score": 80, "creation_date": 1277964547, "tags": ["python", "class-method", "getattr"], "instruction": "__getattr__ for static/class variables\n\nI have a class like: class MyClass: Foo = 1 Bar = 2 Whenever MyClass.Foo or MyClass.Bar is invoked, I need a custom method to be invoked before the value is returned. Is it possible in Python? I know it is possible if I create an instance of the class and I can define my own __getattr__ method. But my scnenario involves using this class as such without creating any instance of it. 
Also I need a custom __str__ method to be invoked when str(MyClass.Foo) is invoked. Does Python provide such an option?", "output": "__getattr__() and __str__() for an object are found on its class, so if you want to customize those things for a class, you need the class-of-a-class. A metaclass. class FooType(type): def _foo_func(cls): return 'foo!' def _bar_func(cls): return 'bar!' def __getattr__(cls, key): if key == 'Foo': return cls._foo_func() elif key == 'Bar': return cls._bar_func() raise AttributeError(key) def __str__(cls): return 'custom str for %s' % (cls.__name__,) class MyClass(metaclass=FooType): pass # # in python 2: # class MyClass: # __metaclass__ = FooType print(MyClass.Foo) print(MyClass.Bar) print(str(MyClass)) printing: foo! bar! custom str for MyClass And no, an object can't intercept a request for a stringifying one of its attributes. The object returned for the attribute must define its own __str__() behavior. Updated 2023-02-20 for Python 3.x default implementation (python 2 as a comment)."} +{"question_id": 232435, "score": 80, "creation_date": 1224820370, "tags": ["python", "django", "django-models"], "instruction": "How do I restrict foreign keys choices to related objects only in django\n\nI have a two way foreign relation similar to the following class Parent(models.Model): name = models.CharField(max_length=255) favoritechild = models.ForeignKey(\"Child\", blank=True, null=True) class Child(models.Model): name = models.CharField(max_length=255) myparent = models.ForeignKey(Parent) How do I restrict the choices for Parent.favoritechild to only children whose parent is itself? I tried class Parent(models.Model): name = models.CharField(max_length=255) favoritechild = models.ForeignKey(\"Child\", blank=True, null=True, limit_choices_to = {\"myparent\": \"self\"}) but that causes the admin interface to not list any children.", "output": "I just came across ForeignKey.limit_choices_to in the Django docs. Not sure yet how it works, but it might be the right thing here. Update: ForeignKey.limit_choices_to allows one to specify either a constant, a callable or a Q object to restrict the allowable choices for the key. A constant obviously is of no use here, since it knows nothing about the objects involved. Using a callable (function or class method or any callable object) seems more promising. However, the problem of how to access the necessary information from the HttpRequest object remains. Using thread local storage may be a solution. 2. Update: Here is what has worked for me: I created a middleware as described in the link above. It extracts one or more arguments from the request's GET part, such as \"product=1\", and stores this information in the thread locals. Next there is a class method in the model that reads the thread local variable and returns a list of ids to limit the choice of a foreign key field. @classmethod def _product_list(cls): \"\"\" return a list containing the one product_id contained in the request URL, or a query containing all valid product_ids if not id present in URL used to limit the choice of foreign key object to those related to the current product \"\"\" id = threadlocals.get_current_product() if id is not None: return [id] else: return Product.objects.all().values('pk').query It is important to return a query containing all possible ids if none was selected so that the normal admin pages work ok. 
The foreign key field is then declared as: product = models.ForeignKey( Product, limit_choices_to={ id__in=BaseModel._product_list, }, ) The catch is that you have to provide the information to restrict the choices via the request. I don't see a way to access \"self\" here."} +{"question_id": 22245711, "score": 80, "creation_date": 1394183114, "tags": ["python", "import", "module"], "instruction": "from ... import OR import ... as for modules\n\nShould I use from foo import bar OR import foo.bar as bar when importing a module and there is no need/wish for changing the name (bar)? Are there any differences? Does it matter?", "output": "Assuming that bar is a module or package in foo, there is no difference*, it doesn't matter. The two statements have exactly the same result: >>> import os.path as path >>> path >>> from os import path >>> path If bar is not a module or package, the second form will not work; a traceback is thrown instead: >>> import os.walk as walk Traceback (most recent call last): File \"\", line 1, in ImportError: No module named walk Note that if bar is both a module inside foo and an object in the foo package namespace (so imported into the __init__.py module or defined there), then from foo import bar will import the object, and import foo.bar as bar imports the module. * In Python 3.6 and before, there was a bug with the initialization ordering of packages containing other modules, where in the loading stage of the package using import contained.module.something as alias in a submodule would fail where from contained.module import something as alias would not. See Imports in __init__.py and 'import as' statement for a very illustrative example of that problem, as well as Python issues #23203 and #30024."} +{"question_id": 14885895, "score": 80, "creation_date": 1360884192, "tags": ["python", "pandas", "matplotlib", "seaborn", "scatter-plot"], "instruction": "Color a scatter plot by Column Values\n\nOne of my favorite aspects of using the ggplot2 library in R is the ability to easily specify aesthetics. I can quickly make a scatterplot and apply color associated with a specific column and I would love to be able to do this with python/pandas/matplotlib. I'm wondering if there are there any convenience functions that people use to map colors to values using pandas dataframes and Matplotlib? ##ggplot scatterplot example with R dataframe, `df`, colored by col3 ggplot(data = df, aes(x=col1, y=col2, color=col3)) + geom_point() ##ideal situation with pandas dataframe, 'df', where colors are chosen by col3 df.plot(x=col1,y=col2,color=col3) EDIT: Thank you for your responses but I want to include a sample dataframe to clarify what I am asking. Two columns contain numerical data and the third is a categorical variable. The script I am thinking of will assign colors based on this value. 
np.random.seed(250) df = pd.DataFrame({'Height': np.append(np.random.normal(6, 0.25, size=5), np.random.normal(5.4, 0.25, size=5)), 'Weight': np.append(np.random.normal(180, 20, size=5), np.random.normal(140, 20, size=5)), 'Gender': [\"Male\",\"Male\",\"Male\",\"Male\",\"Male\", \"Female\",\"Female\",\"Female\",\"Female\",\"Female\"]}) Height Weight Gender 0 5.824970 159.210508 Male 1 5.780403 180.294943 Male 2 6.318295 199.142201 Male 3 5.617211 157.813278 Male 4 6.340892 191.849944 Male 5 5.625131 139.588467 Female 6 4.950479 146.711220 Female 7 5.617245 121.571890 Female 8 5.556821 141.536028 Female 9 5.714171 134.396203 Female", "output": "Imports and Data import numpy import pandas import matplotlib.pyplot as plt import seaborn as sns seaborn.set(style='ticks') numpy.random.seed(0) N = 37 _genders= ['Female', 'Male', 'Non-binary', 'No Response'] df = pandas.DataFrame({ 'Height (cm)': numpy.random.uniform(low=130, high=200, size=N), 'Weight (kg)': numpy.random.uniform(low=30, high=100, size=N), 'Gender': numpy.random.choice(_genders, size=N) }) Update August 2021 With seaborn 0.11.0, it's recommended to use new figure level functions like seaborn.relplot than to use FacetGrid directly. sns.relplot(data=df, x='Weight (kg)', y='Height (cm)', hue='Gender', hue_order=_genders, aspect=1.61) plt.show() Update October 2015 Seaborn handles this use-case splendidly: Map matplotlib.pyplot.scatter onto a seaborn.FacetGrid fg = sns.FacetGrid(data=df, hue='Gender', hue_order=_genders, aspect=1.61) fg.map(plt.scatter, 'Weight (kg)', 'Height (cm)').add_legend() Which immediately outputs: Old Answer In this case, I would use matplotlib directly. import numpy as np import matplotlib.pyplot as plt import pandas as pd def dfScatter(df, xcol='Height', ycol='Weight', catcol='Gender'): fig, ax = plt.subplots() categories = np.unique(df[catcol]) colors = np.linspace(0, 1, len(categories)) colordict = dict(zip(categories, colors)) df[\"Color\"] = df[catcol].apply(lambda x: colordict[x]) ax.scatter(df[xcol], df[ycol], c=df.Color) return fig if 1: df = pd.DataFrame({'Height':np.random.normal(size=10), 'Weight':np.random.normal(size=10), 'Gender': [\"Male\",\"Male\",\"Unknown\",\"Male\",\"Male\", \"Female\",\"Did not respond\",\"Unknown\",\"Female\",\"Female\"]}) fig = dfScatter(df) fig.savefig('fig1.png') And that gives me: As far as I know, that color column can be any matplotlib compatible color (RBGA tuples, HTML names, hex values, etc). I'm having trouble getting anything but numerical values to work with the colormaps."} +{"question_id": 12647196, "score": 80, "creation_date": 1348865297, "tags": ["python", "d3.js", "simplehttpserver"], "instruction": "How do I shut down a python simpleHTTPserver?\n\nSo I'm trying to learn d3, and the wiki suggested that To view the examples locally, you must have a local web server. Any web server will work; for example you can run Python's built-in server: python -m SimpleHTTPServer 8888 & Great... only now I have a server running... but at some point I think I should probably shut that down again. Is there a better way of shutting it down than using kill ? That seems like kind of a big hammer for a little job. (I'm running Mac OS 10.6.8 (Snow Leopard)) FWIW: ctrl+c gives about 10 lines of traceback, complaining about being interrupted. kill -3 gives a Finder warning in a separate window 'Python quit unexpectedly'. The default kill and kill -15 are relatively clean (and simple).", "output": "You are simply sending signals to the processes. 
kill is a command to send those signals. The keyboard command Ctrl+C sends a SIGINT, kill -9 sends a SIGKILL, and kill -15 sends a SIGTERM. What signal do you want to send to your server to end it?"} +{"question_id": 49542417, "score": 80, "creation_date": 1522264357, "tags": ["python", "ipython", "jupyter", "ipywidgets", "jupyter-lab"], "instruction": "How to get ipywidgets working in Jupyter Lab?\n\nIn Jupyter Notebook, ipywidgets work fine, however they seem to not work in Jupyter Lab (which is supposedly better than Notebook). I followed these directions. Step 1: Installed Node.js (https://nodejs.org/en/) Step 2: Installed requirements on Python 3 with conda: conda install -c conda-forge ipywidgets jupyter labextension install @jupyter-widgets/jupyterlab-manager Unfortunately, basic widgets do not work inside the jupyter lab notebook:", "output": "JupyterLab now prefers a model where arbitrary javascript is no longer allowed to be embedded in a cell's output, which is how many interactive Jupyter Notebook modules used to work. They now ask that modules with interactivity create a JupyterLab extension. ipywidgets provides @jupyter-widgets/jupyterlab-manager extension which satisfies this requirement. When using ipywidgets 7.6 or newer in JupyterLab 3.0 or newer you do not need to do anything: it is installed by default. You can check if this is installed by running: jupyter labextension list which should include a line like: @jupyter-widgets/jupyterlab-manager v3.0.0 enabled OK (python, jupyterlab_widgets) If you are using JupyterLab 1 or 2 (or old version of ipywidgets) you need to install this extension manually by running this on your command line (which assumes you already have NodeJS installed): jupyter labextension install @jupyter-widgets/jupyterlab-manager"} +{"question_id": 31097247, "score": 80, "creation_date": 1435476848, "tags": ["python", "numpy"], "instruction": "Remove duplicate rows of a numpy array\n\nHow can I remove duplicate rows of a 2 dimensional numpy array? data = np.array([[1,8,3,3,4], [1,8,9,9,4], [1,8,3,3,4]]) The answer should be as follows: ans = array([[1,8,3,3,4], [1,8,9,9,4]]) If there are two rows that are the same, then I would like to remove one \"duplicate\" row.", "output": "You can use numpy unique. Since you want the unique rows, we need to put them into tuples: import numpy as np data = np.array([[1,8,3,3,4], [1,8,9,9,4], [1,8,3,3,4]]) just applying np.unique to the data array will result in this: >>> uniques array([1, 3, 4, 8, 9]) prints out the unique elements in the list. So putting them into tuples results in: new_array = [tuple(row) for row in data] uniques = np.unique(new_array) which prints: >>> uniques array([[1, 8, 3, 3, 4], [1, 8, 9, 9, 4]]) UPDATE In the new version, you need to set np.unique(data, axis=0)"} +{"question_id": 38982807, "score": 80, "creation_date": 1471374954, "tags": ["python", "nginx", "flask", "uwsgi"], "instruction": "Are a WSGI server and HTTP server required to serve a Flask app?\n\nSetting up Flask with uWSGI and Nginx can be difficult. I tried following this DigitalOcean tutorial and still had trouble. Even with buildout scripts it takes time, and I need to write instructions to follow next time. If I don't expect a lot of traffic, or the app is private, does it make sense to run it without uWSGI? Flask can listen to a port. Can Nginx just forward requests? 
Does it make sense to not use Nginx either, just running bare Flask app on a port?", "output": "When you \"run Flask\" you are actually running Werkzeug's development WSGI server, and passing your Flask app as the WSGI callable. The development server is not intended for use in production. It is not designed to be particularly efficient, stable, or secure. It does not support all the possible features of a HTTP server. Replace the Werkzeug dev server with a production-ready WSGI server such as Gunicorn or uWSGI when moving to production, no matter where the app will be available. The answer is similar for \"should I use a web server\". WSGI servers happen to have HTTP servers but they will not be as good as a dedicated production HTTP server (Nginx, Apache, etc.). Flask documents how to deploy in various ways. Many hosting providers also have documentation about deploying Python or Flask."} +{"question_id": 8382847, "score": 80, "creation_date": 1323075111, "tags": ["python", "ssh", "putty", "paramiko", "public-key"], "instruction": "How to ssh connect through Python Paramiko with ppk public key\n\nI'm using Paramiko to connect through SSH to a server. Basic authentication works well, but I can't understand how to connect with public key. When I connect with PuTTY, the server tell me this: Using username \"root\". Authenticating with public key \"rsa-key@ddddd.com\" Passphrase for key \"rsa-key@ddddd.com\": [i've inserted the passphrase here] Last login: Mon Dec 5 09:25:18 2011 from ... I connect to it with this ppk file: PuTTY-User-Key-File-2: ssh-rsa Encryption: aes256-cbc Comment: rsa-key@dddd.com Public-Lines: 4 [4 lines key] Private-Lines: 8 [8 lines key] Private-MAC: [hash] With basic auth the error I get (from the log) is: DEB [20111205-09:48:44.328] thr=1 paramiko.transport: userauth is OK DEB [20111205-09:48:44.927] thr=1 paramiko.transport: Authentication type (password) not permitted. DEB [20111205-09:48:44.927] thr=1 paramiko.transport: Allowed methods: ['publickey', 'gssapi-with-mic'] I've tried to include that ppk file and set to auth_public_key, but didn't work. Can you help me?", "output": "Ok @Adam and @Kimvais were right, Paramiko cannot parse .ppk files. So the way to go (thanks to @JimB too) is to convert .ppk file to OpenSSH private key format; this can be achieved using PuTTYgen as described here. Then it's very simple getting connected with it: import paramiko ssh = paramiko.SSHClient() ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy()) ssh.connect('', username='', password='', key_filename='') stdin, stdout, stderr = ssh.exec_command('ls') print stdout.readlines() ssh.close()"} +{"question_id": 3453188, "score": 80, "creation_date": 1281472499, "tags": ["python", "ssh", "matplotlib"], "instruction": "Matplotlib: display plot on a remote machine\n\nI have a python code doing some calculation on a remote machine, named A. I connect on A via ssh from a machine named B. Is there a way to display the figure on machine B?", "output": "If you use matplotlib on Mac OS X on the remote machine (A), you must first make sure that you use one of the X11-based display back-ends, since the native Mac OS X back-end cannot export its plots to another display. Selecting a back-end can be achieved with import matplotlib matplotlib.use('GTK') # Or any other X11 back-end The list of supported back-ends can be obtained by giving use() an incorrect back-end name: matplotlib then prints an error message listing the possible back-ends. 
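(A small aside of mine, not from the original answer: you can also ask matplotlib for the back-ends it knows about instead of provoking the error message. This assumes a matplotlib version where rcsetup still exposes these lists; very new releases are moving to a backend registry.)

from matplotlib import rcsetup
# every registered back-end name, interactive (GTK, Qt, Tk, ...) and non-interactive (Agg, PDF, ...)
print(rcsetup.all_backends)
# only the interactive ones, which are the candidates for remote display over X11
print(rcsetup.interactive_bk)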
ssh X11 forwarding can then be used to display matplotlib plots (see David Z's answer)."} +{"question_id": 34739315, "score": 80, "creation_date": 1452589368, "tags": ["python", "python-3.x"], "instruction": ".pyw files in python program\n\nI am new to Python programming. Can anybody provide an explanation on what a *.pyw file is and how it works.", "output": "Python scripts (files with the extension .py) will be executed by python.exe by default. This executable opens a terminal, which stays open even if the program uses a GUI. If you do not want this to happen, use the extension .pyw which will cause the script to be executed by pythonw.exe by default (both executables are located in the top-level of your Python installation directory). This suppresses the terminal window on startup. You can also make all .py scripts execute with pythonw.exe, setting this through the usual facilities, for example (might require administrative rights): Source: 3. Using Python on Windows \u2192 3.3.4. Executing scripts So in practice the only difference is that one leaves a console window hanging around and the other doesn't. The most obvious usage for *.pyw are GUI apps since an app with an independent GUI obviously does not need or want the console window around. There are some subtle implementation differences between python.exe and pythonw.exe, see https://stackoverflow.com/a/30313091/3703989"} +{"question_id": 42193712, "score": 80, "creation_date": 1486936408, "tags": ["python", "loops", "dictionary", "enumerate"], "instruction": "How to iterate `dict` with `enumerate` and unpack the index, key, and value along with iteration\n\nHow to iterate a dict with enumerate such that I could unpack the index, key and value at the time of iteration? Something like: for i, (k, v) in enumerate(mydict): # some stuff I want to iterate through the keys and values in a dictionary called mydict and count them, so I know when I'm on the last pair.", "output": "Instead of using mydict, you should be using mydict.items() with enumerate as: for i, (k, v) in enumerate(mydict.items()): # your stuff Sample example: mydict = {1: 'a', 2: 'b'} for i, (k, v) in enumerate(mydict.items()): print(\"index: {}, key: {}, value: {}\".format(i, k, v)) # which will print: # ----------------- # index: 0, key: 1, value: a # index: 1, key: 2, value: b Explanation: enumerate() returns an iterator object which contains tuples in the format: [(index, list_element), ...] dict.items() returns an iterator object (in Python 3.x. It returns list in Python 2.7) in the format: [(key, value), ...] On combining together, enumerate(dict.items()) will return an iterator object containing tuples in the format: [(index, (key, value)), ...]"} +{"question_id": 18608812, "score": 80, "creation_date": 1378282931, "tags": ["python", "dictionary", "argparse"], "instruction": "Accepting a dictionary as an argument with argparse and python\n\nI'm trying to accept an argument of type=dict with argparse but no matter the input it gives an error of invalid dict value. 
#!/usr/bin/env python import argparse MYDICT = {'key': 'value'} parser = argparse.ArgumentParser() parser.add_argument(\"-m\", \"--mydict\", action=\"store\", required=False, type=dict, default=MYDICT) args = parser.parse_args() print args.mydict This is what happens when I try and pass a dictionary to the script ./argp.py -m \"{'key1': 'value1'}\" usage: argp.py [-h] [-m MYDICT] argp.py: error: argument -m/--mydict: invalid dict value: \"{'key1': 'value1'}\" Looking at the documents I would think that this would be possible. http://docs.python.org/dev/library/argparse.html \u201cAny object that supports the in operator can be passed as the choices value, so dict objects, set objects, custom containers, etc. are all supported.\u201d", "output": "I do not think it is possible to pass a dictionary as an argument in the command line because there doesn't exist a conversion function from string to dict (EDIT: A hack is possible which gives similar behaviour, see below). What you are essentially telling python to do is: dict(\"{'key1': 'value1'}\") Which if you try it out in the python console, does not work. What the phrase: \"Any object that supports the in operator can be passed as the choices value, so dict objects, set objects, custom containers, etc. are all supported.\" refers to is the choices argument that can be passed with the add_argument function - not to the type argument. Your best bet is to probably accept your argument as a string and then convert it using the json capabilities of python: parser.add_argument('-m', '--my-dict', type=str) args = parser.parse_args() import json my_dictionary = json.loads(args.my_dict) You can then pass a dictionary in the form of a string. You can try the json encoder/decoder out for yourself in the python console to see how it works: >>>json.loads('{\"value1\":\"key1\"}') {u'value1': u'key1'} EDIT: hpaulj has pointed out to me that you can \"hack\" the type parameter by passing it json.loads which allows you to pass JSON that is similar looking to a dictionary. import json parser.add_argument('-d', '--my-dict', type=json.loads) args = parse.parse_args() mydict = args.my_dict # Will return a dictionary NOTE: The input format you pass is not the same as python dictionary but is probably similar enough for your use case. The reason this works is actually quite interesting because internally argparse will just use the parameter value as a function to convert the argument. i.e. if type=int then it will use int(arg) or if type=json.loads then json.loads(arg) This also means that you can pass any function which takes a single parameter in as the argument to type and perform custom conversions if you need to :)"} +{"question_id": 6297404, "score": 80, "creation_date": 1307643619, "tags": ["python", "multithreading", "sqlalchemy"], "instruction": "Multi-threaded use of SQLAlchemy\n\nI want to make a Database Application Programming Interface written in Python and using SQLAlchemy (or any other database connectors if it is told that using SQLAlchemy for this kind of task is not the good way to go). The setup is a MySQL server running on Linux or BSD and a the Python software running on a Linux or BSD machine (Either foreign or local). Basically what I want to do is spawn a new thread for each connections and the protocol would be custom and quite simple, although for each requests I would like to open a new transaction (or session as I have read) and then I need to commit the session. 
The problem I am facing right now is that there is high probability that another sessions happen at the same time from another connection. My question here is what should I do to handle this situation? Should I use a lock so only a single session can run at the same time? Are sessions actually thread-safe and I am wrong about thinking that they are not? Is there a better way to handle this situation? Is threading the way not-to-go?", "output": "Session objects are not thread-safe, but are thread-local. From the docs: \"The Session object is entirely designed to be used in a non-concurrent fashion, which in terms of multithreading means \"only in one thread at a time\" .. some process needs to be in place such that mutltiple calls across many threads don\u2019t actually get a handle to the same session. We call this notion thread local storage.\" If you don't want to do the work of managing threads and sessions yourself, SQLAlchemy has the ScopedSession object to take care of this for you: The ScopedSession object by default uses threading.local() as storage, so that a single Session is maintained for all who call upon the ScopedSession registry, but only within the scope of a single thread. Callers who call upon the registry in a different thread get a Session instance that is local to that other thread. Using this technique, the ScopedSession provides a quick and relatively simple way of providing a single, global object in an application that is safe to be called upon from multiple threads. See the examples in Contextual/Thread-local Sessions for setting up your own thread-safe sessions: # set up a scoped_session from sqlalchemy.orm import scoped_session from sqlalchemy.orm import sessionmaker session_factory = sessionmaker(bind=some_engine) Session = scoped_session(session_factory) # now all calls to Session() will create a thread-local session some_session = Session() # you can now use some_session to run multiple queries, etc. # remember to close it when you're finished! Session.remove()"} +{"question_id": 43264838, "score": 80, "creation_date": 1491509197, "tags": ["python", "django", "redis", "rabbitmq", "celery"], "instruction": "Celery: When should you choose Redis as a message broker over RabbitMQ?\n\nMy rough understanding is that Redis is better if you need the in-memory key-value store feature, however I am not sure how that has anything to do with distributing tasks? Does that mean we should use Redis as a message broker IF we are already using it for something else?", "output": "I've used both recently (2017-2018), and they are both super stable with Celery 4. So your choice can be based on the details of your hosting setup. If you must use Celery version 2 or version 3, go with RabbitMQ. Otherwise... If you are using Redis for any other reason, go with Redis If you are hosting at AWS, go with Redis so that you can use a managed Redis as service If you hate complicated installs, go with Redis If you already have RabbitMQ installed, stay with RabbitMQ In the past, I would have recommended RabbitMQ because it was more stable and easier to setup with Celery than Redis, but I don't believe that's true any more. Update 2019 AWS now has a managed service that is equivalent to RabbitMQ called Amazon MQ, which could reduce the headache of running this as a service in production. Please comment below if you have any experience with this and celery. 
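(For reference — my own sketch with hypothetical names, not part of the original answer: switching brokers in Celery is a one-line change, which is why the choice can safely be driven by your hosting setup.)

from celery import Celery

# the same app can point at either broker; only the URL changes
app = Celery('tasks', broker='redis://localhost:6379/0')       # Redis
# app = Celery('tasks', broker='amqp://guest@localhost//')     # RabbitMQ

@app.task
def add(x, y):
    return x + y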
Update 2025 The Celery Using Redis documentation lists some caveats for choosing Redis, which includes limitations with Visibility timeout - If a task isn\u2019t acknowledged within the Visibility Timeout the task will be redelivered to another worker and executed. Key eviction - In some circumstances a key could be removed unexpectedly. Group result ordering - For older versions of celery, results could be returned in an unexpected order. ... but if you're starting a new project it's unlikely that any situations will apply to you for some time, if ever."} +{"question_id": 16745507, "score": 80, "creation_date": 1369444576, "tags": ["python", "multithreading", "tkinter", "progress-bar", "event-loop"], "instruction": "Tkinter: How to use threads to preventing main event loop from \"freezing\"\n\nI have a small GUI test with a \"Start\" button and a Progress bar. The desired behavior is: Click Start Progressbar oscillates for 5 seconds Progressbar stops The observed behavior is the \"Start\" button freezes for 5 seconds, then a Progressbar is displayed (no oscillation). Here is my code so far: class GUI: def __init__(self, master): self.master = master self.test_button = Button(self.master, command=self.tb_click) self.test_button.configure( text=\"Start\", background=\"Grey\", padx=50 ) self.test_button.pack(side=TOP) def progress(self): self.prog_bar = ttk.Progressbar( self.master, orient=\"horizontal\", length=200, mode=\"indeterminate\" ) self.prog_bar.pack(side=TOP) def tb_click(self): self.progress() self.prog_bar.start() # Simulate long running process t = threading.Thread(target=time.sleep, args=(5,)) t.start() t.join() self.prog_bar.stop() root = Tk() root.title(\"Test Button\") main_ui = GUI(root) root.mainloop() Based on the information from Bryan Oakley here, I understand that I need to use threads. I tried creating a thread, but I'm guessing that since the thread is started from within the main thread, it doesn't help. I had the idea to place the logic portion in a different class, and instantiate the GUI from within that class, similar to the example code by A. Rodas here. My question: I can't figure out how to code it so that this command: self.test_button = Button(self.master, command=self.tb_click) calls a function that is located in the other class. Is this a Bad Thing to do or is it even possible? How would I create a 2nd class that can handle the self.tb_click? I tried following along to A. Rodas' example code which works beautifully. But I cannot figure out how to implement his solution in the case of a Button widget that triggers an action. If I should instead handle the thread from within the single GUI class, how would one create a thread that doesn't interfere with the main thread?", "output": "When you join the new thread in the main thread, it will wait until the thread finishes, so the GUI will block even though you are using multithreading. If you want to place the logic portion in a different class, you can subclass Thread directly, and then start a new object of this class when you press the button. The constructor of this subclass of Thread can receive a Queue object and then you will be able to communicate it with the GUI part. 
So my suggestion is: Create a Queue object in the main thread Create a new thread with access to that queue Check periodically the queue in the main thread Then you have to solve the problem of what happens if the user clicks two times the same button (it will spawn a new thread with each click), but you can fix it by disabling the start button and enabling it again after you call self.prog_bar.stop(). import queue class GUI: # ... def tb_click(self): self.progress() self.prog_bar.start() self.queue = queue.Queue() ThreadedTask(self.queue).start() self.master.after(100, self.process_queue) def process_queue(self): try: msg = self.queue.get_nowait() # Show result of the task if needed self.prog_bar.stop() except queue.Empty: self.master.after(100, self.process_queue) class ThreadedTask(threading.Thread): def __init__(self, queue): super().__init__() self.queue = queue def run(self): time.sleep(5) # Simulate long running process self.queue.put(\"Task finished\")"} +{"question_id": 7001606, "score": 80, "creation_date": 1312917114, "tags": ["python", "json", "tuples"], "instruction": "JSON serialize a dictionary with tuples as key\n\nIs there a way in Python to serialize a dictionary that is using a tuple as key? e.g. a = {(1, 2): 'a'} simply using json.dumps(a) raises this error: Traceback (most recent call last): File \"\", line 1, in File \"/usr/lib/python2.6/json/__init__.py\", line 230, in dumps return _default_encoder.encode(obj) File \"/usr/lib/python2.6/json/encoder.py\", line 367, in encode chunks = list(self.iterencode(o)) File \"/usr/lib/python2.6/json/encoder.py\", line 309, in _iterencode for chunk in self._iterencode_dict(o, markers): File \"/usr/lib/python2.6/json/encoder.py\", line 268, in _iterencode_dict raise TypeError(\"key {0!r} is not a string\".format(key)) TypeError: key (1, 2) is not a string", "output": "You can't serialize that as json, json has a much less flexible idea about what counts as a dict key than python. You could transform the mapping into a sequence of key, value pairs, something like this: import json def remap_keys(mapping): return [{'key':k, 'value': v} for k, v in mapping.iteritems()] ... json.dumps(remap_keys({(1, 2): 'foo'})) >>> '[{\"value\": \"foo\", \"key\": [1, 2]}]'"} +{"question_id": 6235123, "score": 80, "creation_date": 1307008238, "tags": ["windows", "python", "executable"], "instruction": "Python executables: py2exe or PyInstaller?\n\nTo create executable files (windows) I assume that we should use one of them: Py2exe or PyInstaller. What are the difference between them?", "output": "Py2exe and PyInstaller both are wrappers but here are few differences that I noticed, Py2exe is compatible with python2.4+ including python3.0 & 3.1 whereas PyInstaller is currently, compatible with python 2.7 and 3.3\u20133.5 As far I know, Py2exe didn't support signing whereas Pyinstaller has support for signing from version 1.4 In PyInstaller it is easy to create one exe, By default both create a bunch of exes & dlls. In py2exe its easier to embed manifest file in exe, useful for run as administrator mode in windows vista and beyond. Pyinstaller is modular and has a feature of hooks to include files in the build that you like. I don't know about this feature in py2exe. Hope this helps you in your decision making. [Update] - It looks like PyInstaller is actively developed (https://github.com/pyinstaller/pyinstaller/) and released. 
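(My addition, with a hypothetical script name: with a current PyInstaller the common case is a single command, pyinstaller --onefile your_script.py, which drops a self-contained executable into ./dist/.)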
py2exe is still using sourceforge and its release cycle is very random on pypi there is no build after 2014 and their code show development in 2017 as well (https://sourceforge.net/p/py2exe/svn/HEAD/tree/trunk/py2exe-3/py2exe/). So, I recommend using pyinstaller till the time py2exe stabilizes its release cycle in favor of developers."} +{"question_id": 31709792, "score": 80, "creation_date": 1438199241, "tags": ["python", "unit-testing", "attributes", "mocking", "python-unittest"], "instruction": "patching a class yields \"AttributeError: Mock object has no attribute\" when accessing instance attributes\n\nThe Problem Using mock.patch with autospec=True to patch a class is not preserving attributes of instances of that class. The Details I am trying to test a class Bar that instantiates an instance of class Foo as a Bar object attribute called foo. The Bar method under test is called bar; it calls method foo of the Foo instance belonging to Bar. In testing this, I am mocking Foo, as I only want to test that Bar is accessing the correct Foo member: import unittest from mock import patch class Foo(object): def __init__(self): self.foo = 'foo' class Bar(object): def __init__(self): self.foo = Foo() def bar(self): return self.foo.foo class TestBar(unittest.TestCase): @patch('foo.Foo', autospec=True) def test_patched(self, mock_Foo): Bar().bar() def test_unpatched(self): assert Bar().bar() == 'foo' The classes and methods work just fine (test_unpatched passes), but when I try to Foo in a test case (tested using both nosetests and pytest) using autospec=True, I encounter \"AttributeError: Mock object has no attribute 'foo'\" 19:39 $ nosetests -sv foo.py test_patched (foo.TestBar) ... ERROR test_unpatched (foo.TestBar) ... ok ====================================================================== ERROR: test_patched (foo.TestBar) ---------------------------------------------------------------------- Traceback (most recent call last): File \"/usr/local/lib/python2.7/dist-packages/mock.py\", line 1201, in patched return func(*args, **keywargs) File \"/home/vagrant/dev/constellation/test/foo.py\", line 19, in test_patched Bar().bar() File \"/home/vagrant/dev/constellation/test/foo.py\", line 14, in bar return self.foo.foo File \"/usr/local/lib/python2.7/dist-packages/mock.py\", line 658, in __getattr__ raise AttributeError(\"Mock object has no attribute %r\" % name) AttributeError: Mock object has no attribute 'foo' Indeed, when I print out mock_Foo.return_value.__dict__, I can see that foo is not in the list of children or methods: {'_mock_call_args': None, '_mock_call_args_list': [], '_mock_call_count': 0, '_mock_called': False, '_mock_children': {}, '_mock_delegate': None, '_mock_methods': ['__class__', '__delattr__', '__dict__', '__doc__', '__format__', '__getattribute__', '__hash__', '__init__', '__module__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__'], '_mock_mock_calls': [], '_mock_name': '()', '_mock_new_name': '()', '_mock_new_parent': , '_mock_parent': , '_mock_wraps': None, '_spec_class': , '_spec_set': None, 'method_calls': []} My understanding of autospec is that, if True, the patch specs should apply recursively. Since foo is indeed an attribute of Foo instances, should it not be patched? If not, how do I get the Foo mock to preserve the attributes of Foo instances? NOTE: This is a trivial example that shows the basic problem. 
In reality, I am mocking a third party module.Class -- consul.Consul -- whose client I instantiate in a Consul wrapper class that I have. As I don't maintain the consul module, I can't modify the source to suit my tests (I wouldn't really want to do that anyway). For what it's worth, consul.Consul() returns a consul client, which has an attribute kv -- an instance of consul.Consul.KV. kv has a method get, which I am wrapping in an instance method get_key in my Consul class. After patching consul.Consul, the call to get fails because of AttributeError: Mock object has no attribute kv. Resources Already Checked: http://mock.readthedocs.org/en/latest/helpers.html#autospeccing http://mock.readthedocs.org/en/latest/patch.html", "output": "No, autospeccing cannot mock out attributes set in the __init__ method of the original class (or in any other method). It can only mock out static attributes, everything that can be found on the class. Otherwise, the mock would have to create an instance of the class you tried to replace with a mock in the first place, which is not a good idea (think classes that create a lot of real resources when instantiated). The recursive nature of an auto-specced mock is then limited to those static attributes; if foo is a class attribute, accessing Foo().foo will return an auto-specced mock for that attribute. If you have a class Spam whose eggs attribute is an object of type Ham, then the mock of Spam.eggs will be an auto-specced mock of the Ham class. The documentation you read explicitly covers this: A more serious problem is that it is common for instance attributes to be created in the __init__ method and not to exist on the class at all. autospec can\u2019t know about any dynamically created attributes and restricts the api to visible attributes. You should just set the missing attributes yourself: @patch('foo.Foo', autospec=Foo) def test_patched(self, mock_Foo): mock_Foo.return_value.foo = 'foo' Bar().bar() or create a subclass of your Foo class for testing purposes that adds the attribute as a class attribute: class TestFoo(foo.Foo): foo = 'foo' # class attribute @patch('foo.Foo', autospec=TestFoo) def test_patched(self, mock_Foo): Bar().bar()"} +{"question_id": 11227620, "score": 79, "creation_date": 1340804529, "tags": ["python", "decimal"], "instruction": "drop trailing zeros from decimal\n\nI have a long list of Decimals and that I have to adjust by factors of 10, 100, 1000,..... 1000000 depending on certain conditions. When I multiply them there is sometimes a useless trailing zero (though not always) that I want to get rid of. For example... from decimal import Decimal # outputs 25.0, PROBLEM! I would like it to output 25 print Decimal('2.5') * 10 # outputs 2567.8000, PROBLEM! I would like it to output 2567.8 print Decimal('2.5678') * 1000 Is there a function that tells the decimal object to drop these insignificant zeros? The only way I can think of doing this is to convert to a string and replace them using regular expressions. Should probably mention that I am using python 2.6.5 EDIT senderle's fine answer made me realize that I occasionally get a number like 250.0 which when normalized produces 2.5E+2. I guess in these cases I could try to sort them out and convert to a int", "output": "There's probably a better way of doing this, but you could use .rstrip('0').rstrip('.') to achieve the result that you want. Using your numbers as an example: >>> s = str(Decimal('2.5') * 10) >>> print s.rstrip('0').rstrip('.') if '.' 
in s else s 25 >>> s = str(Decimal('2.5678') * 1000) >>> print s.rstrip('0').rstrip('.') if '.' in s else s 2567.8 And here's the fix for the problem that @gerrit pointed out in the comments: >>> s = str(Decimal('1500')) >>> print s.rstrip('0').rstrip('.') if '.' in s else s 1500"} +{"question_id": 12030809, "score": 79, "creation_date": 1345419049, "tags": ["python", "flask", "uwsgi"], "instruction": "Flask and uWSGI - unable to load app 0 (mountpoint='') (callable not found or import error)\n\nI get the below error when I try and start Flask using uWSGI. Here is how I start: > # cd .. > root@localhost:# uwsgi --socket 127.0.0.1:6000 --file /path/to/folder/run.py --callable app - -processes 2 Here is my directory structure: -/path/to/folder/run.py -|app -|__init__.py -|views.py -|templates -|static Contents of /path/to/folder/run.py if __name__ == '__main__': from app import app #app.run(debug = True) app.run() Contents of /path/to/folder/app/__init__.py import os from flask import Flask from flask.ext.sqlalchemy import SQLAlchemy from flask.ext.login import LoginManager #from flaskext.babel import Babel from config import basedir app = Flask(__name__) app.config.from_object('config') #app.config.from_pyfile('babel.cfg') db = SQLAlchemy(app) login_manager = LoginManager() login_manager.setup_app(app) login_manager.login_view = 'login' login_manager.login_message = u\"Please log in to access this page.\" from app import views *** Operational MODE: preforking *** unable to find \"application\" callable in file /path/to/folder/run.py unable to load app 0 (mountpoint='') (callable not found or import error) *** no app loaded. going in full dynamic mode *** *** uWSGI is running in multiple interpreter mode *** spawned uWSGI worker 1 (pid: 26972, cores: 1) spawned uWSGI worker 2 (pid: 26973, cores: 1)", "output": "uWSGI doesn't load your app as __main__, so it never will find the app (since that only gets loaded when the app is run as name __main__). Thus, you need to import it outside of the if __name__ == \"__main__\": block. Really simple change: from app import app as application # for example, should be app if __name__ == \"__main__\": application.run() Now you can run the app directly with python run.py or run it through uWSGI the way you have it. NOTE: if you set --callable myapp, you'd need to change it from as application to myapp (by default uwsgi expects application"} +{"question_id": 32761999, "score": 79, "creation_date": 1443099568, "tags": ["python"], "instruction": "How to pass an entire list as command line argument in Python?\n\nI was trying to pass two lists containing integers as arguments to a python code. But sys.argv[i] gets the parameters as a list of string. Input would look like, $ python filename.py [2,3,4,5] [1,2,3,4] I found the following hack to convert the list. strA = sys.argv[1].replace('[', ' ').replace(']', ' ').replace(',', ' ').split() strB = sys.argv[2].replace('[', ' ').replace(']', ' ').replace(',', ' ').split() A = [float(i) for i in strA] B = [float (i) for i in strB] Is there a better way to do this?", "output": "Command line arguments are always passed as strings. You will need to parse them into your required data type yourself. 
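(A related option, added by me rather than taken from the original answer: ast.literal_eval from the standard library parses the bracketed form directly and safely, without the risks of eval.)

import ast
import sys

# usage: python filename.py [2,3,4,5] [1,2,3,4]
# note: avoid spaces inside the brackets, or quote the argument, so the shell
# passes each list as a single argv entry
A = [float(x) for x in ast.literal_eval(sys.argv[1])]
B = [float(x) for x in ast.literal_eval(sys.argv[2])]
print(A, B)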
>>> input = \"[2,3,4,5]\" >>> map(float, input.strip('[]').split(',')) [2.0, 3.0, 4.0, 5.0] >>> A = map(float, input.strip('[]').split(',')) >>> print(A, type(A)) ([2.0, 3.0, 4.0, 5.0], ) There are libraries like argparse and click that let you define your own argument type conversion but argparse treats \"[2,3,4]\" the same as [ 2 , 3 , 4 ] so I doubt it will be useful. edit Jan 2019 This answer seems to get a bit of action still so I'll add another option taken directly from the argparse docs. You can use action=append to allow repeated arguments to be collected into a single list. >>> parser = argparse.ArgumentParser() >>> parser.add_argument('--foo', action='append') >>> parser.parse_args('--foo 1 --foo 2'.split()) Namespace(foo=['1', '2']) In this case you would pass --foo ? once for each list item. Using OPs example: python filename.py --foo 2 --foo 3 --foo 4 --foo 5 would result in foo=[2,3,4,5]"} +{"question_id": 41721734, "score": 79, "creation_date": 1484748894, "tags": ["python", "selenium", "selenium-chromedriver", "webpage-screenshot"], "instruction": "Take screenshot of full page with Selenium Python with chromedriver\n\nAfter trying out various approaches... I have stumbled upon this page to take full-page screenshot with chromedriver, selenium and python. The original code is here. (and I copy the code in this posting below) It uses PIL and it works great! However, there is one issue... which is it captures fixed headers and repeats for the whole page and also misses some parts of the page during page change. sample url to take a screenshot: http://www.w3schools.com/js/default.asp How to avoid the repeated headers with this code... Or is there any better option which uses python only... ( i don't know java and do not want to use java). Please see the screenshot of the current result and sample code below. test.py \"\"\" This script uses a simplified version of the one here: https://snipt.net/restrada/python-selenium-workaround-for-full-page-screenshot-using-chromedriver-2x/ It contains the *crucial* correction added in the comments by Jason Coutu. 
\"\"\" import sys from selenium import webdriver import unittest import util class Test(unittest.TestCase): \"\"\" Demonstration: Get Chrome to generate fullscreen screenshot \"\"\" def setUp(self): self.driver = webdriver.Chrome() def tearDown(self): self.driver.quit() def test_fullpage_screenshot(self): ''' Generate document-height screenshot ''' #url = \"http://effbot.org/imagingbook/introduction.htm\" url = \"http://www.w3schools.com/js/default.asp\" self.driver.get(url) util.fullpage_screenshot(self.driver, \"test.png\") if __name__ == \"__main__\": unittest.main(argv=[sys.argv[0]]) util.py import os import time from PIL import Image def fullpage_screenshot(driver, file): print(\"Starting chrome full page screenshot workaround ...\") total_width = driver.execute_script(\"return document.body.offsetWidth\") total_height = driver.execute_script(\"return document.body.parentNode.scrollHeight\") viewport_width = driver.execute_script(\"return document.body.clientWidth\") viewport_height = driver.execute_script(\"return window.innerHeight\") print(\"Total: ({0}, {1}), Viewport: ({2},{3})\".format(total_width, total_height,viewport_width,viewport_height)) rectangles = [] i = 0 while i < total_height: ii = 0 top_height = i + viewport_height if top_height > total_height: top_height = total_height while ii < total_width: top_width = ii + viewport_width if top_width > total_width: top_width = total_width print(\"Appending rectangle ({0},{1},{2},{3})\".format(ii, i, top_width, top_height)) rectangles.append((ii, i, top_width,top_height)) ii = ii + viewport_width i = i + viewport_height stitched_image = Image.new('RGB', (total_width, total_height)) previous = None part = 0 for rectangle in rectangles: if not previous is None: driver.execute_script(\"window.scrollTo({0}, {1})\".format(rectangle[0], rectangle[1])) print(\"Scrolled To ({0},{1})\".format(rectangle[0], rectangle[1])) time.sleep(0.2) file_name = \"part_{0}.png\".format(part) print(\"Capturing {0} ...\".format(file_name)) driver.get_screenshot_as_file(file_name) screenshot = Image.open(file_name) if rectangle[1] + viewport_height > total_height: offset = (rectangle[0], total_height - viewport_height) else: offset = (rectangle[0], rectangle[1]) print(\"Adding to stitched image with offset ({0}, {1})\".format(offset[0],offset[1])) stitched_image.paste(screenshot, offset) del screenshot os.remove(file_name) part = part + 1 previous = rectangle stitched_image.save(file) print(\"Finishing chrome full page screenshot workaround...\") return True", "output": "How it works: set browser height as longest as you can... #coding=utf-8 import time from selenium import webdriver from selenium.webdriver.chrome.options import Options def test_fullpage_screenshot(self): # please note that we MUST use headless mode chrome_options = Options() chrome_options.add_argument('--headless') chrome_options.add_argument('--start-maximized') driver = webdriver.Chrome(chrome_options=chrome_options) driver.get(\"yoururlxxx\") time.sleep(2) height = driver.execute_script('return document.documentElement.scrollHeight') width = driver.execute_script('return document.documentElement.scrollWidth') driver.set_window_size(width, height) # the trick time.sleep(2) driver.save_screenshot(\"screenshot1.png\") driver.quit() if __name__ == \"__main__\": test_fullpage_screenshot()"} +{"question_id": 22571259, "score": 79, "creation_date": 1395444300, "tags": ["python"], "instruction": "Split a string into N equal parts?\n\nI have a string I would like to split into N equal parts. 
For example, imagine I had a string with length 128 and I want to split it in to 4 chunks of length 32 each; i.e., first 32 chars, then the second 32 and so on. How can I do this?", "output": "import textwrap print(textwrap.wrap(\"123456789\", 2)) #prints ['12', '34', '56', '78', '9'] Note: be careful with whitespace etc - this may or may not be what you want. \"\"\"Wrap a single paragraph of text, returning a list of wrapped lines. Reformat the single paragraph in 'text' so it fits in lines of no more than 'width' columns, and return a list of wrapped lines. By default, tabs in 'text' are expanded with string.expandtabs(), and all other whitespace characters (including newline) are converted to space. See TextWrapper class for available keyword args to customize wrapping behaviour. \"\"\""} +{"question_id": 30062429, "score": 79, "creation_date": 1430856870, "tags": ["python", "list", "multidimensional-array"], "instruction": "How to get every first element in 2 dimensional list\n\nI have a list like this: a = [[4.0, 4, 4.0], [3.0, 3, 3.6], [3.5, 6, 4.8]] I want an outcome like this (EVERY first element in the list): 4.0, 3.0, 3.5 I tried a[::1][0], but it doesn't work", "output": "You can get the index [0] from each element in a list comprehension >>> [i[0] for i in a] [4.0, 3.0, 3.5]"} +{"question_id": 48014769, "score": 79, "creation_date": 1514497618, "tags": ["python", "tensorflow", "pip", "homebrew"], "instruction": "pip3: command not found\n\nI want to install TensorFlow following these instructions: https://web.archive.org/web/20170627102751/https://www.tensorflow.org/versions/r0.12/get_started/os_setup#pip_installation But when I try this code on terminal, it returns an error. $ sudo pip3 install --upgrade $TF_BINARY_URL sudo: pip3: command not found So I installed Homebrew and tried to uninstall and reinstall python3-pip, but didn't work. MakotonoMacBook-ea:~ makotomiyazaki$ brew uninstall python3-pip Error: No such keg: /usr/local/Cellar/python3-pip MakotonoMacBook-ea:~ makotomiyazaki$ brew install python3-pip Error: No available formula with the name \"python3-pip\" ==> Searching for a previously deleted formula... Warning: homebrew/core is shallow clone. To get complete history run: git -C \"$(brew --repo homebrew/core)\" fetch --unshallow What should I do for getting pip3? My OS is macOS High Sierra, and I have Python 3.6.2 already installed. EDIT: I tried python3 -m pip and what's returned was this: The directory '/Users/makotomiyazaki/Library/Caches/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag. The directory '/Users/makotomiyazaki/Library/Caches/pip' or its parent directory is not owned by the current user and caching wheels has been disabled. check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag. You must give at least one requirement to install (see \"pip help install\") I also tried which pip3, but just I don't know if it worked... MakotonoMacBook-ea:~ makotomiyazaki$ sudo which pip3 install --upgrade $TF_BINARY_URL /usr/bin/install", "output": "You would need to install pip3. On Linux, run first sudo apt update. 
Then the command would be: sudo apt install python3-pip On Mac, using brew, first brew install python3 Then brew postinstall python3 Try calling pip3 -V to see if it worked."} +{"question_id": 13757090, "score": 79, "creation_date": 1354855768, "tags": ["python", "pandas", "string", "dataframe"], "instruction": "Pandas column access w/column names containing spaces\n\nIf I import or create a pandas column that contains no spaces, I can access it as such: from pandas import DataFrame df1 = DataFrame({'key': ['b', 'b', 'a', 'c', 'a', 'a', 'b'], 'data1': range(7)}) df1.data1 which would return that series for me. If, however, that column has a space in its name, it isn't accessible via that method: from pandas import DataFrame df2 = DataFrame({'key': ['a','b','d'], 'data 2': range(3)}) df2.data 2 # <--- not the droid I'm looking for. I know I can access it using .xs(): df2.xs('data 2', axis=1) There's got to be another way. I've googled it like mad and can't think of any other way to google it. I've read all 96 entries here on SO that contain \"column\" and \"string\" and \"pandas\" and could find no previous answer. Is this the only way, or is there something better?", "output": "I think the default way is to use the bracket method instead of the dot notation. import pandas as pd df1 = pd.DataFrame({ 'key': ['b', 'b', 'a', 'c', 'a', 'a', 'b'], 'dat a1': range(7) }) df1['dat a1'] The other methods, like exposing it as an attribute are more for convenience."} +{"question_id": 111954, "score": 79, "creation_date": 1222027998, "tags": ["python", "ftp", "portability"], "instruction": "Using Python's ftplib to get a directory listing, portably\n\nYou can use ftplib for full FTP support in Python. However the preferred way of getting a directory listing is: # File: ftplib-example-1.py import ftplib ftp = ftplib.FTP(\"www.python.org\") ftp.login(\"anonymous\", \"ftplib-example-1\") data = [] ftp.dir(data.append) ftp.quit() for line in data: print \"-\", line Which yields: $ python ftplib-example-1.py - total 34 - drwxrwxr-x 11 root 4127 512 Sep 14 14:18 . - drwxrwxr-x 11 root 4127 512 Sep 14 14:18 .. - drwxrwxr-x 2 root 4127 512 Sep 13 15:18 RCS - lrwxrwxrwx 1 root bin 11 Jun 29 14:34 README -> welcome.msg - drwxr-xr-x 3 root wheel 512 May 19 1998 bin - drwxr-sr-x 3 root 1400 512 Jun 9 1997 dev - drwxrwxr-- 2 root 4127 512 Feb 8 1998 dup - drwxr-xr-x 3 root wheel 512 May 19 1998 etc ... I guess the idea is to parse the results to get the directory listing. However this listing is directly dependent on the FTP server's way of formatting the list. It would be very messy to write code for this having to anticipate all the different ways FTP servers might format this list. Is there a portable way to get an array filled with the directory listing? (The array should only have the folder names.)", "output": "Try using ftp.nlst(dir). However, note that if the folder is empty, it might throw an error: files = [] try: files = ftp.nlst() except ftplib.error_perm as resp: if str(resp) == \"550 No files found\": print \"No files in this directory\" else: raise for f in files: print f"} +{"question_id": 4840102, "score": 79, "creation_date": 1296343393, "tags": ["python", "django", "migration", "django-south"], "instruction": "Why don't my south migrations work?\n\nFirst, I create my database. create database mydb; I add \"south\" to installed Apps. 
Then, I go to this tutorial: http://south.aeracode.org/docs/tutorial/part1.html The tutorial tells me to do this: $ py manage.py schemamigration wall --initial >>> Created 0001_initial.py. You can now apply this migration with: ./manage.py migrate wall Great, now I migrate. $ py manage.py migrate wall But it gives me this error... django.db.utils.DatabaseError: (1146, \"Table 'fable.south_migrationhistory' doesn't exist\") So I use Google (which never works. hence my 870 questions asked on Stackoverflow), and I get this page: http://groups.google.com/group/south-users/browse_thread/thread/d4c83f821dd2ca1c Alright, so I follow that instructions >> Drop database mydb; >> Create database mydb; $ rm -rf ./wall/migrations $ py manage.py syncdb But when I run syncdb, Django creates a bunch of tables. Yes, it creates the south_migrationhistory table, but it also creates my app's tables. Synced: > django.contrib.admin > django.contrib.auth > django.contrib.contenttypes > django.contrib.sessions > django.contrib.sites > django.contrib.messages > south > fable.notification > pagination > timezones > fable.wall > mediasync > staticfiles > debug_toolbar Not synced (use migrations): - (use ./manage.py migrate to migrate these) Cool....now it tells me to migrate these. So, I do this: $ py manage.py migrate wall The app 'wall' does not appear to use migrations. Alright, so fine. I'll add wall to initial migrations. $ py manage.py schemamigration wall --initial Then I migrate: $ py manage.py migrate wall You know what? It gives me this BS: _mysql_exceptions.OperationalError: (1050, \"Table 'wall_content' already exists\") Sorry, this is really pissing me off. Can someone help ? thanks. How do I get South to work and sync correctly with everything? The only thing I can think of is remove my app from INSTALLED_APPS, then run syncdb, then add it back on. That is SO SILLY.", "output": "South allows you to create migrations when you first start out with a new app and the tables haven't been added to the database yet, as well as creating migrations for legacy apps that already have tables in the database. The key is to know when to do what. Your first mistake was when you deleted your migrations, as soon as you did that, and then ran syncdb, Django didn't know that you wanted south to manage that app anymore, so it created the tables for you. When you created your initial migrations and then ran migrate, south was trying to create tables that django already created, and thus your error. At this point you have two options. Delete the tables for the wall app from your database and then run $ py manage.py migrate wall This will run the migration and create your tables. Fake out the initial migration run $ py manage.py migrate wall 0001 --fake This will tell south that you already have the tables on the database so just fake it, which will add a row to the south_migrationhistory table, so that the next time you run a migrate it will know that the first migration has already been run. Setting up a brand new project and no database create your database add south to installed apps run syncdb, this will add the django and south tables to the database add your apps for each app run python manage.py schemamigration app_name --initial this will create the initial migration files for your app then run south migrate python manage.py migrate app_name this will add the tables to the database. 
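(Condensed recap of the new-project flow above, using the question's app name 'wall'; these are the same commands the steps describe, nothing new.)

python manage.py syncdb                           # creates the django and south tables
python manage.py schemamigration wall --initial   # writes 0001_initial.py for the app
python manage.py migrate wall                     # lets south create the app's tables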
Setting up a legacy project and database add south to installed apps run syncdb, this will add the south tables to the database for each of your apps run python manage.py schemamigration app_name --initial This will create your initial migrations for each of your apps run python manage.py migrate app_name 0001 --fake , this will fake out south, it won't do anything to the database for those models, it will just add records to the south_migrationhistory table so that the next time you want to create a migration, you are all set. Setting up a legacy project and no database create database add south to installed apps for each of your apps run python manage.py schemamigration app_name --initial This will create your initial migrations run syncdb, this will add any apps that don't have migrations to the database. then run south migrate python manage.py migrate this will run all migrations for your apps. Now that you are setup with south, you can start using south to manage model changes to those apps. The most common command to run is python manage.py schemamigration app_name migration_name --auto that will look at the last migration you ran and it will find the changes and build out a migration file for you. Then you just need to run python manage.py migrate and it alter your database for you."} +{"question_id": 17658512, "score": 79, "creation_date": 1373903773, "tags": ["python", "pipe"], "instruction": "How to pipe input to python line by line from linux program?\n\nI want to pipe the output of ps -ef to python line by line. The script I am using is this (first.py) - #! /usr/bin/python import sys for line in sys.argv: print line Unfortunately, the \"line\" is split into words separated by whitespace. So, for example, if I do echo \"days go by and still\" | xargs first.py the output I get is ./first.py days go by and still How to write the script such that the output is ./first.py days go by and still ?", "output": "Instead of using command line arguments I suggest reading from standard input (stdin). Python has a simple idiom for iterating over lines at stdin: import sys for line in sys.stdin: sys.stdout.write(line) My usage example (with above's code saved to iterate-stdin.py): $ echo -e \"first line\\nsecond line\" | python iterate-stdin.py first line second line With your example: $ echo \"days go by and still\" | python iterate-stdin.py days go by and still"} +{"question_id": 2356399, "score": 79, "creation_date": 1267453257, "tags": ["python", "interactive", "read-eval-print-loop"], "instruction": "Tell if Python is in interactive mode\n\nIn a Python script, is there any way to tell if the interpreter is running in interactive mode? This would be useful so that, for instance, when you run an interactive Python session and import a module, different code can be executed. What I mean is something like this: if __name__ == \"__main__\": # do stuff elif __pythonIsInteractive__: # do other stuff else: exit()", "output": "Python <= 3.12 __main__.__file__ doesn't exist in the interactive interpreter for Python versions <= 3.12: import __main__ print(hasattr(__main__, \"__file__\")) This also goes for code run via python -c, but not python -m."} +{"question_id": 6375942, "score": 79, "creation_date": 1308244685, "tags": ["python", "css", "image", "base64", "data-uri"], "instruction": "How do you base-64 encode a PNG image for use in a data-uri in a CSS file?\n\nI want to base-64 encode a PNG file, to include it in a data:url in my stylesheet. How can I do that? 
I\u2019m on a Mac, so something on the Unix command line would work great. A Python-based solution would also be grand.", "output": "This should do it in Python: import base64 binary_fc = open(filepath, 'rb').read() # fc aka file_content base64_utf8_str = base64.b64encode(binary_fc).decode('utf-8') ext = filepath.split('.')[-1] dataurl = f'data:image/{ext};base64,{base64_utf8_str}' Thanks to @cnst comment, we need the prefix data:image/{ext};base64, Thanks to @ramazanpolat answer, we need the decode('utf-8')"} +{"question_id": 7787120, "score": 79, "creation_date": 1318797182, "tags": ["python", "windows", "python-2.7"], "instruction": "Check if a process is running or not on Windows?\n\nI am trying to create a python script which I will later run as a service. Now I want to run a particular part of the code only when iTunes is running. I understand from some research that polling the entire command list and then searching for the application for that list is expensive. I found out that processes on UNIX-based operating systems create a lock file to notify that a program is currently running, at which point we can use os.stat(location_of_file) to check if the file exists to determine if a program is running or not. Is there a similar lock file created on Windows? If not what are the various ways in Python by which we can determine if a process is running or not? I am using python 2.7 and iTunes COM interface.", "output": "I found psutil very slow. A much more performant solution: import win32gui window_titles = set() def winEnumHandler( hwnd, ctx ): if win32gui.IsWindowVisible( hwnd ): window_titles.add(win32gui.GetWindowText( hwnd ) ) win32gui.EnumWindows( winEnumHandler, None ) print(window_titles)"} +{"question_id": 43146528, "score": 79, "creation_date": 1490981457, "tags": ["python", "python-3.x", "emoji"], "instruction": "How to extract all the emojis from text?\n\nConsider the following list: a_list = ['\ud83e\udd14 \ud83d\ude48 me as\u00ed, bla es se \ud83d\ude0c ds \ud83d\udc95\ud83d\udc6d\ud83d\udc59'] How can I extract in a new list all the emojis inside a_list?: new_lis = ['\ud83e\udd14 \ud83d\ude48 \ud83d\ude0c \ud83d\udc95 \ud83d\udc6d \ud83d\udc59'] I tried to use regex, but I do not have all the possible emojis encodings.", "output": "You can use the emoji library. You can check if a single codepoint is an emoji codepoint by checking if it is contained in emoji.UNICODE_EMOJI. import emoji def extract_emojis(s): return ''.join(c for c in s if c in emoji.UNICODE_EMOJI['en'])"} +{"question_id": 60814982, "score": 79, "creation_date": 1584971884, "tags": ["python", "pip", "pypi"], "instruction": "How to setup pip to download from mirror repository by default?\n\nI am forced to download python packages from local mirror PyPi repository. I do this by using the -i and --trusted-host options. Whole installation command looks like this: pip install -i https://sampleurl.com/pypi-remote/simple --trusted-host sample.host.com package_name Having to type in that options each time is kinda annoying though (in reality those are long URL's). I've tried to create get_package.bat file (I'm working on Windows 10) with following content: pip install -i https://sampleurl.com/pypi-remote/simple --trusted-host sample.host.com \"%1\" It works perfectly fine, although when I wanted to execute pip search command, it turned out to be useless since it has hard-coded install command and there is no way to use it with search. 
Is there any way in which I can setup pip to download from mirror repository by default, so that I can execute pip install package_name or pip search package_name without any additional options? Eventually I could try making .bat file that would take 2 parameters like this: pip %1 -i https://sampleurl.com/pypi-remote/simple --trusted-host sample.host.com \"%2\" But I wonder if there's more \"elegant\" way to do this.", "output": "using pip config, on user or global level. I have /etc/pip.conf configured like this: [global] index=https://my-company/nexus/repository/pypi-group/pypi index-url=https://my-company/nexus/repository/pypi-group/simple trusted-host=my-company but you can configure this using pip config on user or global level, something like: pip config --user set global.index https://my-company/nexus/repository/pypi-group/pypi pip config --user set global.index-url https://my-company/nexus/repository/pypi-group/simple pip config --user set global.trusted-host my-company #NOTES --index-url is used by pip install --index is used by pip search"} +{"question_id": 8130823, "score": 79, "creation_date": 1321324527, "tags": ["python", "matplotlib", "matplotlib-3d"], "instruction": "set matplotlib 3d plot aspect ratio\n\nimport matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D Setting the aspect ratio works for 2d plots: ax = plt.axes() ax.plot([0,1], [0,10]) ax.set_aspect('equal', 'box') But it does not work for 3d: ax = plt.axes(projection='3d') ax.plot([0,1], [0,1], [0,10]) ax.set_aspect('equal', 'box') How do I set the aspect ratio for 3d?", "output": "My understanding is basically that this isn't implemented yet (see this bug in GitHub). I'm also hoping that it is implemented soon. See This link for a possible solution (I haven't tested it myself)."} +{"question_id": 2957116, "score": 79, "creation_date": 1275477408, "tags": ["python", "python-3.x", "multithreading", "concurrency", "parallel-processing"], "instruction": "How to run multiple functions at the same time?\n\nI'm trying to run 2 functions at the same time. def func1(): print('Working') def func2(): print('Working') func1() func2() Does anyone know how to do this?", "output": "Do this: from threading import Thread def func1(): print('Working') def func2(): print(\"Working\") if __name__ == '__main__': Thread(target = func1).start() Thread(target = func2).start()"} +{"question_id": 17710672, "score": 79, "creation_date": 1374096826, "tags": ["python", "arrays", "numpy"], "instruction": "Create a two-dimensional array with two one-dimensional arrays\n\nMy function (name CovexHull(point)) accepts the argument as a two-dimensional array: hull = ConvexHull(points) Session In [1]: points.ndim Out[1]: 2 In [2]: points.shape Out[2]: (10, 2) In [3]: points Out[3]: array([[ 0. , 0. ], [ 1. , 0.8], [ 0.9, 0.8], [ 0.9, 0.7], [ 0.9, 0.6], [ 0.8, 0.5], [ 0.8, 0.5], [ 0.7, 0.5], [ 0.1, 0. ], [ 0. , 0. ]]) points is a NumPy array with ndim 2. I have two different NumPy arrays (tp and fp) like below: In [4]: fp.ndim Out[4]: 1 In [5]: fp.shape Out[5]: (10,) In [6]: fp Out[6]: array([ 0. , 0.1, 0.2, 0.3, 0.4, 0.4, 0.5, 0.6, 0.9, 1. ]) How can I create a two-dimensional NumPy array effectively (like points mentioned above) with tp and fp?", "output": "If you wish to combine two 10 element one-dimensional arrays into a two-dimensional array, np.vstack((tp, fp)).T will do it. 
np.vstack((tp, fp)) will return an array of shape (2, 10), and the T attribute returns the transposed array with shape (10, 2) (i.e., with the two one-dimensional arrays forming columns rather than rows). >>> tp = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) >>> tp.ndim 1 >>> tp.shape (10,) >>> fp = np.array([10, 11, 12, 13, 14, 15, 16, 17, 18, 19]) >>> fp.ndim 1 >>> fp.shape (10,) >>> combined = np.vstack((tp, fp)).T >>> combined array([[ 0, 10], [ 1, 11], [ 2, 12], [ 3, 13], [ 4, 14], [ 5, 15], [ 6, 16], [ 7, 17], [ 8, 18], [ 9, 19]]) >>> combined.ndim 2 >>> combined.shape (10, 2)"} +{"question_id": 16384109, "score": 79, "creation_date": 1367753815, "tags": ["python", "iteration", "cartesian-product"], "instruction": "Iterate over all combinations of values in multiple lists in Python\n\nGiven multiple list of possibly varying length, I want to iterate over all combinations of values, one item from each list. For example: first = [1, 5, 8] second = [0.5, 4] Then I want the output of to be: combined = [(1, 0.5), (1, 4), (5, 0.5), (5, 4), (8, 0.5), (8, 4)] I want to iterate over the combined list. How do I get this done?", "output": "itertools.product should do the trick. >>> import itertools >>> list(itertools.product([1, 5, 8], [0.5, 4])) [(1, 0.5), (1, 4), (5, 0.5), (5, 4), (8, 0.5), (8, 4)] Note that itertools.product returns an iterator, so you don't need to convert it into a list if you are only going to iterate over it once. eg. for x in itertools.product([1, 5, 8], [0.5, 4]): # do stuff"} +{"question_id": 14894993, "score": 79, "creation_date": 1360927783, "tags": ["python", "windows", "operating-system", "shell"], "instruction": "Running windows shell commands with python\n\nHow can we interact with OS shell using Python ? I want to run windows cmd commands via python. How can it be achieved ?", "output": "The newer subprocess.check_output and similar commands are supposed to replace os.system. See this page for details. While I can't test this on Windows (because I don't have access to any Windows machines), the following should work: from subprocess import check_output check_output(\"dir C:\", shell=True) check_output returns a string of the output from your command. Alternatively, subprocess.call just runs the command and returns the status of the command (usually 0 if everything is okay). Also note that, in python 3, that string output is now bytes output. If you want to change this into a string, you need something like from subprocess import check_output check_output(\"dir C:\", shell=True).decode() If necessary, you can tell it the kind of encoding your program outputs. The default is utf-8, which typically works fine, but other standard options are here. Also note that @bluescorpion says in the comments that Windows 10 needs a trailing backslash, as in check_output(\"dir C:\\\\\", shell=True). The double backslash is needed because \\ is a special character in python, so it has to be escaped. (Also note that even prefixing the string with r doesn't help if \\ is the very last character of the string \u2014 r\"dir C:\\\" is a syntax error, though r\"dir C:\\ \" is not.)"} +{"question_id": 703907, "score": 79, "creation_date": 1238554786, "tags": ["python", "datetime", "date", "time"], "instruction": "How would I compute exactly 30 days into the past with Python (down to the minute)?\n\nIn Python, I'm attempting to retrieve the date/time that is exactly 30 days (30*24hrs) into the past. 
At present, I'm simply doing: >>> import datetime >>> start_date = datetime.date.today() + datetime.timedelta(-30) Which returns a datetime object, but with no time data: >>> start_date.year 2009 >>> start_date.hour Traceback (most recent call last): File \"\", line 1, in AttributeError: 'datetime.date' object has no attribute 'hour'", "output": "You want to use a datetime object instead of just a date object: start_date = datetime.datetime.now() - datetime.timedelta(30) date just stores a date and time just a time. datetime is a date with a time."} +{"question_id": 4706451, "score": 79, "creation_date": 1295194319, "tags": ["python", "matplotlib", "figure"], "instruction": "How to save a figure remotely with pylab?\n\nI'm trying to generate a figure at a remote computer with the command pylab.savefig. But I got such error: Unable to access the X Display, is $DISPLAY set properly? How can I save the figure properly?", "output": "By default, matplotlib will use something like the TkAgg backend. This requires an X-server to be running. While you can just use X-forwarding, there will be a noticeable lag as matplotlib tries to connect with the remote X-server. If you don't need to interact with the plot, it's often nicer to speed things up by avoiding an X-connection entirely. If you want to make a plot without needing an X-server at all, use the Agg backend instead. E.g. do something like this: import matplotlib matplotlib.use('Agg') # Must be before importing matplotlib.pyplot or pylab! import matplotlib.pyplot as plt fig = plt.figure() plt.plot(range(10)) fig.savefig('temp.png') If you want this to be the default behavior, you can modify your matplotlibrc file to use the Agg backend by default. See this article for more information."} +{"question_id": 13495950, "score": 79, "creation_date": 1353509848, "tags": ["python", "unit-testing", "pytest"], "instruction": "How do I display why some tests were skipped while using py.test?\n\nI am using skipIf() from unittest for skipping tests in certain conditions. @unittest.skipIf(condition, \"this is why I skipped them!\") How do I tell py.test to display skipping conditions? I know that for unittest I need to enable the verbose mode (-v) but the same parameter added to py.test increases the verbosity but still does not display the skip reasons.", "output": "When you run py.test, you can pass -rsx to report skipped tests. From py.test --help: -r chars show extra test summary info as specified by chars (f)ailed, (E)error, (s)skipped, (x)failed, (X)passed. Also see this part of the documentation about skipping: http://doc.pytest.org/en/latest/skipping.html"} +{"question_id": 5899185, "score": 79, "creation_date": 1304604990, "tags": ["python", "oop"], "instruction": "Class with too many parameters: better design strategy?\n\nI am working with models of neurons. One class I am designing is a cell class which is a topological description of a neuron (several compartments connected together). It has many parameters but they are all relevant, for example: Number of axon segments, apical bifibrications, somatic length, somatic diameter, apical length, branching randomness, branching length and so on and so on... there are about 15 parameters in total! I can set all these to some default value but my class looks crazy with several lines for parameters. This kind of thing must happen occasionally to other people too, is there some obvious better way to design this or am I doing the right thing?
As you can see this class has a huge number of parameters (>15) but they are all used and are necessary to define the topology of a cell. The problem essentially is that the physical object they create is very complex. How would experienced programmers do this differently to avoid so many parameters in the definition? class LayerV(__Cell): def __init__(self,somatic_dendrites=10,oblique_dendrites=10, somatic_bifibs=3,apical_bifibs=10,oblique_bifibs=3, L_sigma=0.0,apical_branch_prob=1.0, somatic_branch_prob=1.0,oblique_branch_prob=1.0, soma_L=30,soma_d=25,axon_segs=5,myelin_L=100, apical_sec1_L=200,oblique_sec1_L=40,somadend_sec1_L=60, ldecf=0.98): import random import math #make main the regions: axon=Axon(n_axon_seg=axon_segs) soma=Soma(diam=soma_d,length=soma_L) main_apical_dendrite=DendriticTree(bifibs= apical_bifibs,first_sec_L=apical_sec1_L, L_sigma=L_sigma,L_decrease_factor=ldecf, first_sec_d=9,branch_prob=apical_branch_prob) #make the somatic denrites somatic_dends=self.dendrite_list(num_dends=somatic_dendrites, bifibs=somatic_bifibs,first_sec_L=somadend_sec1_L, first_sec_d=1.5,L_sigma=L_sigma, branch_prob=somatic_branch_prob,L_decrease_factor=ldecf) #make oblique dendrites: oblique_dends=self.dendrite_list(num_dends=oblique_dendrites, bifibs=oblique_bifibs,first_sec_L=oblique_sec1_L, first_sec_d=1.5,L_sigma=L_sigma, branch_prob=oblique_branch_prob,L_decrease_factor=ldecf) #connect axon to soma: axon_section=axon.get_connecting_section() self.soma_body=soma.body soma.connect(axon_section,region_end=1) #connect apical dendrite to soma: apical_dendrite_firstsec=main_apical_dendrite.get_connecting_section() soma.connect(apical_dendrite_firstsec,region_end=0) #connect oblique dendrites to apical first section: for dendrite in oblique_dends: apical_location=math.exp(-5*random.random()) #for now connecting randomly but need to do this on some linspace apsec=dendrite.get_connecting_section() apsec.connect(apical_dendrite_firstsec,apical_location,0) #connect dendrites to soma: for dend in somatic_dends: dendsec=dend.get_connecting_section() soma.connect(dendsec,region_end=random.random()) #for now connecting randomly but need to do this on some linspace #assign public sections self.axon_iseg=axon.iseg self.axon_hill=axon.hill self.axon_nodes=axon.nodes self.axon_myelin=axon.myelin self.axon_sections=[axon.hill]+[axon.iseg]+axon.nodes+axon.myelin self.soma_sections=[soma.body] self.apical_dendrites=main_apical_dendrite.all_sections+self.seclist(oblique_dends) self.somatic_dendrites=self.seclist(somatic_dends) self.dendrites=self.apical_dendrites+self.somatic_dendrites self.all_sections=self.axon_sections+[self.soma_sections]+self.dendrites", "output": "This approach may be suited in your specific case, but it definitely has its downsides, see is kwargs an antipattern? Try this approach: class Neuron(object): def __init__(self, **kwargs): prop_defaults = { \"num_axon_segments\": 0, \"apical_bifibrications\": \"fancy default\", ... } for (prop, default) in prop_defaults.iteritems(): setattr(self, prop, kwargs.get(prop, default)) You can then create a Neuron like this: n = Neuron(apical_bifibrications=\"special value\")"} +{"question_id": 68804209, "score": 79, "creation_date": 1629122773, "tags": ["python", "python-3.10", "structural-pattern-matching"], "instruction": "How to do an else (default) in match-case?\n\nPython recently has released match-case in version 3.10. The question is how can we do a default case in Python? I can do if/elif but don't know how to do else. 
Below is the code: x = \"hello\" match x: case \"hi\": print(x) case \"hey\": print(x) default: print(\"not matched\") I added this default myself. I want to know the method to do this in Python.", "output": "You can define a default case in Python. For this you use a wild card (_). The following code demonstrates it: x = \"hello\" match x: case \"hi\": print(x) case \"hey\": print(x) case _: print(\"not matched\")"} +{"question_id": 4624416, "score": 79, "creation_date": 1294395033, "tags": ["python", "interactive", "mode"], "instruction": "Is there a possibility to execute a Python script while being in interactive mode\n\nNormally you can execute a Python script for example: python myscript.py, but if you are in the interactive mode, how is it possible to execute a Python script on the filesystem? >>> exec(File) ??? It should be possible to execute the script more than one time.", "output": "Use execfile('script.py') but it only work on python 2.x, if you are using 3.0 try exec(open('script.py').read())"} +{"question_id": 324214, "score": 79, "creation_date": 1227804474, "tags": ["python", "xml", "performance", "parsing"], "instruction": "What is the fastest way to parse large XML docs in Python?\n\nI am currently running the following code based on Chapter 12.5 of the Python Cookbook: from xml.parsers import expat class Element(object): def __init__(self, name, attributes): self.name = name self.attributes = attributes self.cdata = '' self.children = [] def addChild(self, element): self.children.append(element) def getAttribute(self,key): return self.attributes.get(key) def getData(self): return self.cdata def getElements(self, name=''): if name: return [c for c in self.children if c.name == name] else: return list(self.children) class Xml2Obj(object): def __init__(self): self.root = None self.nodeStack = [] def StartElement(self, name, attributes): element = Element(name.encode(), attributes) if self.nodeStack: parent = self.nodeStack[-1] parent.addChild(element) else: self.root = element self.nodeStack.append(element) def EndElement(self, name): self.nodeStack.pop() def CharacterData(self,data): if data.strip(): data = data.encode() element = self.nodeStack[-1] element.cdata += data def Parse(self, filename): Parser = expat.ParserCreate() Parser.StartElementHandler = self.StartElement Parser.EndElementHandler = self.EndElement Parser.CharacterDataHandler = self.CharacterData ParserStatus = Parser.Parse(open(filename).read(),1) return self.root I am working with XML documents of about 1 GB in size. Does anyone know a faster way to parse these?", "output": "I looks to me as if you do not need any DOM capabilities from your program. I would second the use of the (c)ElementTree library. If you use the iterparse function of the cElementTree module, you can work your way through the xml and deal with the events as they occur. Note however, Fredriks advice on using cElementTree iterparse function: to parse large files, you can get rid of elements as soon as you\u2019ve processed them: for event, elem in iterparse(source): if elem.tag == \"record\": ... process record elements ... elem.clear() The above pattern has one drawback; it does not clear the root element, so you will end up with a single element with lots of empty child elements. If your files are huge, rather than just large, this might be a problem. To work around this, you need to get your hands on the root element. 
The easiest way to do this is to enable start events, and save a reference to the first element in a variable: # get an iterable context = iterparse(source, events=(\"start\", \"end\")) # turn it into an iterator context = iter(context) # get the root element event, root = context.next() for event, elem in context: if event == \"end\" and elem.tag == \"record\": ... process record elements ... root.clear() The lxml.iterparse() does not allow this. The previous does not work on Python 3.7, consider the following way to get the first element. import xml.etree.ElementTree as ET # Get an iterable. context = ET.iterparse(source, events=(\"start\", \"end\")) for index, (event, elem) in enumerate(context): # Get the root element. if index == 0: root = elem if event == \"end\" and elem.tag == \"record\": # ... process record elements ... root.clear()"} +{"question_id": 1352528, "score": 79, "creation_date": 1251583976, "tags": ["python", "activestate", "activepython"], "instruction": "Why does ActivePython exist?\n\nWhat's ActivePython actually about? From what I've read it's just standard Python with OpenSSL and PyWin32 (on Windows). No big deal I guess; I could install them in matter of minutes, and most people don't need them anyway. All other mentioned libraries (zlib, bzip2, SQLite 3, Tkinter, ElementTree, ctypes, and multiprocessing) are part of the core Python distribution. Next up, the tag-line \"ActivePython is the industry-standard Python distribution\", isn't core Python distribution \"industry-standard\" (whatever that means?)? And the weirdest thing, is that ActiveState bundles it with crappy PythonWin, and not their own most-awesome Python editor/IDE, Komodo. What gives? I actually never got to installing ActivePython, so maybe I don't know something, but it seems pretty irrelevant, and I see the name quite often on forums or here.", "output": "It's a packaging, or \"distribution\", of Python, with some extras -- not (anywhere) quite as \"Sumo\" as Enthought's huge distribution of \"Python plus everything\", but still in a similar vein (and it first appeared much earlier). I don't think you're missing anything in particular, except perhaps the fact that David Ascher (Python enthusiast and my coauthor in the Python Cookbook) used to be CTO at ActiveState (and so no doubt internally pushed Python to go with other dynamic languages ActiveState focuses on), but he's gone now (he's CEO at the Mozilla-owned firm that deals with email and similar forms of communication -- Thunderbird and the like, in terms of programs). No doubt some firms prefer to purchase a distribution with commercially available support contracts, like ActivePython, just because that's the way some purchasing departments in several enterprises (and/or their IT departments) are used to work. Unless you care about such issues, I don't think you're missing anything by giving ActiveState's Python distribution a pass;-). (I feel similarly about costly Enterprise distributions of Linux, vs. Debian or Ubuntu or the like -- but then I'm not in purchasing, nor in an IT department, nor do I work for a very traditional enterprise anyway;-) )"} +{"question_id": 36548518, "score": 79, "creation_date": 1460377164, "tags": ["python", "python-3.x", "cpython", "python-internals"], "instruction": "Why is code using intermediate variables faster than code without?\n\nI have encountered this weird behavior and failed to explain it. 
These are the benchmarks: py -3 -m timeit \"tuple(range(2000)) == tuple(range(2000))\" 10000 loops, best of 3: 97.7 usec per loop py -3 -m timeit \"a = tuple(range(2000)); b = tuple(range(2000)); a==b\" 10000 loops, best of 3: 70.7 usec per loop How come comparison with variable assignment is faster than using a one liner with temporary variables by more than 27%? By the Python docs, garbage collection is disabled during timeit so it can't be that. Is it some sort of an optimization? The results may also be reproduced in Python 2.x though to lesser extent. Running Windows 7/10, CPython 3.5.1 all the way to 3.10.1, Intel i7 3.40 GHz, 64 bit both OS and Python. Seems like a different machine I've tried running at Intel i7 3.60 GHz with Python 3.5.0 does not reproduce the results. Running using the same Python process with timeit.timeit() @ 10000 loops produced 0.703 and 0.804 respectively. Still shows although to lesser extent. (~12.5%)", "output": "My results were similar to yours: the code that uses intermediate variables was consistently at least 10-20 % faster in Python 3.4. However when I used IPython on the very same Python 3.4 interpreter, I got these results: In [1]: %timeit -n10000 -r20 tuple(range(2000)) == tuple(range(2000)) 10000 loops, best of 20: 74.2 \u00b5s per loop In [2]: %timeit -n10000 -r20 a = tuple(range(2000)); b = tuple(range(2000)); a==b 10000 loops, best of 20: 75.7 \u00b5s per loop Notably, I never managed to get even close to the 74.2 \u00b5s for the former when I used -mtimeit from the command line. So this Heisenbug turned out to be something quite interesting. I decided to run the command with strace and indeed there is something fishy going on: % strace -o withoutvars python3 -m timeit \"tuple(range(2000)) == tuple(range(2000))\" 10000 loops, best of 3: 134 usec per loop % strace -o withvars python3 -mtimeit \"a = tuple(range(2000)); b = tuple(range(2000)); a==b\" 10000 loops, best of 3: 75.8 usec per loop % grep mmap withvars|wc -l 46 % grep mmap withoutvars|wc -l 41149 Now that is a good reason for the difference. The code that does not use variables causes the mmap system call be called almost 1000x more than the one that uses intermediate variables. The withoutvars is full of mmap/munmap for a 256k region; these same lines are repeated over and over again: mmap(NULL, 262144, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f32e56de000 munmap(0x7f32e56de000, 262144) = 0 mmap(NULL, 262144, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f32e56de000 munmap(0x7f32e56de000, 262144) = 0 mmap(NULL, 262144, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f32e56de000 munmap(0x7f32e56de000, 262144) = 0 The mmap call seems to be coming from the function _PyObject_ArenaMmap from Objects/obmalloc.c; the obmalloc.c also contains the macro ARENA_SIZE, which is #defined to be (256 << 10) (that is 262144); similarly the munmap matches the _PyObject_ArenaMunmap from obmalloc.c. obmalloc.c says that Prior to Python 2.5, arenas were never free()'ed. Starting with Python 2.5, we do try to free() arenas, and use some mild heuristic strategies to increase the likelihood that arenas eventually can be freed. 
Thus these heuristics and the fact that the Python object allocator releases these free arenas as soon as they're emptied lead to python3 -mtimeit 'tuple(range(2000)) == tuple(range(2000))' triggering pathological behaviour where one 256 kiB memory area is re-allocated and released repeatedly; and this allocation happens with mmap/munmap, which is comparatively costly as they're system calls - furthermore, mmap with MAP_ANONYMOUS requires that the newly mapped pages must be zeroed - even though Python wouldn't care. The behaviour is not present in the code that uses intermediate variables, because it is using slightly more memory and no memory arena can be freed as some objects are still allocated in it. That is because timeit will make it into a loop not unlike for n in range(10000) a = tuple(range(2000)) b = tuple(range(2000)) a == b Now the behaviour is that both a and b will stay bound until they're reassigned, so in the second iteration, tuple(range(2000)) will allocate a 3rd tuple, and the assignment a = tuple(...) will decrease the reference count of the old tuple, causing it to be released, and increase the reference count of the new tuple; then the same happens to b. Therefore after the first iteration there are always at least 2 of these tuples, if not 3, so the thrashing doesn't occur. Most notably it cannot be guaranteed that the code using intermediate variables is always faster - indeed in some setups it might be that using intermediate variables will result in extra mmap calls, whereas the code that compares return values directly might be fine. Someone asked why this happens when timeit disables garbage collection. It is indeed true that timeit does it: Note By default, timeit() temporarily turns off garbage collection during the timing. The advantage of this approach is that it makes independent timings more comparable. The disadvantage is that GC may be an important component of the performance of the function being measured. If so, GC can be re-enabled as the first statement in the setup string. For example: However, the garbage collector of Python is only there to reclaim cyclic garbage, i.e. collections of objects whose references form cycles. It is not the case here; instead these objects are freed immediately when the reference count drops to zero."} +{"question_id": 3460650, "score": 79, "creation_date": 1281544507, "tags": ["python", "hash"], "instruction": "Asking \"is hashable\" about a Python value\n\nI am interested in taking an arbitrary dict and copying it into a new dict, mutating it along the way. One mutation I would like to do is swap keys and value. Unfortunately, some values are dicts in their own right. However, this generates a \"unhashable type: 'dict'\" error. I don't really mind just stringifying the value and giving it the key. But, I'd like to be able to do something like this: for key in olddict: if hashable(olddict[key]): newdict[olddict[key]] = key else newdict[str(olddict[key])] = key Is there a clean way to do this that doesn't involve trapping an exception and parsing the message string for \"unhashable type\" ?", "output": "Python 3.x Use collections.abc.Hashable or typing.Hashable. >>> import typing >>> isinstance({}, typing.Hashable) False >>> isinstance(0, typing.Hashable) True Note: both are the same one, the latter is simply an alias of the former. Also note that collections.Hashable was removed in Python 3.10+ (deprecated since 3.7).
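Added alternative (not part of the original answer, works on any Python version): simply attempt the hash and catch the specific exception, which avoids parsing the error message entirely: def hashable(value): try: hash(value) except TypeError: return False return True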
Python 2.6+ (an original answer) Since Python 2.6 you can use the abstract base class collections.Hashable: >>> import collections >>> isinstance({}, collections.Hashable) False >>> isinstance(0, collections.Hashable) True This approach is also mentioned briefly in the documentation for __hash__. Doing so means that not only will instances of the class raise an appropriate TypeError when a program attempts to retrieve their hash value, but they will also be correctly identified as unhashable when checking isinstance(obj, collections.Hashable) (unlike classes which define their own __hash__() to explicitly raise TypeError)."} +{"question_id": 59051631, "score": 79, "creation_date": 1574773462, "tags": ["python", "python-3.x", "subprocess", "system"], "instruction": "What is the use of stub files (.pyi ) in python?\n\nI am trying to understand the lower level implementations of python 3. There is one module named _posixsubprocess used by the subprocess module. I tried to find the location of this module in my system and found that it's a stub file. Could someone guide me as I have no idea about what are the stub files and how are they implemented at the lower level?", "output": "_posixsubprocess The file you are referencing is a Python module written in C. It's not a \"stub\" file. The real implementation can be found in the stdlib at Modules/_posixsubprocess.c. You can see how writing a C/C++ extension is written by having a look at Building C and C++ Extensions. This should help you understanding the code in _posixsubprocess.c. In order to add type-hints to that file (which is an \"Extension Module\" as it is written in C), the type hints are added to a \"stub\" file with the extension .pyi. That file can be found in the typeshed which is a collection of stub files. The typeshed also contains stubs for third-party modules which is a historical remnant. That is no longer needed since PEP-561 has been adopted. Concerning stub/pyi files Stub files contain type-hinting information of normal Python modules. The full official documentation can be found in the section about stub-files in PEP-484. For example, if you have a Python module mymodule.py like this: def myfunction(name): return \"Hello \" + name Then you can add type-hints via a stub-file mymodule.pyi. Note that here the ellipsis (...) is part of the syntax, so the code-block below really shows the complete file contents: def myfunction(name: str) -> str: ... They look very similar to C header files in that they contain only the function signatures, but their use is purely optional. You can also add type hints directly in the .py module like the following: def myfunction(name: str) -> str: return \"Hello \" + name But there are some cases where you want to keep them separate in stubs: You want to keep your code Python 2 compatible and don't like the # type: ... comment syntax You use function annotations for something else but still want to use type-hints You are adding type-hints into an existing code-base and want to keep code-churn in existing files minimal"} +{"question_id": 55627780, "score": 79, "creation_date": 1554970615, "tags": ["python", "machine-learning", "deep-learning", "pytorch", "autograd"], "instruction": "Evaluating pytorch models: `with torch.no_grad` vs `model.eval()`\n\nWhen I want to evaluate the performance of my model on the validation set, is it preferred to use with torch.no_grad: or model.eval()?", "output": "TL;DR: Use both. They do different things, and have different scopes. 
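Added for illustration (not part of the original answer): a minimal evaluation sketch, assuming a trained model and a val_loader of batches, is model.eval() with torch.no_grad(): for xb, yb in val_loader: preds = model(xb) and then model.train() before resuming training; what each of the two calls actually does is explained below.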
with torch.no_grad - disables tracking of gradients in autograd. model.eval() changes the forward() behaviour of the module it is called upon eg, it disables dropout and has batch norm use the entire population statistics with torch.no_grad The torch.autograd.no_grad documentation says: Context-manager that disabled [sic] gradient calculation. Disabling gradient calculation is useful for inference, when you are sure that you will not call Tensor.backward(). It will reduce memory consumption for computations that would otherwise have requires_grad=True. In this mode, the result of every computation will have requires_grad=False, even when the inputs have requires_grad=True. model.eval() The nn.Module.eval documentation says: Sets the module in evaluation mode. This has any effect only on certain modules. See documentations of particular modules for details of their behaviors in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc. The creator of pytorch said the documentation should be updated to suggest the usage of both, and I raised the pull request."} +{"question_id": 7152441, "score": 79, "creation_date": 1314040825, "tags": ["python", "comparison", "boolean", "readability"], "instruction": "Python: if not val, vs if val is None\n\nI've always coded in the style of if not value, however, a few guides have brought to my attention that while this style works, it seems to have 2 potential problems: It's not completely readable; if value is None is surely more understandable. This can have implications later (and cause subtle bugs), since things like [] and 0 will evaluate to False as well. I am also starting to apply this idea to other comparisons, such as: if not value vs if value is False if not value vs if value is [] And so goes the list... The question is, how far do you go with the principle? Where to draw the line, while keeping your code safe? Should I always use the if value is None style no matter what?", "output": "Use a comparison to None if that's what you want. Use if not value if you just want to check if the value is considered False (empty list, None, False). I find if not value to be cleaner looking and Pythonic. Also, be careful with lists. You should not use is when comparing for an empty list. If you know you're getting a list, use if to check if it has any contents (or len()). Try typing this into the interpreter: >>> a = [] >>> a is [] False This is because the temporary list you just made has a different address in memory than the one stored at 'a'. You don't see this with None, False, or True because these are all values that are singletons (they all refer to the same section of memory) so using the is keyword works. You'll also find that CPython interns strings so the following works. >>> 'a' is 'a' True You should not rely on this. It is an implementation detail and this is not specified to work with every version of Python."} +{"question_id": 20920956, "score": 79, "creation_date": 1388838415, "tags": ["python", "syntax"], "instruction": "Python, what does an underscore before parenthesis do\n\nLooking through some of the Django code in authentication forms I noticed the following syntax: label=_(\"Username\") Normally I would have just used a pair of quotes around the string. Can someone explain what the underscore and parenthesis around \"Username\" do?", "output": "The _ is the name of a callable (function, callable object). 
It's usually used for the gettext function, for example in Django: from django.utils.translation import gettext as _ print _(\"Hello!\") # Will print Hello! if the current language is English # \"Bonjour !\" in French # \u00a1Hol\u00e0! in Spanish, etc. As the doc says: Python\u2019s standard library gettext module installs _() into the global namespace, as an alias for gettext(). In Django, we have chosen not to follow this practice, for a couple of reasons: [...] The underscore character (_) is used to represent \u201cthe previous result\u201d in Python\u2019s interactive shell and doctest tests. Installing a global _() function causes interference. Explicitly importing gettext() as _() avoids this problem. Even if it's a convention, it may not be the case in your code. But be reassured, 99.9% of the time _ is an alias for gettext :)"} +{"question_id": 3467526, "score": 79, "creation_date": 1281615401, "tags": ["python", "class", "oop", "decorator", "class-method"], "instruction": "Attaching a decorator to all functions within a class\n\nIs there a way to bind a decorator to all functions within a class generically, rather than explicitly stating it for every function? I suppose it then becomes a kind of aspect, rather than a decorator and it does feel a bit odd, but was thinking for something like timing or auth it'd be pretty neat.", "output": "The cleanest way to do this, or to do other modifications to a class definition, is to define a metaclass. Alternatively, just apply your decorator at the end of the class definition using inspect: import inspect class Something: def foo(self): pass for name, fn in inspect.getmembers(Something, inspect.isfunction): setattr(Something, name, decorator(fn)) In practice of course you'll want to apply your decorator more selectively. As soon as you want to decorate all but one method you'll discover that it is easier and more flexible just to use the decorator syntax in the traditional way."} +{"question_id": 53452152, "score": 79, "creation_date": 1543001819, "tags": ["python", "pipenv"], "instruction": "What is pipenv [dev-packages] section for?\n\nI'm learning pipenv and there's something I don't quite understand. Apparently your Pipfile can contain two sections: [packages] ... [dev-packages] .... As I understand it, the packages section is where the packages that you install are listed. But what is the dev-packages section for? How is it different from the packages section?", "output": "This section is for development requirements. So stuff like linters, unit test libraries, etc. All that is not needed on the user's machine. To install a package as dev-requirement add -d to install command (i.e. pipenv install -d ...), to install the dev-requirements section add -d to sync command (i.e. pipenv sync -d ...)."} +{"question_id": 60047685, "score": 79, "creation_date": 1580766774, "tags": ["python", "validation", "design-patterns", "model", "pydantic"], "instruction": "Is it bad practice to include non-validating methods in a pydantic model?\n\nI'm using pydantic 1.3 to validate models for an API I am writing. Is it common/good practice to include arbitrary methods in a class that inherits from pydantic.BaseModel? I need some helper methods associated with the objects and I am trying to decide whether I need a \"handler\" class. These models are being converted to JSON and sent to a restful service that I am also writing. 
My model looks like this: class Foo(pydantic.BaseModel): name: str bar: int baz: int Is it poor practice to do something like: class Foo(pydantic.BaseModel): name: str bar: int baz: int def add_one(self): self.bar += 1 It makes some sense to me, but I can't find an example of anyone doing this.", "output": "Yes, it's fine. We should probably document it. The only problem comes when you have a field name which conflicts with the method, but that's not a problem if you know what your data looks like. Also, it's possible to over object orient your code, but you're a long way from that."} +{"question_id": 111983, "score": 79, "creation_date": 1222028582, "tags": ["python", "arrays", "numpy"], "instruction": "array.array versus numpy.array\n\nIf you are creating a 1d array in Python, is there any benefit to using the NumPy package?", "output": "It all depends on what you plan to do with the array. If all you're doing is creating arrays of simple data types and doing I/O, the array module will do just fine. If, on the other hand, you want to do any kind of numerical calculations, the array module doesn't provide any help with that. NumPy (and SciPy) give you a wide variety of operations between arrays and special functions that are useful not only for scientific work but for things like advanced image manipulation or in general anything where you need to perform efficient calculations with large amounts of data. Numpy is also much more flexible, e.g. it supports arrays of any type of Python objects, and is also able to interact \"natively\" with your own objects if they conform to the array interface."} +{"question_id": 13249415, "score": 79, "creation_date": 1352199049, "tags": ["python", "json", "indentation"], "instruction": "How to implement custom indentation when pretty-printing with the JSON module?\n\nSo I'm using Python 2.7, using the json module to encode the following data structure: 'layer1': { 'layer2': { 'layer3_1': [ long_list_of_stuff ], 'layer3_2': 'string' } } My problem is that I'm printing everything out using pretty printing, as follows: json.dumps(data_structure, indent=2) Which is great, except I want to indent it all, except for the content in \"layer3_1\" \u2014 It's a massive dictionary listing coordinates, and as such, having a single value set on each one makes pretty printing create a file with thousands of lines, with an example as follows: { \"layer1\": { \"layer2\": { \"layer3_1\": [ { \"x\": 1, \"y\": 7 }, { \"x\": 0, \"y\": 4 }, { \"x\": 5, \"y\": 3 }, { \"x\": 6, \"y\": 9 } ], \"layer3_2\": \"string\" } } } What I really want is something similar to the following: { \"layer1\": { \"layer2\": { \"layer3_1\": [{\"x\":1,\"y\":7},{\"x\":0,\"y\":4},{\"x\":5,\"y\":3},{\"x\":6,\"y\":9}], \"layer3_2\": \"string\" } } } I hear it's possible to extend the json module: Is it possible to set it to only turn off indenting when inside the \"layer3_1\" object? If so, would somebody please tell me how?", "output": "(Note: The code in this answer only works with json.dumps() which returns a JSON formatted string, but not with json.dump() which writes directly to file-like objects. There's a modified version of it that works with both in my answer to the question Write two-dimensional list to JSON file.) Updated Below is a version of my original answer that has been revised several times. Unlike the original, which I posted only to show how to get the first idea in J.F.Sebastian's answer to work, and which like his, returned a non-indented string representation of the object. 
The latest updated version returns the Python object JSON formatted in isolation. The keys of each coordinate dict will appear in sorted order, as per one of the OP's comments, but only if a sort_keys=True keyword argument is specified in the initial json.dumps() call driving the process, and it no longer changes the object's type to a string along the way. In other words, the actual type of the \"wrapped\" object is now maintained. I think not understanding the original intent of my post resulted in number of folks downvoting it\u2014so, primarily for that reason, I have \"fixed\" and improved my answer several times. The current version is a hybrid of my original answer coupled with some of the ideas @Erik Allik used in his answer, plus useful feedback from other users shown in the comments below this answer. The following code appears to work unchanged in both Python 2.7.16 and 3.7.4. from _ctypes import PyObj_FromPtr import json import re class NoIndent(object): \"\"\" Value wrapper. \"\"\" def __init__(self, value): self.value = value class MyEncoder(json.JSONEncoder): FORMAT_SPEC = '@@{}@@' regex = re.compile(FORMAT_SPEC.format(r'(\\d+)')) def __init__(self, **kwargs): # Save copy of any keyword argument values needed for use here. self.__sort_keys = kwargs.get('sort_keys', None) super(MyEncoder, self).__init__(**kwargs) def default(self, obj): return (self.FORMAT_SPEC.format(id(obj)) if isinstance(obj, NoIndent) else super(MyEncoder, self).default(obj)) def encode(self, obj): format_spec = self.FORMAT_SPEC # Local var to expedite access. json_repr = super(MyEncoder, self).encode(obj) # Default JSON. # Replace any marked-up object ids in the JSON repr with the # value returned from the json.dumps() of the corresponding # wrapped Python object. for match in self.regex.finditer(json_repr): # see https://stackoverflow.com/a/15012814/355230 id = int(match.group(1)) no_indent = PyObj_FromPtr(id) json_obj_repr = json.dumps(no_indent.value, sort_keys=self.__sort_keys) # Replace the matched id string with json formatted representation # of the corresponding Python object. 
json_repr = json_repr.replace( '\"{}\"'.format(format_spec.format(id)), json_obj_repr) return json_repr if __name__ == '__main__': from string import ascii_lowercase as letters data_structure = { 'layer1': { 'layer2': { 'layer3_1': NoIndent([{\"x\":1,\"y\":7}, {\"x\":0,\"y\":4}, {\"x\":5,\"y\":3}, {\"x\":6,\"y\":9}, {k: v for v, k in enumerate(letters)}]), 'layer3_2': 'string', 'layer3_3': NoIndent([{\"x\":2,\"y\":8,\"z\":3}, {\"x\":1,\"y\":5,\"z\":4}, {\"x\":6,\"y\":9,\"z\":8}]), 'layer3_4': NoIndent(list(range(20))), } } } print(json.dumps(data_structure, cls=MyEncoder, sort_keys=True, indent=2)) Output: { \"layer1\": { \"layer2\": { \"layer3_1\": [{\"x\": 1, \"y\": 7}, {\"x\": 0, \"y\": 4}, {\"x\": 5, \"y\": 3}, {\"x\": 6, \"y\": 9}, {\"a\": 0, \"b\": 1, \"c\": 2, \"d\": 3, \"e\": 4, \"f\": 5, \"g\": 6, \"h\": 7, \"i\": 8, \"j\": 9, \"k\": 10, \"l\": 11, \"m\": 12, \"n\": 13, \"o\": 14, \"p\": 15, \"q\": 16, \"r\": 17, \"s\": 18, \"t\": 19, \"u\": 20, \"v\": 21, \"w\": 22, \"x\": 23, \"y\": 24, \"z\": 25}], \"layer3_2\": \"string\", \"layer3_3\": [{\"x\": 2, \"y\": 8, \"z\": 3}, {\"x\": 1, \"y\": 5, \"z\": 4}, {\"x\": 6, \"y\": 9, \"z\": 8}], \"layer3_4\": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19] } } }"} +{"question_id": 21524482, "score": 78, "creation_date": 1391422342, "tags": ["python", "trac", "psycopg2"], "instruction": "psycopg2 TypeError: not all arguments converted during string formatting\n\nI'm trying execute a simple query, but getting this error no matter how I pass the parameters. Here is the query (I'm using Trac db object to connect to a DB): cursor.execute(\"\"\"SELECT name FROM \"%s\".customer WHERE firm_id='%s'\"\"\" % (schema, each['id'])) schema and each['id'] both are simple strings print(\"\"\"SELECT name FROM \"%s\".customer WHERE firm_id='%s'\"\"\" % (schema, each['id'])) Result: SELECT name FROM \"Planing\".customer WHERE firm_id='135' There is on error is a remove quote after firm_id=, but that way parameter is treated a an integer and ::text leads to the very same error.", "output": "It is recommended to not use string interpolation for passing variables in database queries, but using string interpolation to set the table name is fine as long as it's not an external input or you restrict the allowed value. Try: cursor.execute(\"\"\" SELECT name FROM %s.customer WHERE firm_id=%%s \"\"\" % schema, (each['id'],)) Rules for DB API usage provides guidance for programming against the database."} +{"question_id": 58422817, "score": 78, "creation_date": 1571267575, "tags": ["python", "windows", "jupyter-notebook", "tornado"], "instruction": "Jupyter Notebook with Python 3.8 - NotImplementedError\n\nUpgraded recently to Python 3.8, and installed jupyter. However, when trying to run jupyter notebook getting the following error: File \"c:\\users\\user\\appdata\\local\\programs\\python\\python38\\lib\\site-packages\\tornado\\platform\\asyncio.py\", line 99, in add_handler self.asyncio_loop.add_reader(fd, self._handle_events, fd, IOLoop.READ) File \"c:\\users\\user\\appdata\\local\\programs\\python\\python38\\lib\\asyncio\\events.py\", line 501, in add_reader raise NotImplementedError NotImplementedError I know Python 3.8 on windows switched to ProactorEventLoop by default, so I suspect it is related to this. Jupyter does not support Python 3.8 at the moment? Is there a work around?", "output": "EDIT This issue exists in older versions of Jupyter Notebook and was fixed in version 6.0.3 (released 2020-01-21). 
To upgrade to the latest version run: pip install notebook --upgrade Following on this issue through GitHub, it seems the problem is related to the tornado server that jupyter uses. For those that can't wait for an official fix, I was able to get it working by editing the file tornado/platform/asyncio.py, by adding: import sys if sys.platform == 'win32': asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy()) After the main imports. I expect an official fix for this soon, however."} +{"question_id": 56370720, "score": 78, "creation_date": 1559181628, "tags": ["python", "google-cloud-platform", "airflow", "google-cloud-composer"], "instruction": "How to control the parallelism or concurrency of an Airflow installation?\n\nIn some of my Apache Airflow installations, DAGs or tasks that are scheduled to run do not run even when the scheduler doesn't appear to be fully loaded. How can I increase the number of DAGs or tasks that can run concurrently? Similarly, if my installation is under high load and I want to limit how quickly my Airflow workers pull queued tasks (such as to reduce resource consumption), what can I adjust to reduce the average load?", "output": "Here's an expanded list of configuration options that are available since Airflow v1.10.2. Some can be set on a per-DAG or per-operator basis, but may also fall back to the setup-wide defaults when they are not specified. Options that can be specified on a per-DAG basis: concurrency: the number of task instances allowed to run concurrently across all active runs of the DAG this is set on. Defaults to core.dag_concurrency if not set max_active_runs: maximum number of active runs for this DAG. The scheduler will not create new active DAG runs once this limit is hit. Defaults to core.max_active_runs_per_dag if not set Examples: # Only allow one run of this DAG to be running at any given time dag = DAG('my_dag_id', max_active_runs=1) # Allow a maximum of 10 tasks to be running across a max of 2 active DAG runs dag = DAG('example2', concurrency=10, max_active_runs=2) Options that can be specified on a per-operator basis: pool: the pool to execute the task in. Pools can be used to limit parallelism for only a subset of tasks max_active_tis_per_dag: controls the number of concurrent running task instances across dag_runs per task. 
Example: t1 = BaseOperator(pool='my_custom_pool', max_active_tis_per_dag=12) Options that are specified across an entire Airflow setup: core.parallelism: maximum number of tasks running across an entire Airflow installation core.dag_concurrency: max number of tasks that can be running per DAG (across multiple DAG runs) core.non_pooled_task_slot_count: number of task slots allocated to tasks not running in a pool core.max_active_runs_per_dag: maximum number of active DAG runs, per DAG scheduler.max_threads: how many threads the scheduler process should use to use to schedule DAGs celery.worker_concurrency: max number of task instances that a worker will process at a time if using CeleryExecutor celery.sync_parallelism: number of processes CeleryExecutor should use to sync task state"} +{"question_id": 68836551, "score": 78, "creation_date": 1629306091, "tags": ["python", "keras"], "instruction": "Keras AttributeError: 'Sequential' object has no attribute 'predict_classes'\n\nIm attempting to find model performance metrics (F1 score, accuracy, recall) following this guide https://machinelearningmastery.com/how-to-calculate-precision-recall-f1-and-more-for-deep-learning-models/ This exact code was working a few months ago but now returning all sorts of errors, very confusing since i havent changed one character of this code. Maybe a package update has changed things? I fit the sequential model with model.fit, then used model.evaluate to find test accuracy. Now i am attempting to use model.predict_classes to make class predictions (model is a multi-class classifier). Code shown below: model = Sequential() model.add(Dense(24, input_dim=13, activation='relu')) model.add(Dense(18, activation='relu')) model.add(Dense(6, activation='softmax')) model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) - history = model.fit(X_train, y_train, batch_size = 256, epochs = 10, verbose = 2, validation_split = 0.2) - score, acc = model.evaluate(X_test, y_test,verbose=2, batch_size= 256) print('test accuracy:', acc) - yhat_classes = model.predict_classes(X_test) last line returns error \"AttributeError: 'Sequential' object has no attribute 'predict_classes'\" This exact code was working not long ago so struggling a bit, thanks for any help", "output": "This function was removed in TensorFlow version 2.6. According to the keras in rstudio reference update to predict_x=model.predict(X_test) classes_x=np.argmax(predict_x,axis=1) Or use TensorFlow 2.5.x . If you are using TensorFlow version 2.5, you will receive the following warning: tensorflow\\python\\keras\\engine\\sequential.py:455: UserWarning: model.predict_classes() is deprecated and will be removed after 2021-01-01. Please use instead:* np.argmax(model.predict(x), axis=-1), if your model does multi-class classification (e.g. if it uses a softmax last-layer activation).* (model.predict(x) > 0.5).astype(\"int32\"), if your model does binary classification (e.g. if it uses a sigmoid last-layer activation)."} +{"question_id": 48929553, "score": 78, "creation_date": 1519308779, "tags": ["python", "macos", "python-2.7", "hard-drive"], "instruction": "Get hard disk size in Python\n\nI am trying to get the hard drive size and free space using Python (I am using Python 2.7 with macOS). I am trying with os.statvfs('/'), especially with the following code. Is it correct what I am doing? Which definition of the variable giga shall I use? 
import os def get_machine_storage(): result=os.statvfs('/') block_size=result.f_frsize total_blocks=result.f_blocks free_blocks=result.f_bfree # giga=1024*1024*1024 giga=1000*1000*1000 total_size=total_blocks*block_size/giga free_size=free_blocks*block_size/giga print('total_size = %s' % total_size) print('free_size = %s' % free_size) get_machine_storage() EDIT: statvfs is deprecated in Python 3, do you know any alternative?", "output": "For Python 2 till Python 3.3 Note: As a few people mentioned in the comment section, this solution will work for Python 3.3 and above. For Python 2.7 it is best to use the psutil library, which has a disk_usage function, containing information about total, used and free disk space: import psutil hdd = psutil.disk_usage('/') print (\"Total: %d GiB\" % hdd.total / (2**30)) print (\"Used: %d GiB\" % hdd.used / (2**30)) print (\"Free: %d GiB\" % hdd.free / (2**30)) Python 3.3 and above: For Python 3.3 and above, you can use the shutil module, which has a disk_usage function, returning a named tuple with the amounts of total, used and free space in your hard drive. You can call the function as below and get all information about your disk's space: import shutil total, used, free = shutil.disk_usage(\"/\") print(\"Total: %d GiB\" % (total // (2**30))) print(\"Used: %d GiB\" % (used // (2**30))) print(\"Free: %d GiB\" % (free // (2**30))) Output: Total: 931 GiB Used: 29 GiB Free: 902 GiB"} +{"question_id": 14655969, "score": 78, "creation_date": 1359757042, "tags": ["python", "opencv"], "instruction": "OpenCV v1/v2 error: the function is not implemented\n\nI'm trying to get OpenCV working with Python on my Ubuntu machine. I've downloaded and installed OpenCV, but when I attempt to run the following python code (which should capture images from a webcam and push them to the screen) import cv cv.NamedWindow(\"w1\", cv.CV_WINDOW_AUTOSIZE) capture = cv.CaptureFromCAM(0) def repeat(): frame = cv.QueryFrame(capture) cv.ShowImage(\"w1\", frame) time.sleep(10) while True: repeat() I get the following error: The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or Carbon support. If you are on Ubuntu or Debian, install libgtk2.0-dev and pkg-config, then re-run cmake or configure script So I do what they ask: install the packages, move to the folder from whence I installed OpenCV, and run sudo make uninstall make sudo make install But when I try to run the python, it gives me the same error. Am I missing something?", "output": "If it's giving you errors with gtk, try qt. sudo apt-get install libqt4-dev cmake -D WITH_QT=ON .. make sudo make install If this doesn't work, there's an easy way out. sudo apt-get install libopencv-* This will download all the required dependencies(although it seems that you have all the required libraries installed, but still you could try it once). This will probably install OpenCV 2.3.1 (Ubuntu 12.04). But since you have OpenCV 2.4.3 in /usr/local/lib include this path in /etc/ld.so.conf and do ldconfig. So now whenever you use OpenCV, you'd use the latest version. This is not the best way to do it but if you're still having problems with qt or gtk, try this once. This should work. Update - 18th Jun 2019 I got this error on my Ubuntu(18.04.1 LTS) system for openCV 3.4.2, as the method call to cv2.imshow was failing (e.g., at the line of cv2.namedWindow(name) with error: cv2.error: OpenCV(3.4.2). The function is not implemented.). I am using anaconda. 
Just the below 2 steps helped me resolve: conda remove opencv conda install -c conda-forge opencv=4.1.0 If you are using pip, you can try pip install opencv-contrib-python"} +{"question_id": 44210656, "score": 78, "creation_date": 1495836002, "tags": ["python", "python-3.x", "module", "python-module"], "instruction": "How to check if a module is installed in Python and, if not, install it within the code?\n\nI would like to install the modules 'mutagen' and 'gTTS' for my code, but I want to have it so it will install the modules on every computer that doesn't have them, but it won't try to install them if they're already installed. I currently have: def install(package): pip.main(['install', package]) install('mutagen') install('gTTS') from gtts import gTTS from mutagen.mp3 import MP3 However, if you already have the modules, this will just add unnecessary clutter to the start of the program whenever you open it.", "output": "EDIT - 2020/02/03 The pip module has updated quite a lot since the time I posted this answer. I've updated the snippet with the proper way to install a missing dependency, which is to use subprocess and pkg_resources, and not pip. To hide the output, you can redirect the subprocess output to devnull: import sys import subprocess import pkg_resources required = {'mutagen', 'gTTS'} installed = {pkg.key for pkg in pkg_resources.working_set} missing = required - installed if missing: python = sys.executable subprocess.check_call([python, '-m', 'pip', 'install', *missing], stdout=subprocess.DEVNULL) Like @zwer mentioned, the above works, although it is not seen as a proper way of packaging your project. To look at this in better depth, read the the page How to package a Python App."} +{"question_id": 50725888, "score": 78, "creation_date": 1528304787, "tags": ["python", "visual-studio-code", "python-black"], "instruction": "VS Code Python + Black formatter arguments - python.formatting.blackArgs\n\nApril 2024 UPDATE, Please read: It appears the answer since late 2023 is the newly accepted answer provided by Mircea, this question is old so older, now incorrect answers, have more upvotes but please scroll down for that answer. original question: I'm using the May 2018 Python extension (released June 2018) for VS Code 1.23.1 on Windows, python 3.6 via Anaconda, conda installing black from conda-forge into my conda environment. In my user settings.json I have the below: \"python.formatting.blackArgs\": [ \"--line-length 80\" ], which I'd think would be the correct way to structure this to pass arguments to black in VS Code Python formatting. However, in my python Output pane I get the below: Formatting with black failed. Error: Error: no such option: --line-length 80 If I edit my settings.json to be no args, such as: \"python.formatting.blackArgs\": [], black works as expected. Does anyone know how to pass arguments correctly to the new (as of June 2018) black formatter?", "output": "Hopefully the answer for a more recent VSCode and Black helps: With VsCode Insiders (1.83.0-insider) and Black installed from extensions (v2023.4.1), installed from extensions (https://marketplace.visualstudio.com/items?itemName=ms-python.black-formatter) I had to add -l or --line-length and 80 as separate items to File->Preferences->Settings->[type]black->Black-formatter: Args->Add item. 
In user settings json (Ctrl + Shift + P --> Open User Settings) I have: \"black-formatter.args\": [\"--line-length\", \"80\"] If this doesn't work, there's useful information in the Output Window (you can select Black Formatter) to see the logs from Black."} +{"question_id": 35775207, "score": 78, "creation_date": 1457016150, "tags": ["python", "flask", "jinja2"], "instruction": "Remove unnecessary whitespace from Jinja rendered template\n\nI'm using curl to watch the output of my web app. When Flask and Jinja render templates, there's a lot of unnecessary white space in the output. It seems to be added by rendering various components from Flask-WTF and Flask-Bootstrap. I could strip this using sed, but is there a way to control this from Jinja?", "output": "Jinja has multiple ways to control whitespace. It does not have a way to prettify output, you have to manually make sure everything looks \"nice\". The broadest solution is to set trim_blocks and lstrip_blocks on the env. app.jinja_env.trim_blocks = True app.jinja_env.lstrip_blocks = True If you want to keep a newline at the end of the file, set strip_trailing_newlines = False. You can use control characters to modify how the whitespace around a tag works. - always removes whitespace, + always preserves it, overriding the env settings for that tag. The - character can go at the beginning or end (or both) of a tag to control the whitespace in that direction, the + character only makes sense at the beginning of a tag. {%- if ... %} strips before {%- if ... -%} strips before and after {%+ if ... %} preserves before {%+ if ... -%} preserves before and strips after remember that {% endif %} is treated separately The control characters only apply to templates you write. If you include a template or use a macro from a 3rd party, however they wrote the template will apply to that part."} +{"question_id": 1466741, "score": 78, "creation_date": 1253719693, "tags": ["python", "postgresql", "psycopg2"], "instruction": "Parameterized queries with psycopg2 / Python DB-API and PostgreSQL\n\nWhat's the best way to make psycopg2 pass parameterized queries to PostgreSQL? I don't want to write my own escpaing mechanisms or adapters and the psycopg2 source code and examples are difficult to read in a web browser. If I need to switch to something like PyGreSQL or another python pg adapter, that's fine with me. I just want simple parameterization.", "output": "psycopg2 follows the rules for DB-API 2.0 (set down in PEP-249). That means you can call execute method from your cursor object and use the pyformat binding style, and it will do the escaping for you. For example, the following should be safe (and work): cursor.execute(\"SELECT * FROM students WHERE last_name = %(lname)s\", {\"lname\": \"Robert'); DROP TABLE students;--\"}) Edit: tekHedd's comment rightly points out that the SELECT and the DROP TABLE used different table names, so I fixed it."} +{"question_id": 38191855, "score": 78, "creation_date": 1467664281, "tags": ["python", "numpy", "numpy-ndarray", "zero-padding"], "instruction": "Zero pad numpy array\n\nWhat's the more pythonic way to pad an array with zeros at the end? def pad(A, length): ... A = np.array([1,2,3,4,5]) pad(A, 8) # expected : [1,2,3,4,5,0,0,0] In my real use case, in fact I want to pad an array to the closest multiple of 1024. Ex: 1342 => 2048, 3000 => 3072", "output": "For your use case you can use resize() method: A = np.array([1,2,3,4,5]) A.resize(8) This resizes A in place. 
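Added aside (not part of the original answer): for the pad-to-the-next-multiple-of-1024 case mentioned in the question, np.pad returns a zero-padded copy instead of resizing in place, e.g. padded = np.pad(A, (0, -len(A) % 1024), mode='constant').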
If there are refs to A, numpy's resize throws a ValueError because the referenced value would be updated too. To allow this, add the refcheck=False option. The documentation states that missing values will be 0: Enlarging an array: as above, but missing entries are filled with zeros"} +{"question_id": 7406102, "score": 78, "creation_date": 1315935652, "tags": ["python", "filenames"], "instruction": "Create (sane/safe) filename from any (unsafe) string\n\nI want to create a sane/safe filename (i.e. somewhat readable, no \"strange\" characters, etc.) from some random Unicode string (which might contain just anything). (It doesn't matter for me whether the function is Cocoa, ObjC, Python, etc.) Of course, there might be infinitely many characters which might be strange. Thus, it is not really a solution to have a blacklist and to add more and more to that list over the time. I could have a whitelist. However, I don't really know how to define it. [a-zA-Z0-9 .] is a start but I also want to accept unicode chars which can be displayed in a normal way.", "output": "Python: \"\".join(c for c in filename if c.isalpha() or c.isdigit() or c==' ').rstrip() this accepts Unicode characters but removes line breaks, etc. example: filename = u\"ad\\nbla'{-+\\)(\u00e7?\" gives: adbla\u00e7 edit str.isalnum() does alphanumeric on one step. \u2013 comment from queueoverflow below. danodonovan hinted on keeping a dot included. keepcharacters = (' ','.','_') \"\".join(c for c in filename if c.isalnum() or c in keepcharacters).rstrip()"} +{"question_id": 50788508, "score": 78, "creation_date": 1528668772, "tags": ["python", "pandas", "dataframe", "repeat"], "instruction": "How can I replicate rows of a Pandas DataFrame?\n\nMy pandas dataframe looks like this: Person ID ZipCode Gender 0 12345 882 38182 Female 1 32917 271 88172 Male 2 18273 552 90291 Female I want to replicate every row 3 times and reset the index to get: Person ID ZipCode Gender 0 12345 882 38182 Female 1 12345 882 38182 Female 2 12345 882 38182 Female 3 32917 271 88172 Male 4 32917 271 88172 Male 5 32917 271 88172 Male 6 18273 552 90291 Female 7 18273 552 90291 Female 8 18273 552 90291 Female I tried solutions such as: pd.concat([df[:5]]*3, ignore_index=True) which adds the rows to the end of the dataframe, instead of having 3 duplicate lines one after the other And: df.reindex(np.repeat(df.index.values, df['ID']), method='ffill') But none of them worked.", "output": "Solutions: Use np.repeat: Version 1: Try using np.repeat: newdf = pd.DataFrame(np.repeat(df.values, 3, axis=0)) newdf.columns = df.columns print(newdf) The above code will output: Person ID ZipCode Gender 0 12345 882 38182 Female 1 12345 882 38182 Female 2 12345 882 38182 Female 3 32917 271 88172 Male 4 32917 271 88172 Male 5 32917 271 88172 Male 6 18273 552 90291 Female 7 18273 552 90291 Female 8 18273 552 90291 Female np.repeat repeats the values of df, 3 times. Then we add the columns with assigning newdf.columns = df.columns.
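Added caveat (not part of the original answer): df.values on a frame with mixed column types is a single object-dtype array, so the np.repeat-based Versions 1 and 2 rebuild every column as dtype object; the loc- and concat-based versions below keep the original dtypes.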
Version 2: You could also assign the column names in the first line, like below: newdf = pd.DataFrame(np.repeat(df.values, 3, axis=0), columns=df.columns) print(newdf) The above code will also output: Person ID ZipCode Gender 0 12345 882 38182 Female 1 12345 882 38182 Female 2 12345 882 38182 Female 3 32917 271 88172 Male 4 32917 271 88172 Male 5 32917 271 88172 Male 6 18273 552 90291 Female 7 18273 552 90291 Female 8 18273 552 90291 Female Version 3: You could shorten it with loc and only repeat the index, like below: newdf = df.loc[np.repeat(df.index, 3)].reset_index(drop=True) print(newdf) The above code will also output: Person ID ZipCode Gender 0 12345 882 38182 Female 1 12345 882 38182 Female 2 12345 882 38182 Female 3 32917 271 88172 Male 4 32917 271 88172 Male 5 32917 271 88172 Male 6 18273 552 90291 Female 7 18273 552 90291 Female 8 18273 552 90291 Female I use reset_index to replace the index with monotonic indexes (0, 1, 2, 3, 4...). Without np.repeat: Version 4: You could use the built-in pd.Index.repeat function, like the below: newdf = df.loc[df.index.repeat(3)].reset_index(drop=True) print(newdf) The above code will also output: Person ID ZipCode Gender 0 12345 882 38182 Female 1 12345 882 38182 Female 2 12345 882 38182 Female 3 32917 271 88172 Male 4 32917 271 88172 Male 5 32917 271 88172 Male 6 18273 552 90291 Female 7 18273 552 90291 Female 8 18273 552 90291 Female Remember to add reset_index to line-up the index. Version 5: Or by using concat with sort_index, like below: newdf = pd.concat([df] * 3).sort_index().reset_index(drop=True) print(newdf) The above code will also output: Person ID ZipCode Gender 0 12345 882 38182 Female 1 12345 882 38182 Female 2 12345 882 38182 Female 3 32917 271 88172 Male 4 32917 271 88172 Male 5 32917 271 88172 Male 6 18273 552 90291 Female 7 18273 552 90291 Female 8 18273 552 90291 Female Version 6: You could also use loc with Python list multiplication and sorted, like below: newdf = df.loc[sorted([*df.index] * 3)].reset_index(drop=True) print(newdf) The above code will also output: Person ID ZipCode Gender 0 12345 882 38182 Female 1 12345 882 38182 Female 2 12345 882 38182 Female 3 32917 271 88172 Male 4 32917 271 88172 Male 5 32917 271 88172 Male 6 18273 552 90291 Female 7 18273 552 90291 Female 8 18273 552 90291 Female Timings: Timing with the following code: import timeit import pandas as pd import numpy as np df = pd.DataFrame({'Person': {0: 12345, 1: 32917, 2: 18273}, 'ID': {0: 882, 1: 271, 2: 552}, 'ZipCode': {0: 38182, 1: 88172, 2: 90291}, 'Gender': {0: 'Female', 1: 'Male', 2: 'Female'}}) def version1(): newdf = pd.DataFrame(np.repeat(df.values, 3, axis=0)) newdf.columns = df.columns def version2(): newdf = pd.DataFrame(np.repeat(df.values, 3, axis=0), columns=df.columns) def version3(): newdf = df.loc[np.repeat(df.index, 3)].reset_index(drop=True) def version4(): newdf = df.loc[df.index.repeat(3)].reset_index(drop=True) def version5(): newdf = pd.concat([df] * 3).sort_index().reset_index(drop=True) def version6(): newdf = df.loc[sorted([*df.index] * 3)].reset_index(drop=True) print('Version 1 Speed:', timeit.timeit('version1()', 'from __main__ import version1', number=20000)) print('Version 2 Speed:', timeit.timeit('version2()', 'from __main__ import version2', number=20000)) print('Version 3 Speed:', timeit.timeit('version3()', 'from __main__ import version3', number=20000)) print('Version 4 Speed:', timeit.timeit('version4()', 'from __main__ import version4', number=20000)) print('Version 5 Speed:', timeit.timeit('version5()', 
'from __main__ import version5', number=20000)) print('Version 6 Speed:', timeit.timeit('version6()', 'from __main__ import version6', number=20000)) Output: Version 1 Speed: 9.879425965991686 Version 2 Speed: 7.752138633004506 Version 3 Speed: 7.078321029010112 Version 4 Speed: 8.01169377300539 Version 5 Speed: 19.853051771002356 Version 6 Speed: 9.801617017001263 We can see that Versions 2 & 3 are faster than the others, the reason for this is because they both use the np.repeat function, and numpy functions are very fast because they are implemented with C. Version 3 wins against Version 2 marginally due to the usage of loc instead of DataFrame. Version 5 is significantly slower because of the functions concat and sort_index, since concat copies DataFrames quadratically, which takes longer time. Fastest Version: Version 3."} +{"question_id": 4917284, "score": 78, "creation_date": 1297039372, "tags": ["python", "extract", "unzip", "python-zipfile"], "instruction": "Extract files from zip without keeping the structure using python ZipFile?\n\nI try to extract all files from .zip containing subfolders in one folder. I want all the files from subfolders extract in only one folder without keeping the original structure. At the moment, I extract all, move the files to a folder, then remove previous subfolders. The files with same names are overwrited. Is it possible to do it before writing files? Here is a structure for example: my_zip/file1.txt my_zip/dir1/file2.txt my_zip/dir1/dir2/file3.txt my_zip/dir3/file4.txt At the end I whish this: my_dir/file1.txt my_dir/file2.txt my_dir/file3.txt my_dir/file4.txt What can I add to this code ? import zipfile my_dir = \"D:\\\\Download\\\\\" my_zip = \"D:\\\\Download\\\\my_file.zip\" zip_file = zipfile.ZipFile(my_zip, 'r') for files in zip_file.namelist(): zip_file.extract(files, my_dir) zip_file.close() if I rename files path from zip_file.namelist(), I have this error: KeyError: \"There is no item named 'file2.txt' in the archive\"", "output": "This opens file handles of members of the zip archive, extracts the filename and copies it to a target file (that's how ZipFile.extract works, without taking care of subdirectories). import os import shutil import zipfile my_dir = r\"D:\\Download\" my_zip = r\"D:\\Download\\my_file.zip\" with zipfile.ZipFile(my_zip) as zip_file: for member in zip_file.namelist(): filename = os.path.basename(member) # skip directories if not filename: continue # copy file (taken from zipfile's extract) source = zip_file.open(member) target = open(os.path.join(my_dir, filename), \"wb\") with source, target: shutil.copyfileobj(source, target)"} +{"question_id": 42581, "score": 78, "creation_date": 1220475633, "tags": ["python", "regex"], "instruction": "Python re.sub with a flag does not replace all occurrences\n\nThe Python docs say: re.MULTILINE: When specified, the pattern character '^' matches at the beginning of the string and at the beginning of each line (immediately following each newline)... By default, '^' matches only at the beginning of the string... So what's going on when I get the following unexpected result? >>> import re >>> s = \"\"\"// The quick brown fox. ... // Jumped over the lazy dog.\"\"\" >>> re.sub('^//', '', s, re.MULTILINE) ' The quick brown fox.\\n// Jumped over the lazy dog.'", "output": "Look at the definition of re.sub: re.sub(pattern, repl, string[, count, flags]) The 4th argument is the count, you are using re.MULTILINE (which is 8) as the count, not as a flag. 
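A quick way to convince yourself of what is going on — the flag constant is just an integer (8 in CPython), so in the positional call it is silently consumed as the count. A sketch:

```python
import re

print(int(re.MULTILINE))   # 8

s = "// The quick brown fox.\n// Jumped over the lazy dog."
# Positional call: MULTILINE becomes count=8, the pattern stays single-line,
# so only the '//' at the very start of the string is removed.
print(re.sub('^//', '', s, re.MULTILINE))
```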
Either use a named argument: re.sub('^//', '', s, flags=re.MULTILINE) Or compile the regex first: re.sub(re.compile('^//', re.MULTILINE), '', s)"} +{"question_id": 33496350, "score": 78, "creation_date": 1446545369, "tags": ["python", "python-3.x", "jupyter-notebook", "virtualenv", "jupyter"], "instruction": "Execute Python script within Jupyter notebook using a specific virtualenv\n\nI would like to execute a long running Python script from within a Jupyter notebook so that I can hack on the data structures generated mid-run. The script has many dependencies and command line arguments and is executed with a specific virtualenv. Is it possible to interactively run a Python script inside a notebook from a specified virtualenv (different to that of the Jupyter installation)?", "output": "Here's what worked for me (non conda python): (MacOS, brew version of python. if you are working with system python, you may (will) need prepend each command with sudo) First activate virtualenv. If starting afresh then, e.g., you could use virtualenvwrapper: $ pip install virtualenvwrapper $ mkvirtualenv -p python2 py2env $ workon py2env # This will activate virtualenv (py2env)$ # Then install jupyter within the active virtualenv (py2env)$ pip install jupyter # jupyter comes with ipykernel, but somehow you manage to get an error due to ipykernel, then for reference ipykernel package can be installed using: (py2env)$ pip install ipykernel Next, set up the kernel (py2env)$ python -m ipykernel install --user --name py2env --display-name \"Python2 (py2env)\" then start jupyter notebook (the venv need not be activated for this step) (py2env)$ jupyter notebook # or #$ jupyter notebook In the jupyter notebook dropdown menu: Kernel >> Change Kernel >> you should see Python2 (py2env) kernel. This also makes it easy to identify python version of kernel, and maintain either side by side. Here is the link to detailed docs: http://ipython.readthedocs.io/en/stable/install/kernel_install.html"} +{"question_id": 52431208, "score": 78, "creation_date": 1537468271, "tags": ["python", "sql", "postgresql", "performance", "sqlalchemy"], "instruction": "SQLAlchemy \"default\" vs \"server_default\" performance\n\nIs there a performance advantage (or disadvantage) when using default instead of server_default for mapping table column default values when using SQLAlchemy with PostgreSQL? My understanding is that default renders the expression in the INSERT (usually) and that server_default places the expression in the CREATE TABLE statement. Seems like server_default is analogous to typical handling of defaults directly in the db such as: CREATE TABLE example ( id serial PRIMARY KEY, updated timestamptz DEFAULT now() ); ...but it is not clear to me if it is more efficient to handle defaults on INSERT or via table creation. Would there be any performance improvement or degradation for row inserts if each of the default parameters in the example below were changed to server_default? 
from uuid import uuid4 from sqlalchemy import Column, Boolean, DateTime, Integer from sqlalchemy.dialects.postgresql import UUID from sqlalchemy.ext.declarative import declarative_base from sqlalchemy.sql import func Base = declarative_base() class Item(Base): __tablename__ = 'item' id = Column(UUID(as_uuid=True), primary_key=True, default=uuid4) count = Column(Integer, nullable=False, default=0) flag = Column(Boolean, nullable=False, default=False) updated = Column(DateTime(timezone=True), nullable=False, default=func.now()) NOTE: The best explanation I found so far for when to use default instead of server_default does not address performance (see Mike Bayer's SO answer on the subject). My oversimplified summary of that explanation is that default is preferred over server_default when... The db can't handle the expression you need or want to use for the default value. You can't or don't want to modify the schema directly. ...so the question remains as to whether performance should be considered when choosing between default and server_default?", "output": "It is impossible to give you a 'this is faster' answer, because performance per default value expression can vary widely, both on the server and in Python. A function to retrieve the current time behaves differently from a scalar default value. Next, you must realise that defaults can be provided in five different ways: Client-side scalar defaults. A fixed value, such as 0 or True. The value is used in an INSERT statement. Client-side Python function. Called each time a default is needed, produces the value to insert, used the same way as a scalar default from there on out. These can be context sensitive (have access to the current execution context with values to be inserted). Client-side SQL expression; this generates an extra piece of SQL expression that is then used in the query and executed on the server to produce a value. Server-side DDL expressions are SQL expressions that are then stored in the table definition, so are part of the schema. The server uses these to fill a value for any columns omitted from INSERT statements, or when a column value is set to DEFAULT in an INSERT or UPDATE statement. Server-side implicit defaults or triggers, where other DDL such as triggers or specific database features provide a default value for columns. Note that when it comes to a SQL expression determining the default value, be that a client-side SQL expression, a server-side DDL expression, or a trigger, it makes very little difference to a database where the default value expression is coming from. The query executor will need to know how to produce values for a given column; once that's parsed out of the DML statement or the schema definition, the server still has to execute the expression for each row. Choosing between these options is rarely going to be based on performance alone, performance should at most be but one of multiple aspects you consider. There are many factors involved here: default with a scalar or Python function directly produces a Python default value, then sends the new value to the server when inserting. Python code can access the default value before the data is inserted into the database. A client-side SQL expression, a server_default value, and server-side implicit defaults and triggers all have the server generate the default, which then must be fetched by the client if you want to be able to access it in the same SQLAlchemy session. You can't access the value until the object has been inserted into the database.
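Since the discussion contrasts client-side and server-side defaults, here is roughly what the model from the question looks like with the defaults moved into the table DDL — a sketch only (the class and table names are made up, and the UUID column is left out because a server-side UUID default needs a database function such as gen_random_uuid()):

```python
from sqlalchemy import Column, Boolean, DateTime, Integer, text
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.sql import func

Base = declarative_base()

class ItemServerDefaults(Base):
    # Hypothetical variant of the question's Item model.
    __tablename__ = 'item_server_defaults'
    id = Column(Integer, primary_key=True)
    # These render into CREATE TABLE as DEFAULT 0 / DEFAULT false / DEFAULT now()
    count = Column(Integer, nullable=False, server_default=text("0"))
    flag = Column(Boolean, nullable=False, server_default=text("false"))
    updated = Column(DateTime(timezone=True), nullable=False, server_default=func.now())
```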
Depending on the exact query and database support, SQLAlchemy may have to make extra SQL queries to either generate a default before the INSERT statement or run a separate SELECT afterwards to fetch the defaults that have been inserted. You can control when this happens (directly when inserting or on first access after flushing, with the eager_defaults mapper configuration). If you have multiple clients on different platforms accessing the same database, a server_default or other default attached to the schema (such as a trigger) ensures that all clients will use the same defaults, regardless, while defaults implemented in Python can't be accessed by other platforms. When using PostgreSQL, SQLAlchemy can make use of the RETURNING clause for DML statements, which gives a client access to server-side generated defaults in a single step. So when using a server_default column default that calculates a new value for each row (not a scalar value), you save a small amount of Python-side time, and save a small amount of network bandwidth as you are not sending data for that column over to the database. The database could be faster creating that same value, or it could be slower; it largely depends on the type of operation. If you need to have access to the generated default value from Python, in the same transaction, you do then have to wait for a return stream of data, parsed out by SQLAlchemy. All these details can become insignificant compared to everything else that happens around inserting or updating rows, however. Do understand that a ORM is not suitable to be used for high-performance bulk row inserts or updates; quoting from the SQAlchemy Performance FAQ entry: The SQLAlchemy ORM uses the unit of work pattern when synchronizing changes to the database. This pattern goes far beyond simple \u201cinserts\u201d of data. It includes that attributes which are assigned on objects are received using an attribute instrumentation system which tracks changes on objects as they are made, includes that all rows inserted are tracked in an identity map which has the effect that for each row SQLAlchemy must retrieve its \u201clast inserted id\u201d if not already given, and also involves that rows to be inserted are scanned and sorted for dependencies as needed. Objects are also subject to a fair degree of bookkeeping in order to keep all of this running, which for a very large number of rows at once can create an inordinate amount of time spent with large data structures, hence it\u2019s best to chunk these. Basically, unit of work is a large degree of automation in order to automate the task of persisting a complex object graph into a relational database with no explicit persistence code, and this automation has a price. ORMs are basically not intended for high-performance bulk inserts - this is the whole reason SQLAlchemy offers the Core in addition to the ORM as a first-class component. Because an ORM like SQLAlchemy comes with a hefty overhead price, any performance differences between a server-side or Python-side default quickly disappears in the noise of ORM operations. So if you are concerned about performance for large-quantity insert or update operations, you would want to use bulk operations for those, and enable the psycopg2 batch execution helpers to really get a speed boost. 
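To make the "bulk operations" point concrete, a Core-style insert hands a list of parameter dictionaries to a single executemany call; columns omitted from the dictionaries fall back to their configured defaults (Python-side ones computed per row by Core, server-side ones filled in by the database). A sketch — the connection URL is a placeholder and Item is the model from the question:

```python
from sqlalchemy import create_engine

engine = create_engine("postgresql+psycopg2://user:pass@localhost/mydb")  # placeholder URL

rows = [{"count": i, "flag": bool(i % 2)} for i in range(10_000)]

with engine.begin() as conn:
    # One executemany-style statement instead of thousands of ORM flushes;
    # columns not listed in the dictionaries fall back to their defaults.
    conn.execute(Item.__table__.insert(), rows)
```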
When using these bulk operations, I'd expect server-side defaults to improve performance just by saving bandwidth moving row data from Python to the server, but how much depends on the exact nature of the default values. If ORM insert and update performance outside of bulk operations is a big issue for you, you need to test your specific options. I'd start with the SQLAlchemy examples.performance package and add your own test suite using two models that differ only in a single server_default and default configuration."} +{"question_id": 50089498, "score": 78, "creation_date": 1525023573, "tags": ["python", "visual-studio-code"], "instruction": "How to set the root directory for Visual Studio Code Python Extension?\n\nI have no trouble running and debugging my project with VSCode Python Extension (ms-python.python), but since python sub-project root directory is not the whole project directory, all imports from my sources are underlined with red color and are listed in the problems and so Go to definition and some similar features don't work properly. How can I tell the IDE where's the start point of my project: Whole Project path: docs server entities user.py customer.py env viewer db The server directory is where the imports path are started from: from entities.user import User", "output": "You can create a .env file with: PYTHONPATH=server That will add your server folder to PYTHONPATH as needed. (You may need to restart VSCode for it to take PYTHONPATH into account correctly.) Edited to clarify... Create a file named .env under the repo root e.g. your_repo/.env. Also creating the file under the folder where your consuming code is, instead of under repo root, seems to work e.g. your_repo/service/.env. For more details, see documentation on environment variable definition files. For me this worked without restarting VSC, perhaps this is a matter of newer VSC and extensions versions."} +{"question_id": 34818723, "score": 78, "creation_date": 1452887250, "tags": ["python", "jupyter-notebook", "nbconvert"], "instruction": "export notebook to pdf without code\n\nI have a large notebook with a lot of figures and text. I want to convert it to a html file. However, I don't want to export the code. I am using the following command ipython nbconvert --to html notebook.ipynb But this option also exports the code. Is there a way to convert the notebook to html without the code?", "output": "I found this article interesting it explains how to remove the input columns : you have to create a template file named \"hidecode.tplx\" in same directory as the notebook you want to convert and add those line in it : ((*- extends 'article.tplx' -*)) ((* block input_group *)) ((*- if cell.metadata.get('nbconvert', {}).get('show_code', False) -*)) ((( super() ))) ((*- endif -*)) ((* endblock input_group *)) And after run this command it will use pdfLatex to convert the notebook in pdf via latex: jupyter nbconvert --to pdf --template hidecode Example.ipynb or if you want to edit you can convert it to a .tex document and use pdfLatex to put it in pdf : jupyter nbconvert --to latex --template hidecode Example.ipynb EDIT Sept 2018: ipython nbconvert is deprecated. It will be replaced by jupyter nbconvert: So we replace the command ipython with jupyter EDIT Feb 2021: (This is my best answer here, so let me take care of it) Following @Mrule comment adding --no-input flag will make it work without the template... 
jupyter nbconvert --to latex --no-input Example.ipynb PS: If you are getting issue saying : LaTeX error related to tcolorbox.sty not found Please refer to this guide to update your tex installation and this question"} +{"question_id": 16800375, "score": 78, "creation_date": 1369770972, "tags": ["python", "django", "django-models", "model", "primary-key"], "instruction": "How can I set two primary key fields for my models in Django?\n\nI have a model like this: class Hop(models.Model): migration = models.ForeignKey('Migration') host = models.ForeignKey(User, related_name='host_set') How can I have the primary key be the combination of migration and host?", "output": "Update Django 4.0 Django 4.0 documentation recommends using UniqueConstraint with the constraints option instead of unique_together. Use UniqueConstraint with the constraints option instead. UniqueConstraint provides more functionality than unique_together. unique_together may be deprecated in the future. class Hop(models.Model): migration = models.ForeignKey('Migration') host = models.ForeignKey(User, related_name='host_set') class Meta: constraints = [ models.UniqueConstraint( fields=['migration', 'host'], name='unique_migration_host_combination' ) ] Original Answer I would implement this slightly differently. I would use a default primary key (auto field), and use the meta class property, unique_together class Hop(models.Model): migration = models.ForeignKey('Migration') host = models.ForeignKey(User, related_name='host_set') class Meta: unique_together = ((\"migration\", \"host\"),) It would act as a \"surrogate\" primary key column. If you really want to create a multi-column primary key, look into this app"} +{"question_id": 922774, "score": 78, "creation_date": 1243539321, "tags": ["python", "typechecking", "python-2.5"], "instruction": "Check if input is a list/tuple of strings or a single string\n\nI've a method that I want to be able to accept either a single string (a path, but not necessarily one that exists on the machine running the code) or a list/tuple of strings. Given that strings act as lists of characters, how can I tell which kind the method has received? I'd like to be able to accept either standard or unicode strings for a single entry, and either lists or tuples for multiple, so isinstance doesn't seem to be the answer unless I'm missing a clever trick with it (like taking advantage of common ancestor classes?). Python version is 2.5", "output": "You can check if a variable is a string or unicode string with Python 3: isinstance(some_object, str) Python 2: isinstance(some_object, basestring) This will return True for both strings and unicode strings As you are using python 2.5, you could do something like this: if isinstance(some_object, basestring): ... elif all(isinstance(item, basestring) for item in some_object): # check iterable for stringness of all items. Will raise TypeError if some_object is not iterable ... else: raise TypeError # or something along that line Stringness is probably not a word, but I hope you get the idea"} +{"question_id": 24678144, "score": 78, "creation_date": 1404999781, "tags": ["python", "anaconda", "conda"], "instruction": "Conda environments and .BAT files\n\nI am setting up calls to python (Anaconda distribution) via BAT files and the windows task scheduler. 
I've now used environments for the first time and was trying to set a .bat file up as below: activate [my_env] python my_script.py deactivate Unfortunately it appears that the second command does not get executed.", "output": "Use the 'call' command when activating/deactivating the environment. call activate [my_env] python my_script.py call conda deactivate See https://github.com/conda/conda/issues/794"} +{"question_id": 9197324, "score": 78, "creation_date": 1328718256, "tags": ["python", "curly-braces", "parentheses", "braces"], "instruction": "What is the meaning of curly braces?\n\nJust starting to figure Python out. I've read this question and its responses: Is it true that I can't use curly braces in Python? and I still can't fathom how curly braces work, especially since pages like Simple Programs: http://wiki.python.org/moin/SimplePrograms use curly braces all over the place. I understand square brackets and regular curved parentheses, but I don't know what's meant by \"defining dictionaries\" or what they're supposed to represent.", "output": "\"Curly Braces\" are used in Python to define a dictionary. A dictionary is a data structure that maps one value to another - kind of like how an English dictionary maps a word to its definition. Python: dict = { \"a\" : \"Apple\", \"b\" : \"Banana\", } They are also used to format strings, instead of the old C style using %, like: ds = ['a', 'b', 'c', 'd'] x = ['has_{} 1'.format(d) for d in ds] print x ['has_a 1', 'has_b 1', 'has_c 1', 'has_d 1'] They are not used to denote code blocks as they are in many \"C-like\" languages. C: if (condition) { // do this } Update: In addition to Python's dict data types Python has (since Python 2.7) set as well, which uses curly braces too and are declared as follows: my_set = {1, 2, 3, 4}"} +{"question_id": 36952763, "score": 78, "creation_date": 1462005950, "tags": ["python", "neural-network", "nlp", "deep-learning", "keras"], "instruction": "How to return history of validation loss in Keras\n\nUsing Anaconda Python 2.7 Windows 10. I am training a language model using the Keras exmaple: print('Build model...') model = Sequential() model.add(GRU(512, return_sequences=True, input_shape=(maxlen, len(chars)))) model.add(Dropout(0.2)) model.add(GRU(512, return_sequences=False)) model.add(Dropout(0.2)) model.add(Dense(len(chars))) model.add(Activation('softmax')) model.compile(loss='categorical_crossentropy', optimizer='rmsprop') def sample(a, temperature=1.0): # helper function to sample an index from a probability array a = np.log(a) / temperature a = np.exp(a) / np.sum(np.exp(a)) return np.argmax(np.random.multinomial(1, a, 1)) # train the model, output generated text after each iteration for iteration in range(1, 3): print() print('-' * 50) print('Iteration', iteration) model.fit(X, y, batch_size=128, nb_epoch=1) start_index = random.randint(0, len(text) - maxlen - 1) for diversity in [0.2, 0.5, 1.0, 1.2]: print() print('----- diversity:', diversity) generated = '' sentence = text[start_index: start_index + maxlen] generated += sentence print('----- Generating with seed: \"' + sentence + '\"') sys.stdout.write(generated) for i in range(400): x = np.zeros((1, maxlen, len(chars))) for t, char in enumerate(sentence): x[0, t, char_indices[char]] = 1. 
preds = model.predict(x, verbose=0)[0] next_index = sample(preds, diversity) next_char = indices_char[next_index] generated += next_char sentence = sentence[1:] + next_char sys.stdout.write(next_char) sys.stdout.flush() print() According to Keras documentation, the model.fit method returns a History callback, which has a history attribute containing the lists of successive losses and other metrics. hist = model.fit(X, y, validation_split=0.2) print(hist.history) After training my model, if I run print(model.history) I get the error: AttributeError: 'Sequential' object has no attribute 'history' How do I return my model history after training my model with the above code? UPDATE The issue was that: The following had to first be defined: from keras.callbacks import History history = History() The callbacks option had to be called model.fit(X_train, Y_train, nb_epoch=5, batch_size=16, callbacks=[history]) But now if I print print(history.History) it returns {} even though I ran an iteration.", "output": "It's been solved. The losses only save to the History over the epochs. I was running iterations instead of using the Keras built in epochs option. so instead of doing 4 iterations I now have model.fit(......, nb_epoch = 4) Now it returns the loss for each epoch run: print(hist.history) {'loss': [1.4358016599558268, 1.399221191623641, 1.381293383180471, 1.3758836857303727]}"} +{"question_id": 15707532, "score": 78, "creation_date": 1364575343, "tags": ["python", "datetime", "attributeerror"], "instruction": "\"import datetime\" v.s. \"from datetime import datetime\"\n\nI have a script that needs to execute the following at different lines in the script: today_date = datetime.date.today() date_time = datetime.strp(date_time_string, '%Y-%m-%d %H:%M') In my import statements I have the following: from datetime import datetime import datetime I get the following error: AttributeError: 'module' object has no attribute 'strp' If I change the order of the import statements to: import datetime from datetime import datetime I get the following error: AttributeError: 'method_descriptor' object has no attribute 'today' If I again change the import statement to: import datetime I get the following error: AttributeError: 'module' object has no attribute 'strp' What is going on here and how do I get both to work?", "output": "Your trouble is that you have some code that is expecting datetime to be a reference to the datetime module and other code that is expecting datetime to be a reference to the datetime class. Obviously, it can't be both. When you do: from datetime import datetime import datetime You are first setting datetime to be a reference to the class, then immediately setting it to be a reference to the module. When you do it the other way around, it instead ends up being a reference to the class. Last assignment \"wins.\" You need to rename one of these references. For example: import datetime as dt from datetime import datetime Then you can change references in the form datetime.xxxx that refer to the module to dt.xxxx. Or else just import datetime and change all references to use the module name. In other words, if something just says datetime(...) you need to change that reference to datetime.datetime. Python has a fair bit of this kind of thing in its library, unfortunately. 
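A minimal sketch of that renaming applied to the calls from the question (the method the question abbreviates as strp is really strptime; the date string here is made up):

```python
import datetime as dt              # the module, under an alias
from datetime import datetime      # the class

today_date = dt.date.today()
date_time = datetime.strptime("2013-03-29 14:05", "%Y-%m-%d %H:%M")
print(today_date, date_time)
```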
If they followed their own naming guidelines in PEP 8, the datetime class would be named Datetime and there'd be no problem using both datetime to mean the module and Datetime to mean the class."} +{"question_id": 6825994, "score": 78, "creation_date": 1311660632, "tags": ["python", "excel"], "instruction": "check if a file is open in Python\n\nIn my app, I write to an excel file. After writing, the user is able to view the file by opening it. But if the user forgets to close the file before any further writing, a warning message should appear. So I need a way to check this file is open before the writing process. Could you supply me with some python code to do this task?", "output": "I assume that you're writing to the file, then closing it (so the user can open it in Excel), and then, before re-opening it for append/write operations, you want to check that the file isn't still open in Excel? This is how you could do that: while True: # repeat until the try statement succeeds try: myfile = open(\"myfile.csv\", \"r+\") # or \"a+\", whatever you need break # exit the loop except IOError: input(\"Could not open file! Please close Excel. Press Enter to retry.\") # restart the loop with myfile: do_stuff()"} +{"question_id": 79118841, "score": 78, "creation_date": 1729700979, "tags": ["python", "python-poetry", "uv"], "instruction": "How can I migrate from Poetry to UV package manager?\n\nI'm planning to switch from poetry to the uv Python package manager, but I can't find any migration guides. Currently, I'm using Poetry and already have a pyproject.toml file. What key(s) should be modified or added to migrate properly to uv? Here\u2019s the current pyproject.toml structure: [tool.poetry] name = \"name\" version = \"1.6.0\" description = \"\" authors = [ \"...\", ] maintainers = [ \"...\", ] readme = \"README.md\" [tool.poetry.dependencies] python = \"^3.12\" fastapi = \"^0.115.2\" uvicorn = { version = \"^0.32.0\", extras = [\"standard\"] } pydantic = \"^2.5.3\" pydantic-settings = \"^2\" [tool.poetry.group.dev.dependencies] pytest = \"^8.3.3\" flake8 = \"~7.1.1\" mypy = \"^1.12\" [tool.isort] profile = \"black\" multi_line_output = 3 [tool.mypy] strict = true ignore_missing_imports = true [tool.pytest.ini_options] filterwarnings = [ \"error\", \"ignore::DeprecationWarning\", \"ignore:.*unclosed.*:ResourceWarning\", ] env = [ \"...=...\", \"...=...\", ] [tool.coverage.run] omit = [ \"...=...\", \"...=...\", ] [tool.coverage.report] exclude_lines = [ \"pragma: no cover\", \"if TYPE_CHECKING\", ] [build-system] requires = [\"poetry-core>=1.0.0\"] build-backend = \"poetry.core.masonry.api\" Additionally, in the [build-system] section, I currently have poetry-core. Should this be replaced with something specific for uv during the migration?", "output": "Another very recently released tool is migrate-to-uv. Run: uvx migrate-to-uv to migrate from Poetry or pipenv to uv. This will rewrite your pyproject.toml file and remove poetry.lock. You still need to uv lock afterwards."} +{"question_id": 19923877, "score": 78, "creation_date": 1384243458, "tags": ["python", "django", "django-queryset", "django-orm"], "instruction": "Django orm get latest for each group\n\nI am using Django 1.6 with Mysql. I have these models: class Student(models.Model): username = models.CharField(max_length=200, unique = True) class Score(models.Model): student = models.ForeignKey(Student) date = models.DateTimeField() score = models.IntegerField() I want to get the latest score record for each student. 
I have tried: Score.objects.values('student').annotate(latest_date=Max('date')) and: Score.objects.values('student__username').annotate(latest_date=Max('date')) as described Django ORM - Get the latest record for the group but it did not help.", "output": "This should work on Django 1.2+ and MySQL: Score.objects.annotate( max_date=Max('student__score__date') ).filter( date=F('max_date') )"} +{"question_id": 19356055, "score": 78, "creation_date": 1381738786, "tags": ["python", "counter"], "instruction": "Summing the contents of two collections.Counter() objects\n\nI am working with collections.Counter() counters. I would like to combine two of them in a meaningful manner. Suppose I have 2 counters, say, Counter({'menu': 20, 'good': 15, 'happy': 10, 'bar': 5}) and Counter({'menu': 1, 'good': 1, 'bar': 3}) I am trying to end up with: Counter({'menu': 21, 'good': 16, 'happy': 10,'bar': 8}) How can I do this?", "output": "All you need to do is add them: >>> from collections import Counter >>> a = Counter({'menu': 20, 'good': 15, 'happy': 10, 'bar': 5}) >>> b = Counter({'menu': 1, 'good': 1, 'bar': 3}) >>> a + b Counter({'menu': 21, 'good': 16, 'happy': 10, 'bar': 8}) From the docs: Several mathematical operations are provided for combining Counter objects to produce multisets (counters that have counts greater than zero). Addition and subtraction combine counters by adding or subtracting the counts of corresponding elements. Note that if you want to save memory by modifying the Counter in-place rather than creating a new one, you can do a.update(b) or b.update(a)."} +{"question_id": 4228637, "score": 78, "creation_date": 1290194919, "tags": ["python", "pdb"], "instruction": "Getting started with the Python debugger, pdb\n\nI want to add pdb\u2014the Python debugger\u2014to my toolbox. What's the best way to get started?", "output": "Here's a list of resources to get started with the Python debugger: Read Steve Ferb's article \"Debugging in Python\" Watch Eric Holscher's screencast \"Using pdb, the Python Debugger\" Read the Python documentation for pdb \u2014 The Python Debugger Read Chapter 9\u2014When You Don't Even Know What to Log: Using Debuggers\u2014of Karen Tracey's Django 1.1 Testing and Debugging."} +{"question_id": 16433338, "score": 78, "creation_date": 1367991216, "tags": ["python", "sqlalchemy", "foreign-keys", "relationship", "flask-sqlalchemy"], "instruction": "Inserting new records with one-to-many relationship in sqlalchemy\n\nI'm following the flask-sqlalchemy tutorial on declaring models regarding one-to-many relationship. The example code is as follows: class Person(db.Model): id = db.Column(db.Integer, primary_key=True) name = db.Column(db.String(50)) addresses = db.relationship('Address', backref='person', lazy='dynamic') class Address(db.Model): id = db.Column(db.Integer, primary_key=True) email = db.Column(db.String(50)) person_id = db.Column(db.Integer, db.ForeignKey('person.id')) Now I'm wondering how to insert new records into DB using such model. I assume I need a constructor init, but I have difficulties to understand how it should be implemented and used. The main problem for me here is that Person depends on Address and Address has ForeignKey to Person, so it should know about the Person in advance. Plase help me to understand how it should be performed. 
Thank you in advance.", "output": "You dont need to write a constructor, you can either treat the addresses property on a Person instance as a list: a = Address(email='foo@bar.com') p = Person(name='foo') p.addresses.append(a) Or you can pass a list of addresses to the Person constructor a = Address(email='foo@bar.com') p = Person(name='foo', addresses=[a]) In either case you can then access the addresses on your Person instance like so: db.session.add(p) db.session.add(a) db.session.commit() print(p.addresses.count()) # 1 print(p.addresses[0]) #
print(p.addresses.filter_by(email='foo@bar.com').count()) # 1"} +{"question_id": 6783000, "score": 78, "creation_date": 1311283534, "tags": ["python", "max"], "instruction": "Which maximum does Python pick in the case of a tie?\n\nWhen using the max() function in Python to find the maximum value in a list (or tuple, dict etc.) and there is a tie for maximum value, which one does Python pick? Is it random? This is relevant if, for instance, one has a list of tuples and one selects a maximum (using a key=) based on the first element of the tuple but there are different second elements. How does Python decide which one to pick as the maximum?", "output": "It picks the first element it sees. See the documentation for max(): If multiple items are maximal, the function returns the first one encountered. This is consistent with other sort-stability preserving tools such as sorted(iterable, key=keyfunc, reverse=True)[0] and heapq.nlargest(1, iterable, key=keyfunc). In the source code this is implemented in ./Python/bltinmodule.c by builtin_max, which wraps the more general min_max function. min_max will iterate through the values and use PyObject_RichCompareBool to see if they are greater than the current value. If so, the greater value replaces it. Equal values will be skipped over. The result is that the first maximum will be chosen in the case of a tie."} +{"question_id": 32679589, "score": 78, "creation_date": 1442753351, "tags": ["python", "python-imaging-library", "image-formats"], "instruction": "How to get the format of image with PIL?\n\nAfter loading an image file with PIL.Image, how can I determine whether the image file is a PNG/JPG/BMP/GIF? I understand very little about these file formats, can PIL get the format metadata from the file header? Or does it need to 'analyze' the data within the file? If PIL doesn't provide such an API, is there any python library that does?", "output": "Try: from PIL import Image img = Image.open(filename) print(img.format) # 'JPEG' More info https://pillow.readthedocs.io/en/latest/reference/Image.html#PIL.Image.Image.format https://pillow.readthedocs.io/en/latest/handbook/image-file-formats.html"} +{"question_id": 54152653, "score": 78, "creation_date": 1547233836, "tags": ["python", "windows", "pathlib"], "instruction": "Renaming file extension using pathlib (python 3)\n\nI am using windows 10 and winpython. I have a file with a .dwt extension (it is a text file). I want to change the extension of this file to .txt. My code does not throw any errors, but it does not change the extension. from pathlib import Path filename = Path(\"E:\\\\seaborn_plot\\\\x.dwt\") print(filename) filename_replace_ext = filename.with_suffix('.txt') print(filename_replace_ext) Expected results are printed out (as shown below) in winpython's ipython window output: E:\\seaborn_plot\\x.dwt E:\\seaborn_plot\\x.txt But when I look for a file with a renamed extension, the extension has not been changed, only the original file exists. I suspect windows file permissions.", "output": "You have to actually rename the file not just print out the new name. 
Use Path.rename() from pathlib import Path my_file = Path(\"E:\\\\seaborn_plot\\\\x.dwt\") my_file.rename(my_file.with_suffix('.txt')) Note: To replace the target if it exists use Path.replace() Use os.rename() import os my_file = 'E:\\\\seaborn_plot\\\\x.dwt' new_ext = '.txt' # Gets my_file minus the extension name_without_ext = os.path.splitext(my_file)[0] os.rename(my_file, name_without_ext + new_ext) Ref: os.path.splitext(path) PurePath.with_suffix(suffix)"} +{"question_id": 3327946, "score": 78, "creation_date": 1280030348, "tags": ["python", "datetime", "time", "utc"], "instruction": "How can I get the current time (now) in UTC?\n\nI have a python datetime object (representing five minutes from now) which I would like to convert to UTC. I am planning to output it in RFC 2822 format to put in an HTTP header, but I am not sure if that matters for this question. I found some information on this site about converting time objects, and it looks simpler that way, but this time I really want to use datetime objects, because I am using timedeltas to adjust them: I tried something like this: from datetime import datetime, timedelta now = datetime.now() fiveMinutesLater = datetime.now() + timedelta(minutes=5) fiveMinutesLaterUtc = ??? Nothing in the time or datetime module looks like it would help me. It seems like I may be able to do it by passing the datetime object through 3 or 4 functions, but I am wondering if there is a simpler way. I would prefer not to use third-party modules, but I may if it is the only reasonable choice.", "output": "Run this to obtain a naive datetime in UTC (and to add five minutes to it): >>> from datetime import datetime, timedelta >>> datetime.utcnow() datetime.datetime(2021, 1, 26, 15, 41, 52, 441598) >>> datetime.utcnow() + timedelta(minutes=5) datetime.datetime(2021, 1, 26, 15, 46, 52, 441598) If you would prefer a timezone-aware datetime object, run this in Python 3.2 or higher: >>> from datetime import datetime, timezone >>> datetime.now(timezone.utc) datetime.datetime(2021, 1, 26, 15, 43, 54, 379421, tzinfo=datetime.timezone.utc)"} +{"question_id": 14267452, "score": 78, "creation_date": 1357852801, "tags": ["python"], "instruction": "Iterate over individual bytes in Python 3\n\nWhen iterating over a bytes object in Python 3, one gets the individual bytes as ints: >>> [b for b in b'123'] [49, 50, 51] How to get 1-length bytes objects instead? The following is possible, but not very obvious for the reader and most likely performs bad: >>> [bytes([b]) for b in b'123'] [b'1', b'2', b'3']", "output": "int.to_bytes int objects have a to_bytes method which can be used to convert an int to its corresponding byte: >>> import sys >>> [i.to_bytes(1, sys.byteorder) for i in b'123'] [b'1', b'2', b'3'] As with some other other answers, it's not clear that this is more readable than the OP's original solution: the length and byteorder arguments make it noisier I think. struct.unpack Another approach would be to use struct.unpack, though this might also be considered difficult to read, unless you are familiar with the struct module: >>> import struct >>> struct.unpack('3c', b'123') (b'1', b'2', b'3') (As jfs observes in the comments, the format string for struct.unpack can be constructed dynamically; in this case we know the number of individual bytes in the result must equal the number of bytes in the original bytestring, so struct.unpack(str(len(bytestring)) + 'c', bytestring) is possible.) 
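Putting that parenthetical remark into runnable form — a sketch with an arbitrary byte string:

```python
import struct

data = b'hello'
fmt = str(len(data)) + 'c'        # '5c' -> five single-byte fields
print(struct.unpack(fmt, data))   # (b'h', b'e', b'l', b'l', b'o')
```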
Performance >>> import random, timeit >>> bs = bytes(random.randint(0, 255) for i in range(100)) >>> # OP's solution >>> timeit.timeit(setup=\"from __main__ import bs\", stmt=\"[bytes([b]) for b in bs]\") 46.49886950897053 >>> # Accepted answer from jfs >>> timeit.timeit(setup=\"from __main__ import bs\", stmt=\"[bs[i:i+1] for i in range(len(bs))]\") 20.91463226894848 >>> # Leon's answer >>> timeit.timeit(setup=\"from __main__ import bs\", stmt=\"list(map(bytes, zip(bs)))\") 27.476876026019454 >>> # guettli's answer >>> timeit.timeit(setup=\"from __main__ import iter_bytes, bs\", stmt=\"list(iter_bytes(bs))\") 24.107485140906647 >>> # user38's answer (with Leon's suggested fix) >>> timeit.timeit(setup=\"from __main__ import bs\", stmt=\"[chr(i).encode('latin-1') for i in bs]\") 45.937552741961554 >>> # Using int.to_bytes >>> timeit.timeit(setup=\"from __main__ import bs;from sys import byteorder\", stmt=\"[x.to_bytes(1, byteorder) for x in bs]\") 32.197659170022234 >>> # Using struct.unpack, converting the resulting tuple to list >>> # to be fair to other methods >>> timeit.timeit(setup=\"from __main__ import bs;from struct import unpack\", stmt=\"list(unpack('100c', bs))\") 1.902243083808571 struct.unpack seems to be at least an order of magnitude faster than other methods, presumably because it operates at the byte level. int.to_bytes, on the other hand, performs worse than most of the \"obvious\" approaches."} +{"question_id": 58259682, "score": 78, "creation_date": 1570382962, "tags": ["python", "python-3.x", "list", "tuples"], "instruction": "Why does b+=(4,) work and b = b + (4,) doesn't work when b is a list?\n\nIf we take b = [1,2,3] and if we try doing: b+=(4,) It returns b = [1,2,3,4], but if we try doing b = b + (4,) it doesn't work. b = [1,2,3] b+=(4,) # Prints out b = [1,2,3,4] b = b + (4,) # Gives an error saying you can't add tuples and lists I expected b+=(4,) to fail as you can't add a list and a tuple, but it worked. So I tried b = b + (4,) expecting to get the same result, but it didn't work.", "output": "The problem with \"why\" questions is that usually they can mean multiple different things. I will try to answer each one I think you might have in mind. \"Why is it possible for it to work differently?\" which is answered by e.g. this. Basically, += tries to use different methods of the object: __iadd__ (which is only checked on the left-hand side), vs __add__ and __radd__ (\"reverse add\", checked on the right-hand side if the left-hand side doesn't have __add__) for +. \"What exactly does each version do?\" In short, the list.__iadd__ method does the same thing as list.extend (but because of the language design, there is still an assignment back). This also means for example that >>> a = [1,2,3] >>> b = a >>> a += [4] # uses the .extend logic, so it is still the same object >>> b # therefore a and b are still the same list, and b has the `4` added [1, 2, 3, 4] >>> b = b + [5] # makes a new list and assigns back to b >>> a # so now a is a separate list and does not have the `5` [1, 2, 3, 4] +, of course, creates a new object, but explicitly requires another list instead of trying to pull elements out of a different sequence. \"Why is it useful for += to do this? It's more efficient; the extend method doesn't have to create a new object. Of course, this has some surprising effects sometimes (like above), and generally Python is not really about efficiency, but these decisions were made a long time ago. 
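A short demonstration of the difference described above — list.__iadd__ behaves like extend and accepts any iterable, while + insists on another list (a sketch):

```python
b = [1, 2, 3]
b += (4,)        # works: same as b.extend((4,))
b += "xy"        # also works: any iterable is accepted, characters are appended
print(b)         # [1, 2, 3, 4, 'x', 'y']

try:
    b = b + (5,)
except TypeError as exc:
    print(exc)   # can only concatenate list (not "tuple") to list
```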
\"What is the reason not to allow adding lists and tuples with +?\" See here (thanks, @splash58); one idea is that (tuple + list) should produce the same type as (list + tuple), and it's not clear which type the result should be. += doesn't have this problem, because a += b obviously should not change the type of a."} +{"question_id": 34030373, "score": 78, "creation_date": 1449004747, "tags": ["python", "windows", "environment-variables", "anaconda"], "instruction": "anaconda - path environment variable in windows\n\nI'm trying to run Python from the Windows command prompt (windows 10). So the result is the typical one when the path environment variable is not configured c:\\windows\\system32>python 'python' is not recognized as an internal or external command, operable program or batch file However, I'm unsure which directory I should set up in the path variable. I tried a few variations, and none of them work, including: c:\\users\\xxx\\anaconda3 c:\\users\\xxx\\anaconda3\\Scripts c:\\users\\xxx\\anaconda3\\libs\\python34 and none of them works. Does anyone have experience with this system constellation (windows, anaconda)? Thanks.", "output": "It turns out I was mistaken. Here is the solution: In anaconda (as well as in other implementations), set the path environment variable to the directory where 'python.exe' is installed. As a default, the python.exe file in anaconda is in: c:\\.....\\anaconda After you do that, the Python command works, in my case, yielding the following: python Python 3.4.3 |Anaconda 2.2.0. (64|bit)|(default, Nov 7 2015), etc, etc"} +{"question_id": 6697259, "score": 78, "creation_date": 1310663443, "tags": ["python", "keyboard", "matplotlib", "interactive"], "instruction": "Interactive matplotlib plot with two sliders\n\nI used matplotlib to create some plot, which depends on 8 variables. I would like to study how the plot changes when I change some of them. I created some script that calls the matplotlib one and generates different snapshots that later I convert into a movie, it is not bad, but a bit clumsy. I wonder if somehow I could interact with the plot regeneration using keyboard keys to increase / decrease values of some of the variables and see instantly how the plot changes. What is the best approach for this? Also if you can point me to interesting links or a link with a plot example with just two sliders?", "output": "In addition to what @triplepoint mentioned, have a look at the slider widget. There's an example on the matplotlib examples page. It's a graphical slider bar rather than keyboard bindings, but it works quite well for what you want to do. Also note that to guarantee the sliders and buttons remain responsive and not garbage-collected, references to the objects (amp_slider, freq_slider, etc.) should be maintained by yourself. (I'm making this community wiki, as I'm just copy-pasting from the example. This particular example teaches bad habits (e.g. from pylab import *), but it gets the point across. The example has been fixed to avoid the use of pylab.) 
from numpy import pi, sin import numpy as np import matplotlib.pyplot as plt from matplotlib.widgets import Slider, Button, RadioButtons def signal(amp, freq): return amp * sin(2 * pi * freq * t) axis_color = 'lightgoldenrodyellow' fig = plt.figure() ax = fig.add_subplot(111) # Adjust the subplots region to leave some space for the sliders and buttons fig.subplots_adjust(left=0.25, bottom=0.25) t = np.arange(0.0, 1.0, 0.001) amp_0 = 5 freq_0 = 3 # Draw the initial plot # The 'line' variable is used for modifying the line later [line] = ax.plot(t, signal(amp_0, freq_0), linewidth=2, color='red') ax.set_xlim([0, 1]) ax.set_ylim([-10, 10]) # Add two sliders for tweaking the parameters # Define an axes area and draw a slider in it amp_slider_ax = fig.add_axes([0.25, 0.15, 0.65, 0.03], facecolor=axis_color) amp_slider = Slider(amp_slider_ax, 'Amp', 0.1, 10.0, valinit=amp_0) # Draw another slider freq_slider_ax = fig.add_axes([0.25, 0.1, 0.65, 0.03], facecolor=axis_color) freq_slider = Slider(freq_slider_ax, 'Freq', 0.1, 30.0, valinit=freq_0) # Define an action for modifying the line when any slider's value changes def sliders_on_changed(val): line.set_ydata(signal(amp_slider.val, freq_slider.val)) fig.canvas.draw_idle() amp_slider.on_changed(sliders_on_changed) freq_slider.on_changed(sliders_on_changed) # Add a button for resetting the parameters reset_button_ax = fig.add_axes([0.8, 0.025, 0.1, 0.04]) reset_button = Button(reset_button_ax, 'Reset', color=axis_color, hovercolor='0.975') def reset_button_on_clicked(mouse_event): freq_slider.reset() amp_slider.reset() reset_button.on_clicked(reset_button_on_clicked) # Add a set of radio buttons for changing color color_radios_ax = fig.add_axes([0.025, 0.5, 0.15, 0.15], facecolor=axis_color) color_radios = RadioButtons(color_radios_ax, ('red', 'blue', 'green'), active=0) def color_radios_on_clicked(label): line.set_color(label) fig.canvas.draw_idle() color_radios.on_clicked(color_radios_on_clicked) plt.show()"} +{"question_id": 42126794, "score": 78, "creation_date": 1486604302, "tags": ["python", "python-3.x", "syntax-error", "python-3.5"], "instruction": "Why am I getting \"invalid syntax\" from an f-string?\n\nI cannot get f-strings to work in Python 3. I tried this at the REPL: In [1]: state = \"Washington\" In [2]: state Out[2]: 'Washington' In [3]: my_message = f\"I live in {state}\" File \"\", line 1 my_message = f\"I live in {state}\" ^ SyntaxError: invalid syntax I figured my machine was defaulting to python 2, but a quick check reveals: Python 3.5.2 (default, Nov 17 2016, 17:05:23) Type \"copyright\", \"credits\" or \"license\" for more information. IPython 5.2.2 -- An enhanced Interactive Python. What is wrong, and how do I fix it?", "output": "As suggested by Josh Lee in the comment section, that kind of string interpolation was added in Python 3.6 only, see What\u2019s New In Python 3.6 (here it's called \"PEP 498: Formatted string literals\"). However, you seem to be using Python 3.5.2, which does not support that syntax."} +{"question_id": 54613753, "score": 78, "creation_date": 1549777958, "tags": ["python", "python-3.x", "sequence", "slice", "range-checking"], "instruction": "Why does Python allow out-of-range slice indexes for sequences?\n\nSo I just came across what seems to me like a strange Python feature and wanted some clarification about it. The following array manipulation somewhat makes sense: p = [1,2,3] p[3:] = [4] p = [1,2,3,4] I imagine it is actually just appending this value to the end, correct? 
Why can I do this, however? p[20:22] = [5,6] p = [1,2,3,4,5,6] And even more so this: p[20:100] = [7,8] p = [1,2,3,4,5,6,7,8] This just seems like wrong logic. It seems like this should throw an error! Any explanation? -Is it just a weird thing Python does? -Is there a purpose to it? -Or am I thinking about this the wrong way?", "output": "Part of question regarding out-of-range indices Slice logic automatically clips the indices to the length of the sequence. Allowing slice indices to extend past end points was done for convenience. It would be a pain to have to range check every expression and then adjust the limits manually, so Python does it for you. Consider the use case of wanting to display no more than the first 50 characters of a text message. The easy way (what Python does now): preview = msg[:50] Or the hard way (do the limit checks yourself): n = len(msg) preview = msg[:50] if n > 50 else msg Manually implementing that logic for adjustment of end points would be easy to forget, would be easy to get wrong (updating the 50 in two places), would be wordy, and would be slow. Python moves that logic to its internals where it is succint, automatic, fast, and correct. This is one of the reasons I love Python :-) Part of question regarding assignments length mismatch from input length The OP also wanted to know the rationale for allowing assignments such as p[20:100] = [7,8] where the assignment target has a different length (80) than the replacement data length (2). It's easiest to see the motivation by an analogy with strings. Consider, \"five little monkeys\".replace(\"little\", \"humongous\"). Note that the target \"little\" has only six letters and \"humongous\" has nine. We can do the same with lists: >>> s = list(\"five little monkeys\") >>> i = s.index('l') >>> n = len('little') >>> s[i : i+n ] = list(\"humongous\") >>> ''.join(s) 'five humongous monkeys' This all comes down to convenience. Prior to the introduction of the copy() and clear() methods, these used to be popular idioms: s[:] = [] # clear a list t = u[:] # copy a list Even now, we use this to update lists when filtering: s[:] = [x for x in s if not math.isnan(x)] # filter-out NaN values Hope these practical examples give a good perspective on why slicing works as it does."} +{"question_id": 46278288, "score": 78, "creation_date": 1505734448, "tags": ["python", "pip", "pipenv"], "instruction": "Git - Should Pipfile.lock be committed to version control?\n\nWhen two developers are working on a project with different operating systems, the Pipfile.lock is different (especially the part inside host-environment-markers). For PHP, most people recommend to commit composer.lock file. Do we have to do the same for Python?", "output": "Short - Yes! The lock file tells pipenv exactly which version of each dependency needs to be installed. You will have consistency across all machines. // update: Same question on github"} +{"question_id": 42138482, "score": 78, "creation_date": 1486648190, "tags": ["python", "apache-spark", "pyspark", "apache-spark-sql", "apache-spark-ml"], "instruction": "How do I convert an array (i.e. list) column to Vector\n\nShort version of the question! Consider the following snippet (assuming spark is already set to some SparkSession): from pyspark.sql import Row source_data = [ Row(city=\"Chicago\", temperatures=[-1.0, -2.0, -3.0]), Row(city=\"New York\", temperatures=[-7.0, -7.0, -5.0]), ] df = spark.createDataFrame(source_data) Notice that the temperatures field is a list of floats. 
I would like to convert these lists of floats to the MLlib type Vector, and I'd like this conversion to be expressed using the basic DataFrame API rather than going via RDDs (which is inefficient because it sends all data from the JVM to Python, the processing is done in Python, we don't get the benefits of Spark's Catalyst optimizer, yada yada). How do I do this? Specifically: Is there a way to get a straight cast working? See below for details (and a failed attempt at a workaround)? Or, is there any other operation that has the effect I was after? Which is more efficient out of the two alternative solutions I suggest below (UDF vs exploding/reassembling the items in the list)? Or are there any other almost-but-not-quite-right alternatives that are better than either of them? A straight cast doesn't work This is what I would expect to be the \"proper\" solution. I want to convert the type of a column from one type to another, so I should use a cast. As a bit of context, let me remind you of the normal way to cast it to another type: from pyspark.sql import types df_with_strings = df.select( df[\"city\"], df[\"temperatures\"].cast(types.ArrayType(types.StringType()))), ) Now e.g. df_with_strings.collect()[0][\"temperatures\"][1] is '-7.0'. But if I cast to an ml Vector then things do not go so well: from pyspark.ml.linalg import VectorUDT df_with_vectors = df.select(df[\"city\"], df[\"temperatures\"].cast(VectorUDT())) This gives an error: pyspark.sql.utils.AnalysisException: \"cannot resolve 'CAST(`temperatures` AS STRUCT<`type`: TINYINT, `size`: INT, `indices`: ARRAY, `values`: ARRAY>)' due to data type mismatch: cannot cast ArrayType(DoubleType,true) to org.apache.spark.ml.linalg.VectorUDT@3bfc3ba7;; 'Project [city#0, unresolvedalias(cast(temperatures#1 as vector), None)] +- LogicalRDD [city#0, temperatures#1] \" Yikes! Any ideas how to fix this? Possible alternatives Alternative 1: Using VectorAssembler There is a Transformer that seems almost ideal for this job: the VectorAssembler. It takes one or more columns and concatenates them into a single vector. Unfortunately it only takes Vector and Float columns, not Array columns, so the follow doesn't work: from pyspark.ml.feature import VectorAssembler assembler = VectorAssembler(inputCols=[\"temperatures\"], outputCol=\"temperature_vector\") df_fail = assembler.transform(df) It gives this error: pyspark.sql.utils.IllegalArgumentException: 'Data type ArrayType(DoubleType,true) is not supported.' The best work around I can think of is to explode the list into multiple columns and then use the VectorAssembler to collect them all back up again: from pyspark.ml.feature import VectorAssembler TEMPERATURE_COUNT = 3 assembler_exploded = VectorAssembler( inputCols=[\"temperatures[{}]\".format(i) for i in range(TEMPERATURE_COUNT)], outputCol=\"temperature_vector\" ) df_exploded = df.select( df[\"city\"], *[df[\"temperatures\"][i] for i in range(TEMPERATURE_COUNT)] ) converted_df = assembler_exploded.transform(df_exploded) final_df = converted_df.select(\"city\", \"temperature_vector\") This seems like it would be ideal, except that TEMPERATURE_COUNT be more than 100, and sometimes more than 1000. (Another problem is that the code would be more complicated if you don't know the size of the array in advance, although that is not the case for my data.) 
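If the array length were not known up front, one possible sketch of a workaround (assuming every row's temperatures array has the same length) is to read it from the data first:

from pyspark.sql.functions import size
TEMPERATURE_COUNT = df.select(size('temperatures').alias('n')).first()['n']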
Does Spark actually generate an intermediate data set with that many columns, or does it just consider this an intermediate step that individual items pass through transiently (or indeed does it optimise this away step entirely when it sees that the only use of these columns is to be assembled into a vector)? Alternative 2: use a UDF A rather simpler alternative is to use a UDF to do the conversion. This lets me express quite directly what I want to do in one line of code, and doesn't require making a data set with a crazy number of columns. But all that data has to be exchanged between Python and the JVM, and every individual number has to be handled by Python (which is notoriously slow for iterating over individual data items). Here is how that looks: from pyspark.ml.linalg import Vectors, VectorUDT from pyspark.sql.functions import udf list_to_vector_udf = udf(lambda l: Vectors.dense(l), VectorUDT()) df_with_vectors = df.select( df[\"city\"], list_to_vector_udf(df[\"temperatures\"]).alias(\"temperatures\") ) Ignorable remarks The remaining sections of this rambling question are some extra things I came up with while trying to find an answer. They can probably be skipped by most people reading this. Not a solution: use Vector to begin with In this trivial example it's possible to create the data using the vector type to begin with, but of course my data isn't really a Python list that I'm parallelizing, but instead is being read from a data source. But for the record, here is how that would look: from pyspark.ml.linalg import Vectors from pyspark.sql import Row source_data = [ Row(city=\"Chicago\", temperatures=Vectors.dense([-1.0, -2.0, -3.0])), Row(city=\"New York\", temperatures=Vectors.dense([-7.0, -7.0, -5.0])), ] df = spark.createDataFrame(source_data) Inefficient solution: use map() One possibility is to use the RDD map() method to transform the list to a Vector. This is similar to the UDF idea, except that its even worse because the cost of serialisation etc. is incurred for all the fields in each row, not just the one being operated on. For the record, here's what that solution would look like: df_with_vectors = df.rdd.map(lambda row: Row( city=row[\"city\"], temperatures=Vectors.dense(row[\"temperatures\"]) )).toDF() Failed attempt at a workaround for cast In desperation, I noticed that Vector is represented internally by a struct with four fields, but using a traditional cast from that type of struct doesn't work either. 
Here is an illustration (where I built the struct using a udf but the udf isn't the important part): from pyspark.ml.linalg import Vectors, VectorUDT from pyspark.sql.functions import udf list_to_almost_vector_udf = udf(lambda l: (1, None, None, l), VectorUDT.sqlType()) df_almost_vector = df.select( df[\"city\"], list_to_almost_vector_udf(df[\"temperatures\"]).alias(\"temperatures\") ) df_with_vectors = df_almost_vector.select( df_almost_vector[\"city\"], df_almost_vector[\"temperatures\"].cast(VectorUDT()) ) This gives the error: pyspark.sql.utils.AnalysisException: \"cannot resolve 'CAST(`temperatures` AS STRUCT<`type`: TINYINT, `size`: INT, `indices`: ARRAY, `values`: ARRAY>)' due to data type mismatch: cannot cast StructType(StructField(type,ByteType,false), StructField(size,IntegerType,true), StructField(indices,ArrayType(IntegerType,false),true), StructField(values,ArrayType(DoubleType,false),true)) to org.apache.spark.ml.linalg.VectorUDT@3bfc3ba7;; 'Project [city#0, unresolvedalias(cast(temperatures#5 as vector), None)] +- Project [city#0, (temperatures#1) AS temperatures#5] +- LogicalRDD [city#0, temperatures#1] \"", "output": "For pyspark>=3.1.0 Since 3.1.0 there is a built-in solution: array_to_vector. Given your case: from pyspark.ml.functions import array_to_vector df = df.withColumn(\"temperatures_vectorized\", array_to_vector(\"temperatures\")) P.S: Also since 3.0.0 there is an opposite operation as well: vector_to_array"} +{"question_id": 33990673, "score": 77, "creation_date": 1448851131, "tags": ["python", "list"], "instruction": "How to create a list of empty lists\n\nApologies if this has been answered before, but I couldn't find a similar question on here. I am pretty new to Python and what I am trying to create is as follows: list1 = [] list2 = [] results = [list1, list2] This code works absolutely fine, but I was wondering if there was a quicker way to do this in one line. I tried the following, which didn't work, but I hope it demonstrates the sort of thing that I'm after: result = [list1[], list2[]] Also, in terms of complexity, would having it on one line really make any difference? Or would it be three assignments in either case?", "output": "For manually creating a specified number of lists, this would be good: empty_list = [ [], [], ..... ] In case you want to generate a bigger number of lists, then a list comprehension would be good: empty_lists = [ [] for _ in range(n) ]"} +{"question_id": 55422929, "score": 77, "creation_date": 1553881322, "tags": ["python", "linux", "ubuntu", "pip", "ubuntu-18.04"], "instruction": "\"E: Unable to locate package python-pip\" on Ubuntu 18.04\n\nI am trying to install virtualenv on Ubuntu. First it said command 'pip' not found, so I typed sudo apt install python-pip then it said E: Unable to locate package python-pip I tried to reset WSL, download using cmd but it doesn't work with Ubuntu. I don't know why. Even though I have downloaded python3, virtualenv, and pip using cmd. It doesn't work with Ubuntu 18.04. It also fails on Ubuntu 14.04. aiki@LAPTOP-886AEJJG:~$ pip Command 'pip' not found, but can be installed with: sudo apt install python-pip aiki@LAPTOP-886AEJJG:~$ sudo apt install python-pip [sudo] password for aiki: Reading package lists... Done Building dependency tree Reading state information...
Done E: Unable to locate package python-pip I'm trying to install jarvis and mycroft on win 10, but I need to use Ubuntu because it only works with Linux.", "output": "Try following command sequence on Ubuntu terminal: sudo apt-get install software-properties-common sudo apt-add-repository universe sudo apt-get update sudo apt-get install python3-pip"} +{"question_id": 7833807, "score": 77, "creation_date": 1319102879, "tags": ["python", "audio"], "instruction": "Get .wav file length or duration\n\nI'm looking for a way to find out the duration of a audio file (.wav) in python. So far i had a look at python wave library, mutagen, pymedia, pymad i was not able to get the duration of the wav file. Pymad gave me the duration but its not consistent.", "output": "The duration is equal to the number of frames divided by the framerate (frames per second): import wave import contextlib fname = '/tmp/test.wav' with contextlib.closing(wave.open(fname,'r')) as f: frames = f.getnframes() rate = f.getframerate() duration = frames / float(rate) print(duration) Regarding @edwards' comment, here is some code to produce a 2-channel wave file: import math import wave import struct FILENAME = \"/tmp/test.wav\" freq = 440.0 data_size = 40000 frate = 1000.0 amp = 64000.0 nchannels = 2 sampwidth = 2 framerate = int(frate) nframes = data_size comptype = \"NONE\" compname = \"not compressed\" data = [(math.sin(2 * math.pi * freq * (x / frate)), math.cos(2 * math.pi * freq * (x / frate))) for x in range(data_size)] try: wav_file = wave.open(FILENAME, 'w') wav_file.setparams( (nchannels, sampwidth, framerate, nframes, comptype, compname)) for values in data: for v in values: wav_file.writeframes(struct.pack('h', int(v * amp / 2))) finally: wav_file.close() If you play the resultant file in an audio player, you'll find that is 40 seconds in duration. If you run the code above it also computes the duration to be 40 seconds. So I believe the number of frames is not influenced by the number of channels and the formula above is correct."} +{"question_id": 15285534, "score": 77, "creation_date": 1362708731, "tags": ["python", "primes"], "instruction": "isPrime Function for Python Language\n\nSo I was able to solve this problem with a little bit of help from the internet and this is what I got: def isPrime(n): for i in range(2,int(n**0.5)+1): if n%i==0: return False return True But my question really is how to do it, but WHY. I understand that 1 is not considered a \"prime\" number even though it is, and I understand that if it divides by ANYTHING within the range it is automatically not a prime thus the return False statement. but my question is what role does the square-rooting the \"n\" play here? P.s. I am very inexperienced and have just been introduced to programming a month ago.", "output": "Of many prime number tests floating around the Internet, consider the following Python function: def is_prime(n): if n == 2 or n == 3: return True if n < 2 or n%2 == 0: return False if n < 9: return True if n%3 == 0: return False r = int(n**0.5) # since all primes > 3 are of the form 6n \u00b1 1 # start with f=5 (which is prime) # and test f, f+2 for being prime # then loop by 6. f = 5 while f <= r: print('\\t',f) if n % f == 0: return False if n % (f+2) == 0: return False f += 6 return True Since all primes > 3 are of the form 6n \u00b1 1, once we eliminate that n is: not 2 or 3 (which are prime) and not even (with n%2) and not divisible by 3 (with n%3) then we can test every 6th n \u00b1 1. 
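A tiny standalone sketch that brute-force checks the 6n ± 1 claim for small numbers:

def is_prime_slow(n):
    # deliberately naive trial division, only used to verify the claim
    return n > 1 and all(n % d for d in range(2, n))

for p in range(5, 100):
    if is_prime_slow(p):
        assert p % 6 in (1, 5)   # every prime > 3 is one away from a multiple of 6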
Consider the prime number 5003: print is_prime(5003) Prints: 5 11 17 23 29 35 41 47 53 59 65 True The line r = int(n**0.5) evaluates to 70 (the square root of 5003 is 70.7318881411 and int() truncates this value) Consider the next odd number (since all even numbers other than 2 are not prime) of 5005, same thing prints: 5 False The limit is the square root since x*y == y*x The function only has to go 1 loop to find that 5005 is divisible by 5 and therefore not prime. Since 5 X 1001 == 1001 X 5 (and both are 5005), we do not need to go all the way to 1001 in the loop to know what we know at 5! Now, let's look at the algorithm you have: def isPrime(n): for i in range(2, int(n**0.5)+1): if n % i == 0: return False return True There are two issues: It does not test if n is less than 2, and there are no primes less than 2; It tests every number between 2 and n**0.5 including all even and all odd numbers. Since every number greater than 2 that is divisible by 2 is not prime, we can speed it up a little by only testing odd numbers greater than 2. So: def isPrime2(n): if n==2 or n==3: return True if n%2==0 or n<2: return False for i in range(3, int(n**0.5)+1, 2): # only odd numbers if n%i==0: return False return True OK -- that speeds it up by about 30% (I benchmarked it...) The algorithm I used is_prime is about 2x times faster still, since only every 6th integer is looping through the loop. (Once again, I benchmarked it.) Side note: x**0.5 is the square root: >>> import math >>> math.sqrt(100)==100**0.5 True Side note 2: primality testing is an interesting problem in computer science."} +{"question_id": 37573483, "score": 77, "creation_date": 1464796422, "tags": ["python", "python-requests"], "instruction": "Progress Bar while download file over http with Requests\n\nI need to download a sizable (~200MB) file. I figured out how to download and save the file with here. It would be nice to have a progress bar to know how much has been downloaded. I found ProgressBar but I'm not sure how to incorporate the two together. Here's the code I tried, but it didn't work. bar = progressbar.ProgressBar(max_value=progressbar.UnknownLength) with closing(download_file()) as r: for i in range(20): bar.update(i)", "output": "I suggest you try tqdm, it's very easy to use. Example code for downloading with requests library: from tqdm import tqdm import requests url = \"https://proof.ovh.net/files/10Mb.dat\" filepath = \"test.dat\" # Streaming, so we can iterate over the response. response = requests.get(url, stream=True) # Sizes in bytes. total_size = int(response.headers.get(\"content-length\", 0)) block_size = 1024 with tqdm(total=total_size, unit=\"B\", unit_scale=True) as progress_bar: with open(filepath, \"wb\") as file: for data in response.iter_content(block_size): progress_bar.update(len(data)) file.write(data) if total_size != 0 and progress_bar.n != total_size: raise RuntimeError(\"Could not download file\")"} +{"question_id": 51046454, "score": 77, "creation_date": 1530026631, "tags": ["python", "selenium", "selenium-webdriver", "google-colaboratory"], "instruction": "How can we use Selenium Webdriver in colab.research.google.com?\n\nI want to use Selenium Webdriver of Chrome in colab.research.google.com for fast processing. I was able to install Selenium using !pip install selenium but the webdriver of chrome needs a path to webdriverChrome.exe. How am I suppose to use it? P.S.- colab.research.google.com is an online platform which provides GPU for fast computational problems related to deep learning. 
Please refrain from solutions such as webdriver.Chrome(path).", "output": "Recently Google collab was upgraded and since Ubuntu 20.04+ no longer distributes chromium-browser outside of a snap package, you can install a compatible version from the Debian buster repository: %%shell # Ubuntu no longer distributes chromium-browser outside of snap # # Proposed solution: https://askubuntu.com/questions/1204571/how-to-install-chromium-without-snap # Add debian buster cat > /etc/apt/sources.list.d/debian.list <<'EOF' deb [arch=amd64 signed-by=/usr/share/keyrings/debian-buster.gpg] http://deb.debian.org/debian buster main deb [arch=amd64 signed-by=/usr/share/keyrings/debian-buster-updates.gpg] http://deb.debian.org/debian buster-updates main deb [arch=amd64 signed-by=/usr/share/keyrings/debian-security-buster.gpg] http://deb.debian.org/debian-security buster/updates main EOF # Add keys apt-key adv --keyserver keyserver.ubuntu.com --recv-keys DCC9EFBF77E11517 apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 648ACFD622F3D138 apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 112695A0E562B32A apt-key export 77E11517 | gpg --dearmour -o /usr/share/keyrings/debian-buster.gpg apt-key export 22F3D138 | gpg --dearmour -o /usr/share/keyrings/debian-buster-updates.gpg apt-key export E562B32A | gpg --dearmour -o /usr/share/keyrings/debian-security-buster.gpg # Prefer debian repo for chromium* packages only # Note the double-blank lines between entries cat > /etc/apt/preferences.d/chromium.pref << 'EOF' Package: * Pin: release a=eoan Pin-Priority: 500 Package: * Pin: origin \"deb.debian.org\" Pin-Priority: 300 Package: chromium* Pin: origin \"deb.debian.org\" Pin-Priority: 700 EOF # Install chromium and chromium-driver apt-get update apt-get install chromium chromium-driver # Install selenium pip install selenium Then you can run selenium like this: from selenium import webdriver chrome_options = webdriver.ChromeOptions() chrome_options.add_argument('--headless') chrome_options.add_argument('--no-sandbox') chrome_options.headless = True wd = webdriver.Chrome('chromedriver',options=chrome_options) wd.get(\"https://www.webite-url.com\")"} +{"question_id": 44729727, "score": 77, "creation_date": 1498250932, "tags": ["python", "pandas", "dataframe", "slice"], "instruction": "Pandas - Slice large dataframe into chunks\n\nI have a large dataframe (>3MM rows) that I'm trying to pass through a function (the one below is largely simplified), and I keep getting a Memory Error message. I think I'm passing too large of a dataframe into the function, so I'm trying to: 1) Slice the dataframe into smaller chunks (preferably sliced by AcctName) 2) Pass the dataframe into the function 3) Concatenate the dataframes back into one large dataframe def trans_times_2(df): df['Double_Transaction'] = df['Transaction'] * 2 large_df AcctName Timestamp Transaction ABC 12/1 12.12 ABC 12/2 20.89 ABC 12/3 51.93 DEF 12/2 13.12 DEF 12/8 9.93 DEF 12/9 92.09 GHI 12/1 14.33 GHI 12/6 21.99 GHI 12/12 98.81 I know that my function works properly, since it will work on a smaller dataframe (e.g. 40,000 rows). I tried the following, but I was unsuccessful with concatenating the small dataframes back into one large dataframe. 
def split_df(df): new_df = [] AcctNames = df.AcctName.unique() DataFrameDict = {elem: pd.DataFrame for elem in AcctNames} key_list = [k for k in DataFrameDict.keys()] new_df = [] for key in DataFrameDict.keys(): DataFrameDict[key] = df[:][df.AcctNames == key] trans_times_2(DataFrameDict[key]) rejoined_df = pd.concat(new_df) How I envision the dataframes being split: df1 AcctName Timestamp Transaction Double_Transaction ABC 12/1 12.12 24.24 ABC 12/2 20.89 41.78 ABC 12/3 51.93 103.86 df2 AcctName Timestamp Transaction Double_Transaction DEF 12/2 13.12 26.24 DEF 12/8 9.93 19.86 DEF 12/9 92.09 184.18 df3 AcctName Timestamp Transaction Double_Transaction GHI 12/1 14.33 28.66 GHI 12/6 21.99 43.98 GHI 12/12 98.81 197.62", "output": "You can use list comprehension to split your dataframe into smaller dataframes contained in a list. n = 200000 #chunk row size list_df = [df[i:i+n] for i in range(0,df.shape[0],n)] Or use numpy array_split, list_df = np.array_split(df, math.ceil(len(df)/n)) You can access the chunks with: list_df[0] list_df[1] etc... Then you can assemble it back into a one dataframe using pd.concat. By AcctName list_df = [] for n,g in df.groupby('AcctName'): list_df.append(g)"} +{"question_id": 4924068, "score": 77, "creation_date": 1297098072, "tags": ["python", "virtualenv", "python-idle"], "instruction": "How to launch python Idle from a virtual environment (virtualenv)\n\nI have a package that I installed from a virtual environment. If I just launch the python interpreter, that package can be imported just fine. However, if I launch Idle, that package cannot be imported (since it's only available in one particular virtualenv and not global). How can I launch Idle from a virtualenv, so that all packages from the virtualenv would be available?", "output": "For Python 3.6+, please see Paul Wicking's answer below. In Python prior to 3.6, IDLE is essentially from idlelib.PyShell import main if __name__ == '__main__': main() So you can launch it yourself unless you built the virtualenv without default packages."} +{"question_id": 39773560, "score": 77, "creation_date": 1475160642, "tags": ["python", "orm", "sqlalchemy"], "instruction": "SQLAlchemy: How do you delete multiple rows without querying\n\nI have a table that has millions of rows. I want to delete multiple rows via an in clause. However, using the code: session.query(Users).filter(Users.id.in_(subquery....)).delete() The above code will query the results, and then execute the delete. I don't want to do that. I want speed. I want to be able to execute (yes I know about the session.execute):Delete from users where id in () So the Question: How can I get the best of two worlds, using the ORM? Can I do the delete without hard coding the query?", "output": "Yep! You can call delete() on the table object with an associated where clause. Something like this: stmt = Users.__table__.delete().where(Users.id.in_(subquery...)) (and then don't forget to execute the statement: engine.execute(stmt)) source"} +{"question_id": 48899051, "score": 77, "creation_date": 1519192783, "tags": ["python", "pandas", "csv", "dataframe"], "instruction": "How to drop a specific column of csv file while reading it using pandas?\n\nI need to remove a column with label name at the time of loading a csv using pandas. I am reading csv as follows and want to add parameters inside it to do so. Thanks. 
pd.read_csv(\"sample.csv\") I know this to do after reading csv: df.drop('name', axis=1)", "output": "If you know the column names prior, you can do it by setting usecols parameter When you know which columns to use Suppose you have csv file with columns [\"id\", \"name\", \"last_name\"] and you want just [\"name\", \"last_name\"]. You can do it as below: import pandas as pd df = pd.read_csv(\"sample.csv\", usecols=[\"name\", \"last_name\"]) when you want first N columns If you don't know the column names but you want first N columns from dataframe. You can do it by import pandas as pd df = pd.read_csv(\"sample.csv\", usecols=[i for i in range(n)]) Edit When you know name of the column to be dropped # Read column names from file cols = list(pd.read_csv(\"sample_data.csv\", nrows=1)) print(cols) # Use list comprehension to remove the unwanted column in **usecol** df= pd.read_csv(\"sample_data.csv\", usecols =[i for i in cols if i != \"name\"])"} +{"question_id": 57708, "score": 77, "creation_date": 1221168526, "tags": ["python", "html", "entities"], "instruction": "Convert XML/HTML Entities into Unicode String in Python\n\nI'm doing some web scraping and sites frequently use HTML entities to represent non ascii characters. Does Python have a utility that takes a string with HTML entities and returns a unicode type? For example: I get back: ǎ which represents an \"\u01ce\" with a tone mark. In binary, this is represented as the 16 bit 01ce. I want to convert the html entity into the value u'\\u01ce'", "output": "The standard lib\u2019s very own HTMLParser has an undocumented function unescape() which does exactly what you think it does: up to Python 3.4: import HTMLParser h = HTMLParser.HTMLParser() h.unescape('© 2010') # u'\\xa9 2010' h.unescape('© 2010') # u'\\xa9 2010' Python 3.4+: import html html.unescape('© 2010') # u'\\xa9 2010' html.unescape('© 2010') # u'\\xa9 2010'"} +{"question_id": 43880426, "score": 77, "creation_date": 1494366515, "tags": ["python", "django", "postgresql", "migrate"], "instruction": "How to force migrations to a DB if some tables already exist in Django?\n\nI have a Python/Django proyect. Due to some rolls back, and other mixed stuff we ended up in a kind of odd scenario. The current scenario is like this: DB has the correct tables DB can't be rolled back or dropped Code is up to date Migrations folder is behind the DB by one or two migrations. (These migrations were applied from somewhere else and that \"somewhere else\" doesn't exist anymore) I add and alter some models I run makemigrations New migrations are created, but it's a mix of new tables and some tables that already exist in the DB. If I run migrate it will complain that some of the tables that I'm trying to create already exist. What I need: To be able to run the migrations and kind of \"ignore\" the existing tables and apply the new ones. Or any alternative way to achieve this. Is that possible?", "output": "When you apply a migration, Django inserts a row in a table called django_migrations. That's the only way Django knows which migrations have been applied already and which have not. So the rows in that table have to match the files in your migrations directory. If you've lost the migration files after they were applied, or done anything else to get things out of sync, you'll have problems.. because the migration numbers in your database refer to different migration files than the ones in your project. 
So before you do anything else, you need to bring things back into sync by deleting the django_migrations table rows for any migration files that you've lost somehow and can't get back. The table should contain rows for only those migrations that you do have and that were actually applied to the database correctly. Now you need to deal with any changes in your database that Django Migrations doesn't know about.. and for that there are a few options: If things worked out such that the database changes that were already applied to the database are in different migration files than the ones that weren't, then you can fix it by running your migrations one at a time using the --fake option on any changes that are in reality already in the database. The fake option just writes the row to the django_migrations table marking the migration as done. Only do this if the database does in fact already have all the changes contained in that migration file. And those migration files that contain only changes which have not been applied to the database, run without the --fake option and Django will apply them. eg: # database already has it manage.py migrate myapp 0003 --fake # need it manage.py migrate myapp 0004 # database already has it manage.py migrate myapp 0005 --fake If you have migration files where some but not all of the changes have been applied, then you have a bigger problem. In that case, there are several ways to go about it (choose ONLY ONE): Edit the migration files to put changes that have already been applied (whether Django did it or you did it manually does not matter) into lower number migrations, and put everything you need done into higher numbered files. Now you can --fake the lower number ones, and run the higher numbered ones as normal. Let's say you have 10 changes you made to your models, and 5 of those changes are actually in the database already, but Django doesn't know about them.. so when you run makemigrations, a new migration is created with all 10 changes. This will normally fail because the database server can't for example add a column that already exists. Move these already-applied changes out of your new migration file, into the previous (already applied) migration file. Django will then assume that these were applied with the previous migration and will not try to apply them again. You can then migrate as normal and the new changes will be applied. If you don't want to touch your older migration file, a cleaner way to do this is to first run makemigrations --empty appname to create an empty migration file. Then run makemigrations which will create another migration with all the changes that Django thinks need to be done. Move the already done migrations from that file into the empty migration you created.. then --fake that one. This will put Django's understanding of what the database looks like will be in sync with reality and you can then migrate as normal, applying the changes in the last migration file. Get rid of any new migrations you just created using makemigrations. Now, comment out or put back anything in your models that has not been applied to the database, leaving your code matching what's actually in the database. Now you can do makemigrations and migrate appname --fake and you will get things back in sync. Then uncomment your new code and run 'makemigrations' then migrate as normal and the changes will be applied. If the changes are small (for example, adding a few fields), sometimes this is easiest. If the changes are large, it isn't.... 
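Condensed into commands, that last comment-out variant looks roughly like this (a sketch; myapp is a placeholder, and the not-yet-applied model changes are assumed to be temporarily commented out for the first two commands):

python manage.py makemigrations myapp
python manage.py migrate myapp --fake   # records what the database already has
# restore the commented-out model code, then:
python manage.py makemigrations myapp
python manage.py migrate myapp          # applies only the genuinely new changes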
You can go ahead and (carefully) make the database changes yourself, bringing the database up to date. Now just run migrate --fake and if you didn't mess up then everything will be ok. Again, this is easy for smaller changes, not as easy for complicated ones. You can run manage.py sqlmigrate > mychanges.sql. This generates mychanges.sql containing all the SQL Django WOULD have executed against the database. Now edit that file to remove any changes that have already been applied, leaving what needs to be done. Execute that SQL using pgadmin or psql (you're using postgresql I hope). Now the changes have all been made.. so you can run manage.py migrate --fake, this will bring Django into sync with reality and you should be all set. If your SQL skills are sufficient, this is probably the most straightforward solution. I should add two warnings: First, if you apply a later migration, eg 0003_foobar.py, and then things don't work out and you decide to try going back and apply 0002_bazbuz.py, then Django will TAKE STUFF OUT OF YOUR DATABASE. For example a column you might have added in 0003 will be dropped along with its data. Since you say you can't lose data, be very careful about going back. Second, do not rush into running --fake migrations. Make sure that the entire migration you are about to fake is actually in the database already. Else it gets very confusing. If you do regret faking migrations and don't want to roll back, you can erase django's knowledge of the faked migration by deleting that row from the django_migrations table. It is ok to do this.. if you understand what you are doing. If you know that the migration really was not applied, then it's ok."} +{"question_id": 7853628, "score": 77, "creation_date": 1319221356, "tags": ["python", "image-processing", "opencv"], "instruction": "How do I find an image contained within an image?\n\nI'm currently building what basically amounts to a cross between a search engine and a gallery for web comics that's focused on citing sources and giving authors credit. I'm trying to figure out a way to search an image to find characters within it. For example: Assuming I have the red character and the green character saved as Red Man and Green Man how do I determine if an image contains one or the other. This doesn't need to have 100% recognition or anything is this is more of an added feature I'd like to create, I'm just not sure where to start. I've done a lot of googling for image recognition but haven't found much helpful. For what it's worth, I'd prefer to do this using Python.", "output": "For anyone who stumbles across this in the future. This can be done with template matching. To summarize (my understanding), template matching looks for an exact match of one image within another image. Here's an example of how to do it within Python: import cv2 method = cv2.TM_SQDIFF_NORMED # Read the images from the file small_image = cv2.imread('small_image.png') large_image = cv2.imread('large_image.jpeg') result = cv2.matchTemplate(small_image, large_image, method) # We want the minimum squared difference mn,_,mnLoc,_ = cv2.minMaxLoc(result) # Draw the rectangle: # Extract the coordinates of our best match MPx,MPy = mnLoc # Step 2: Get the size of the template. This is the same size as the match. trows,tcols = small_image.shape[:2] # Step 3: Draw the rectangle on large_image cv2.rectangle(large_image, (MPx,MPy),(MPx+tcols,MPy+trows),(0,0,255),2) # Display the original image with the rectangle around the match. 
cv2.imshow('output',large_image) # The image is only displayed if we call this cv2.waitKey(0)"} +{"question_id": 66869413, "score": 77, "creation_date": 1617100802, "tags": ["python", "visual-studio-code", "jupyter", "virtual-environment"], "instruction": "Visual Studio Code does not detect Virtual Environments\n\nVisual Studio Code does not detect virtual environments. I run vscode in the folder where the venv folder is located, when I try to select the kernel in vscode I can see the main environment and one located elsewhere on the disk. Jupyter running in vscode also doesn't see this environment. I have installed ipykernel in this environment. I tried to reinstall vscode and python extension. I tried to set the path in settings.json inside .vscode: { \"python.pythonPath\": \".\\\\venv\\\\Scripts\\\\python.exe\" } Windows 10 Python 3.6.7 (64-bit) VSCode 1.54.3", "output": "In VSCode open your command palette \u2014 Ctrl+Shift+P by default Look for Python: Select Interpreter In Select Interpreter choose Enter interpreter path... and then Find... Navigate to your venv folder \u2014 eg, ~/pyenvs/myenv/ or \\Users\\Foo\\Bar\\PyEnvs\\MyEnv\\ In the virtual environment folder choose /bin/python or /bin/python3 The issue is that VSCode's Python extension by default uses the main python or python3 program while venv effectively creates a \"new\" python/python3 executable (that is kind of the point of venv) so the extension does not have access to anything (available modules, namespaces, etc) that you have installed through a venv since the venv specific installations are not available to the main Python interpreter (again, this is by design\u2014like how applications installed in a VM are not available to the host OS)"} +{"question_id": 47876079, "score": 77, "creation_date": 1513629554, "tags": ["python", "pep8", "flake8"], "instruction": "How to tell flake8 to ignore comments\n\nI'm using flake8 in emacs in order to clean up my python code. I find it annoying to have my comments flagged as errors (E501 line too long (x > 79 characters)). I'm wondering if anyone knows a way to kindly ask flake8 to ignore comments, both single and multi-line, but still let me know when my non-comment lines are too long? Thanks in advance!", "output": "I've figured out a possible solution to this, but there might be something better. If you write a comment that will raise an E501 error, i.e. it is too long, you can append that line with # noqa: E501, and flake8 will ignore it. For example: # This is a really really long comment that would usually be flagged by flake8 because it is longer than 79 characters would usually raise an E501, but # This is a really really long comment that would usually be flagged by flake8 because it is longer than 79 characters # noqa: E501 will not. documented here."} +{"question_id": 24571005, "score": 77, "creation_date": 1404466045, "tags": ["python", "pandas", "max", "dataframe"], "instruction": "return max value from pandas dataframe as a whole, not based on column or rows\n\nI am trying to get the max value from a panda dataframe as a whole. I am not interested in what row or column it came from. I am just interested in a single max value within the DataFrame. 
Here is my DataFrame: df = pd.DataFrame({'group1': ['a','a','a','b','b','b','c','c','d','d','d','d','d'], 'group2': ['c','c','d','d','d','e','f','f','e','d','d','d','e'], 'value1': [1.1,2,3,4,5,6,7,8,9,1,2,3,4], 'value2': [7.1,8,9,10,11,12,43,12,34,5,6,2,3]}) This is what it looks like: group1 group2 value1 value2 0 a c 1.1 7.1 1 a c 2.0 8.0 2 a d 3.0 9.0 3 b d 4.0 10.0 4 b d 5.0 11.0 5 b e 6.0 12.0 6 c f 7.0 43.0 7 c f 8.0 12.0 8 d e 9.0 34.0 9 d d 1.0 5.0 10 d d 2.0 6.0 11 d d 3.0 2.0 12 d e 4.0 3.0 Expected output: 43.0 I was under the assumption that df.max() would do this job but it returns a max value for each column but I am not interested in that. I need the max from an entire dataframe.", "output": "The max of all the values in the DataFrame can be obtained using df.to_numpy().max(), or for pandas < 0.24.0 we use df.values.max(): In [10]: df.to_numpy().max() Out[10]: 'f' The max is f rather than 43.0 since, in CPython2, In [11]: 'f' > 43.0 Out[11]: True In CPython2, Objects of different types ... are ordered by their type names. So any str compares as greater than any int since 'str' > 'int'. In Python3, comparison of strings and ints raises a TypeError. To find the max value in the numeric columns only, use df.select_dtypes(include=[np.number]).max()"} +{"question_id": 37693373, "score": 77, "creation_date": 1465360643, "tags": ["python", "sorting", "tuples", "key"], "instruction": "How to sort a list with two keys but one in reverse order?\n\nI was wondering what would be a Pythonic way of sorting a list of tuples by two keys whereby sorting with one (and only one) key would be in a reverse order and sorting with the the other would be case insensitive. More specifically, I have a list containing tuples like: myList = [(ele1A, ele2A),(ele1B, ele2B),(ele1C, ele2C)] I can use the following code to sort it with two keys: sortedList = sorted(myList, key = lambda y: (y[0].lower(), y[1])) To sort in reverse order I can use sortedList = sorted(myList, key = lambda y: (y[0].lower(), y[1]), reverse = True) But this would sort in a reverse order with two keys.", "output": "Two keys will be used when we need to sort a list with two constraints: one in ascending order and the other in descending, in the same list or any In your example, sortedList = sorted(myList, key = lambda y: (y[0].lower(), y[1])) you can sort entire list only in one order. You can try these and check what's happening: sortedList = sorted(myList, key = lambda y: (y[0].lower(), -y[1])) sortedList = sorted(myList, key = lambda y: (-y[0].lower(), y[1])) sortedList = sorted(myList, key = lambda y: (-y[0].lower(), -y[1]))"} +{"question_id": 8588126, "score": 77, "creation_date": 1324462130, "tags": ["python", "postgresql", "sqlalchemy", "psycopg2"], "instruction": "SQLAlchemy or psycopg2?\n\nI am writing a quick and dirty script which requires interaction with a database (PG). The script is a pragmatic, tactical solution to an existing problem. however, I envisage that the script will evolve over time into a more \"refined\" system. Given the fact that it is currently being put together very quickly (i.e. I don't have the time to pour over huge reams of documentation), I am tempted to go the quick and dirty route, using psycopg. The advantages for psycopg2 (as I currently understand it) is that: written in C, so faster than sqlAlchemy (written in Python)? 
No abstraction layer over the DBAPI since works with one db and one db only (implication -> fast) (For now), I don't need an ORM, so I can directly execute my SQL statements without having to learn a new ORM syntax (i.e. lightweight) Disadvantages: I KNOW that I will want an ORM further down the line psycopg2 is (\"dated\"?) - don't know how long it will remain around for Are my perceptions of SqlAlchemy (slow/interpreted, bloated, steep learning curve) true - IS there anyway I can use sqlAlchemy in the \"rough and ready\" way I want to use psycopg - namely: execute SQL statements directly without having to mess about with the ORM layer, etc. Any examples of doing this available?", "output": "SQLAlchemy is a ORM, psycopg2 is a database driver. These are completely different things: SQLAlchemy generates SQL statements and psycopg2 sends SQL statements to the database. SQLAlchemy depends on psycopg2 or other database drivers to communicate with the database! As a rather complex software layer SQLAlchemy does add some overhead but it also is a huge boost to development speed, at least once you learned the library. SQLAlchemy is an excellent library and will teach you the whole ORM concept, but if you don't want to generate SQL statements to begin with then you don't want SQLAlchemy."} +{"question_id": 12451531, "score": 77, "creation_date": 1347837744, "tags": ["python", "lambda", "try-except"], "instruction": "Python try except block inside lambda\n\nIs it possible to use a try except block inside of a lambda function? I need the lambda function to convert a certain variable into an integer, but not all of the values will be able to be converted into integers.", "output": "Nope. A Python lambda can only be a single expression. Use a named function. It is convenient to write a generic function for converting types: def tryconvert(value, default, *types): for t in types: try: return t(value) except (ValueError, TypeError): continue return default Then you can write your lambda: lambda v: tryconvert(v, 0, int) You could also write tryconvert() so it returns a function that takes the value to be converted; then you don't need the lambda: def tryconvert(default, *types): def convert(value): for t in types: try: return t(value) except (ValueError, TypeError): continue return default # set name of conversion function to something more useful namext = (\"_%s_\" % default) + \"_\".join(t.__name__ for t in types) if hasattr(convert, \"__qualname__\"): convert.__qualname__ += namext convert.__name__ += namext return convert Now tryconvert(0, int) returns a function convert_0_int that takes a value and converts it to an integer, and returns 0 if this can't be done. You can use this function right away (not saving a copy): mynumber = tryconert(0, int)(value) Or save it to call it later: intconvert = tryconvert(0, int) # later... mynumber = intconvert(value)"} +{"question_id": 5210778, "score": 77, "creation_date": 1299418346, "tags": ["python", "path"], "instruction": "Elegant way to make all dirs in a path\n\nHere are four paths: p1=r'\\foo\\bar\\foobar.txt' p2=r'\\foo\\bar\\foo\\foo\\foobar.txt' p3=r'\\foo\\bar\\foo\\foo2\\foobar.txt' p4=r'\\foo2\\bar\\foo\\foo\\foobar.txt' The directories may or may not exist on a drive. What would be the most elegant way to create the directories in each path? 
I was thinking about using os.path.split() in a loop, and checking for a dir with os.path.exists, but I don't know it there's a better approach.", "output": "You are looking for os.makedirs() which does exactly what you need. The documentation states: Recursive directory creation function. Like mkdir(), but makes all intermediate-level directories needed to contain the leaf directory. Raises an error exception if the leaf directory already exists or cannot be created. By default it fails if the leaf directory already exists; you'll want to test for existence before calling os.makedirs() or use os.makedirs(..., exist_ok=True) to ignore existence."} +{"question_id": 12597370, "score": 77, "creation_date": 1348647290, "tags": ["python", "regex"], "instruction": "Python replace string pattern with output of function\n\nI have a string in Python, say The quick @red fox jumps over the @lame brown dog. I'm trying to replace each of the words that begin with @ with the output of a function that takes the word as an argument. def my_replace(match): return match + str(match.index('e')) #Psuedo-code string = \"The quick @red fox jumps over the @lame brown dog.\" string.replace('@%match', my_replace(match)) # Result \"The quick @red2 fox jumps over the @lame4 brown dog.\" Is there a clever way to do this?", "output": "You can pass a function to re.sub. The function will receive a match object as the argument, use .group() to extract the match as a string. >>> def my_replace(match): ... match = match.group() ... return match + str(match.index('e')) ... >>> string = \"The quick @red fox jumps over the @lame brown dog.\" >>> re.sub(r'@\\w+', my_replace, string) 'The quick @red2 fox jumps over the @lame4 brown dog.'"} +{"question_id": 67988828, "score": 77, "creation_date": 1623769428, "tags": ["python", "c++", "recursion", "stack-overflow"], "instruction": "Why is Python recursion so expensive and what can we do about it?\n\nSuppose we want to compute some Fibonacci numbers, modulo 997. For n=500 in C++ we can run #include #include std::array fib(unsigned n) { if (!n) return {1, 1}; auto x = fib(n - 1); return {(x[0] + x[1]) % 997, (x[0] + 2 * x[1]) % 997}; } int main() { std::cout << fib(500)[0]; } and in Python def fib(n): if n==1: return (1, 2) x=fib(n-1) return ((x[0]+x[1]) % 997, (x[0]+2*x[1]) % 997) if __name__=='__main__': print(fib(500)[0]) Both will find the answer 996 without issues. We are taking modulos to keep the output size reasonable and using pairs to avoid exponential branching. For n=5000, the C++ code outputs 783, but Python will complain RecursionError: maximum recursion depth exceeded in comparison If we add a couple of lines import sys def fib(n): if n==1: return (1, 2) x=fib(n-1) return ((x[0]+x[1]) % 997, (x[0]+2*x[1]) % 997) if __name__=='__main__': sys.setrecursionlimit(5000) print(fib(5000)[0]) then Python too will give the right answer. For n=50000 C++ finds the answer 151 within milliseconds while Python crashes (at least on my machine). Why are recursive calls so much cheaper in C++? Can we somehow modify the Python compiler to make it more receptive to recursion? Of course, one solution is to replace recursion with iteration. For Fibonacci numbers, this is easy to do. However, this will swap the initial and the terminal conditions, and the latter is tricky for many problems (e.g. alpha\u2013beta pruning). 
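For the Fibonacci recurrence above, the iterative rewrite is indeed short; a sketch equivalent to the recursive Python pair version:

def fib_iter(n):
    a, b = 1, 2                              # matches the base case fib(1) == (1, 2)
    for _ in range(n - 1):
        a, b = (a + b) % 997, (a + 2 * b) % 997
    return a

print(fib_iter(500))    # 996, same as the recursive version
print(fib_iter(50000))  # runs without hitting any recursion-depth limit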
So generally, this will require a lot of hard work on the part of the programmer.", "output": "A solution is a trampoline: the recursive function, instead of calling another function, returns a function that makes that call with the appropriate arguments. There's a loop one level higher that calls all those functions in a loop until we have the final result. I'm probably not explaining it very well; you can find resources online that do a better job. The point is that this converts recursion to iteration. I don't think this is faster, maybe it's even slower, but the recursion depth stays low. An implementation could look like below. I split the pair x into a and b for clarity. I then converted the recursive function to a version that keeps track of a and b as arguments, making it tail recursive. def fib_acc(n, a, b): if n == 1: return (a, b) return lambda: fib_acc(n - 1, (a+b) % 997, (a+2*b) % 997) def fib(n): x = fib_acc(n, 1, 2) while callable(x): x = x() return x if __name__=='__main__': print(fib(50000)[0])"} +{"question_id": 18705373, "score": 77, "creation_date": 1378754374, "tags": ["python", "python-3.x", "ruby", "try-catch", "language-comparisons"], "instruction": "Ruby equivalent for Python's \"try\"?\n\nI'm trying to convert some Python code into Ruby. Is there an equivalent in Ruby to the try statement in Python?", "output": "Use this as an example: begin # \"try\" block puts 'I am before the raise.' raise 'An error has occurred.' # optionally: `raise Exception, \"message\"` puts 'I am after the raise.' # won't be executed rescue # optionally: `rescue StandardError => ex` puts 'I am rescued.' ensure # will always get executed puts 'Always gets executed.' end The equivalent code in Python would be: try: # try block print('I am before the raise.') raise Exception('An error has occurred.') # throw an exception print('I am after the raise.') # won't be executed except: # optionally: `except Exception as ex:` print('I am rescued.') finally: # will always get executed print('Always gets executed.')"} +{"question_id": 3168096, "score": 77, "creation_date": 1278094140, "tags": ["python", "timezone", "utc"], "instruction": "Getting computer's UTC offset in Python\n\nIn Python, how do you find what UTC time offset the computer is set to?", "output": "gmtime() will return the UTC time and localtime() will return the local time so subtracting the two should give you the utc offset. From https://pubs.opengroup.org/onlinepubs/009695399/functions/gmtime.html The gmtime() function shall convert the time in seconds since the Epoch pointed to by timer into a broken-down time, expressed as Coordinated Universal Time (UTC). So, despite the name gmttime, the function returns UTC."} +{"question_id": 21285885, "score": 77, "creation_date": 1390401550, "tags": ["python", "matplotlib", "legend", "linechart", "line-plot"], "instruction": "Remove line through legend marker with .plot\n\nI have a matplotlib plot generated with the following code: import matplotlib.pyplot as pyplot Fig, ax = pyplot.subplots() for i, (mark, color) in enumerate(zip( ['s', 'o', 'D', 'v'], ['r', 'g', 'b', 'purple'])): ax.plot(i+1, i+1, color=color, marker=mark, markerfacecolor='None', markeredgecolor=color, label=i) ax.set_xlim(0,5) ax.set_ylim(0,5) ax.legend() with this as the generated figure: I don't like the lines through the markers in the legend. How can I get rid of them?", "output": "You can specify linestyle='None' or linestyle='' as a keyword argument in the plot command. Also, ls= can replace linestyle=. 
import matplotlib.pyplot as plt fig, ax = plt.subplots() for i, (mark, color) in enumerate(zip( ['s', 'o', 'D', 'v'], ['r', 'g', 'b', 'purple'])): ax.plot(i+1, i+1, color=color, marker=mark, markerfacecolor='None', markeredgecolor=color, linestyle='None', label=i) ax.set_xlim(0, 5) ax.set_ylim(0, 5) ax.legend(numpoints=1) plt.show() Since you're only plotting single points, you can't see the line attribute except for in the legend."} +{"question_id": 32249960, "score": 77, "creation_date": 1440680047, "tags": ["python", "pandas", "indexing", "dataframe"], "instruction": "Start row index from 1 instead of zero without creating additional column in pandas\n\nI know that I can reset the indices like so df.reset_index(inplace=True) but this will start the index from 0. I want to start it from 1. How do I do that without creating any extra columns and by keeping the index/reset_index functionality and options? I do not want to create a new dataframe, so inplace=True should still apply.", "output": "Just assign directly a new index array: df.index = np.arange(1, len(df)+1) Or if the index is already 0 based, just: df.index += 1 Example: In [151]: df = pd.DataFrame({'a': np.random.randn(5)}) df Out[151]: a 0 0.443638 1 0.037882 2 -0.210275 3 -0.344092 4 0.997045 In [152]: df.index = np.arange(1, len(df)+1) df Out[152]: a 1 0.443638 2 0.037882 3 -0.210275 4 -0.344092 5 0.997045 TIMINGS For some reason I can't take timings on reset_index but the following are timings on a 100,000 row df: In [160]: %timeit df.index = df.index + 1 The slowest run took 6.45 times longer than the fastest. This could mean that an intermediate result is being cached 10000 loops, best of 3: 107 \u00b5s per loop In [161]: %timeit df.index = np.arange(1, len(df)+1) 10000 loops, best of 3: 154 \u00b5s per loop So without the timing for reset_index I can't say definitively, however it looks like just adding 1 to each index value will be faster if the index is already 0 based"} +{"question_id": 24163579, "score": 77, "creation_date": 1402491507, "tags": ["python", "flask", "jinja2", "string-length"], "instruction": "Length of string in Jinja/Flask\n\nJinja unfortunately does not support executing arbitrary Python code, such as {% if len(some_var)>1 %} ... {% endif %} My current workaround is to use the deprecated, ugly, double-underscore method: {% if some_var.__len__()>1 %} ... {% endif %} Although this works, I'm afraid that some future implementation of strings might break this code. Is there a better way to do this?", "output": "You can use the length filter: {% if some_var|length > 1 %}"} +{"question_id": 24438976, "score": 77, "creation_date": 1403812050, "tags": ["python", "debugging", "line-numbers", "python-inspect"], "instruction": "Debugging: Get filename and line number from which a function is called?\n\nI'm currently building quite a complex system in Python, and when I'm debugging I often put simple print statements in several scripts. To keep an overview I often also want to print out the file name and line number where the print statement is located. 
I can of course do that manually, or with something like this: from inspect import currentframe, getframeinfo print getframeinfo(currentframe()).filename + ':' + str(getframeinfo(currentframe()).lineno) + ' - ', 'what I actually want to print out here' Which prints something like: filenameX.py:273 - what I actually want to print out here To make it more simple, I want to be able to do something like: print debuginfo(), 'what I actually want to print out here' So I put it into a function somewhere and tried doing: from debugutil import debuginfo print debuginfo(), 'what I actually want to print out here' print debuginfo(), 'and something else here' Unfortunately, I get: debugutil.py:3 - what I actually want to print out here debugutil.py:3 - and something else here It prints out the file name and line number on which I defined the function, instead of the line on which I call debuginfo(). This is obvious, because the code is located in the debugutil.py file. So my question is actually: How can I get the filename and line number from which this debuginfo() function is called?", "output": "The function inspect.stack() returns a list of frame records, starting with the caller and moving out, which you can use to get the information you want: from inspect import getframeinfo, stack def debuginfo(message): caller = getframeinfo(stack()[1][0]) print(\"%s:%d - %s\" % (caller.filename, caller.lineno, message)) # python3 syntax print def grr(arg): debuginfo(arg) # <-- stack()[1][0] for this line grr(\"aargh\") # <-- stack()[2][0] for this line Output: example.py:8 - aargh"} +{"question_id": 28009370, "score": 77, "creation_date": 1421582072, "tags": ["python", "pandas", "dataframe", "datetime", "dayofweek"], "instruction": "Get weekday/day-of-week for Datetime column of DataFrame\n\nI have a DataFrame df like the following (excerpt, 'Timestamp' are the index): Timestamp Value 2012-06-01 00:00:00 100 2012-06-01 00:15:00 150 2012-06-01 00:30:00 120 2012-06-01 01:00:00 220 2012-06-01 01:15:00 80 ...and so on. I need a new column df['weekday'] with the respective weekday/day-of-week of the timestamps. 
How can I get this?", "output": "Use the new dt.dayofweek property: In [2]: df['weekday'] = df['Timestamp'].dt.dayofweek df Out[2]: Timestamp Value weekday 0 2012-06-01 00:00:00 100 4 1 2012-06-01 00:15:00 150 4 2 2012-06-01 00:30:00 120 4 3 2012-06-01 01:00:00 220 4 4 2012-06-01 01:15:00 80 4 In the situation where the Timestamp is your index you need to reset the index and then call the dt.dayofweek property: In [14]: df = df.reset_index() df['weekday'] = df['Timestamp'].dt.dayofweek df Out[14]: Timestamp Value weekday 0 2012-06-01 00:00:00 100 4 1 2012-06-01 00:15:00 150 4 2 2012-06-01 00:30:00 120 4 3 2012-06-01 01:00:00 220 4 4 2012-06-01 01:15:00 80 4 Strangely if you try to create a series from the index in order to not reset the index you get NaN values as does using the result of reset_index to call the dt.dayofweek property without assigning the result of reset_index back to the original df: In [16]: df['weekday'] = pd.Series(df.index).dt.dayofweek df Out[16]: Value weekday Timestamp 2012-06-01 00:00:00 100 NaN 2012-06-01 00:15:00 150 NaN 2012-06-01 00:30:00 120 NaN 2012-06-01 01:00:00 220 NaN 2012-06-01 01:15:00 80 NaN In [17]: df['weekday'] = df.reset_index()['Timestamp'].dt.dayofweek df Out[17]: Value weekday Timestamp 2012-06-01 00:00:00 100 NaN 2012-06-01 00:15:00 150 NaN 2012-06-01 00:30:00 120 NaN 2012-06-01 01:00:00 220 NaN 2012-06-01 01:15:00 80 NaN EDIT As pointed out to me by user @joris you can just access the weekday attribute of the index so the following will work and is more compact: df['Weekday'] = df.index.weekday"} +{"question_id": 8609597, "score": 77, "creation_date": 1324587201, "tags": ["python", "regex"], "instruction": "Python regular expressions OR\n\nSuppose I want a regular expression that matches both \"Sent from my iPhone\" and \"Sent from my iPod\". How do I write such an expression? I tried things like: re.compile(\"Sent from my [iPhone]|[iPod]\") but doesn't seem to work.", "output": "re.compile(\"Sent from my (iPhone|iPod)\") See in action here."} +{"question_id": 69950010, "score": 77, "creation_date": 1636758794, "tags": ["python", "list", "caching"], "instruction": "Why is Python list slower when sorted?\n\nIn the following code, I create two lists with the same values: one list unsorted (s_not), the other sorted (s_yes). The values are created by randint(). I run some loop for each list and time it. import random import time for x in range(1,9): r = 10**x # do different val for the bound in randint() m = int(r/2) print(\"For rand\", r) # s_not is non sorted list s_not = [random.randint(1,r) for i in range(10**7)] # s_yes is sorted s_yes = sorted(s_not) # do some loop over the sorted list start = time.time() for i in s_yes: if i > m: _ = 1 else: _ = 1 end = time.time() print(\"yes\", end-start) # do the same to the unsorted list start = time.time() for i in s_not: if i > m: _ = 1 else: _ = 1 end = time.time() print(\"not\", end-start) print() With output: For rand 10 yes 1.0437555313110352 not 1.1074268817901611 For rand 100 yes 1.0802974700927734 not 1.1524150371551514 For rand 1000 yes 2.5082249641418457 not 1.129960298538208 For rand 10000 yes 3.145440101623535 not 1.1366300582885742 For rand 100000 yes 3.313387393951416 not 1.1393756866455078 For rand 1000000 yes 3.3180911540985107 not 1.1336982250213623 For rand 10000000 yes 3.3231537342071533 not 1.13503098487854 For rand 100000000 yes 3.311596393585205 not 1.1345293521881104 So, when increasing the bound in the randint(), the loop over the sorted list gets slower. 
Why?", "output": "Cache misses. When N int objects are allocated back-to-back, the memory reserved to hold them tends to be in a contiguous chunk. So crawling over the list in allocation order tends to access the memory holding the ints' values in sequential, contiguous, increasing order too. Shuffle it, and the access pattern when crawling over the list is randomized too. Cache misses abound, provided there are enough different int objects that they don't all fit in cache. At r==1, and r==2, CPython happens to treat such small ints as singletons, so, e.g., despite that you have 10 million elements in the list, at r==2 it contains only (at most) 100 distinct int objects. All the data for those fit in cache simultaneously. Beyond that, though, you're likely to get more, and more, and more distinct int objects. Hardware caches become increasingly useless then when the access pattern is random. Illustrating: >>> from random import randint, seed >>> seed(987987987) >>> for x in range(1, 9): ... r = 10 ** x ... js = [randint(1, r) for _ in range(10_000_000)] ... unique = set(map(id, js)) ... print(f\"{r:12,} {len(unique):12,}\") ... 10 10 100 100 1,000 7,440,909 10,000 9,744,400 100,000 9,974,838 1,000,000 9,997,739 10,000,000 9,999,908 100,000,000 9,999,998"} +{"question_id": 67599119, "score": 77, "creation_date": 1621411068, "tags": ["python", "asynchronous", "async-await", "python-asyncio", "fastapi"], "instruction": "FastAPI asynchronous background tasks blocks other requests?\n\nI want to run a simple background task in FastAPI, which involves some computation before dumping it into the database. However, the computation would block it from receiving any more requests. from fastapi import BackgroundTasks, FastAPI app = FastAPI() db = Database() async def task(data): otherdata = await db.fetch(\"some sql\") newdata = somelongcomputation(data,otherdata) # this blocks other requests await db.execute(\"some sql\",newdata) @app.post(\"/profile\") async def profile(data: Data, background_tasks: BackgroundTasks): background_tasks.add_task(task, data) return {} What is the best way to solve this issue?", "output": "Your task is defined as async, which means fastapi (or rather starlette) will run it in the asyncio event loop. And because somelongcomputation is synchronous (i.e. not waiting on some IO, but doing computation) it will block the event loop as long as it is running. I see a few ways of solving this: Use more workers (e.g. uvicorn main:app --workers 4). This will allow up to 4 somelongcomputation in parallel. Rewrite your task to not be async (i.e. define it as def task(data): ... etc). Then starlette will run it in a separate thread. Use fastapi.concurrency.run_in_threadpool, which will also run it in a separate thread. Like so: from fastapi.concurrency import run_in_threadpool async def task(data): otherdata = await db.fetch(\"some sql\") newdata = await run_in_threadpool(lambda: somelongcomputation(data, otherdata)) await db.execute(\"some sql\", newdata) Or use asyncios's run_in_executor directly (which run_in_threadpool uses under the hood): import asyncio async def task(data): otherdata = await db.fetch(\"some sql\") loop = asyncio.get_running_loop() newdata = await loop.run_in_executor(None, lambda: somelongcomputation(data, otherdata)) await db.execute(\"some sql\", newdata) You could even pass in a concurrent.futures.ProcessPoolExecutor as the first argument to run_in_executor to run it in a separate process. Spawn a separate thread / process yourself. E.g. 
using concurrent.futures. Use something more heavy-handed like celery. (Also mentioned in the fastapi docs here)."} +{"question_id": 71644405, "score": 77, "creation_date": 1648455910, "tags": ["python", "performance", "comparison"], "instruction": "Why is it faster to compare strings that match than strings that do not?\n\nHere are two measurements: timeit.timeit('\"toto\"==\"1234\"', number=100000000) 1.8320042459999968 timeit.timeit('\"toto\"==\"toto\"', number=100000000) 1.4517491540000265 As you can see, comparing two strings that match is faster than comparing two strings with the same size that do not match. This is quite disturbing: During a string comparison, I believed that Python was testing strings character by character, so \"toto\"==\"toto\" should be longer to test than \"toto\"==\"1234\" as it requires four tests against one for the non-matching comparison. Maybe the comparison is hash-based, but in this case, timings should be the same for both comparisons. Why?", "output": "Combining my comment and the comment by @khelwood: TL;DR: When analysing the bytecode for the two comparisons, it reveals the 'time' and 'time' strings are assigned to the same object. Therefore, an up-front identity check (at C-level) is the reason for the increased comparison speed. The reason for the same object assignment is that, as an implementation detail, CPython interns strings which contain only 'name characters' (i.e. alpha and underscore characters). This enables the object's identity check. Bytecode: import dis In [24]: dis.dis(\"'time'=='time'\") 1 0 LOAD_CONST 0 ('time') # <-- same object (0) 2 LOAD_CONST 0 ('time') # <-- same object (0) 4 COMPARE_OP 2 (==) 6 RETURN_VALUE In [25]: dis.dis(\"'time'=='1234'\") 1 0 LOAD_CONST 0 ('time') # <-- different object (0) 2 LOAD_CONST 1 ('1234') # <-- different object (1) 4 COMPARE_OP 2 (==) 6 RETURN_VALUE Assignment Timing: The 'speed-up' can also be seen in using assignment for the time tests. The assignment (and compare) of two variables to the same string, is faster than the assignment (and compare) of two variables to different strings. Further supporting the hypothesis the underlying logic is performing an object comparison. This is confirmed in the next section. In [26]: timeit.timeit(\"x='time'; y='time'; x==y\", number=1000000) Out[26]: 0.0745926329982467 In [27]: timeit.timeit(\"x='time'; y='1234'; x==y\", number=1000000) Out[27]: 0.10328884399496019 Python source code: As helpfully provided by @mkrieger1 and @Masklinn in their comments, the source code for unicodeobject.c performs a pointer comparison first and if True, returns immediately. int _PyUnicode_Equal(PyObject *str1, PyObject *str2) { assert(PyUnicode_CheckExact(str1)); assert(PyUnicode_CheckExact(str2)); if (str1 == str2) { // <-- Here return 1; } if (PyUnicode_READY(str1) || PyUnicode_READY(str2)) { return -1; } return unicode_compare_eq(str1, str2); } Appendix: Reference answer nicely illustrating how to read the disassembled bytecode output. Courtesy of @Delgan Reference answer which nicely describes CPython's string interning. Coutresy of @ShadowRanger"} +{"question_id": 54241226, "score": 77, "creation_date": 1547745838, "tags": ["python", "numpy", "scikit-image"], "instruction": "ImportError: cannot import name '_validate_lengths'\n\nI have started learning Tensorflow. I am using Pycharm and my environment is Ubuntu 16.04. I am following the tutorial. I cross check the nump. It is up-to-date. I don't know the reason of this error. 
from numpy.lib.arraypad import _validate_lengths ImportError: cannot import name '_validate_lengths' Need hint to resolve this error. Thank you. import tensorflow as tf from skimage import transform from skimage import data import matplotlib.pyplot as plt import os import numpy as np from skimage.color import rgb2gray import random #listdir: This method returns a list containing the names of the entries in the directory given by path. # Return True if path is an existing directory def load_data(data_dir): # Get all subdirectories of data_dir. Each represents a label. directories = [d for d in os.listdir(data_dir) if os.path.isdir(os.path.join(data_dir, d))] # Loop through the label directories and collect the data in # two lists, labels and images. labels = [] images = [] for d in directories: label_dir = os.path.join(data_dir, d) file_names = [os.path.join(label_dir, f) for f in os.listdir(label_dir) if f.endswith(\".ppm\")] for f in file_names: images.append(data.imread(f)) labels.append(int(d)) return images, labels ROOT_PATH = \"/home/tahir/PhD Study/Traffic Signs Using Tensorflow/\" train_data_dir = os.path.join(ROOT_PATH, \"TrafficSigns/Training\") test_data_dir = os.path.join(ROOT_PATH, \"TrafficSigns/Testing\") images, labels = load_data(train_data_dir) # Print the `images` dimensions print(images.ndim) # Print the number of `images`'s elements print(images.size) # Print the first instance of `images` images[0]", "output": "I updated my skimage package. pip install --upgrade scikit-image And the problem was solved. It's a problem of version of Skimage, which is solved in 0.14.2. PLus, this version is quite stable. Installing collected packages: dask, scikit-image Found existing installation: dask 0.19.1 Uninstalling dask-0.19.1: Successfully uninstalled dask-0.19.1 Found existing installation: scikit-image 0.13.0 Uninstalling scikit-image-0.13.0: Successfully uninstalled scikit-image-0.13.0 Successfully installed dask-1.0.0 scikit-image-0.14.2"} +{"question_id": 826082, "score": 77, "creation_date": 1241545194, "tags": ["python"], "instruction": "Python: finding uid/gid for a given username/groupname (for os.chown)\n\nWhat's a good way to find the uid/gid for a given username or groupname using Python? I need to set file ownership with os.chown and need the integer ids instead of the alphabetic. [Quick note]: getpwnam works great but is not available on windows, so here's some code that creates stubs to allow you to run the same code on windows and unix. try: from pwd import getpwnam except: getpwnam = lambda x: (0,0,0) os.chown = lambda x, y, z: True os.chmod = lambda x, y: True os.fchown = os.chown os.fchmod = os.chmod", "output": "Use the pwd and grp modules: from pwd import getpwnam print getpwnam('someuser')[2] # or print getpwnam('someuser').pw_uid print grp.getgrnam('somegroup')[2]"} +{"question_id": 10282674, "score": 77, "creation_date": 1335191843, "tags": ["python", "math"], "instruction": "Difference between the built-in pow() and math.pow() for floats, in Python?\n\nIs there a difference in the results returned by Python's built-in pow(x, y) (no third argument) and the values returned by math.pow(), in the case of two float arguments. I am asking this question because the documentation for math.pow() implies that pow(x, y) (i.e. x**y) is essentially the same as math.pow(x, y): math.pow(x, y) Return x raised to the power y. Exceptional cases follow Annex \u2018F\u2019 of the C99 standard as far as possible. 
In particular, pow(1.0, x) and pow(x, 0.0) always return 1.0, even when x is a zero or a NaN. If both x and y are finite, x is negative, and y is not an integer then pow(x, y) is undefined, and raises ValueError. Changed in version 2.6: The outcome of 1**nan and nan**0 was undefined. Note the last line: the documentation implies that the behavior of math.pow() is that of the exponentiation operator ** (and therefore of pow(x, y)). Is this officially guaranteed? Background: My goal is to provide an implementation of both the built-in pow() and of math.pow() for numbers with uncertainty that behaves in the same way as with regular Python floats (same numerical results, same exceptions, same results for corner cases, etc.). I have already implemented something that works quite well, but there are some corner cases that need to be handled.", "output": "Quick Check From the signatures, we can tell that they are different: pow(x, y[, z]) math.pow(x, y) Also, trying it in the shell will give you a quick idea: >>> pow is math.pow False Testing the differences Another way to understand the differences in behaviour between the two functions is to test for them: import math import traceback import sys inf = float(\"inf\") NaN = float(\"nan\") vals = [inf, NaN, 0.0, 1.0, 2.2, -1.0, -0.0, -2.2, -inf, 1, 0, 2] tests = set([]) for vala in vals: for valb in vals: tests.add( (vala, valb) ) tests.add( (valb, vala) ) for a,b in tests: print(\"math.pow(%f,%f)\"%(a,b) ) try: print(\" %f \"%math.pow(a,b)) except: traceback.print_exc() print(\"__builtins__.pow(%f,%f)\"%(a,b) ) try: print(\" %f \"%__builtins__.pow(a,b)) except: traceback.print_exc() We can then notice some subtle differences. For example: math.pow(0.000000,-2.200000) ValueError: math domain error __builtins__.pow(0.000000,-2.200000) ZeroDivisionError: 0.0 cannot be raised to a negative power There are other differences, and the test list above is not complete (no long numbers, no complex, etc...), but this will give us a pragmatic list of how the two functions behave differently. I would also recommend extending the above test to check for the type that each function returns. You could probably write something similar that creates a report of the differences between the two functions. math.pow() math.pow() handles its arguments very differently from the builtin ** or pow(). This comes at the cost of flexibility. Having a look at the source, we can see that the arguments to math.pow() are cast directly to doubles: static PyObject * math_pow(PyObject *self, PyObject *args) { PyObject *ox, *oy; double r, x, y; int odd_y; if (! PyArg_UnpackTuple(args, \"pow\", 2, 2, &ox, &oy)) return NULL; x = PyFloat_AsDouble(ox); y = PyFloat_AsDouble(oy); /*...*/ The checks are then carried out against the doubles for validity, and then the result is passed to the underlying C math library. builtin pow() The built-in pow() (same as the ** operator) on the other hand behaves very differently, it actually uses the Objects's own implementation of the ** operator, which can be overridden by the end user if need be by replacing a number's __pow__(), __rpow__() or __ipow__(), method. For built-in types, it is instructive to study the difference between the power function implemented for two numeric types, for example, floats, long and complex. Overriding the default behaviour Emulating numeric types is described here. 
essentially, if you are creating a new type for numbers with uncertainty, what you will have to do is provide the __pow__(), __rpow__() and possibly __ipow__() methods for your type. This will allow your numbers to be used with the operator: class Uncertain: def __init__(self, x, delta=0): self.delta = delta self.x = x def __pow__(self, other): return Uncertain( self.x**other.x, Uncertain._propagate_power(self, other) ) @staticmethod def _propagate_power(A, B): return math.sqrt( ((B.x*(A.x**(B.x-1)))**2)*A.delta*A.delta + (((A.x**B.x)*math.log(B.x))**2)*B.delta*B.delta ) In order to override math.pow() you will have to monkey patch it to support your new type: def new_pow(a,b): _a = Uncertain(a) _b = Uncertain(b) return _a ** _b math.pow = new_pow Note that for this to work you'll have to wrangle the Uncertain class to cope with an Uncertain instance as an input to __init__()"} +{"question_id": 58451650, "score": 77, "creation_date": 1571404383, "tags": ["python", "python-3.x", "pip", "python-3.7"], "instruction": "pip no longer working after update error 'module' object is not callable\n\nAfter a pip update, pip has stopped working completely. Z:\\>pip install matplotlib Traceback (most recent call last): File \"c:\\program files\\python37\\lib\\runpy.py\", line 193, in _run_module_as_main \"__main__\", mod_spec) File \"c:\\program files\\python37\\lib\\runpy.py\", line 85, in _run_code exec(code, run_globals) File \"C:\\Program Files\\Python37\\Scripts\\pip.exe\\__main__.py\", line 9, in TypeError: 'module' object is not callable Any help please? Edit: I am working on windows 10", "output": "All credit for this goes to user han_solo, who left the answer as a comment, instead of as an answer: Instead of pip install stdlib_list --user use python -m pip install stdlib_list --user Replace stdlib_list with matplotlib or whatever the name is of the package you want."} +{"question_id": 4496680, "score": 77, "creation_date": 1292911766, "tags": ["python", "multithreading", "performance"], "instruction": "Python threads all executing on a single core\n\nI have a Python program that spawns many threads, runs 4 at a time, and each performs an expensive operation. Pseudocode: for object in list: t = Thread(target=process, args=(object)) # if fewer than 4 threads are currently running, t.start(). Otherwise, add t to queue But when the program is run, Activity Monitor in OS X shows that 1 of the 4 logical cores is at 100% and the others are at nearly 0. Obviously I can't force the OS to do anything but I've never had to pay attention to performance in multi-threaded code like this before so I was wondering if I'm just missing or misunderstanding something. Thanks.", "output": "Note that in many cases (and virtually all cases where your \"expensive operation\" is a calculation implemented in Python), multiple threads will not actually run concurrently due to Python's Global Interpreter Lock (GIL). The GIL is an interpreter-level lock. This lock prevents execution of multiple threads at once in the Python interpreter. Each thread that wants to run must wait for the GIL to be released by the other thread, which means your multi-threaded Python application is essentially single threaded, right? Yes. Not exactly. Sort of. CPython uses what\u2019s called \u201coperating system\u201d threads under the covers, which is to say each time a request to make a new thread is made, the interpreter actually calls into the operating system\u2019s libraries and kernel to generate a new thread. 
This is the same as Java, for example. So in memory you really do have multiple threads and normally the operating system controls which thread is scheduled to run. On a multiple processor machine, this means you could have many threads spread across multiple processors, all happily chugging away doing work. However, while CPython does use operating system threads (in theory allowing multiple threads to execute within the interpreter simultaneously), the interpreter also forces the GIL to be acquired by a thread before it can access the interpreter and stack and can modify Python objects in memory all willy-nilly. The latter point is why the GIL exists: The GIL prevents simultaneous access to Python objects by multiple threads. But this does not save you (as illustrated by the Bank example) from being a lock-sensitive creature; you don\u2019t get a free ride. The GIL is there to protect the interpreters memory, not your sanity. See the Global Interpreter Lock section of Jesse Noller's post for more details. To get around this problem, check out Python's multiprocessing module. multiple processes (with judicious use of IPC) are[...] a much better approach to writing apps for multi-CPU boxes than threads. -- Guido van Rossum (creator of Python) Edit based on a comment from @spinkus: If Python can't run multiple threads simultaneously, then why have threading at all? Threads can still be very useful in Python when doing simultaneous operations that do not need to modify the interpreter's state. This includes many (most?) long-running function calls that are not in-Python calculations, such as I/O (file access or network requests)) and [calculations on Numpy arrays][6]. These operations release the GIL while waiting for a result, allowing the program to continue executing. Then, once the result is received, the thread must re-acquire the GIL in order to use that result in \"Python-land\""} +{"question_id": 22804252, "score": 77, "creation_date": 1396423416, "tags": ["python", "django", "django-queryset", "django-orm", "django-managers"], "instruction": "Django ORM - objects.filter() vs. objects.all().filter() - which one is preferred?\n\nVery often I see constructs like MyModel.objects.all().filter(...) which will return a QuerySet of the default Mananger. At first all() seems to be quite redundant, because MyMode.objects.filter(...) delivers the same result. However, this seems to be safe for the default Manager only, because of the following two statements in the Django documentation: Excerpt from the Chapter \"Adding extra manager methods\" A custom Manager method can return anything you want. It doesn\u2019t have to return a QuerySet. Definition of the all() manager method: all() Returns a copy of the current QuerySet (or QuerySet subclass). This can be useful in situations where you might want to pass in either a model manager or a QuerySet and do further filtering on the result. After calling all() on either object, you\u2019ll definitely have a QuerySet to work with. This seems a bit like a contradiction to me. On one hand Django offers the freedom to let a manager method return whatever object type is preferred and on the other hand it requires a QuerySet for the all() method. I'm aware that each manager has a get_queryset method which is called by all(). But who stops me from overriding all() in my custom manager? Although I agree it would be bad design to do so. So as far as I can see, the all() method does not guarantee to return a QuerySet. What exactly does MyModel.objects return? 
Does this statement call all()? or `get_queryset()? Do you prefer MyModel.objects.filter(...) or MyModel.objects.all().filter(...). And if so, why? Have you ever encountered wonky managers that would mess with those methods in a undesirable way?", "output": "The method all() on a manager just delegates to get_queryset(), as you can see in the Django source code: def all(self): return self.get_queryset() So it's just a way to get the QuerySet from the Manager. This can be handy to ensure that you're dealing with a QuerySet and not a Manager, because MyModel.objects returns a Manager. For example, if you want to iterate over all the items, you can't do this: for item in MyModel.objects: # this won't work Because you can't iterate over a Manager. However, all() returns the QuerySet, you can iterate over a QuerySet: for item in MyModel.objects.all(): # do someting with item Generally, you should never overwrite all(). You can overwrite get_queryset() but this method must return a QuerySet. If you would use a filter method like filter() or exclude(), you would already have the QuerySet, because these methods are proxied to the QuerySet. So you don't have to do something like all().filter()."} +{"question_id": 22586895, "score": 77, "creation_date": 1395546969, "tags": ["python", "enums", "python-3.4"], "instruction": "Python Enum, when and where to use?\n\nPython 3.4.0 introduced enum, I've read the doc but still don't know the usage of it. From my perspective, enum.Enum is an extended namedtuple type, which may not be true. So these are what I want to know about Enum: When and where to use Enum? Why do we need Enum? What are the advantages? What exactly is an Enum?", "output": "1. When and where to use enums? When you have a variable that takes one of a limited set of possible values. For example, the days of the week: class Weekday(Enum): MONDAY = 1 TUESDAY = 2 WEDNESDAY = 3 THURSDAY = 4 FRIDAY = 5 SATURDAY = 6 SUNDAY = 7 2. Why do we need enum? What are the advantages? Enums are advantageous because they give a name to a constant, which makes code more readable; and because the individual members cannot be rebound, making Python Enums semi-constant (because the Enum itself could still be rebound). Besides more readable code, debugging is also easier as you see a name along with the value, not just the value Desired behavior can be added to Enums For example, as anyone who has worked with the datetime module knows, datetime and date have two different representations for the days of the week: 0-6 or 1-7. Rather than keep track of that ourselves we can add a method to the Weekday enum to extract the day from the datetime or date instance and return the matching enum member: @classmethod def from_date(cls, date): return cls(date.isoweekday()) 3. What exactly is Enum? Enum is a type, whose members are named constants, that all belong to (or should) a logical group of values. So far I have created Enums for: - the days of the week - the months of the year - US Federal Holidays in a year FederalHoliday is my most complex; it uses this recipe, and has methods to return the actual date the holiday takes place on for the year given, the next business day if the day in question is a holiday (or the range of days skipped includes the holiday or weekends), and the complete set of dates for a year. 
Here it is: class FederalHoliday(AutoEnum): NewYear = \"First day of the year.\", 'absolute', Month.JANUARY, 1 MartinLutherKingJr = \"Birth of Civil Rights leader.\", 'relative', Month.JANUARY, Weekday.MONDAY, 3 President = \"Birth of George Washington\", 'relative', Month.FEBRUARY, Weekday.MONDAY, 3 Memorial = \"Memory of fallen soldiers\", 'relative', Month.MAY, Weekday.MONDAY, 5 Independence = \"Declaration of Independence\", 'absolute', Month.JULY, 4 Labor = \"American Labor Movement\", 'relative', Month.SEPTEMBER, Weekday.MONDAY, 1 Columbus = \"Americas discovered\", 'relative', Month.OCTOBER, Weekday.MONDAY, 2 Veterans = \"Recognition of Armed Forces service\", 'relative', Month.NOVEMBER, 11, 1 Thanksgiving = \"Day of Thanks\", 'relative', Month.NOVEMBER, Weekday.THURSDAY, 4 Christmas = \"Birth of Jesus Christ\", 'absolute', Month.DECEMBER, 25 def __init__(self, doc, type, month, day, occurrence=None): self.__doc__ = doc self.type = type self.month = month self.day = day self.occurrence = occurrence def date(self, year): \"returns the observed date of the holiday for `year`\" if self.type == 'absolute' or isinstance(self.day, int): holiday = Date(year, self.month, self.day) if Weekday(holiday.isoweekday()) is Weekday.SUNDAY: holiday = holiday.replace(delta_day=1) return holiday days_in_month = days_per_month(year) target_end = self.occurrence * 7 + 1 if target_end > days_in_month[self.month]: target_end = days_in_month[self.month] target_start = target_end - 7 target_week = list(xrange(start=Date(year, self.month, target_start), step=one_day, count=7)) for holiday in target_week: if Weekday(holiday.isoweekday()) is self.day: return holiday @classmethod def next_business_day(cls, date, days=1): \"\"\" Return the next `days` business day from date. \"\"\" holidays = cls.year(date.year) years = set([date.year]) while days > 0: date = date.replace(delta_day=1) if date.year not in years: holidays.extend(cls.year(date.year)) years.add(date.year) if Weekday(date.isoweekday()) in (Weekday.SATURDAY, Weekday.SUNDAY) or date in holidays: continue days -= 1 return date @classmethod def year(cls, year): \"\"\" Return a list of the actual FederalHoliday dates for `year`. \"\"\" holidays = [] for fh in cls: holidays.append(fh.date(year)) return holidays Notes: Date is from my dbf package the enhanced xrange (supporting a range of dates) is also custom, but I don't think I have included it anywhere; I'll stuff it in my dbf package next time I tinker with it. Disclosure: I am the author of the Python stdlib Enum, the enum34 backport, and the Advanced Enumeration (aenum) library."} +{"question_id": 21125228, "score": 77, "creation_date": 1389738061, "tags": ["python", "python-2.7", "python-internals"], "instruction": "Where is the \"from __future__ import braces\" code?\n\nI was wondering what is exactly the code that gets executed on the command: >>> from __future__ import braces SyntaxError: not a chance Since Python is open-sourced, I opened C:\\Python27\\Lib\\__future__.py and looked. Surprisingly, I found nothing there that handles importing the braces module. So, my question is, where is the code that handles this? What happens when I run that command?", "output": "The code is in future.c: future_check_features(PyFutureFeatures *ff, stmt_ty s, const char *filename) // ... 
else if (strcmp(feature, \"braces\") == 0) { PyErr_SetString(PyExc_SyntaxError, \"not a chance\"); PyErr_SyntaxLocation(filename, s->lineno); return 0; }"} +{"question_id": 63886762, "score": 77, "creation_date": 1600095170, "tags": ["python", "tensorflow", "keras", "deep-learning"], "instruction": "Tensorflow: None of the MLIR optimization passes are enabled (registered 1)\n\nI am using a very small model for testing purposes using tensorflow 2.3 and keras. Looking at my terminal, I get the following warning: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:118] None of the MLIR optimization passes are enabled (registered 1) However, the code works as expected. But what does this message mean? Thanks.", "output": "MLIR is being used as another solution to implementing and optimizing Tensorflow logic. This informative message is benign and is saying MLIR was not being used. This is expected as in TF 2.3, the MLIR based implementation is still being developed and proven, so end users are generally not expected to use the MLIR implementation and are instead expected to use the non-MLIR feature complete implementation. Update: still experimental on version 2.9.1. On the docs it is written: DO NOT USE, DEV AND TESTING ONLY AT THE MOMENT."} +{"question_id": 21819649, "score": 77, "creation_date": 1392603520, "tags": ["python", "python-3.x", "package"], "instruction": "Namespace vs regular package\n\nWhat's the difference between a namespace Python package (no __init__.py) and a regular Python package (has an __init__.py), especially when __init__.py is empty for a regular package? I am curious because recently I've been forgetting to make __init__.py in packages I make, and I never noticed any problems. In fact, they seem to behave identically to regular packages. Edit: Namespace packages only supported from Python 3.3 (see PEP 420), so naturally, this question only applies to Python 3.", "output": "Namespace packages As of Python 3.3, we get namespace packages. These are a special kind of package that allows you to unify two packages with the same name at different points on your Python-path. For example, consider path1 and path2 as separate entries on your Python-path: path1 +--namespace +--module1.py +--module2.py path2 +--namespace +--module3.py +--module4.py with this arrangement you should be able to do the following: from namespace import module1, module3 thus you get the unification of two packages with the same name in a single namespace. If either one of them gain an __init__.py that becomes the package - and you no longer get the unification as the other directory is ignored. If both of them have an __init__.py, the first one in the PYTHONPATH (sys.path) is the one used. __init__.py used to be required to make directory a package Namespace packages are packages without the __init__.py. For an example of a simple package, if you have a directory: root +--package +--file1.py +--file2.py ... While you could run these files independently in the package directory, e.g. with python2 file1.py, under Python 2 you wouldn't be able to import the files as modules in the root directory, e.g. import package.file1 would fail, and in order for it to work, you at least need this: package +--__init__.py +--file1.py +--file2.py ... 
__init__.py initializes the package so you can have code in the __init__.py that is run when the module is first imported: run_initial_import_setup() provide an __all__ list of names to be imported, __all__ = ['star_import', 'only', 'these', 'names'] if the package is imported with the following: from module import * or you can leave the __init__.py completely empty if you only want to be able to import the remaining .py files in the directory. Namespaces with __init__.py using pkgutil: You could originally use pkgutil, available since Python 2.3. to accomplish adding namespaces, by adding the following into each separate package's __init__.py: from pkgutil import extend_path __path__ = extend_path(__path__, __name__) Setuptools uses a similar method, again, all __init__.py files should contain the following (with no other code): import pkg_resources pkg_resources.declare_namespace(__name__) Namespaces were more thoroughly addressed in PEP 420 See also more discussion on setuptools and Namespaces here: http://peak.telecommunity.com/DevCenter/setuptools#namespace-packages"} +{"question_id": 48177914, "score": 77, "creation_date": 1515538868, "tags": ["python", "pandas"], "instruction": "Why use pandas.assign rather than simply initialize new column?\n\nI just discovered the assign method for pandas dataframes, and it looks nice and very similar to dplyr's mutate in R. However, I've always gotten by by just initializing a new column 'on the fly'. Is there a reason why assign is better? For instance (based on the example in the pandas documentation), to create a new column in a dataframe, I could just do this: df = DataFrame({'A': range(1, 11), 'B': np.random.randn(10)}) df['ln_A'] = np.log(df['A']) but the pandas.DataFrame.assign documentation recommends doing this: df.assign(ln_A = lambda x: np.log(x.A)) # or newcol = np.log(df['A']) df.assign(ln_A=newcol) Both methods return the same dataframe. In fact, the first method (my 'on the fly' assignment) is significantly faster (0.202 seconds for 1000 iterations) than the .assign method (0.353 seconds for 1000 iterations). So is there a reason I should stop using my old method in favour of df.assign?", "output": "The difference concerns whether you wish to modify an existing frame, or create a new frame while maintaining the original frame as it was. In particular, DataFrame.assign returns you a new object that has a copy of the original data with the requested changes ... the original frame remains unchanged. In your particular case: >>> df = DataFrame({'A': range(1, 11), 'B': np.random.randn(10)}) Now suppose you wish to create a new frame in which A is everywhere 1 without destroying df. Then you could use .assign >>> new_df = df.assign(A=1) If you do not wish to maintain the original values, then clearly df[\"A\"] = 1 will be more appropriate. This also explains the speed difference, by necessity .assign must copy the data while [...] does not."} +{"question_id": 5481686, "score": 77, "creation_date": 1301457279, "tags": ["python", "pywin32", "pywin"], "instruction": "Why can't I find any pywin32 documentation/resources\n\nI cannot find pywin32 documentation or even a little synopsis of what the module is (I am aware its for win32 API stuff). Is there any pywin32 documentation or resources? Maybe some examples?", "output": "The PyWin32 installation includes a .chm help file at [Pythonpath]\\Lib\\site-packages\\PyWin32.chm. 
The same info is online at http://timgolden.me.uk/pywin32-docs/index.html ActiveState used to keep this documentation online as well, including listings of modules and objects, but that seems to be taken offline."} +{"question_id": 54668000, "score": 77, "creation_date": 1550053921, "tags": ["python", "python-typing", "python-dataclasses"], "instruction": "type hint for an instance of a non specific dataclass\n\nI have a function that accepts an instance of any dataclass. what would be an appropriate type hint for it ? haven't found something official in the python documentation this is what I have been doing, but i don't think it's correct from typing import Any, NewType DataClass = NewType('DataClass', Any) def foo(obj: DataClass): ... another idea is to use a Protocol with these class attributes __dataclass_fields__, __dataclass_params__.", "output": "Despite its name, dataclasses.dataclass doesn't expose a class interface. It just allows you to declare a custom class in a convenient way that makes it obvious that it is going to be used as a data container. So, in theory, there is little opportunity to write something that only works on dataclasses, because dataclasses really are just ordinary classes. In practice, there a couple of reasons why you would want to declare dataclass-only functions anyway, and something like this is how you should go about it: from dataclasses import dataclass from typing import ClassVar, Dict, Protocol, Any class IsDataclass(Protocol): # as already noted in comments, checking for this attribute is currently # the most reliable way to ascertain that something is a dataclass __dataclass_fields__: ClassVar[Dict[str, Any]] def dataclass_only(x: IsDataclass): ... # do something that only makes sense with a dataclass @dataclass class Foo: pass class Bar: pass dataclass_only(Foo()) # a static type check should show that this line is fine .. dataclass_only(Bar()) # .. and this one is not This approach is also what you alluded to in your question. If you want to go for it, keep in mind that you'll need a third party library such as mypy to do the static type checking for you, and if you are on python 3.7 or earlier, you need to manually install typing_extensions since Protocol only became part of the standard library in 3.8. Also noted that older version of mypy (>=0.982) mistakenly expect __dataclass_fields__ to be an instance attribute, so the protocol should be just __dataclass_fields__: Dict[1]. When I first wrote it, this post also featured The Old Way of Doing Things, back when we had to make do without type checkers. I'm leaving it up, but it's not recommended to handle this kind of feature with runtime-only failures any more: from dataclasses import is_dataclass def dataclass_only(x): \"\"\"Do something that only makes sense with a dataclass. Raises: ValueError if something that is not a dataclass is passed. ... more documentation ... \"\"\" if not is_dataclass(x): raise ValueError(f\"'{x.__class__.__name__}' is not a dataclass!\") ... [1]Kudos to @Kound for updating and testing the ClassVar behavior."} +{"question_id": 26804421, "score": 77, "creation_date": 1415373688, "tags": ["python", "path", "pytest"], "instruction": "Python project directory structure / pytest trouble\n\nThis should be the easiest problem on earth, but even after extensive searching and tinkering, I'm still in deep trouble with finding a \"correct\" way to lay a directory structure and manage to run pytest etc correctly. Let's say my I have a program called apple. 
|- README.md |- apple | |-- __init__.py | |-- apple.py | - tests | |-- test_everything.py The apple.py contains some functions, for examples sake let's call one eat(). And the test_everything.py file contains some tests like assert eat()==\"foobar\". So good so easy, but then the fun begins: What about the __init__.py in the apple directory... correct? Empty or what should be inside? Is it best practice to call py.test from the root directory? Or py.test tests? So many projects have a __init__.py in their test directory, but that's explicitly said to be wrong in the py.test documentation. So why god why What comes at the top of the test_everything.py file: an import apple or from apple import *? or something else entirely Do you call the functions then by eat() or apple.eat()? Some even recommend manipulating os.path.dirname in python This should be easy, but I've seen every combination of the above, not even speaking about tox and the myriad of other tools. Yet with the slightest error, you get thrown some ImportError: No module named 'apple' or some other funky error. What is the \"right\" way? The advice and the existing code on github etc follows extremely different conventions. For a medium-experienced coder, this should be much easier.", "output": "What about the __init__.py in the apple directory... correct? Empty or what should be inside? Yes, correct. Most frequently empty. If you put foo = 42 in it you can later do from apple import foo while you'll need to do from apple.apple import foo if you put it in apple.py. While it might seem convenient you should use it sparingly. Is it best practice to call py.test from the root directory? Or py.test tests? py.test should be able to find your tests regardless, but see below.. So many projects have a __init__.py in their test directory, but that's explicitly said to be wrong in the py.test documentation. So why god why So you can import a file in tests that provide common test functionality. In py.test that might be better achieved by creating fixtures in a file called tests/conftest.py. What comes at the top of the test_everything.py file: an import apple or from apple import *? or something else entirely from apple import apple Do you call the functions then by eat() or apple.eat()? apple.eat() Some even recommend manipulating os.path.dirname in python That seems very fragile. I would suggest either (a) set the environment variable PYTHONPATH to point to the folder where README.md is, or better (b) create a setup.py file (at the same level as your README.md file), here's a minimal one: from setuptools import setup setup(name='apple', packages=['apple']) Run the file like so: python setup.py develop now apple is globally available and you should never see a no module named apple problem again, i.e. you can run py.test from the root folder or the tests folder. You can read more about setup.py in the Python Packaging User Guide at https://packaging.python.org/"} +{"question_id": 2061222, "score": 76, "creation_date": 1263426827, "tags": ["python", "data-structures", "hash", "dictionary", "hashtable"], "instruction": "What is the true difference between a dictionary and a hash table?\n\nI've always used dictionaries. I write in Python.", "output": "A dictionary is a general concept that maps keys to values. There are many ways to implement such a mapping. A hashtable is a specific way to implement a dictionary. Besides hashtables, another common way to implement dictionaries is red-black trees. Each method has its own pros and cons. 
A red-black tree can always perform a lookup in O(log N). A hashtable can perform a lookup in O(1) time although that can degrade to O(N) depending on the input."} +{"question_id": 10623727, "score": 76, "creation_date": 1337189839, "tags": ["python", "string", "alignment"], "instruction": "Python spacing and aligning strings\n\nI am trying to add spacing to align text in between two strings vars without using \" \" to do so Trying to get the text to look like this, with the second column being aligned. Location: 10-10-10-10 Revision: 1 District: Tower Date: May 16, 2012 User: LOD Time: 10:15 Currently have it coded like this, just using spaces... \"Location: \" + Location + \" Revision: \" + Revision + '\\n' I tried working with string.rjust & srting.ljust but to no avail. Suggestions?", "output": "You should be able to use the format method: \"Location: {0:20} Revision {1}\".format(Location, Revision) You will have to figure out the format length for each line depending on the length of the label. The User line will need a wider format width than the Location or District lines."} +{"question_id": 36262748, "score": 76, "creation_date": 1459169549, "tags": ["python", "plotly"], "instruction": "save plotly plot to local file and insert into html\n\nI am using python and plotly to product interactive html report. This post gives a nice framework. If I produce the plot(via plotly) online, and insert the url into the html file, it works but refreshing the charts takes a long time. I wonder if I could produce the chart offline and have it embedded in the html report, so that loading speed is not a problem. I find plot offline would generate a html for the chart, but I don't know how to embed it in another html. Anyone could help?", "output": "Option 1: Use plotly's offline functionality in your Jupyter Notebook (I suppose you are using a Jupyter Notebook from the link you are providing). You can simply save the whole notebook as a HTML file. When I do this, the only external reference is to JQuery; plotly.js will be inlined in the HTML source. Option 2: The best way is probably to code directly against plotly's JavaScript library. Documentation for this can be found here: https://plot.ly/javascript/ Update: Calling an internal function has never been a good idea. I recommend to use the approach given by @Fermin Silva. In newer versions, there now is also a dedicated function for this: plotly.io.to_html (see https://plotly.com/python-api-reference/generated/plotly.io.to_html.html) Hacky Option 3 (original version for reference only): If you really want to continue using Python, you can use some hack to extract the HTML it generates. You need some recent version of plotly (I tested it with plotly.__version__ == '1.9.6'). Now, you can use an internal function to get the generated HTML: from plotly.offline.offline import _plot_html data_or_figure = [{\"x\": [1, 2, 3], \"y\": [3, 1, 6]}] plot_html, plotdivid, width, height = _plot_html( data_or_figure, False, \"\", True, '100%', 525) print(plot_html) You can simply paste the output somewhere in the body of your HTML document. Just make sure that you include a reference to plotly in the head: Alternatively, you can also reference the exact plotly version you used to generate the HTML or inline the JavaScript source (which removes any external dependencies; be aware of the legal aspects however). You end up with some HTML code like this:
Note: The underscore at the beginning of the function's name suggests that _plot_html is not meant to be called from external code. So it is likely that this code will break with future versions of plotly."} +{"question_id": 41993565, "score": 76, "creation_date": 1486004846, "tags": ["python", "machine-learning", "scikit-learn", "normalization"], "instruction": "Save MinMaxScaler model in sklearn\n\nI'm using the MinMaxScaler model in sklearn to normalize the features of a model. training_set = np.random.rand(4,4)*10 training_set [[ 6.01144787, 0.59753007, 2.0014852 , 3.45433657], [ 6.03041646, 5.15589559, 6.64992437, 2.63440202], [ 2.27733136, 9.29927394, 0.03718093, 7.7679183 ], [ 9.86934288, 7.59003904, 6.02363739, 2.78294206]] scaler = MinMaxScaler() scaler.fit(training_set) scaler.transform(training_set) [[ 0.49184811, 0. , 0.29704831, 0.15972182], [ 0.4943466 , 0.52384506, 1. , 0. ], [ 0. , 1. , 0. , 1. ], [ 1. , 0.80357559, 0.9052909 , 0.02893534]] Now I want to use the same scaler to normalize the test set: [[ 8.31263467, 7.99782295, 0.02031658, 9.43249727], [ 1.03761228, 9.53173021, 5.99539478, 4.81456067], [ 0.19715961, 5.97702519, 0.53347403, 5.58747666], [ 9.67505429, 2.76225253, 7.39944931, 8.46746594]] But I don't want so use the scaler.fit() with the training data all the time. Is there a way to save the scaler and load it later from a different file?", "output": "So I'm actually not an expert with this but from a bit of research and a few helpful links, I think pickle and sklearn.externals.joblib are going to be your friends here. The package pickle lets you save models or \"dump\" models to a file. I think this link is also helpful. It talks about creating a persistence model. Something that you're going to want to try is: # could use: import pickle... however let's do something else from sklearn.externals import joblib # this is more efficient than pickle for things like large numpy arrays # ... which sklearn models often have. # then just 'dump' your file joblib.dump(clf, 'my_dope_model.pkl') Here is where you can learn more about the sklearn externals. Let me know if that doesn't help or I'm not understanding something about your model. Note: sklearn.externals.joblib is deprecated. Install and use the pure joblib instead"} +{"question_id": 602846, "score": 76, "creation_date": 1236009353, "tags": ["python", "module"], "instruction": "How can I access the current executing module or class name in Python?\n\nI would like to be able to dynamically retrieve the current executing module or class name from within an imported module. Here is some code: foo.py: def f(): print __name__ bar.py: from foo import f def b(): f() This obviously does not work as __name__ is the name of the module that contains the function. What I would like to be access inside the foo module is the name of the current executing module that is using foo. So in the case above it would be bar but if any other module imported foo I would like foo to dynamically have access to the name of that module. Edit: The inspect module looks quite promising but it is not exactly what I was looking for. What I was hoping for was some sort of global or environment-level variable that I could access that would contain the name of the current executing module. Not that I am unwilling to traverse the stack to find that information - I just thought that Python may have exposed that data already. Edit: Here is how I am trying to use this. I have two different Django applications that both need to log errors to file. 
Lets say that they are called \"AppOne\" and \"AppTwo\". I also have a place to which I would like to log these files: \"/home/hare/app_logs\". In each application at any given point I would like to be able to import my logger module and call the log function which writes the log string to file. However what I would like to do is create a directory under app_logs that is the name of the current application (\"AppOne\" or \"AppTwo\") so that each application's log files will go in their respective logging directories. In order to do this I thought that the best way would be for the logger module to have access to some sort of global variable that denotes the current application's name as it is responsible for knowing the location of the parent logging directory and creating the application's logging directory if it does not yet exist.", "output": "From the comment -- not the question. I am simply curious to see if what I am trying to do is possible. The answer to \"is it possible\" is always \"yes\". Always. Unless your question involves time travel, anti-gravity or perpetual motion. Since the answer is always \"yes\", your question is ill-formed. The real question is \"what's a good way to have my logging module know the name of the client?\" or something like that. The answer is \"Accept it as a parameter.\" Don't mess around with inspecting or looking for mysterious globals or other tricks. Just follow the design pattern of logging.getLogger() and use explicitly-named loggers. A common idiom is the following logger= logging.getLogger( __name__ ) That handles almost all log naming perfectly."} +{"question_id": 70098133, "score": 76, "creation_date": 1637765125, "tags": ["python", "macos", "npm", "node-gyp"], "instruction": "NPM Error \"Can't find Python executable\" in MacOS Big Sur\n\nI've been looking for the answer to this for a good solid week now, with no success. I've looked at every StackOverflow post, every article from Google and every related Github issue I could find. Most related errors seem to be older, so I'm wondering if my issue is slightly different due to me being on macOS Big Sur. The issue: When I try to run yarn install in my local repo, I receive an error related to node-gyp and a python executable that is unable to be found. Here is what my terminal shows: yarn install v1.22.17 ...other stuff [4/4] \ud83d\udd28 Building fresh packages... [6/13] \u2810 node-sass [2/13] \u2810 node-sass [10/13] \u2810 metrohash [4/13] \u2810 fsevents error /Users/jimmiejackson/Documents/repositories/repo-name/node_modules/metrohash: Command failed. Exit code: 1 Command: node-gyp rebuild Arguments: Directory: /Users/jimmiejackson/Documents/repositories/repo-name/node_modules/metrohash Output: gyp info it worked if it ends with ok gyp info using node-gyp@3.8.0 gyp info using node@12.18.0 | darwin | x64 gyp ERR! configure error gyp ERR! stack Error: Can't find Python executable \"/usr/local/opt/python@3.9/bin/python3\", you can set the PYTHON env variable. gyp ERR! stack at PythonFinder.failNoPython (/Users/jimmiejackson/Documents/repositories/repo-name/node_modules/node-gyp/lib/configure.js:484:19) gyp ERR! stack at PythonFinder. (/Users/jimmiejackson/Documents/repositories/repo-name/node_modules/node-gyp/lib/configure.js:406:16) gyp ERR! stack at F (/Users/jimmiejackson/Documents/repositories/repo-name/node_modules/which/which.js:68:16) gyp ERR! stack at E (/Users/jimmiejackson/Documents/repositories/repo-name/node_modules/which/which.js:80:29) gyp ERR! 
stack at /Users/jimmiejackson/Documents/repositories/repo-name/node_modules/which/which.js:89:16 gyp ERR! stack at /Users/jimmiejackson/Documents/repositories/repo-name/node_modules/isexe/index.js:42:5 gyp ERR! stack at /Users/jimmiejackson/Documents/repositories/repo-name/node_modules/isexe/mode.js:8:5 gyp ERR! stack at FSReqCallback.oncomplete (fs.js:167:21) gyp ERR! System Darwin 20.6.0 gyp ERR! command \"/Users/jimmiejackson/.nvm/versions/node/v12.18.0/bin/node\" \"/Users/jimmiejackson/Documents/repositories/repo-name/node_modules/metrohash/node_modules/.bin/node-gyp\" \"rebuild\" gyp ERR! cwd /Users/jimmiejackson/Documents/repositories/repo-name/node_modules/metrohash I'm not entirely sure what this error means or why this node module is searching for python3. I've tried running npm set config /path/to/python, downloading python3, setting the PYTHON path in my .zshrc profile, but nothing seems to be working. It's entirely possible that my lack of understanding of the issue means that I'm on the right path but didn't quite get something right. Any ideas?", "output": "Reading the gyp-node source might helps. Here are some steps you can try. Install python2. You should make sure that in the terminal, which -a python2 only returns one python2 and python2 -V returns the correct 2.x version. override PYTHON env. export PYTHON=python2. Rerun the install. If there's still an error, probably the error message is different."} +{"question_id": 13428318, "score": 76, "creation_date": 1353133956, "tags": ["python", "csv"], "instruction": "Reading rows from a CSV file in Python\n\nI have a CSV file, here is a sample of what it looks like: Year: Dec: Jan: 1 50 60 2 25 50 3 30 30 4 40 20 5 10 10 I know how to read the file in and print each column (for ex. - ['Year', '1', '2', '3', etc]). But what I actually want to do is read the rows, which would be like this ['Year', 'Dec', 'Jan'] and then ['1', '50', '60'] and so on. And then I would like to store those numbers ['1', '50', '60'] into variables so I can total them later for ex.: Year_1 = ['50', '60']. Then I can do sum(Year_1) = 110. How would I go about doing that in Python 3?", "output": "You could do something like this: with open(\"data1.txt\") as f: lis = [line.split() for line in f] # create a list of lists for i, x in enumerate(lis): #print the list items print \"line{0} = {1}\".format(i, x) # output line0 = ['Year:', 'Dec:', 'Jan:'] line1 = ['1', '50', '60'] line2 = ['2', '25', '50'] line3 = ['3', '30', '30'] line4 = ['4', '40', '20'] line5 = ['5', '10', '10'] or : with open(\"data1.txt\") as f: for i, line in enumerate(f): print \"line {0} = {1}\".format(i, line.split()) # output line 0 = ['Year:', 'Dec:', 'Jan:'] line 1 = ['1', '50', '60'] line 2 = ['2', '25', '50'] line 3 = ['3', '30', '30'] line 4 = ['4', '40', '20'] line 5 = ['5', '10', '10'] Edit: with open('data1.txt') as f: print \"{0}\".format(f.readline().split()) for x in f: x = x.split() print \"{0} = {1}\".format(x[0],sum(map(int, x[1:]))) # output ['Year:', 'Dec:', 'Jan:'] 1 = 110 2 = 75 3 = 60 4 = 60 5 = 20"} +{"question_id": 5508888, "score": 76, "creation_date": 1301629088, "tags": ["python", "django", "django-models", "django-templates", "django-views"], "instruction": "matching query does not exist Error in Django\n\nI have implemented a password recovery functionality in django. With my method, the new password will be sent to the email id entered. It works fine when given the correct email (e-mail id which exists in the database). 
But when given an email id which is not in the database, it gives the error: 'DoesNotExist at /forgotPassword/ UniversityDetails matching query does not exist.' How can I resolve this issue? forgotPassword.html() def forgotPassword(request): if request.POST: email=request.POST.get(\"email\") user = UniversityDetails.objects.get(email=email) if(not user): print \"No user\" return render_to_response(\"forgotPassword.html\") else: newPassword = user.password send_mail('Password Recovery', 'The password for your site is '+ newPassword, 'rv_nair@gmail.com', ['rv_ks@gmail.com'], fail_silently=False) return render_to_response(\"passwordRecovery.html\") return render_to_response('forgotPassword.html') html
Enter your E-mail ID
", "output": "try: user = UniversityDetails.objects.get(email=email) except UniversityDetails.DoesNotExist: user = None I also see you're storing your passwords in plaintext (a big security no-no!). Consider using the built-in auth system instead."} +{"question_id": 45466020, "score": 76, "creation_date": 1501690568, "tags": ["python", "tensorflow", "keras"], "instruction": "How to export Keras .h5 to tensorflow .pb?\n\nI have fine-tuned inception model with a new dataset and saved it as \".h5\" model in Keras. now my goal is to run my model on android Tensorflow which accepts \".pb\" extension only. question is that is there any library in Keras or tensorflow to do this conversion? I have seen this post so far : https://blog.keras.io/keras-as-a-simplified-interface-to-tensorflow-tutorial.html but can't figure out yet.", "output": "Keras does not include by itself any means to export a TensorFlow graph as a protocol buffers file, but you can do it using regular TensorFlow utilities. Here is a blog post explaining how to do it using the utility script freeze_graph.py included in TensorFlow, which is the \"typical\" way it is done. However, I personally find a nuisance having to make a checkpoint and then run an external script to obtain a model, and instead prefer to do it from my own Python code, so I use a function like this: def freeze_session(session, keep_var_names=None, output_names=None, clear_devices=True): \"\"\" Freezes the state of a session into a pruned computation graph. Creates a new computation graph where variable nodes are replaced by constants taking their current value in the session. The new graph will be pruned so subgraphs that are not necessary to compute the requested outputs are removed. @param session The TensorFlow session to be frozen. @param keep_var_names A list of variable names that should not be frozen, or None to freeze all the variables in the graph. @param output_names Names of the relevant graph outputs. @param clear_devices Remove the device directives from the graph for better portability. @return The frozen graph definition. \"\"\" graph = session.graph with graph.as_default(): freeze_var_names = list(set(v.op.name for v in tf.global_variables()).difference(keep_var_names or [])) output_names = output_names or [] output_names += [v.op.name for v in tf.global_variables()] input_graph_def = graph.as_graph_def() if clear_devices: for node in input_graph_def.node: node.device = \"\" frozen_graph = tf.graph_util.convert_variables_to_constants( session, input_graph_def, output_names, freeze_var_names) return frozen_graph Which is inspired in the implementation of freeze_graph.py. The parameters are similar to the script too. session is the TensorFlow session object. keep_var_names is only needed if you want to keep some variable not frozen (e.g. for stateful models), so generally not. output_names is a list with the names of the operations that produce the outputs that you want. clear_devices just removes any device directives to make the graph more portable. So, for a typical Keras model with one output, you would do something like: from keras import backend as K # Create, compile and train model... 
frozen_graph = freeze_session(K.get_session(), output_names=[out.op.name for out in model.outputs]) Then you can write the graph to a file as usual with tf.train.write_graph: tf.train.write_graph(frozen_graph, \"some_directory\", \"my_model.pb\", as_text=False)"} +{"question_id": 18180763, "score": 76, "creation_date": 1376288991, "tags": ["python", "pandas", "dataframe"], "instruction": "set difference for pandas\n\nA simple pandas question: Is there a drop_duplicates() functionality to drop every row involved in the duplication? An equivalent question is the following: Does pandas have a set difference for dataframes? For example: In [5]: df1 = pd.DataFrame({'col1':[1,2,3], 'col2':[2,3,4]}) In [6]: df2 = pd.DataFrame({'col1':[4,2,5], 'col2':[6,3,5]}) In [7]: df1 Out[7]: col1 col2 0 1 2 1 2 3 2 3 4 In [8]: df2 Out[8]: col1 col2 0 4 6 1 2 3 2 5 5 so maybe something like df2.set_diff(df1) will produce this: col1 col2 0 4 6 2 5 5 However, I don't want to rely on indexes because in my case, I have to deal with dataframes that have distinct indexes. By the way, I initially thought about an extension of the current drop_duplicates() method, but now I realize that the second approach using properties of set theory would be far more useful in general. Both approaches solve my current problem, though. Thanks!", "output": "from pandas import DataFrame df1 = DataFrame({'col1':[1,2,3], 'col2':[2,3,4]}) df2 = DataFrame({'col1':[4,2,5], 'col2':[6,3,5]}) print(df2[~df2.isin(df1).all(1)]) print(df2[(df2!=df1)].dropna(how='all')) print(df2[~(df2==df1)].dropna(how='all'))"} +{"question_id": 17949508, "score": 76, "creation_date": 1375193687, "tags": ["python", "file"], "instruction": "Python: read all text file lines in loop\n\nI want to read huge text file line by line (and stop if a line with \"str\" found). How to check, if file-end is reached? fn = 't.log' f = open(fn, 'r') while not _is_eof(f): ## how to check that end is reached? s = f.readline() print s if \"str\" in s: break", "output": "There's no need to check for EOF in python, simply do: with open('t.ini') as f: for line in f: # For Python3, use print(line) print line if 'str' in line: break Why the with statement: It is good practice to use the with keyword when dealing with file objects. This has the advantage that the file is properly closed after its suite finishes, even if an exception is raised on the way."} +{"question_id": 38854282, "score": 76, "creation_date": 1470754875, "tags": ["python", "types", "unions", "dynamic-typing"], "instruction": "Do union types actually exist in python?\n\nSince python is dynamically typed, of course we can do something like this: def f(x): return 2 if x else \"s\" But is this the way python was actually intended to be used? Or in other words, do union types exist in the sense they do in Racket for example? Or do we only use them like this: def f(x): if x: return \"s\" where the only \"union\" we need is with None?", "output": "Union typing is only needed when you have a statically typed language, as you need to declare that an object can return one of multiple types (in your case an int or str, or in the other example str or NoneType). Python deals in objects only, so there is never a need to even consider 'union types'. Python functions return what they return, if the programmer wants to return different types for different results then that's their choice. The choice is then an architecture choice, and makes no difference to the Python interpreter (so there is nothing to 'benchmark' here). 
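As a purely illustrative sketch (the parse_port name and the int/None split are mine, not from the question), that architectural choice usually looks like this: the function returns whichever object makes sense, and the caller branches on what it got back:

def parse_port(value):
    # May return an int or None; nothing in Python forces a single return type.
    try:
        return int(value)
    except (TypeError, ValueError):
        return None

port = parse_port('8080')
if port is None:
    print('not a valid port')
else:
    print(port + 1)  # safe here: we know it is an int
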
Python 3.5 does introduce a standard for creating optional type hints, and that standard includes Union[...] and Optional[...] annotations. Type hinting adds optional static type checking outside of the runtime, the same way types in TypeScript are not part of the JavaScript runtime."} +{"question_id": 19165259, "score": 76, "creation_date": 1380820611, "tags": ["python", "numpy", "matplotlib", "scipy", "curve-fitting"], "instruction": "python numpy/scipy curve fitting\n\nI have some points and I am trying to fit curve for this points. I know that there exist scipy.optimize.curve_fit function, but I do not understand the documentation, i.e. how to use this function. My points: np.array([(1, 1), (2, 4), (3, 1), (9, 3)]) Can anybody explain how to do that?", "output": "I suggest you to start with simple polynomial fit, scipy.optimize.curve_fit tries to fit a function f that you must know to a set of points. This is a simple 3 degree polynomial fit using numpy.polyfit and poly1d, the first performs a least squares polynomial fit and the second calculates the new points: import numpy as np import matplotlib.pyplot as plt points = np.array([(1, 1), (2, 4), (3, 1), (9, 3)]) # get x and y vectors x = points[:,0] y = points[:,1] # calculate polynomial z = np.polyfit(x, y, 3) f = np.poly1d(z) # calculate new x's and y's x_new = np.linspace(x[0], x[-1], 50) y_new = f(x_new) plt.plot(x,y,'o', x_new, y_new) plt.xlim([x[0]-1, x[-1] + 1 ]) plt.show()"} +{"question_id": 5785154, "score": 76, "creation_date": 1303783309, "tags": ["python", "insertion", "mysql-python"], "instruction": "Python MySQLdb issues (TypeError: %d format: a number is required, not str)\n\nI am trying to do the following insert operation: cursor.execute(\"\"\" insert into tree (id,parent_id,level,description,code,start,end) values (%d,%d,%d,%s,%s,%f,%f) \"\"\", (1,1,1,'abc','def',1,1) ) The structure of my MYSQL table is: id int(255), parent_id int(255), level int(11), description varchar(255), code varchar(255), start decimal(25,4), end decimal(25,4) However when I run my program, I get the error \" File \"/usr/lib/pymodules/python2.6/MySQLdb/cursors.py\", line 151, in execute query = query % db.literal(args) TypeError: %d format: a number is required, not str\"", "output": "The format string is not really a normal Python format string. You must always use %s for all fields. refer official document: If args is a list or tuple, %s can be used as a placeholder in the query. If args is a dict, %(name)s can be used as a placeholder in the query. -> that is: here %s is NOT formatter, but is a placeholder"} +{"question_id": 12130883, "score": 76, "creation_date": 1345991054, "tags": ["python", "r"], "instruction": "R expand.grid() function in Python\n\nIs there a Python function similar to the expand.grid() function in R ? Thanks in advance. (EDIT) Below are the description of this R function and an example. Create a Data Frame from All Combinations of Factors Description: Create a data frame from all combinations of the supplied vectors or factors. > x <- 1:3 > y <- 1:3 > expand.grid(x,y) Var1 Var2 1 1 1 2 2 1 3 3 1 4 1 2 5 2 2 6 3 2 7 1 3 8 2 3 9 3 3 (EDIT2) Below is an example with the rpy package. I would like to get the same output object but without using R : >>> from rpy import * >>> a = [1,2,3] >>> b = [5,7,9] >>> r.assign(\"a\",a) [1, 2, 3] >>> r.assign(\"b\",b) [5, 7, 9] >>> r(\"expand.grid(a,b)\") {'Var1': [1, 2, 3, 1, 2, 3, 1, 2, 3], 'Var2': [5, 5, 5, 7, 7, 7, 9, 9, 9]} EDIT 02/09/2012: I'm really lost with Python. 
Lev Levitsky's code given in his answer does not work for me: >>> a = [1,2,3] >>> b = [5,7,9] >>> expandgrid(a, b) Traceback (most recent call last): File \"\", line 1, in File \"\", line 2, in expandgrid NameError: global name 'itertools' is not defined However the itertools module seems to be installed (typing from itertools import * does not return any error message)", "output": "Here's an example that gives output similar to what you need: import itertools def expandgrid(*itrs): product = list(itertools.product(*itrs)) return {'Var{}'.format(i+1):[x[i] for x in product] for i in range(len(itrs))} >>> a = [1,2,3] >>> b = [5,7,9] >>> expandgrid(a, b) {'Var1': [1, 1, 1, 2, 2, 2, 3, 3, 3], 'Var2': [5, 7, 9, 5, 7, 9, 5, 7, 9]} The difference is related to the fact that in itertools.product the rightmost element advances on every iteration. You can tweak the function by sorting the product list smartly if it's important. EDIT (by S. Laurent) To have the same as R: def expandgrid(*itrs): # https://stackoverflow.com/a/12131385/1100107 \"\"\" Cartesian product. Reversion is for compatibility with R. \"\"\" product = list(itertools.product(*reversed(itrs))) return [[x[i] for x in product] for i in range(len(itrs))][::-1]"} +{"question_id": 36286894, "score": 76, "creation_date": 1459260797, "tags": ["python", "python-typing"], "instruction": "Name not defined in type annotation\n\nI'm currently working on creating a python linear algebra module for fun and for practice with the language. I recently tried to add type annotations to the module, as such: class Vector: # Various irrelevant implementation details def __add__(self, other: Vector) -> Vector: # More implementation details.... However, when I try to import this, it spits out a NameError: Name 'Vector' is not defined. I acknowledge that this question has already been answered, in a form, here, but it doesn't seem to wholly provide an answer for my situation. What I'd like to know: I've defined the class literally in this file. Why does it say the name isn't defined? How do I define Vector in such a way that it can be used for annotations (as a type)?", "output": "You have a forward declaration; functions (to be bound as methods) are created before the class is, so the name Vector doesn't yet exist. Only when all of the class body has been executed, can Python create the class object and bind the name Vector to it. Simply use a string with the name instead: class Vector: # Various irrelevant implementation details def __add__(self, other: 'Vector') -> 'Vector': # More implementation details.... This doesn't affect how your IDE sees the declaration; strings are looked up once the whole module is loaded, and are resolved as a valid Python expression in the current context. Since the class Vector exists once the whole module is loaded, the string 'Vector' can properly be converted to the class object. Also see the specification on forward references: When a type hint contains names that have not been defined yet, that definition may be expressed as a string literal, to be resolved later. [...] The string literal should contain a valid Python expression [...] and it should evaluate without errors once the module has been fully loaded. As of Python 3.7 you can make all annotations in a given module behave like forward annotations (without enclosing them in a string literal), by adding the from __future__ import annotations directive at the top of the module. 
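For instance, a minimal sketch (the __init__ shown here is invented for illustration; the question only defines __add__):

from __future__ import annotations  # goes at the very top of the module

class Vector:
    def __init__(self, x: float, y: float) -> None:
        self.x, self.y = x, y

    def __add__(self, other: Vector) -> Vector:  # bare name, no quotes needed
        return Vector(self.x + other.x, self.y + other.y)
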
It was originally planned for this to be the default in Python 3.10 and up, but this decision has now been deferred indefinitely. See PEP 563 -- Postponed Evaluation of Annotations for details. Note that outside of annotations you may still need to use forward reference syntax (string literals), e.g. in a type alias (which is a regular variable assignment as far as Python is concerned)."} +{"question_id": 10741339, "score": 76, "creation_date": 1337876095, "tags": ["python", "django", "security"], "instruction": "Do CSRF attacks apply to API's?\n\nI'm writing a Django RESTful API to back an iOS application, and I keep running into Django's CSRF protections whenever I write methods to deal with POST requests. My understanding is that cookies managed by iOS are not shared by applications, meaning that my session cookies are safe, and no other application can ride on them. Is this true? If so, can I just mark all my API functions as CSRF-exempt?", "output": "That's not the purpose of CSRF protection. CSRF protection is to prevent direct posting of data to your site. In other words, the client must actually post through an approved path, i.e. view the form page, fill it out, submit the data. An API pretty much precludes CSRF, because its entire purpose is generally to allow 3rd-party entities to access and manipulate data on your site (the \"cross-site\" in CSRF). So, yes, I think as a rule any API view should be CSRF exempt. However, you should still follow best practices and protect every API-endpoint that actually makes a change with some form of authentication, such as OAuth."} +{"question_id": 53097952, "score": 76, "creation_date": 1541062660, "tags": ["python", "numpy", "scipy", "stride"], "instruction": "How to understand numpy strides for layman?\n\nI am currently going through numpy and there is a topic in numpy called \"strides\". I understand what it is. But how does it work? I did not find any useful information online. Can anyone let me understand in a layman's terms?", "output": "The actual data of a numpy array is stored in a homogeneous and contiguous block of memory called data buffer. For more information see NumPy internals. Using the (default) row-major order, a 2D array looks like this: To map the indices i,j,k,... of a multidimensional array to the positions in the data buffer (the offset, in bytes), NumPy uses the notion of strides. Strides are the number of bytes to jump-over in the memory in order to get from one item to the next item along each direction/dimension of the array. In other words, it's the byte-separation between consecutive items for each dimension. For example: >>> a = np.arange(1,10).reshape(3,3) >>> a array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) This 2D array has two directions, axes-0 (running vertically downwards across rows), and axis-1 (running horizontally across columns), with each item having size: >>> a.itemsize # in bytes 4 So to go from a[0, 0] -> a[0, 1] (moving horizontally along the 0th row, from the 0th column to the 1st column) the byte-step in the data buffer is 4. Same for a[0, 1] -> a[0, 2], a[1, 0] -> a[1, 1] etc. This means that the number of strides for the horizontal direction (axis-1) is 4 bytes. However, to go from a[0, 0] -> a[1, 0] (moving vertically along the 0th column, from the 0th row to the 1st row), you need first to traverse all the remaining items on the 0th row to get to the 1st row, and then move through the 1st row to get to the item a[1, 0], i.e. a[0, 0] -> a[0, 1] -> a[0, 2] -> a[1, 0]. 
Therefore the number of strides for the vertical direction (axis-0) is 3*4 = 12 bytes. Note that going from a[0, 2] -> a[1, 0], and in general from the last item of the i-th row to the first item of the (i+1)-th row, is also 4 bytes because the array a is stored in the row-major order. That's why >>> a.strides # (strides[0], strides[1]) (12, 4) Here's another example showing that the strides in the horizontal direction (axis-1), strides[1], of a 2D array is not necessary equal to the item size (e.g. an array with column-major order): >>> b = np.array([[1, 4, 7], [2, 5, 8], [3, 6, 9]]).T >>> b array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) >>> b.strides (4, 12) Here strides[1] is a multiple of the item-size. Although the array b looks identical to the array a, it's a different array: internally b is stored as |1|4|7|2|5|8|3|6|9| (because transposing doesn't affect the data buffer but only swaps the strides and the shape), whereas a as |1|2|3|4|5|6|7|8|9|. What makes them look alike is the different strides. That is, the byte-step for b[0, 0] -> b[0, 1] is 3*4=12 bytes and for b[0, 0] -> b[1, 0] is 4 bytes, whereas for a[0, 0] -> a[0, 1] is 4 bytes and for a[0, 0] -> a[1, 0] is 12 bytes. Last but not least, NumPy allows to create views of existing arrays with the option of modifying the strides and the shape, see stride tricks. For example: >>> np.lib.stride_tricks.as_strided(a, shape=a.shape[::-1], strides=a.strides[::-1]) array([[1, 4, 7], [2, 5, 8], [3, 6, 9]]) which is equivalent to transposing the array a. Let me just add, but without going into much detail, that one can even define strides that are not multiples of the item size. Here's an example: >>> a = np.lib.stride_tricks.as_strided(np.array([1, 512, 0, 3], dtype=np.int16), shape=(3,), strides=(3,)) >>> a array([1, 2, 3], dtype=int16) >>> a.strides[0] 3 >>> a.itemsize 2"} +{"question_id": 16994174, "score": 76, "creation_date": 1370646234, "tags": ["python", "logging", "flask"], "instruction": "In Flask: How to access app Logger within Blueprint\n\nWhat is the standard way for a blueprint to access the application logger?", "output": "Inside the blueprint add: from flask import current_app and when needed, call: current_app.logger.info('grolsh')"} +{"question_id": 19329601, "score": 76, "creation_date": 1381542542, "tags": ["python", "pycharm", "python-idle"], "instruction": "interactive shell debugging with pycharm\n\nI am new to PyCharm. I have been using IDLE for a long time. It is very convenient to use Python objects after script execution in IDLE. Is there any way to use script objects after its execution with interactive python shell using PyCharm? For example, we have a 'test' project with one file 'test.py': a = '123' print a after execution we can get the result: 123 Process finished with exit code 0 How can I use string 'a' with interactive shell? Added after 8 years. My pycharm tutorial in russian: https://youtu.be/-RzKJlLXw54", "output": "Built-in python shell for the current debugging session Set a breakpoint at the line of interest in your code (i.e. by clicking the gutter), and launch debug (right-click in the editor then pick Debug myfile.py...). When the breakpoint is reached, locate the Debug > Console tab, and then click the Show command line icon (see screenshot). This will enable a python shell (notice the green >>> on the screenshot) where you can access all the variables in the current scope, and do everything you usually do in the Python shell. 
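For the two-line script in the question, once a breakpoint on the print line is hit, a session in that console could look like this (values illustrative):

>>> a
'123'
>>> len(a)
3
>>> int(a) + 1
124
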
In recent pycharm versions you get the full ipython interpreter instead of the plain python shell (if ipython is installed). The Evaluate expression window As a more comfortable alternative, if you only need to inspect some variables, access members or call methods on an object in scope, once a breakpoint is reached, select an expression in the current scope, right-click -> Evaluate Expression (or use the hotkey shown in the menu under Run \u2192 Evaluate Expression...), edit as needed \u2014 you can type any python expression, with auto-completion available \u2014 and then press Enter (or click Evaluate) to inspect the result. Multiple and multiline expressions are supported: to get a multiline version of the Inspect dialog click the Code fragment mode or select more than one line of code in the editor and then evaluate the expression. You can assign values to existing variables (new ones cannot be defined), and run entire chunks of code. Mouse hover over variables To see the value of a variable after you hit a breakpoint in debug mode, hover the mouse pointer over the variable (1-2 seconds) and the value will be shown in a tooltip. The hint will contain a \u2795 icon \u2014 clicking it will open the inspector in a popup. For the variables in scope the inspector is shown in the bottom panel under Debug > Debugger. For pandas data frames you will see a View as DataFrame link in the variable inspector panel or in the hover popup \u2014 clicking the link will display the dataframe as a table in the Data View panel. Update In the more recent Pycharm versions (2019+) the interpreter icon now looks different: Also in the unittest/pytest debugging UI the icon is placed first in the icon bar."} +{"question_id": 40895785, "score": 76, "creation_date": 1480531095, "tags": ["python", "opencv", "image-processing", "alphablending", "alpha-transparency"], "instruction": "Using openCV to overlay/blend transparent image onto another image\n\nHow can I overlay a transparent PNG onto another image without losing it's transparency using openCV in python? import cv2 background = cv2.imread('field.jpg') overlay = cv2.imread('dice.png') # Help please cv2.imwrite('combined.png', background) Desired output: Sources: Background Image Overlay", "output": "import cv2 background = cv2.imread('field.jpg') overlay = cv2.imread('dice.png') added_image = cv2.addWeighted(background,0.4,overlay,0.1,0) cv2.imwrite('combined.png', added_image)"} +{"question_id": 29463274, "score": 76, "creation_date": 1428275603, "tags": ["python", "xlsxwriter"], "instruction": "Simulate autofit column in xslxwriter\n\nI would like to simulate the Excel autofit function in Python's xlsxwriter. According to this url, it is not directly supported: http://xlsxwriter.readthedocs.io/worksheet.html However, it should be quite straightforward to loop through each cell on the sheet and determine the maximum size for the column and just use worksheet.set_column(row, col, width) to set the width. The complications that is keeping me from just writing this are: That URL does not specify what the units are for the third argument to set_column. I can not find a way to measure the width of the item that I want to insert into the cell. xlsxwriter does not appear to have a method to read back a particular cell. This means I need to keep track of each cell width as I write the cell. 
It would be better if I could just loop through all the cells, that way a generic routine could be written.", "output": "[NOTE: as of Jan 2023 xlsxwriter added a new method called autofit. See jmcnamara's answer below] As a general rule, you want the width of the columns a bit larger than the size of the longest string in the column. The width of 1 unit of the xlsxwriter columns is about equal to the width of one character. So, you can simulate autofit by setting each column to the max number of characters in that column. For example, I tend to use the code below when working with pandas dataframes and xlsxwriter. It first finds the maximum width of the index, which is always the left column for a pandas to excel rendered dataframe. Then, it returns the maximum of all values and the column name for each of the remaining columns moving left to right. It shouldn't be too difficult to adapt this code for whatever data you are using. def get_col_widths(dataframe): # First we find the maximum length of the index column idx_max = max([len(str(s)) for s in dataframe.index.values] + [len(str(dataframe.index.name))]) # Then, we concatenate this to the max of the lengths of column name and its values for each column, left to right return [idx_max] + [max([len(str(s)) for s in dataframe[col].values] + [len(col)]) for col in dataframe.columns] for i, width in enumerate(get_col_widths(dataframe)): worksheet.set_column(i, i, width)"} +{"question_id": 26312219, "score": 76, "creation_date": 1413012453, "tags": ["python", "django", "django-rest-framework"], "instruction": "OperationalError, no such column. Django\n\nI am going through the Django REST framework tutorial found at http://www.django-rest-framework.org/ I am almost finished with it and just added authentication. 
Now I am getting : OperationalError at /snippets/ no such column: snippets_snippet.owner_id Request Method: GET Request URL: http://localhost:8000/snippets/ Django Version: 1.7 Exception Type: OperationalError Exception Value: no such column: snippets_snippet.owner_id Exception Location: /Users/taylorallred/Desktop/env/lib/python2.7/site-packages/django/db/backends/sqlite3/base.py in execute, line 485 Python Executable: /Users/taylorallred/Desktop/env/bin/python Python Version: 2.7.5 Python Path: ['/Users/taylorallred/Desktop/tutorial', '/Users/taylorallred/Desktop/env/lib/python27.zip', '/Users/taylorallred/Desktop/env/lib/python2.7', '/Users/taylorallred/Desktop/env/lib/python2.7/plat-darwin', '/Users/taylorallred/Desktop/env/lib/python2.7/plat-mac', '/Users/taylorallred/Desktop/env/lib/python2.7/plat-mac/lib-scriptpackages', '/Users/taylorallred/Desktop/env/Extras/lib/python', '/Users/taylorallred/Desktop/env/lib/python2.7/lib-tk', '/Users/taylorallred/Desktop/env/lib/python2.7/lib-old', '/Users/taylorallred/Desktop/env/lib/python2.7/lib-dynload', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-darwin', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-tk', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac/lib-scriptpackages', '/Users/taylorallred/Desktop/env/lib/python2.7/site-packages'] Server time: Sat, 11 Oct 2014 07:02:34 +0000 I have looked in several places on the web, not just StackOverflow for the solution, it seems like in general that the problem is with my database and need to delete it then remake it, I have done this several times, the tutorial even has me delete the database and remake it at the point. Here is my models.py: from django.db import models from pygments.lexers import get_all_lexers from pygments.styles import get_all_styles from pygments.lexers import get_lexer_by_name from pygments.formatters.html import HtmlFormatter from pygments import highlight LEXERS = [item for item in get_all_lexers() if item[1]] LANGUAGE_CHOICES = sorted([(item[1][0], item[0]) for item in LEXERS]) STYLE_CHOICES = sorted((item, item) for item in get_all_styles()) class Snippet(models.Model): owner = models.ForeignKey('auth.User', related_name='snippets') highlighted = models.TextField() created = models.DateTimeField(auto_now_add=True) title = models.CharField(max_length=100, blank=True, default='') code = models.TextField() linenos = models.BooleanField(default=False) language = models.CharField(choices=LANGUAGE_CHOICES, default='python', max_length=100) style = models.CharField(choices=STYLE_CHOICES, default='friendly', max_length=100) class Meta: ordering = ('created',) def save(self, *args, **kwargs): \"\"\" Use the 'pygments' library to create a highlighted HTML representation of the code snippet. 
\"\"\" lexer = get_lexer_by_name(self.language) linenos = self.linenos and 'table' or False options = self.title and {'title': self.title} or {} formatter = HtmlFormatter(style=self.style, linenos=linenos, full=true, **options) self.highlighted = highlight(self.code, lexer, formatter) super(Snippet, self).save(*args, **kwargs) My serializers.py: from django.forms import widgets from rest_framework import serializers from snippets.models import Snippet, LANGUAGE_CHOICES, STYLE_CHOICES from django.contrib.auth.models import User class SnippetSerializer(serializers.ModelSerializer): owner = serializers.Field(source='owner.username') class Meta: model = Snippet fields = ('id', 'title', 'code', 'linenos', 'language', 'style', 'owner') class UserSerializer(serializers.ModelSerializer): snippets = serializers.PrimaryKeyRelatedField(many=True) class Meta: model = User fields = ('id', 'username', 'snippets') My views.py: from snippets.models import Snippet from snippets.serializers import SnippetSerializer from rest_framework import generics from django.contrib.auth.models import User from snippets.serializers import UserSerializer from rest_framework import permissions class SnippetList(generics.ListCreateAPIView): \"\"\" List all snippets, or create a new snippet. \"\"\" queryset = Snippet.objects.all() serializer_class = SnippetSerializer def pre_save(self, obj): obj.owner = self.request.user permission_classes = (permissions.IsAuthenticatedOrReadOnly,) class SnippetDetail(generics.RetrieveUpdateDestroyAPIView): \"\"\" Retrieve, update or delete a nippet instance. \"\"\" queryset = Snippet.objects.all() serializer_class = SnippetSerializer def pre_save(self, obj): obj.owner = self.request.user permission_classes = (permissions.IsAuthenticatedOrReadOnly,) class UserList(generics.ListAPIView): queryset = User.objects.all() serializer_class = UserSerializer class UserDetail(generics.RetrieveAPIView): queryset = User.objects.all() serializer_class = UserSerializer And finally my urls.py from django.conf.urls import include from django.conf.urls import patterns, url from rest_framework.urlpatterns import format_suffix_patterns from snippets import views urlpatterns = patterns('', url(r'^snippets/$', views.SnippetList.as_view()), url(r'^snippets/(?P[0-9]+)/$', views.SnippetDetail.as_view()), url(r'^users/$', views.UserList.as_view()), url(r'^users/(?P[0-9]+)/$', views.UserDetail.as_view()), ) urlpatterns = format_suffix_patterns(urlpatterns) urlpatterns += patterns('', url(r'^api-auth/', include('rest_framework.urls', namespace='rest_framework')), ) DB Schema: CREATE TABLE \"snippets_snippet\" (\"id\" integer NOT NULL PRIMARY KEY AUTOINCREMENT, \"created\" datetime NOT NULL, \"title\" varchar(100) NOT NULL, \"code\" text NOT NULL, \"linenos\" bool NOT NULL, \"language\" varchar(100) NOT NULL, \"style\" varchar(100) NOT NULL); After doing some digging I found that when deleting and recreating the DB (as the tutorial says to) instead of using the make migrations command it would not only NOT add the columns but it would also not tell me something was wrong when running the make migrations command it tells me: You are trying to add a non-nullable field 'highlighted' to snippet without a default; we can't do that (the database needs something to populate existing rows). 
Please select a fix: 1) Provide a one-off default now (will be set on all existing rows) 2) Quit, and let me add a default in models.py If I comment out the highlighted section in models.py it will post the same message above but for the owner line. So it wants a default for both highlighted and owner, but I am not sure what to use as it. As well as the tutorial isn't helping me either on it.", "output": "As you went through the tutorial you must have come across the section on migration, as this was one of the major changes in Django 1.7 Prior to Django 1.7, the syncdb command never made any change that had a chance to destroy data currently in the database. This meant that if you did syncdb for a model, then added a new row to the model (a new column, effectively), syncdb would not affect that change in the database. So either you dropped that table by hand and then ran syncdb again (to recreate it from scratch, losing any data), or you manually entered the correct statements at the database to add only that column. Then a project came along called south which implemented migrations. This meant that there was a way to migrate forward (and reverse, undo) any changes to the database and preserve the integrity of data. In Django 1.7, the functionality of south was integrated directly into Django. When working with migrations, the process is a bit different. Make changes to models.py (as normal). Create a migration. This generates code to go from the current state to the next state of your model. This is done with the makemigrations command. This command is smart enough to detect what has changed and will create a script to effect that change to your database. Next, you apply that migration with migrate. This command applies all migrations in order. So your normal syncdb is now a two-step process, python manage.py makemigrations followed by python manage.py migrate. Now, on to your specific problem: class Snippet(models.Model): owner = models.ForeignKey('auth.User', related_name='snippets') highlighted = models.TextField() created = models.DateTimeField(auto_now_add=True) title = models.CharField(max_length=100, blank=True, default='') code = models.TextField() linenos = models.BooleanField(default=False) language = models.CharField(choices=LANGUAGE_CHOICES, default='python', max_length=100) style = models.CharField(choices=STYLE_CHOICES, default='friendly', max_length=100) In this model, you have two fields highlighted and code that is required (they cannot be null). Had you added these fields from the start, there wouldn't be a problem because the table has no existing rows? However, if the table has already been created and you add a field that cannot be null, you have to define a default value to provide for any existing rows - otherwise, the database will not accept your changes because they would violate the data integrity constraints. This is what the command is prompting you about. 
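If you take the first option and give '' as the one-off default, makemigrations writes a migration containing an operation roughly like the sketch below (file name and dependency are illustrative):

# snippets/migrations/0002_snippet_highlighted.py (name will differ)
from django.db import migrations, models

class Migration(migrations.Migration):

    dependencies = [
        ('snippets', '0001_initial'),
    ]

    operations = [
        migrations.AddField(
            model_name='snippet',
            name='highlighted',
            field=models.TextField(default=''),
            preserve_default=False,
        ),
    ]
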
You can tell Django to apply a default during migration, or you can give it a \"blank\" default highlighted = models.TextField(default='') in the model itself."} +{"question_id": 73929564, "score": 76, "creation_date": 1664747002, "tags": ["python", "django", "digital-ocean"], "instruction": "'EntryPoints' object has no attribute 'get' - Digital ocean\n\nI have made a deployment to Digital ocean, on staging (Heroku server) the app is working well, but Digital ocean it's failing with the error below, what could be the issue : AttributeError at /admin/ 'EntryPoints' object has no attribute 'get' Request Method: GET Request URL: https://xxxx/admin/ Django Version: 3.1 Exception Type: AttributeError Exception Value: 'EntryPoints' object has no attribute 'get' Exception Location: /usr/local/lib/python3.7/site-packages/markdown/util.py, line 85, in Python Executable: /usr/local/bin/python Python Version: 3.7.5 Python Path: ['/opt/app', '/usr/local/bin', '/usr/local/lib/python37.zip', '/usr/local/lib/python3.7', '/usr/local/lib/python3.7/lib-dynload', '/usr/local/lib/python3.7/site-packages', '/usr/local/lib/python3.7/site-packages/odf', '/usr/local/lib/python3.7/site-packages/odf', '/usr/local/lib/python3.7/site-packages/odf', '/usr/local/lib/python3.7/site-packages/odf', '/usr/local/lib/python3.7/site-packages/odf', '/usr/local/lib/python3.7/site-packages/odf', '/usr/local/lib/python3.7/site-packages/odf'] Server time: Sun, 02 Oct 2022 21:41:00 +0000", "output": "Because importlib-metadata released v5.0.0 yesterday, which removes the deprecated endpoint. You can set importlib-metadata<5.0 in your setup.py so it does not install the latest version. Or if you use requirements.txt, you can likewise pin importlib-metadata below version 5.0, e.g. importlib-metadata==4.13.0 For more info: https://importlib-metadata.readthedocs.io/en/latest/history.html"} +{"question_id": 22278993, "score": 76, "creation_date": 1394344904, "tags": ["python", "python-3.x", "python-3.3"], "instruction": "AttributeError: 'module' object has no attribute 'request'\n\nWhen I run the following code in Python 3.3: import urllib tempfile = urllib.request.urlopen(\"http://yahoo.com\") I get the following error: I did this too to verify: What am I doing wrong?", "output": "The urllib module has been split into parts and renamed in Python 3 to urllib.request, urllib.parse, and urllib.error. Import urllib.request instead of urllib. import urllib.request"} +{"question_id": 49299574, "score": 76, "creation_date": 1521117107, "tags": ["python", "conda"], "instruction": "How to list all installed Jupyter kernels?\n\nListing all the available environments is as simple as: $ conda env list Now how does one list the currently installed kernels, without having to go to the path: $ ls /home/{{user}}/.local/share/jupyter/kernels/", "output": "With Jupyter installed you get the list of currently installed kernels with: $ jupyter kernelspec list python2 /usr/local/lib/python2.7/dist-packages/ipykernel/resources testenv /home/{{user}}/.local/share/jupyter/kernels/sparkenv"} +{"question_id": 37433157, "score": 76, "creation_date": 1464168964, "tags": ["python", "asynchronous", "contextmanager"], "instruction": "Asynchronous context manager\n\nI have an asynchronous API which I'm using to connect and send mail to an SMTP server which has some setup and tear down to it. So it fits nicely into using a contextmanager from Python 3's contextlib. Though, I don't know if it's possible to write because they both use the generator syntax to write. 
This might demonstrate the problem (contains a mix of yield-base and async-await syntax to demonstrate the difference between async calls and yields to the context manager). @contextmanager async def smtp_connection(): client = SMTPAsync() ... try: await client.connect(smtp_url, smtp_port) await client.starttls() await client.login(smtp_username, smtp_password) yield client finally: await client.quit() Is this kind of thing possible within python currently? and how would I use a with as statement if it is? If not is there a alternative way I could achieve this - maybe using the old style context manager?", "output": "Since Python 3.7, you can write: from contextlib import asynccontextmanager @asynccontextmanager async def smtp_connection(): client = SMTPAsync() ... try: await client.connect(smtp_url, smtp_port) await client.starttls() await client.login(smtp_username, smtp_password) yield client finally: await client.quit() Before 3.7, you can use the async_generator package for this. On 3.6, you can write: # This import changed, everything else is the same from async_generator import asynccontextmanager @asynccontextmanager async def smtp_connection(): client = SMTPAsync() ... try: await client.connect(smtp_url, smtp_port) await client.starttls() await client.login(smtp_username, smtp_password) yield client finally: await client.quit() And if you want to work all the way back to 3.5, you can write: # This import changed again: from async_generator import asynccontextmanager, async_generator, yield_ @asynccontextmanager @async_generator # <-- added this async def smtp_connection(): client = SMTPAsync() ... try: await client.connect(smtp_url, smtp_port) await client.starttls() await client.login(smtp_username, smtp_password) await yield_(client) # <-- this line changed finally: await client.quit()"} +{"question_id": 1136106, "score": 76, "creation_date": 1247731710, "tags": ["python", "sql", "django", "sqlite", "insert"], "instruction": "What is an efficient way of inserting thousands of records into an SQLite table using Django?\n\nI have to insert 8000+ records into a SQLite database using Django's ORM. This operation needs to be run as a cronjob about once per minute. At the moment I'm using a for loop to iterate through all the items and then insert them one by one. Example: for item in items: entry = Entry(a1=item.a1, a2=item.a2) entry.save() What is an efficient way of doing this? Edit: A little comparison between the two insertion methods. Without commit_manually decorator (11245 records): nox@noxdevel marinetraffic]$ time python manage.py insrec real 1m50.288s user 0m6.710s sys 0m23.445s Using commit_manually decorator (11245 records): [nox@noxdevel marinetraffic]$ time python manage.py insrec real 0m18.464s user 0m5.433s sys 0m10.163s Note: The test script also does some other operations besides inserting into the database (downloads a ZIP file, extracts an XML file from the ZIP archive, parses the XML file) so the time needed for execution does not necessarily represent the time needed to insert the records.", "output": "You want to check out django.db.transaction.commit_manually. http://docs.djangoproject.com/en/dev/topics/db/transactions/#django-db-transaction-commit-manually So it would be something like: from django.db import transaction @transaction.commit_manually def viewfunc(request): ... for item in items: entry = Entry(a1=item.a1, a2=item.a2) entry.save() transaction.commit() Which will only commit once, instead at each save(). 
In django 1.3 context managers were introduced. So now you can use transaction.commit_on_success() in a similar way: from django.db import transaction def viewfunc(request): ... with transaction.commit_on_success(): for item in items: entry = Entry(a1=item.a1, a2=item.a2) entry.save() In django 1.4, bulk_create was added, allowing you to create lists of your model objects and then commit them all at once. NOTE the save method will not be called when using bulk create. >>> Entry.objects.bulk_create([ ... Entry(headline=\"Django 1.0 Released\"), ... Entry(headline=\"Django 1.1 Announced\"), ... Entry(headline=\"Breaking: Django is awesome\") ... ]) In django 1.6, transaction.atomic was introduced, intended to replace now legacy functions commit_on_success and commit_manually. from the django documentation on atomic: atomic is usable both as a decorator: from django.db import transaction @transaction.atomic def viewfunc(request): # This code executes inside a transaction. do_stuff() and as a context manager: from django.db import transaction def viewfunc(request): # This code executes in autocommit mode (Django's default). do_stuff() with transaction.atomic(): # This code executes inside a transaction. do_more_stuff()"} +{"question_id": 30447083, "score": 76, "creation_date": 1432595004, "tags": ["python", "pandas", "missing-data"], "instruction": "Python, Pandas : Return only those rows which have missing values\n\nWhile working in Pandas in Python... I'm working with a dataset that contains some missing values, and I'd like to return a dataframe which contains only those rows which have missing data. Is there a nice way to do this? (My current method to do this is an inefficient \"look to see what index isn't in the dataframe without the missing values, then make a df out of those indices.\")", "output": "You can use any axis=1 to check for least one True per row, then filter with boolean indexing: null_data = df[df.isnull().any(axis=1)]"} +{"question_id": 53482760, "score": 76, "creation_date": 1543240848, "tags": ["python", "pandas"], "instruction": "Filter data frame based on index value in Python\n\nI have a data frame df with thousands of rows, and a sample is this: Index A B C D E F EX-A.1.A.B-1A 18 7 2 2 9 8 EX-A.1.A.B-1C 0 0 0 0 0 0 EX-A.1.A.B-4A 6 4 8 6 1 1 EX-A.1.A.B-4C 0 0 0 0 0 0 EX-A.1.A.B-4F 0 0 0 0 0 0 I also have a list my_list = [\"EX-A.1.A.B-1A\",\"EX-A.1.A.B-4A\",\"EX-A.1.A.B-4F\"] and I want to filter the df based on this list, therefore I want to keep the rows for which the index value is in the list my_list. I tried this in order to create a new filtered df: Filter_df = df[df.index in my_list] and I get this error: ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all(). Any ideas on how I could do this? Thanks", "output": "To check if row indices are in my_list: Filter_df = df[df.index.isin(my_list)]"} +{"question_id": 20965046, "score": 76, "creation_date": 1389075064, "tags": ["python", "pandas", "dataframe", "cumulative-sum"], "instruction": "Cumulative sum and percentage on column?\n\nI have a DataFrame like this: df: fruit val1 val2 0 orange 15 3 1 apple 10 13 2 mango 5 5 How do I get Pandas to give me a cumulative sum and percentage column on only val1? 
Desired output: df_with_cumsum: fruit val1 val2 cum_sum cum_perc 0 orange 15 3 15 50.00 1 apple 10 13 25 83.33 2 mango 5 5 30 100.00 I tried df.cumsum(), but it's giving me this error: TypeError: ufunc 'isnan' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''", "output": "df['cum_sum'] = df['val1'].cumsum() df['cum_perc'] = 100*df['cum_sum']/df['val1'].sum() This will add the columns to df. If you want a copy, copy df first and then do these operations on the copy."} +{"question_id": 43328632, "score": 76, "creation_date": 1491842332, "tags": ["python", "pytorch", "reshape", "tensor"], "instruction": "PyTorch reshape tensor dimension\n\nI want to reshape a vector of shape (5,) into a matrix of shape (1, 5). With numpy, I can do: >>> import numpy as np >>> a = np.array([1, 2, 3, 4, 5]) >>> a.shape (5,) >>> a = np.reshape(a, (1, 5)) >>> a.shape (1, 5) >>> a array([[1, 2, 3, 4, 5]]) But how do I do this with PyTorch?", "output": "Use torch.unsqueeze(input, dim, out=None): >>> import torch >>> a = torch.Tensor([1, 2, 3, 4, 5]) >>> a 1 2 3 4 5 [torch.FloatTensor of size 5] >>> a = a.unsqueeze(0) >>> a 1 2 3 4 5 [torch.FloatTensor of size 1x5]"} +{"question_id": 65622045, "score": 76, "creation_date": 1610067417, "tags": ["python", "python-3.x", "jsonserializer", "pydantic"], "instruction": "pydantic convert to jsonable dict (not full json string)\n\nI'd like to use pydantic for handling data (bidirectionally) between an api and datastore due to it's nice support for several types I care about that are not natively json-serializable. It has better read/validation support than the current approach, but I also need to create json-serializable dict objects to write out. from uuid import UUID, uuid4 from pydantic import BaseModel class Model(BaseModel): the_id: UUID instance = Model(the_id=uuid4()) print(\"1: %s\" % instance.dict() print(\"2: %s\" % instance.json() prints {'the_id': UUID('4108356a-556e-484b-9447-07b56a664763')} >>> inst.json() '{\"the_id\": \"4108356a-556e-484b-9447-07b56a664763\"}' Id like the following: {\"the_id\": \"4108356a-556e-484b-9447-07b56a664763\"} # eg \"json-compatible\" dict It appears that while pydantic has all the mappings, but I can't find any usage of the serialization outside the standard json ~recursive encoder (json.dumps( ... default=pydantic_encoder)) in pydantic/main.py. but I'd prefer to keep to one library for both validate raw->obj (pydantic is great at this) as well as the obj->raw(dict) so that I don't have to manage multiple serialization mappings. I suppose I could implement something similar to the json usage of the encoder, but this should be a common use case? Other approaches such as dataclasses(builtin) + libraries such as dataclasses_jsonschema provide this ~serialization to json-ready dict, but again, hoping to use pydantic for the more robust input validation while keeping things symmetrical.", "output": "it appears this functionality has been proposed, and (may be) favored by pydantic's author samuel colvin, as https://github.com/samuelcolvin/pydantic/issues/951#issuecomment-552463606 which proposes adding a simplify parameter to Model.dict() to output jsonable data. This code runs in a production api layer, and is exersized such that we can't use the one-line workaround suggested (just doing a full serialize (.json()) + full deserialize). 
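A rough sketch of the kind of custom converter described in the next sentence (the to_jsonable name and the set of handled types are my own choices, not pydantic API):

from datetime import date, datetime
from enum import Enum
from uuid import UUID

def to_jsonable(value):
    # Recursively turn the output of Model.dict() into plain JSON-serializable types.
    if isinstance(value, dict):
        return {key: to_jsonable(val) for key, val in value.items()}
    if isinstance(value, (list, tuple, set)):
        return [to_jsonable(item) for item in value]
    if isinstance(value, UUID):
        return str(value)
    if isinstance(value, (datetime, date)):
        return value.isoformat()
    if isinstance(value, Enum):
        return to_jsonable(value.value)
    return value

# usage: to_jsonable(instance.dict()) -> {'the_id': '4108356a-556e-484b-9447-07b56a664763'}
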
We implemented a custom function to do this, descending the result of .dict() and converting types to jsonable - hopefully the above proposed functionality is added to pydantic in the future."} +{"question_id": 46741423, "score": 76, "creation_date": 1507960823, "tags": ["python", "pandas", "date"], "instruction": "Add months to a date in Pandas\n\nI'm trying to figure out how to add 3 months to a date in a Pandas dataframe, while keeping it in the date format, so I can use it to lookup a range. This is what I've tried: #create dataframe df = pd.DataFrame([pd.Timestamp('20161011'), pd.Timestamp('20161101') ], columns=['date']) #create a future month period plus_month_period = 3 #calculate date + future period df['future_date'] = plus_month_period.astype(\"timedelta64[M]\") However, I get the following error: AttributeError: 'int' object has no attribute 'astype'", "output": "You could use pd.DateOffset In [1756]: df.date + pd.DateOffset(months=plus_month_period) Out[1756]: 0 2017-01-11 1 2017-02-01 Name: date, dtype: datetime64[ns] Details In [1757]: df Out[1757]: date 0 2016-10-11 1 2016-11-01 In [1758]: plus_month_period Out[1758]: 3"} +{"question_id": 5975952, "score": 76, "creation_date": 1305192475, "tags": ["python", "http", "post", "basehttpserver", "basehttprequesthandler"], "instruction": "How to extract HTTP message body in BaseHTTPRequestHandler.do_POST()?\n\nIn the do_POST() method of BaseHTTPRequestHandler I can access the headers of the POST request simply via the property self.headers. But I can't find a similar property for accessing the body of the message. How do I then go about doing that?", "output": "You can access POST body in do_POST method like this: for python 2 content_len = int(self.headers.getheader('content-length', 0)) for python 3 content_len = int(self.headers.get('Content-Length')) and then read the data post_body = self.rfile.read(content_len)"} +{"question_id": 15605925, "score": 76, "creation_date": 1364171672, "tags": ["python", "exception", "try-catch", "read-eval-print-loop"], "instruction": "How to get the last exception object after an error is raised at a Python prompt?\n\nWhen debugging Python code at the interactive prompt (REPL), often I'll write some code which raises an exception, but I haven't wrapped it in a try/except, so once the error is raised, I've forever lost the exception object. Often the traceback and error message Python prints out isn't enough. For example, when fetching a URL, the server might return a 40x error, and you need the content of the response via error.read() ... but you haven't got the error object anymore. For example: >>> import urllib2 >>> f = urllib2.urlopen('http://example.com/api/?foo=bad-query-string') Traceback (most recent call last): File \"\", line 1, in ... urllib2.HTTPError: HTTP Error 400: Bad Request Drat, what did the body of the response say? It probably had valuable error information in it... I realize it's usually easy to re-run your code wrapped in a try/except, but that's not ideal. I also realize that in this specific case if I were using the requests library (which doesn't raise for HTTP errors), I wouldn't have this problem ... but I'm really wondering if there's a more general way to get the last exception object at a Python prompt in these cases.", "output": "The sys module provides some functions for post-hoc examining of exceptions: sys.last_type, sys.last_value, and sys.last_traceback. 
sys.last_value is the one you're looking for."} +{"question_id": 19986662, "score": 76, "creation_date": 1384457561, "tags": ["python", "floating-point", "decimal", "rounding"], "instruction": "Rounding a number in Python but keeping ending zeros\n\nI've been working on a script that takes data from an Excel spreadsheet, rounds the numbers, and removes the decimal point, for example, 2606.89579999999 becomes 26069. However, I need the number to round to two decimal places even if there would be a trailing zero, so 2606.89579999999 should become 260690. I currently have it so i takes the data from the cell in Excel, and rounds it to two decimal places (i = round(i, 2)) which gives me the single decimal point in the above example. I've tried figuring out how to get this to work with Decimal, but I can't seem to get it working. All other numbers that get rounded, if the rounded value doesn't end in '0', work fine with round(i, 2), but if the numbers just so happen to end in *.x0, that 0 gets dropped off and messes with the data.", "output": "As you are talking about trailing zeros, this is a question about representation as string, you can use >>> \"%.2f\" % round(2606.89579999999, 2) '2606.90' Or use modern style with format function: >>> '{:.2f}'.format(round(2606.89579999999, 2)) '2606.90' and remove point with replace or translate (_ refers to result of previous command in python console): >>> _.translate(None, '.') '260690' Note that rounding is not needed here, as .2f format applies the same rounding: >>> \"%.2f\" % 2606.89579999999 '2606.90' But as you mentioned excel, you probably would opt to roll your own rounding function, or use decimal, as float.round can lead to strange results due to float representation: >>> round(2.675, 2) 2.67 >>> round(2606.89579999999, 2) 2606.89 With decimal use quantize: >>> from decimal import * >>> x = Decimal('2606.8950000000001') # Decimal('2606.8950000000001') >>> '{}'.format(x.quantize(Decimal('.01'), rounding=ROUND_HALF_EVEN)) '2606.90' That, for your original task, becomes: >>> x = Decimal('2606.8950000000001') >>> int((x*100).quantize(1, rounding=ROUND_HALF_EVEN)) 260690 And the reason of strange rounding comes to the front with Decimal: >>> x = Decimal(2606.8950000000001) # Decimal('2606.89499999999998181010596454143524169921875') # internal float repr"} +{"question_id": 13203601, "score": 76, "creation_date": 1351893570, "tags": ["python", "list", "big-o"], "instruction": "Big-O of list slicing\n\nSay I have some Python list, my_list which contains N elements. Single elements may be indexed by using my_list[i_1], where i_1 is the index of the desired element. However, Python lists may also be indexed my_list[i_1:i_2] where a \"slice\" of the list from i_1 to i_2 is desired. What is the Big-O (worst-case) notation to slice a list of size N? Personally, if I were coding the \"slicer\" I would iterate from i_1 to i_2, generate a new list and return it, implying O(N), is this how Python does it? Thank you,", "output": "Getting a slice is O(i_2 - i_1). This is because Python's internal representation of a list is an array, so you can start at i_1 and iterate to i_2. 
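A quick, informal way to see this (timings vary by machine; only the rough proportions matter):

import timeit

big = list(range(10**7))

# The cost tracks the slice length (i_2 - i_1), not len(big):
print(timeit.timeit(lambda: big[0:10], number=100))      # short slice: very fast
print(timeit.timeit(lambda: big[0:10**6], number=100))   # much longer slice: much slower
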
For more information, see the Python Time Complexity wiki entry You can also look at the implementation in the CPython source if you want to."} +{"question_id": 10162679, "score": 76, "creation_date": 1334498435, "tags": ["python", "heap"], "instruction": "Python: delete element from heap\n\nPython has heapq module which implements heap data structure and it supports some basic operations (push, pop). How to remove i-th element from the heap in O(log n)? Is it even possible with heapq or do I have to use another module? Note, there is an example at the bottom of the documentation: http://docs.python.org/library/heapq.html which suggest a possible approach - this is not what I want. I want the element to remove, not to merely mark as removed.", "output": "You can remove the i-th element from a heap quite easily: h[i] = h[-1] h.pop() heapq.heapify(h) Just replace the element you want to remove with the last element and remove the last element then re-heapify the heap. This is O(n), if you want you can do the same thing in O(log(n)) but you'll need to call a couple of the internal heapify functions, or better as larsmans pointed out just copy the source of _siftup/_siftdown out of heapq.py into your own code: h[i] = h[-1] h.pop() if i < len(h): heapq._siftup(h, i) heapq._siftdown(h, 0, i) Note that in each case you can't just do h[i] = h.pop() as that would fail if i references the last element. If you special case removing the last element then you could combine the overwrite and pop. Note that depending on the typical size of your heap you might find that just calling heapify while theoretically less efficient could be faster than re-using _siftup/_siftdown: a little bit of introspection will reveal that heapify is probably implemented in C but the C implementation of the internal functions aren't exposed. If performance matter to you then consider doing some timing tests on typical data to see which is best. Unless you have really massive heaps big-O may not be the most important factor. Edit: someone tried to edit this answer to remove the call to _siftdown with a comment that: _siftdown is not needed. New h[i] is guaranteed to be the smallest of the old h[i]'s children, which is still larger than old h[i]'s parent (new h[i]'s parent). _siftdown will be a no-op. I have to edit since I don't have enough rep to add a comment yet. What they've missed in this comment is that h[-1] might not be a child of h[i] at all. The new value inserted at h[i] could come from a completely different branch of the heap so it might need to be sifted in either direction. Also to the comment asking why not just use sort() to restore the heap: calling _siftup and _siftdown are both O(log n) operations, calling heapify is O(n). Calling sort() is an O(n log n) operation. It is quite possible that calling sort will be fast enough but for large heaps it is an unnecessary overhead. Edited to avoid the issue pointed out by @Seth Bruder. When i references the end element the _siftup() call would fail, but in that case popping an element off the end of the heap doesn't break the heap invariant. Note Someone suggested an edit (rejected before I got to it) changing the heapq._siftdown(h, 0, i) to heapq._siftdown(h, o, len(h)). This would be incorrect as the final parameter on _siftdown() is the position of the element to move, not some limit on where to move it. The second parameter 0 is the limit. 
Sifting up temporarily removes from the list the new item at the specified position and moves the smallest child of that position up repeating that until all smaller children have been moved up and a leaf is left empty. The removed item is inserted in the empty leaf node then _siftdown() is called to move it below any larger parent node. The catch is the call to _siftdown() inside _siftup() uses the second parameter to terminate the sift at the original position. The extra call to _siftdown() in the code I gave is used to continue the sift down as far as the root of the heap. It only does something if the new element actually needs to be further down than the position it got inserted. For the avoidance of doubt: sift up moves to higher indexes in the list. Sift down moves to lower indexes i.e. earlier in the list. The heap has its root at position 0 and its leaves at higher numbers."} +{"question_id": 50468951, "score": 76, "creation_date": 1526995674, "tags": ["python", "pip", "config", "pypi"], "instruction": "Credentials in pip.conf for private PyPI\n\nI have a private PyPI repository. Is there any way to store credentials in pip.conf similar to .pypirc? What I mean. Currently in .pypirc you can have such configuration: [distutils] index-servers = custom [custom] repository: https://pypi.example.com username: johndoe password: changeme From what I've found that you can put in pip.conf: [global] index = https://username:password@pypi.example.com/pypi index-url = https://username:password@pypi.example.com/simple cert = /etc/ssl/certs/ca-certificates.crt But here I see two problems: For each url you'll need each time to specify the same username and password. Username and password become visible in the logs, cause they are part of the url. Is there any way to store username and password outside of url?", "output": "Ideally, you should configure Pip's keyring support (see that link for caveats). Some backends store credentials in an encrypted/protected form. Other parts of the packaging ecosystem, like Twine, also support keyring. Alternatively, you could store credentials for Pip to use in ~/.netrc like this: machine pypi.example.com login johndoe password changeme Pip will use these credentials when accessing https://pypi.example.com but won't log them. You must specify the index server separately (such as in pip.conf as in the question). Note that ~/.netrc must be owned by the user pip executes as. It must not be readable by any other user, either. An invalid file is silently ignored. You can ensure the permissions are correct like this: chown $USER ~/.netrc chmod 0600 ~/.netrc This permissions check doesn't apply before Python 3.4, but it's a good idea in any case. Internally Pip uses requests when making HTTP requests. requests uses the standard library netrc module to read the file, so the character set is limited to an ASCII subset."} +{"question_id": 14365027, "score": 76, "creation_date": 1358360248, "tags": ["python", "rest", "post", "urllib2", "redmine"], "instruction": "Python POST binary data\n\nI am writing some code to interface with redmine and I need to upload some files as part of the process, but I am not sure how to do a POST request from python containing a binary file. I am trying to mimic the commands here: curl --data-binary \"@image.png\" -H \"Content-Type: application/octet-stream\" -X POST -u login:password http://redmine/uploads.xml In python (below), but it does not seem to work. 
I am not sure if the problem is somehow related to encoding the file or if something is wrong with the headers. import urllib2, os FilePath = \"C:\\somefolder\\somefile.7z\" FileData = open(FilePath, \"rb\") length = os.path.getsize(FilePath) password_manager = urllib2.HTTPPasswordMgrWithDefaultRealm() password_manager.add_password(None, 'http://redmine/', 'admin', 'admin') auth_handler = urllib2.HTTPBasicAuthHandler(password_manager) opener = urllib2.build_opener(auth_handler) urllib2.install_opener(opener) request = urllib2.Request( r'http://redmine/uploads.xml', FileData) request.add_header('Content-Length', '%d' % length) request.add_header('Content-Type', 'application/octet-stream') try: response = urllib2.urlopen( request) print response.read() except urllib2.HTTPError as e: error_message = e.read() print error_message I have access to the server and it looks like a encoding error: ... invalid byte sequence in UTF-8 Line: 1 Position: 624 Last 80 unconsumed characters: 7z\u00bc\u00af'\u00c5\u00d0\u00d0\u00b72^\u00d4\u00f8\u00eb4g\u00b8R>> False.__sizeof__() 24 >>> True.__sizeof__() 24 Python 3.x >>> False.__sizeof__() 24 >>> True.__sizeof__() 28 What changed in Python 3 that makes the size of True greater than the size of False?", "output": "It is because bool is a subclass of int in both Python 2 and 3. >>> issubclass(bool, int) True But the int implementation has changed. In Python 2, int was the one that was 32 or 64 bits, depending on the system, as opposed to arbitrary-length long. In Python 3, int is arbitrary-length - the long of Python 2 was renamed to int and the original Python 2 int dropped altogether. In Python 2 you get the exactly same behaviour for long objects 1L and 0L: Python 2.7.15rc1 (default, Apr 15 2018, 21:51:34) [GCC 7.3.0] on linux2 Type \"help\", \"copyright\", \"credits\" or \"license\" for more information. >>> import sys >>> sys.getsizeof(1L) 28 >>> sys.getsizeof(0L) 24 The long/Python 3 int is a variable-length object, just like a tuple - when it is allocated, enough memory is allocated to hold all the binary digits required to represent it. The length of the variable part is stored in the object head. 0 requires no binary digits (its variable length is 0), but even 1 spills over, and requires extra digits. I.e. 0 is represented as binary string of length 0: <> and 1 is represented as a 30-bit binary string: <000000000000000000000000000001> The default configuration in Python uses 30 bits in a uint32_t; so 2**30 - 1 still fits in 28 bytes on x86-64, and 2**30 will require 32; 2**30 - 1 will be presented as <111111111111111111111111111111> i.e. all 30 value bits set to 1; 2**30 will need more, and it will have internal representation <000000000000000000000000000001000000000000000000000000000000> As for True using 28 bytes instead of 24 - you need not worry. True is a singleton and therefore only 4 bytes are lost in total in any Python program, not 4 for every usage of True."} +{"question_id": 41924453, "score": 76, "creation_date": 1485714684, "tags": ["python", "torch", "pytorch"], "instruction": "PyTorch: How to use DataLoaders for custom Datasets\n\nHow to make use of the torch.utils.data.Dataset and torch.utils.data.DataLoader on your own data (not just the torchvision.datasets)? Is there a way to use the inbuilt DataLoaders which they use on TorchVisionDatasets to be used on any dataset?", "output": "Yes, that is possible. Just create the objects by yourself, e.g. 
import torch.utils.data as data_utils train = data_utils.TensorDataset(features, targets) train_loader = data_utils.DataLoader(train, batch_size=50, shuffle=True) where features and targets are tensors. features has to be 2-D, i.e. a matrix where each line represents one training sample, and targets may be 1-D or 2-D, depending on whether you are trying to predict a scalar or a vector. EDIT: response to @sarthak's question Basically yes. If you create an object of type TensorData, then the constructor investigates whether the first dimensions of the feature tensor (which is actually called data_tensor) and the target tensor (called target_tensor) have the same length: assert data_tensor.size(0) == target_tensor.size(0) However, if you want to feed these data into a neural network subsequently, then you need to be careful. While convolution layers work on data like yours, (I think) all of the other types of layers expect the data to be given in matrix form. So, if you run into an issue like this, then an easy solution would be to convert your 4D-dataset (given as some kind of tensor, e.g. FloatTensor) into a matrix by using the method view. For your 5000xnxnx3 dataset, this would look like this: 2d_dataset = 4d_dataset.view(5000, -1) (The value -1 tells PyTorch to figure out the length of the second dimension automatically.)"} +{"question_id": 2950971, "score": 76, "creation_date": 1275405511, "tags": ["python", "linux", "windows", "py2exe"], "instruction": "Packaging a Python script on Linux into a Windows executable\n\nI have a Python script that I'd like to compile into a Windows executable. Now, py2exe works fine from Windows, but I'd like to be able to run this from Linux. I do have Windows on my development machine, but Linux is my primary dev platform and I'm getting kind of sick of rebooting into Windows just to create the .exe. Nor do I want to have to buy a second Windows license to run in a virtual machine such as VirtualBox. Any ideas? PS: I am aware that py2exe doesn't exactly compile the python file as much as package your script with the Python interpreter. But either way, the result is that you don't need Python installed to run the script.", "output": "Did you look at PyInstaller? It seems that versions through 1.4 support cross-compilation (support was removed in 1.5+). See this answer for how to do it with PyInstaller 1.5+ under Wine. Documentation says: Add support for cross-compilation: PyInstaller is now able to build Windows executables when running under Linux. See documentation for more details. I didn't try it myself. I hope it helps"} +{"question_id": 26879073, "score": 76, "creation_date": 1415765105, "tags": ["python", "pandas", "chained-assignment"], "instruction": "Checking whether data frame is copy or view in Pandas\n\nIs there an easy way to check whether two data frames are different copies or views of the same underlying data that doesn't involve manipulations? I'm trying to get a grip on when each is generated, and given how idiosyncratic the rules seem to be, I'd like an easy way to test. For example, I thought \"id(df.values)\" would be stable across views, but they don't seem to be: # Make two data frames that are views of same data. df = pd.DataFrame([[1,2,3,4],[5,6,7,8]], index = ['row1','row2'], columns = ['a','b','c','d']) df2 = df.iloc[0:2,:] # Demonstrate they are views: df.iloc[0,0] = 99 df2.iloc[0,0] Out[70]: 99 # Now try and compare the id on values attribute # Different despite being views! 
id(df.values) Out[71]: 4753564496 id(df2.values) Out[72]: 4753603728 # And we can of course compare df and df2 df is df2 Out[73]: False Other answers I've looked up try to give rules, but they don't seem consistent, and they also don't answer this question of how to test: What rules does Pandas use to generate a view vs a copy? Pandas: Subindexing dataframes: Copies vs views Understanding pandas dataframe indexing Re-assignment in Pandas: Copy or view? And of course: - http://pandas.pydata.org/pandas-docs/stable/indexing.html#returning-a-view-versus-a-copy UPDATE: Comments below seem to answer the question -- looking at the df.values.base attribute rather than the df.values attribute does it, as does a reference to the df._is_copy attribute (though the latter is probably very bad form since it's an internal).", "output": "Answers from HYRY and Marius in comments! One can check either by: testing equivalence of the values.base attribute rather than the values attribute, as in: df.values.base is df2.values.base instead of df.values is df2.values. or using the (admittedly internal) _is_view attribute (df2._is_view is True)."} +{"question_id": 56134588, "score": 76, "creation_date": 1557850255, "tags": ["python", "pip", "virtualenv", "conda"], "instruction": "Is it a bad idea to use conda and pip install on the same environment?\n\nIn many cases a package can be installed using either conda install or pip install. Are there cases where it's better to stick to using only pip install, or only conda install, in the same environment? Can using both conda install and pip install in a single environment cause problems? See also: Specific reasons to favor pip or conda for installing Python packages", "output": "Don't mix conda install and pip install within a conda environment. Decide to use conda or virtualenv+pip once and for all. And here is how you decide which one suits you best: Conda installs various (not only Python) conda-adopted packages within a conda environment. It gets your environments right if you are into environments. Pip installs Python packages within a Python environment (virtualenv is one of them). It gets your Python packages installed right. Safe way to use conda: don't rush for the latest stuff and stick to the available packages and you'll be fine. Safe way to use pip+virtualenv: if you see a dependency issue or wish to remove and clean up after a package - don't. Just burn the house, abandon your old environment and create a new one. One command line and 2-5 minutes later, things will be nice and tidy again. Pip is the better of the two tools for installing Python packages, since pip packages normally come out first and are only later adopted for conda (by conda staff or contributors). Chances are, after updating or installing the latest version of Python, some of the packages will only be available through pip. And the latest, freshest versions of packages would only be available in pip. And mixing pip and conda packages together can be a nightmare (at least if you want to utilize conda's advantages). Conda is best when it comes to managing dependencies and replicating environments. When uninstalling a package conda can properly clean up after itself and has better control over conflicting dependency versions. Also, conda can export environment config and, if the planets are right at the moment and the new machine is not too different, replicate that environment somewhere else.
Also, conda can have larger control over the environment and can, for example, have a different version of Python installed inside of it (virtualenv - only the Python available in the system). You can always create a conda package when you have no freedom of choosing what to use. Some relevant facts: Conda takes more space and time to setup Conda might be better if you don't have admin rights on the system Conda will help when you have no system Python virtualenv+pip will free you up of knowing lots of details like that Some outdated notions: Conda used to be better for novice developers back in the day (2012ish). There is no usability gap anymore Conda was linked to Continuum Analytics too much. Now Conda itself is open source, the packages - not so much."} +{"question_id": 30109449, "score": 76, "creation_date": 1431024931, "tags": ["python", "python-3.x", "ssl", "ssl-certificate", "python-asyncio"], "instruction": "What does \"SSLError: [SSL] PEM lib (_ssl.c:2532)\" mean using the Python ssl library?\n\nI am trying to use connect to another party using Python 3 asyncio module and get this error: 36 sslcontext = ssl.SSLContext(ssl.PROTOCOL_TLSv1) ---> 37 sslcontext.load_cert_chain(cert, keyfile=ca_cert) 38 SSLError: [SSL] PEM lib (_ssl.c:2532) The question is just what the error mean. My certificate is correct, the keyfile (CA certificate) might not.", "output": "Assuming that version 3.6 is being used: See: https://github.com/python/cpython/blob/3.6/Modules/_ssl.c#L3523-L3534 PySSL_BEGIN_ALLOW_THREADS_S(pw_info.thread_state); r = SSL_CTX_check_private_key(self->ctx); PySSL_END_ALLOW_THREADS_S(pw_info.thread_state); if (r != 1) { _setSSLError(NULL, 0, __FILE__, __LINE__); goto error; } What it is saying is that SSL_CTX_check_private_key failed; thus, the private key is not correct. Reference to the likely version: https://github.com/python/cpython/blob/3.4/Modules/_ssl.c#L2529-L2535"} +{"question_id": 32682293, "score": 76, "creation_date": 1442770361, "tags": ["python", "django", "git", "migration"], "instruction": "django migrations - workflow with multiple dev branches\n\nI'm curious how other django developers manage multiple code branches (in git for instance) with migrations. My problem is as follows: - we have multiple feature branches in git, some of them with django migrations (some of them altering fields, or removing them altogether) - when I switch branches (with git checkout some_other_branch) the database does not reflect always the new code, so I run into \"random\" errors, where a db table column does not exist anymore, etc... Right now, I simply drop the db and recreate it, but it means I have to recreate a bunch of dummy data to restart work. I can use fixtures, but it requires keeping track of what data goes where, it's a bit of a hassle. Is there a good/clean way of dealing with this use-case? I'm thinking a post-checkout git hook script could run the necessary migrations, but I don't even know if migration rollbacks are at all possible.", "output": "Migrations rollback are possible and usually handled automatically by django. Considering the following model: class MyModel(models.Model): pass If you run python manage.py makemigrations myapp, it will generate the initial migration script. You can then run python manage.py migrate myapp 0001 to apply this initial migration. If after that you add a field to your model: class MyModel(models.Model): my_field = models.CharField() Then regenerate a new migration, and apply it, you can still go back to the initial state. 
Just run python manage.py migrate myapp 0001 and the ORM will go backward, removing the new field. It's trickier when you deal with data migrations, because you have to write the forward and backward code. Considering an empty migration created via python manage.py makemigrations myapp --empty, you'll end up with something like: # -*- coding: utf-8 -*- from __future__ import unicode_literals from django.db import models, migrations def forward(apps, schema_editor): # load some data MyModel = apps.get_model('myapp', 'MyModel') while condition: instance = MyModel() instance.save() def backward(apps, schema_editor): # delete previously loaded data MyModel = apps.get_model('myapp', 'MyModel') while condition: instance = MyModel.objects.get(myargs) instance.delete() class Migration(migrations.Migration): dependencies = [ ('myapp', '0003_auto_20150918_1153'), ] operations = [ migrations.RunPython(forward, backward), ] For pure data-loading migrations, you usually don't need the backward migration. But when you alter the schema and update existing rows (like converting all values in a column to a slug), you'll generally have to write the backward step. In our team, we try to avoid working on the same models at the same time to avoid collisions. If it is not possible, and two migrations with the same number (e.g. 0002) are created, you can still rename one of them to change the order in which they will be applied (also remember to update the dependencies attribute on the migration class to your new order). If you end up working on the same model fields at the same time in different features, you'll still be in trouble, but it may mean these features are related and should be handled together in a single branch. For the git-hooks part, it's probably possible to write something. Assuming you are on branch mybranch and want to check out another feature branch myfeature: Just before switching, you dump the list of currently applied migrations into a temporary file mybranch_database_state.txt Then, you apply myfeature branch migrations, if any Then, when checking out mybranch again, you reapply your previous database state by looking at the dump file. However, it seems a bit hackish to me, and it would probably be really difficult to properly handle all scenarios: rebasing, merging, cherry-picking, etc. Handling the migration conflicts when they occur seems easier to me."} +{"question_id": 13654688, "score": 75, "creation_date": 1354318407, "tags": ["python", "nginx", "flask", "gunicorn"], "instruction": "What is the correct way to leave gunicorn running?\n\nI want to make a Flask+Nginx+Gunicorn deployment. I have Nginx set up and running and I run gunicorn as described in the docs: gunicorn app:app But when I log out of the server, the gunicorn process exits. What is the correct way to make sure it stays running for Nginx to connect to, and restarts if it crashes?", "output": "I'd look into something like Supervisor. A very useful tutorial can be found here https://www.codingforentrepreneurs.com/blog/hello-linux-setup-gunicorn-and-supervisor/"} +{"question_id": 16749121, "score": 75, "creation_date": 1369481634, "tags": ["python", "syntax", "operators", "python-2.x"], "instruction": "What does `<>` mean in Python?\n\nI'm trying to use an old library (dating from 2003!) in Python 3.3. When I import it, Python throws me an error because there are <> signs in the source file, e.g.: if (cnum < 1000 and nnum <> 1000 and ntext[-1] <> \"s\": ... I guess it's a now-abandoned sign in the language.
What exactly does it mean, and which (more recent) sign should I replace it with?", "output": "It means not equal to. It was taken from ABC (python's predecessor) see here: x < y, x <= y, x >= y, x > y, x = y, x <> y, 0 <= d < 10 Order tests (<> means 'not equals') I believe ABC took it from Pascal, a language Guido began programming with. It has now been removed in Python 3. Use != instead. If you are CRAZY you can scrap != in the REPL (not in a script) and allow only <> in Py3K using this easter egg: >>> from __future__ import barry_as_FLUFL >>> 1 != 2 File \"\", line 1 1 != 2 ^ SyntaxError: with Barry as BDFL, use '<>' instead of '!=' >>> 1 <> 2 True"} +{"question_id": 19379120, "score": 75, "creation_date": 1381833515, "tags": ["python"], "instruction": "How to read a config file using python\n\nI have a config file abc.txt which looks somewhat like: path1 = \"D:\\test1\\first\" path2 = \"D:\\test2\\second\" path3 = \"D:\\test2\\third\" I want to read these paths from the abc.txt to use it in my program to avoid hard coding.", "output": "In order to use my example, your file \"abc.txt\" needs to look like this. [your-config] path1 = \"D:\\test1\\first\" path2 = \"D:\\test2\\second\" path3 = \"D:\\test2\\third\" Then in your code you can use the config parser. import ConfigParser configParser = ConfigParser.RawConfigParser() configFilePath = r'c:\\abc.txt' configParser.read(configFilePath) As human.js noted in his comment, in Python 3, ConfigParser has been renamed configparser. See Python 3 ImportError: No module named 'ConfigParser' for more details."} +{"question_id": 69564817, "score": 75, "creation_date": 1634182883, "tags": ["python", "plotly", "typeerror", "google-colaboratory", "pyyaml"], "instruction": "TypeError: load() missing 1 required positional argument: 'Loader' in Google Colab\n\nI am trying to do a regular import in Google Colab. This import worked up until now. If I try: import plotly.express as px or import pingouin as pg I get an error: --------------------------------------------------------------------------- TypeError Traceback (most recent call last) in () ----> 1 import plotly.express as px 9 frames /usr/local/lib/python3.7/dist-packages/plotly/express/__init__.py in () 13 ) 14 ---> 15 from ._imshow import imshow 16 from ._chart_types import ( # noqa: F401 17 scatter, /usr/local/lib/python3.7/dist-packages/plotly/express/_imshow.py in () 9 10 try: ---> 11 import xarray 12 13 xarray_imported = True /usr/local/lib/python3.7/dist-packages/xarray/__init__.py in () 1 import pkg_resources 2 ----> 3 from . import testing, tutorial, ufuncs 4 from .backends.api import ( 5 load_dataarray, /usr/local/lib/python3.7/dist-packages/xarray/tutorial.py in () 11 import numpy as np 12 ---> 13 from .backends.api import open_dataset as _open_dataset 14 from .backends.rasterio_ import open_rasterio as _open_rasterio 15 from .core.dataarray import DataArray /usr/local/lib/python3.7/dist-packages/xarray/backends/__init__.py in () 4 formats. They should not be used directly, but rather through Dataset objects. 
5 ----> 6 from .cfgrib_ import CfGribDataStore 7 from .common import AbstractDataStore, BackendArray, BackendEntrypoint 8 from .file_manager import CachingFileManager, DummyFileManager, FileManager /usr/local/lib/python3.7/dist-packages/xarray/backends/cfgrib_.py in () 14 _normalize_path, 15 ) ---> 16 from .locks import SerializableLock, ensure_lock 17 from .store import StoreBackendEntrypoint 18 /usr/local/lib/python3.7/dist-packages/xarray/backends/locks.py in () 11 12 try: ---> 13 from dask.distributed import Lock as DistributedLock 14 except ImportError: 15 DistributedLock = None /usr/local/lib/python3.7/dist-packages/dask/distributed.py in () 1 # flake8: noqa 2 try: ----> 3 from distributed import * 4 except ImportError: 5 msg = ( /usr/local/lib/python3.7/dist-packages/distributed/__init__.py in () 1 from __future__ import print_function, division, absolute_import 2 ----> 3 from . import config 4 from dask.config import config 5 from .actor import Actor, ActorFuture /usr/local/lib/python3.7/dist-packages/distributed/config.py in () 18 19 with open(fn) as f: ---> 20 defaults = yaml.load(f) 21 22 dask.config.update_defaults(defaults) TypeError: load() missing 1 required positional argument: 'Loader' I think it might be a problem with Google Colab or some basic utility package that has been updated, but I can not find a way to solve it.", "output": "Found the problem. I was installing pandas_profiling, and this package updated pyyaml to version 6.0 which is not compatible with the current way Google Colab imports packages. So just reverting back to pyyaml version 5.4.1 solved the problem. For more information check versions of pyyaml here. See this issue and formal answers in GitHub ################################################################## For reverting back to pyyaml version 5.4.1 in your code, add the next line at the end of your packages installations: !pip install pyyaml==5.4.1 It is important to put it at the end of the installation, some of the installations will change the pyyaml version."} +{"question_id": 37274835, "score": 75, "creation_date": 1463484513, "tags": ["python", "scrapy", "web-crawler"], "instruction": "getting Forbidden by robots.txt: scrapy\n\nwhile crawling website like https://www.netflix.com, getting Forbidden by robots.txt: https://www.netflix.com/> ERROR: No response downloaded for: https://www.netflix.com/", "output": "In the new version (scrapy 1.1) launched 2016-05-11 the crawl first downloads robots.txt before crawling. To change this behavior change in your settings.py with ROBOTSTXT_OBEY ROBOTSTXT_OBEY = False Here are the release notes"} +{"question_id": 20577840, "score": 75, "creation_date": 1386977904, "tags": ["python", "dictionary"], "instruction": "python dictionary sorting in descending order based on values\n\nI want to sort this dictionary d based on value of sub key key3 in descending order. See below: d = { '123': { 'key1': 3, 'key2': 11, 'key3': 3 }, '124': { 'key1': 6, 'key2': 56, 'key3': 6 }, '125': { 'key1': 7, 'key2': 44, 'key3': 9 }, } So final dictionary would look like this. d = { '125': { 'key1': 7, 'key2': 44, 'key3': 9 }, '124': { 'key1': 6, 'key2': 56, 'key3': 6 }, '123': { 'key1': 3, 'key2': 11, 'key3': 3 }, } My approach was to form another dictionary e from d, whose key would be value of key3 and then use reversed(sorted(e)) but since value of key3 can be same, so dictionary e lost some of the keys and their values. makes sense? How I can accomplish this? This is not a tested code. 
I am just trying to understand the logic.", "output": "Dictionaries do not have any inherent order. Or, rather, their inherent order is \"arbitrary but not random\", so it doesn't do you any good. In different terms, your d and your e would be exactly equivalent dictionaries. What you can do here is to use an OrderedDict: from collections import OrderedDict d = { '123': { 'key1': 3, 'key2': 11, 'key3': 3 }, '124': { 'key1': 6, 'key2': 56, 'key3': 6 }, '125': { 'key1': 7, 'key2': 44, 'key3': 9 }, } d_ascending = OrderedDict(sorted(d.items(), key=lambda kv: kv[1]['key3'])) d_descending = OrderedDict(sorted(d.items(), key=lambda kv: kv[1]['key3'], reverse=True)) The original d has some arbitrary order. d_ascending has the order you thought you had in your original d, but didn't. And d_descending has the order you want for your e. If you don't really need to use e as a dictionary, but you just want to be able to iterate over the elements of d in a particular order, you can simplify this: for key, value in sorted(d.items(), key=lambda kv: kv[1]['key3'], reverse=True): do_something_with(key, value) If you want to maintain a dictionary in sorted order across any changes, instead of an OrderedDict, you want some kind of sorted dictionary. There are a number of options available that you can find on PyPI, some implemented on top of trees, others on top of an OrderedDict that re-sorts itself as necessary, etc."} +{"question_id": 31431924, "score": 75, "creation_date": 1436967285, "tags": ["python", "django", "django-admin"], "instruction": "Force django-admin startproject if project folder already exists\n\nI want to start a new Django project in an already existing folder, and obviously get CommandError: '/home/user/projectfolder' already exists. Is there some way to force startproject command to create the project in an existing folder? I have some important data in that folder and also the git folder, so I don't want to move it somewhere and then move it back.", "output": "Just use the current directory: cd /home/user/projectfolder django-admin.py startproject project . The use of . just instructs Django to create a project in the current directory while: django-admin.py startproject instructs Django to create a project and create the necessary directory If only the project name is given, both the project directory and project package will be named and the project directory will be created in the current working directory. This fails because of the existing directory which is not a bug but a constrain in order to prevent accidents."} +{"question_id": 5927149, "score": 75, "creation_date": 1304854164, "tags": ["python", "position", "character", "alphabet"], "instruction": "Get character position in alphabet\n\nI'm 90% sure there is a built in function that does this. I need to find the position of a character in an alphabet. So the character \"b\" is position 1 (counting from 0), etc. Does anyone know what the function is called? What I'm trying to do is to send all the characters X amount of \"steps\" back in the alpha bet, so if I have a string with \"hi\", it would be \"gh\" if I send it back one step. There might be a better way of doing it, any tips?", "output": "It is called index. 
For example: >>> import string >>> string.ascii_lowercase.index('b') 1 >>> Note: in Python 2, string.ascii_lowercase was named string.lowercase."} +{"question_id": 36220829, "score": 75, "creation_date": 1458912367, "tags": ["python", "matplotlib", "plot", "seaborn"], "instruction": "Fine control over the font size in Seaborn plots\n\nI'm currently trying to use Seaborn to create plots for my academic papers. The plots look great and easy to generate, but one problem that I'm having some trouble with is having the fine control on the font size in the plots. My font size in my paper is 9pt and I would like to make sure the font size in my plots are either 9pt or 10pt. But in seaborn, the font size is mainly controlled through font scale sns.set_context(\"paper\", font_scale=0.9). So it's hard for me to find the right font size except through trial and error. Is there a more efficient way to do this? I also want to make sure the font size is consistent between different seaborn plots. But not all my seaborn plots have the same dimension, so it seems like using the same font_scale on all the plots does not necessarily create the same font size across these different plots? I've attached my code below. I appreciate any comments on how to format the plot for a two column academic paper. My goal is to be able to control the size of the figure without distorting the font size or the plot. I use Latex to write my paper. # Seaborn setting sns.set(style='whitegrid', rc={\"grid.linewidth\": 0.1}) sns.set_context(\"paper\", font_scale=0.9) plt.figure(figsize=(3.1, 3)) # Two column paper. Each column is about 3.15 inch wide. color = sns.color_palette(\"Set2\", 6) # Create a box plot for my data splot = sns.boxplot(data=df, palette=color, whis=np.inf, width=0.5, linewidth = 0.7) # Labels and clean up on the plot splot.set_ylabel('Normalized WS') plt.xticks(rotation=90) plt.tight_layout() splot.yaxis.grid(True, clip_on=False) sns.despine(left=True, bottom=True) plt.savefig('test.pdf', bbox_inches='tight')", "output": "You are right. This is a badly documented issue. But you can change the font size parameter (by opposition to font scale) directly after building the plot. Check the following example: import seaborn as sns import matplotlib.pyplot as plt tips = sns.load_dataset(\"tips\") b = sns.boxplot(x=tips[\"total_bill\"]) b.axes.set_title(\"Title\",fontsize=50) b.set_xlabel(\"X Label\",fontsize=30) b.set_ylabel(\"Y Label\",fontsize=20) b.tick_params(labelsize=5) plt.show() , which results in this: To make it consistent in between plots I think you just need to make sure the DPI is the same. By the way it' also a possibility to customize a bit the rc dictionaries since \"font.size\" parameter exists but I'm not too sure how to do that. NOTE: And also I don't really understand why they changed the name of the font size variables for axis labels and ticks. Seems a bit un-intuitive."} +{"question_id": 57335636, "score": 75, "creation_date": 1564809171, "tags": ["python", "pyright"], "instruction": "Is it possible to ignore pyright checking for one line?\n\nI need to ignore pyright checking for one line. Is there any special comment for it? 
def create_slog(group: SLogGroup, data: Optional[dict] = None): SLog.insert_one(SLog(group=group, data=data)) # pyright: disable # pyright: disable -- doesn't work", "output": "Yes it is with \"# type: ignore\", for example: try: return int(maybe_digits_string) # type: ignore except Exception: return None"} +{"question_id": 29034928, "score": 75, "creation_date": 1426257180, "tags": ["python", "pandas", "dataframe"], "instruction": "Pandas convert a column of list to dummies\n\nI have a dataframe where one column is a list of groups each of my users belongs to. Something like: index groups 0 ['a','b','c'] 1 ['c'] 2 ['b','c','e'] 3 ['a','c'] 4 ['b','e'] And what I would like to do is create a series of dummy columns to identify which groups each user belongs to in order to run some analyses index a b c d e 0 1 1 1 0 0 1 0 0 1 0 0 2 0 1 1 0 1 3 1 0 1 0 0 4 0 1 0 0 0 pd.get_dummies(df['groups']) won't work because that just returns a column for each different list in my column. The solution needs to be efficient as the dataframe will contain 500,000+ rows.", "output": "Using s for your df['groups']: In [21]: s = pd.Series({0: ['a', 'b', 'c'], 1:['c'], 2: ['b', 'c', 'e'], 3: ['a', 'c'], 4: ['b', 'e'] }) In [22]: s Out[22]: 0 [a, b, c] 1 [c] 2 [b, c, e] 3 [a, c] 4 [b, e] dtype: object This is a possible solution: In [23]: pd.get_dummies(s.explode()).groupby(level=0).sum() Out[23]: a b c e 0 1 1 1 0 1 0 0 1 0 2 0 1 1 1 3 1 0 1 0 4 0 1 0 1 The logic of this is: .explode() flattens the series of lists to a series of single values (with the index keeping track of the original row number) pd.get_dummies( ) creating the dummies .groupby(level=0).sum() for combining the different rows that should be one row (by summing up grouped by the index (level=0), i.e. the original row number)) If this will be efficient enough, I don't know, but in any case, if performance is important, storing lists in a dataframe is not a very good idea. Updates since original answer Since version 0.25, s.explode() can be used to flatten the Series of lists, instead of the original s.apply(pd.Series).stack() Since version 1.3.0, using the level keyword in aggregations is deprecated and will be removed from newer versions soon, so is recommended to use df.groupby(level=0).sum() instead of df.sum(level=0)"} +{"question_id": 7894384, "score": 75, "creation_date": 1319569020, "tags": ["python", "url"], "instruction": "Python: Get URL path sections\n\nHow do I get specific path sections from a url? For example, I want a function which operates on this: http://www.mydomain.com/hithere?image=2934 and returns \"hithere\" or operates on this: http://www.mydomain.com/hithere/something/else and returns the same thing (\"hithere\") I know this will probably use urllib or urllib2 but I can't figure out from the docs how to get only a section of the path.", "output": "Extract the path component of the URL with urlparse (Python 2.7): import urlparse path = urlparse.urlparse('http://www.example.com/hithere/something/else').path print path > '/hithere/something/else' or urllib.parse (Python 3): import urllib.parse path = urllib.parse.urlparse('http://www.example.com/hithere/something/else').path print(path) > '/hithere/something/else' Split the path into components with os.path.split: >>> import os.path >>> os.path.split(path) ('/hithere/something', 'else') The dirname and basename functions give you the two pieces of the split; perhaps use dirname in a while loop: >>> while os.path.dirname(path) != '/': ... path = os.path.dirname(path) ... 
>>> path '/hithere'"} +{"question_id": 13537901, "score": 75, "creation_date": 1353726641, "tags": ["python", "rubygems", "npm"], "instruction": "Python equivalent of npm or rubygems?\n\nI've been looking around for a package manager that can be used with python. I want to list project dependencies in a file. For example ruby uses Gemfile where you can use bundle install. How can I achieve this in Python?", "output": "The pip tool is becoming the standard in equivalent of Ruby's gems. Like distribute, pip uses the PyPI package repository (by default) for resolving and downloading dependencies. pip can install dependencies from a file listing project dependencies (called requirements.txt by convention): pip install -r requirements.txt You can \"freeze\" the current packages on the Python path using pip as well: pip freeze > requirements.txt When used in combination with the virtualenv package, you can reliably create project Python environments with a project's required dependencies."} +{"question_id": 23111654, "score": 75, "creation_date": 1397656784, "tags": ["python", "websocket", "flask", "uwsgi", "gevent"], "instruction": "Websockets in Flask\n\nI'm currently researching websocket support in Python and am a bit confused with the offerings. On one hand it's possible to use Flask + gevent. On the other hand, uwsgi has socket support and at last there is an extension that bundles both uwsgi and gevent. What's the problem with implementing websockets with only one of these? What do I win by mixing them? Changing the question What does adding gevent do that threaded uwsgi won't?", "output": "In regular HTTP requests the connections between client and server are short-lived, a client connects to the server, sends a request, receives the response and then closes the connection. In this model the server can serve a large number of clients using a small number of workers. The concurrency model in this situation is typically based on threads, processes or a combination of both. When you use websocket the problem is more complex, because a websocket connection is open for a long period of time, so the server cannot use a small pool of workers to serve a large number of clients, each client needs to get its own dedicated worker. If you use threads and/or processes then your app will not scale to support a large number of clients because you can't have large number of threads/processes. This is where gevent enters the picture. Gevent has a concurrency model based on greenlets, which scale much better than threads/processes. So serving websocket connections with a gevent based server allows you support more clients, due to the lightweight nature of greenlets. With uWSGI you have a choice of concurrency models to use with web sockets, and that includes the greenlet based model from gevent. You can also use gevent's web server standalone if you want. But note that gevent does not know anything about web sockets, it is just a server. To use websocket connections you have to add an implementation of the websocket server. There are two extensions for Flask that simplify the use of websockets. The Flask-Sockets extension by Kenneth Reitz is a wrapper for gevent and gevent-websocket. The Flask-SocketIO extension (shameless plug as I'm the author) is a wrapper for gevent and gevent-socketio on the server, plus Socket.IO on the client. 
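To give a feel for the server side, a minimal Flask-SocketIO sketch could look like the following (the event names here are made up for illustration, and details may vary with the installed flask_socketio version):

from flask import Flask
from flask_socketio import SocketIO, emit

app = Flask(__name__)
socketio = SocketIO(app)

@socketio.on('my_event')                     # handle a custom Socket.IO event
def handle_my_event(message):
    emit('my_response', {'data': message})   # reply to the emitting client

if __name__ == '__main__':
    socketio.run(app)                        # use socketio.run() instead of app.run()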
Socket.IO is higher level socket protocol that can use web socket if available but can also use other transport mechanisms on older browsers."} +{"question_id": 2136739, "score": 75, "creation_date": 1264465289, "tags": ["python", "sqlalchemy"], "instruction": "How to check and handle errors in SQLAlchemy\n\nHow do you handle errors in SQLAlchemy? I am relatively new to SQLAlchemy and do not know yet. Before I used SQLAlchemy, I would do things like status = db.query(\"INSERT INTO users ...\") if (!status): raise Error, db.error But now I am coding in SQLAlchemy and I do things like user = User('Boda Cydo') session.add(user) session.commit() No error checking whatsoever! I do not like this coding style without error checking at all. Please advice on how to check and handle errors in SQLAlchemy!", "output": "Your example says: status = db.query(\"INSERT INTO users ...\") if (!status): raise Error, db.error That seems to mean that you want to raise an exception if there's some error on the query (with raise Error, db.error). However sqlalchemy already does that for you - so user = User('Boda Cydo') session.add(user) session.commit() Is just the same. The check-and-raise part is already inside SQLAlchemy. Here is a list of the errors sqlalchemy itself can raise, taken from help(sqlalchemy.exc) and help(sqlalchemy.orm.exc): sqlalchemy.exc: ArgumentError - Raised when an invalid or conflicting function argument is supplied. This error generally corresponds to construction time state errors. CircularDependencyError - Raised by topological sorts when a circular dependency is detected CompileError - Raised when an error occurs during SQL compilation ConcurrentModificationError DBAPIError - Raised when the execution of a database operation fails. If the error-raising operation occured in the execution of a SQL statement, that statement and its parameters will be available on the exception object in the statement and params attributes. The wrapped exception object is available in the orig attribute. Its type and properties are DB-API implementation specific. DataError Wraps a DB-API DataError. DatabaseError - Wraps a DB-API DatabaseError. DisconnectionError - A disconnect is detected on a raw DB-API connection. be raised by a PoolListener so that the host pool forces a disconnect. FlushError IdentifierError - Raised when a schema name is beyond the max character limit IntegrityError - Wraps a DB-API IntegrityError. InterfaceError - Wraps a DB-API InterfaceError. InternalError - Wraps a DB-API InternalError. InvalidRequestError - SQLAlchemy was asked to do something it can't do. This error generally corresponds to runtime state errors. NoReferenceError - Raised by ForeignKey to indicate a reference cannot be resolved. NoReferencedColumnError - Raised by ForeignKey when the referred Column cannot be located. NoReferencedTableError - Raised by ForeignKey when the referred Table cannot be located. NoSuchColumnError - A nonexistent column is requested from a RowProxy. NoSuchTableError - Table does not exist or is not visible to a connection. NotSupportedError - Wraps a DB-API NotSupportedError. OperationalError - Wraps a DB-API OperationalError. ProgrammingError - Wraps a DB-API ProgrammingError. SADeprecationWarning - Issued once per usage of a deprecated API. SAPendingDeprecationWarning - Issued once per usage of a deprecated API. SAWarning - Issued at runtime. SQLAlchemyError - Generic error class. SQLError - Raised when the execution of a database operation fails. 
TimeoutError - Raised when a connection pool times out on getting a connection. UnboundExecutionError - SQL was attempted without a database connection to execute it on. UnmappedColumnError sqlalchemy.orm.exc: ConcurrentModificationError - Rows have been modified outside of the unit of work. FlushError - A invalid condition was detected during flush(). MultipleResultsFound - A single database result was required but more than one were found. NoResultFound - A database result was required but none was found. ObjectDeletedError - A refresh() operation failed to re-retrieve an object's row. UnmappedClassError - A mapping operation was requested for an unknown class. UnmappedColumnError - Mapping operation was requested on an unknown column. UnmappedError - TODO UnmappedInstanceError - A mapping operation was requested for an unknown instance."} +{"question_id": 47201358, "score": 75, "creation_date": 1510228978, "tags": ["python", "python-3.x", "pytest"], "instruction": "How can you test that two dictionaries are equal with pytest in python\n\nTrying to assert that two dictionaries that have nested contents are equal to each other (order doesn't matter) with pytest. What's the pythonic way to do this?", "output": "Don't spend your time writing this logic yourself. Just use the functions provided by the default testing library unittest from unittest import TestCase TestCase().assertDictEqual(expected_dict, actual_dict)"} +{"question_id": 11329917, "score": 75, "creation_date": 1341407962, "tags": ["python"], "instruction": "Restart python-script from within itself\n\nI have a python-based GTK application that loads several modules. It is run from the (linux) terminal like so: ./myscript.py --some-flag setting From within the program the user can download (using Git) newer versions. If such exists/are downloaded, a button appear that I wish would restart the program with newly compiled contents (including dependencies/imports). Preferably it would also restart it using the contents of sys.argv to keep all the flags as they were. So what I fail to find/need is a nice restart procedure that kills the current instance of the program and starts a new using the same arguments. Preferably the solution should work for Windows and Mac as well but it is not essential.", "output": "You're looking for os.exec*() family of commands. To restart your current program with exact the same command line arguments as it was originally run, you could use the following: os.execv(sys.argv[0], sys.argv)"} +{"question_id": 64095876, "score": 75, "creation_date": 1601266466, "tags": ["python", "multiprocessing", "fork", "python-multiprocessing", "spawn"], "instruction": "multiprocessing fork() vs spawn()\n\nI was reading the description of the two from the python doc: spawn The parent process starts a fresh python interpreter process. The child process will only inherit those resources necessary to run the process objects run() method. In particular, unnecessary file descriptors and handles from the parent process will not be inherited. Starting a process using this method is rather slow compared to using fork or forkserver. [Available on Unix and Windows. The default on Windows and macOS.] fork The parent process uses os.fork() to fork the Python interpreter. The child process, when it begins, is effectively identical to the parent process. All resources of the parent are inherited by the child process. Note that safely forking a multithreaded process is problematic. [Available on Unix only. The default on Unix.] 
And my question is: is it that the fork is much quicker 'cuz it does not try to identify which resources to copy? is it that, since fork duplicates everything, it would \"waste\" much more resources comparing to spawn()?", "output": "is it that the fork is much quicker 'cuz it does not try to identify which resources to copy? Yes, it's much quicker. The kernel can clone the whole process and only copies modified memory-pages as a whole. Piping resources to a new process and booting the interpreter from scratch is not necessary. is it that, since fork duplicates everything, it would \"waste\" much more resources comparing to spawn()? Fork on modern kernels does only \"copy-on-write\" and it only affects memory-pages which actually change. The caveat is that \"write\" already encompasses merely iterating over an object in CPython. That's because the reference-count for the object gets incremented. If you have long running processes with lots of small objects in use, this can mean you waste more memory than with spawn. Anecdotally I recall Facebook claiming to have memory-usage reduced considerably with switching from \"fork\" to \"spawn\" for their Python-processes."} +{"question_id": 39899005, "score": 75, "creation_date": 1475764972, "tags": ["python", "json", "pandas", "dataframe", "flatten"], "instruction": "How to flatten a pandas dataframe with some columns as json?\n\nI have a dataframe df that loads data from a database. Most of the columns are json strings while some are even list of jsons. For example: id name columnA columnB 1 John {\"dist\": \"600\", \"time\": \"0:12.10\"} [{\"pos\": \"1st\", \"value\": \"500\"},{\"pos\": \"2nd\", \"value\": \"300\"},{\"pos\": \"3rd\", \"value\": \"200\"}, {\"pos\": \"total\", \"value\": \"1000\"}] 2 Mike {\"dist\": \"600\"} [{\"pos\": \"1st\", \"value\": \"500\"},{\"pos\": \"2nd\", \"value\": \"300\"},{\"pos\": \"total\", \"value\": \"800\"}] ... As you can see, not all the rows have the same number of elements in the json strings for a column. What I need to do is keep the normal columns like id and name as it is and flatten the json columns like so: id name columnA.dist columnA.time columnB.pos.1st columnB.pos.2nd columnB.pos.3rd columnB.pos.total 1 John 600 0:12.10 500 300 200 1000 2 Mark 600 NaN 500 300 Nan 800 I have tried using json_normalize like so: from pandas.io.json import json_normalize json_normalize(df) But there seems to be some problems with keyerror. What is the correct way of doing this?", "output": "Here's a solution using json_normalize() again by using a custom function to get the data in the correct format understood by json_normalize function. 
import ast from pandas.io.json import json_normalize def only_dict(d): ''' Convert json string representation of dictionary to a python dict ''' return ast.literal_eval(d) def list_of_dicts(ld): ''' Create a mapping of the tuples formed after converting json strings of list to a python list ''' return dict([(list(d.values())[1], list(d.values())[0]) for d in ast.literal_eval(ld)]) A = json_normalize(df['columnA'].apply(only_dict).tolist()).add_prefix('columnA.') B = json_normalize(df['columnB'].apply(list_of_dicts).tolist()).add_prefix('columnB.pos.') Finally, join the DFs on the common index to get: df[['id', 'name']].join([A, B]) EDIT:- As per the comment by @MartijnPieters, the recommended way of decoding the json strings would be to use json.loads() which is much faster when compared to using ast.literal_eval() if you know that the data source is JSON."} +{"question_id": 16976264, "score": 75, "creation_date": 1370579528, "tags": ["python", "unit-testing", "mocking"], "instruction": "unittest.mock: asserting partial match for method argument\n\nRubyist writing Python here. I've got some code that looks kinda like this: result = database.Query('complicated sql with an id: %s' % id) database.Query is mocked out, and I want to test that the ID gets injected in correctly without hardcoding the entire SQL statement into my test. In Ruby/RR, I would have done this: mock(database).query(/#{id}/) But I can't see a way to set up a 'selective mock' like that in unittest.mock, at least without some hairy side_effect logic. So I tried using the regexp in the assertion instead: with patch(database) as MockDatabase: instance = MockDatabase.return_value ... instance.Query.assert_called_once_with(re.compile(\"%s\" % id)) But that doesn't work either. This approach does work, but it's ugly: with patch(database) as MockDatabase: instance = MockDatabase.return_value ... self.assertIn(id, instance.Query.call_args[0][0]) Better ideas?", "output": "import mock class AnyStringWith(str): def __eq__(self, other): return self in other ... result = database.Query('complicated sql with an id: %s' % id) database.Query.assert_called_once_with(AnyStringWith(id)) ... Preemptively requires a matching string def arg_should_contain(x): def wrapper(arg): assert str(x) in arg, \"'%s' does not contain '%s'\" % (arg, x) return wrapper ... database.Query = arg_should_contain(id) result = database.Query('complicated sql with an id: %s' % id) UPDATE Using libraries like callee, you don't need to implement AnyStringWith. 
from callee import Contains database.Query.assert_called_once_with(Contains(id)) https://callee.readthedocs.io/en/latest/reference/operators.html#callee.operators.Contains"} +{"question_id": 21767900, "score": 75, "creation_date": 1392334334, "tags": ["python", "pandas", "group-by", "multi-index"], "instruction": "How to move pandas data from index to column after multiple groupby\n\nI have the following pandas dataframe: token year uses books 386 xanthos 1830 3 3 387 xanthos 1840 1 1 388 xanthos 1840 2 2 389 xanthos 1868 2 2 390 xanthos 1875 1 1 I aggregate the rows with duplicate token and years like so: dfalph = dfalph[['token','year','uses','books']].groupby(['token', 'year']).agg([np.sum]) dfalph.columns = dfalph.columns.droplevel(1) which results in uses books token year xanthos 1830 3 3 1840 3 3 1867 2 2 1868 2 2 1875 1 1 Instead of having the 'token' and 'year' fields in the index, I would like to return them to columns and have an integer index.", "output": "Method #1: reset_index() >>> g uses books sum sum token year xanthos 1830 3 3 1840 3 3 1868 2 2 1875 1 1 [4 rows x 2 columns] >>> g = g.reset_index() >>> g token year uses books sum sum 0 xanthos 1830 3 3 1 xanthos 1840 3 3 2 xanthos 1868 2 2 3 xanthos 1875 1 1 [4 rows x 4 columns] Method #2: don't make the index in the first place, using as_index=False >>> g = dfalph[['token', 'year', 'uses', 'books']].groupby(['token', 'year'], as_index=False).sum() >>> g token year uses books 0 xanthos 1830 3 3 1 xanthos 1840 3 3 2 xanthos 1868 2 2 3 xanthos 1875 1 1 [4 rows x 4 columns]"} +{"question_id": 72449482, "score": 75, "creation_date": 1654006677, "tags": ["python", "enums", "f-string"], "instruction": "f-string representation different than str()\n\nI had always thought that f-strings invoked the __str__ method. That is, f'{x}' was always the same as str(x). However, with this class class Thing(enum.IntEnum): A = 0 f'{Thing.A}' is '0' while str(Thing.A) is 'Thing.A'. This example doesn't work if I use enum.Enum as the base class. What functionality do f-strings invoke?", "output": "From \"Formatted string literals\" in the Python reference: f-strings invoke the \"format() protocol\", meaning that the __format__ magic method is called instead of __str__. class Foo: def __repr__(self): return \"Foo()\" def __str__(self): return \"A wild Foo\" def __format__(self, format_spec): if not format_spec: return \"A formatted Foo\" return f\"A formatted Foo, but also {format_spec}!\" >>> foo = Foo() >>> repr(foo) 'Foo()' >>> str(foo) 'A wild Foo' >>> format(foo) 'A formatted Foo' >>> f\"{foo}\" 'A formatted Foo' >>> format(foo, \"Bar\") 'A formatted Foo, but also Bar!' >>> f\"{foo:Bar}\" 'A formatted Foo, but also Bar!' If you don't want __format__ to be called, you can specify !s (for str), !r (for repr) or !a (for ascii) after the expression: >>> foo = Foo() >>> f\"{foo}\" 'A formatted Foo' >>> f\"{foo!s}\" 'A wild Foo' >>> f\"{foo!r}\" 'Foo()' This is occasionally useful with strings: >>> key = 'something\\n nasty!' >>> error_message = f\"Key not found: {key!r}\" >>> error_message \"Key not found: 'something\\\\n nasty!'\""} +{"question_id": 74844262, "score": 75, "creation_date": 1671396136, "tags": ["python", "numpy"], "instruction": "How can I solve error \"module 'numpy' has no attribute 'float'\" in Python?\n\nI am using NumPy 1.24.0. 
On running this sample code line, import numpy as np num = np.float(3) I am getting this error: Traceback (most recent call last): File \"\", line 1, in File \"/home/ubuntu/.local/lib/python3.8/site-packages/numpy/__init__.py\", line 284, in __getattr__ raise AttributeError(\"module {!r} has no attribute \" AttributeError: module 'numpy' has no attribute 'float' How can I fix it?", "output": "The answer is already provided in the comments by @mattdmo and @tdelaney: NumPy 1.20 (release notes) deprecated numpy.float, numpy.int, and similar aliases, causing them to issue a deprecation warning NumPy 1.24 (release notes) removed these aliases altogether, causing an error when they are used In many cases you can simply replace the deprecated NumPy types by the equivalent Python built-in type, e.g., numpy.float becomes a \"plain\" Python float. For detailed guidelines on how to deal with various deprecated types, have a closer look at the table and guideline in the release notes for 1.20: ... To give a clear guideline for the vast majority of cases, for the types bool, object, str (and unicode) using the plain version is shorter and clear, and generally a good replacement. For float and complex you can use float64 and complex128 if you wish to be more explicit about the precision. For np.int a direct replacement with np.int_ or int is also good and will not change behavior, but the precision will continue to depend on the computer and operating system. If you want to be more explicit and review the current use, you have the following alternatives: np.int64 or np.int32 to specify the precision exactly. This ensures that results cannot depend on the computer or operating system. np.int_ or int (the default), but be aware that it depends on the computer and operating system. The C types: np.cint (int), np.int_ (long), np.longlong. np.intp which is 32bit on 32bit machines 64bit on 64bit machines. This can be the best type to use for indexing. ... If you have dependencies that use the deprecated types, a quick workaround would be to roll back your NumPy version to 1.24 or less (as suggested in some of the other answers), while waiting for the dependency to catch up. Alternatively, you could create a patch yourself and open a pull request, or monkey patch the dependency in your own code."} +{"question_id": 16856788, "score": 75, "creation_date": 1370001620, "tags": ["python", "numpy"], "instruction": "Slice 2d array into smaller 2d arrays\n\nIs there a way to slice a 2d array in numpy into smaller 2d arrays? Example [[1,2,3,4], -> [[1,2] [3,4] [5,6,7,8]] [5,6] [7,8]] So I basically want to cut down a 2x4 array into 2 2x2 arrays. Looking for a generic solution to be used on images.", "output": "There was another question a couple of months ago which clued me in to the idea of using reshape and swapaxes. The h//nrows makes sense since this keeps the first block's rows together. It also makes sense that you'll need nrows and ncols to be part of the shape. -1 tells reshape to fill in whatever number is necessary to make the reshape valid. Armed with the form of the solution, I just tried things until I found the formula that works. You should be able to break your array into \"blocks\" using some combination of reshape and swapaxes: def blockshaped(arr, nrows, ncols): \"\"\" Return an array of shape (n, nrows, ncols) where n * nrows * ncols = arr.size If arr is a 2D array, the returned array should look like n subblocks with each subblock preserving the \"physical\" layout of arr. 
\"\"\" h, w = arr.shape assert h % nrows == 0, f\"{h} rows is not evenly divisible by {nrows}\" assert w % ncols == 0, f\"{w} cols is not evenly divisible by {ncols}\" return (arr.reshape(h//nrows, nrows, -1, ncols) .swapaxes(1,2) .reshape(-1, nrows, ncols)) turns c np.random.seed(365) c = np.arange(24).reshape((4, 6)) print(c) [out]: [[ 0 1 2 3 4 5] [ 6 7 8 9 10 11] [12 13 14 15 16 17] [18 19 20 21 22 23]] into print(blockshaped(c, 2, 3)) [out]: [[[ 0 1 2] [ 6 7 8]] [[ 3 4 5] [ 9 10 11]] [[12 13 14] [18 19 20]] [[15 16 17] [21 22 23]]] I've posted an inverse function, unblockshaped, here, and an N-dimensional generalization here. The generalization gives a little more insight into the reasoning behind this algorithm. Note that there is also superbatfish's blockwise_view. It arranges the blocks in a different format (using more axes) but it has the advantage of (1) always returning a view and (2) being capable of handling arrays of any dimension."} +{"question_id": 50559078, "score": 75, "creation_date": 1527481257, "tags": ["python", "pandas", "datetime", "random"], "instruction": "Generating random dates within a given range in pandas\n\nThis is a self-answered post. A common problem is to randomly generate dates between a given start and end date. There are two cases to consider: random dates with a time component, and random dates without time For example, given some start date 2015-01-01 and an end date 2018-01-01, how can I sample N random dates between this range using pandas?", "output": "We can speed up @akilat90's approach about twofold (in @coldspeed's benchmark) by using the fact that datetime64 is just a rebranded int64 hence we can view-cast: def pp(start, end, n): start_u = start.value//10**9 end_u = end.value//10**9 return pd.DatetimeIndex((10**9*np.random.randint(start_u, end_u, n, dtype=np.int64)).view('M8[ns]'))"} +{"question_id": 44630642, "score": 75, "creation_date": 1497875455, "tags": ["python", "arrays", "django"], "instruction": "Is it possible to store an array in Django model?\n\nI was wondering if it's possible to store an array in a Django model? I'm asking this because I need to store an array of int (e.g [1,2,3]) in a field and then be able to search a specific array and get a match with it or by it's possible combinations. I was thinking to store that arrays as strings in CharFields and then, when I need to search something, concatenate the values(obtained by filtering other model) with '[', ']' and ',' and then use a object filter with that generated string. The problem is that I will have to generate each possible combination and then filter them one by one until I get a match, and I believe that this might be inefficient. So, I hope you can give me other ideas that I could try. I'm not asking for code, necessarily, any ideas on how to achieve this will be good.", "output": "Two possibilities: Use ArrayField if you are using PostgreSQL as your database. You can read more about ArrayField here. Encode your array as JSON and store it either as a plain string or using a JSONField. Django added a cross-platform JSONField in version 3.1. For earlier versions, here is an example library. 
I'd personally prefer option number 1, since that is cleaner but that might not be available to you."} +{"question_id": 18383384, "score": 75, "creation_date": 1377181892, "tags": ["python", "copy", "duplicates", "directory"], "instruction": "Python copy files to a new directory and rename if file name already exists\n\nI've already read this thread but when I implement it into my code it only works for a few iterations. I'm using python to iterate through a directory (lets call it move directory) to copy mainly pdf files (matching a unique ID) to another directory (base directory) to the matching folder (with the corresponding unique ID). I started using shutil.copy but if there are duplicates it overwrites the existing file. I'd like to be able to search the corresponding folder to see if the file already exists, and iteratively name it if more than one occurs. e.g. copy file 1234.pdf to folder in base directory 1234. if 1234.pdf exists to name it 1234_1.pdf, if another pdf is copied as 1234.pdf then it would be 1234_2.pdf. Here is my code: import arcpy import os import re import sys import traceback import collections import shutil movdir = r\"C:\\Scans\" basedir = r\"C:\\Links\" try: #Walk through all files in the directory that contains the files to copy for root, dirs, files in os.walk(movdir): for filename in files: #find the name location and name of files path = os.path.join(root, filename) print path #file name and extension ARN, extension = os.path.splitext(filename) print ARN #Location of the corresponding folder in the new directory link = os.path.join(basedir,ARN) # if the folder already exists in new directory if os.path.exists(link): #this is the file location in the new directory file = os.path.join(basedir, ARN, ARN) linkfn = os.path.join(basedir, ARN, filename) if os.path.exists(linkfn): i = 0 #if this file already exists in the folder print \"Path exists already\" while os.path.exists(file + \"_\" + str(i) + extension): i+=1 print \"Already 2x exists...\" print \"Renaming\" shutil.copy(path, file + \"_\" + str(i) + extension) else: shutil.copy(path, link) print ARN + \" \" + \"Copied\" else: print ARN + \" \" + \"Not Found\"", "output": "Sometimes it is just easier to start over... I apologize if there is any typo, I haven't had the time to test it thoroughly. movdir = r\"C:\\Scans\" basedir = r\"C:\\Links\" # Walk through all files in the directory that contains the files to copy for root, dirs, files in os.walk(movdir): for filename in files: # I use absolute path, case you want to move several dirs. old_name = os.path.join( os.path.abspath(root), filename ) # Separate base from extension base, extension = os.path.splitext(filename) # Initial new name new_name = os.path.join(basedir, base, filename) # If folder basedir/base does not exist... You don't want to create it? 
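# (Aside, not part of the original answer: the branch just below skips any
#  file whose destination folder is missing; if you would rather create the
#  folder and copy anyway, a call to os.makedirs(os.path.join(basedir, base))
#  could go here, instead of skipping with continue.)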
if not os.path.exists(os.path.join(basedir, base)): print os.path.join(basedir,base), \"not found\" continue # Next filename elif not os.path.exists(new_name): # folder exists, file does not shutil.copy(old_name, new_name) else: # folder exists, file exists as well ii = 1 while True: new_name = os.path.join(basedir,base, base + \"_\" + str(ii) + extension) if not os.path.exists(new_name): shutil.copy(old_name, new_name) print \"Copied\", old_name, \"as\", new_name break ii += 1"} +{"question_id": 40993626, "score": 75, "creation_date": 1481021756, "tags": ["python", "numpy", "memory", "ipython", "jupyter-notebook"], "instruction": "list memory usage in ipython and jupyter\n\nI have a few (almost ten) Gb of memory taken by the ipython kernel. I think this is coming from large objects (matrices, lists, numpy arrays, ...) that I might have produced during some operation and now I do not need anymore. I would like to list all of the objects I have defined and sort them by their memory footprint. Is there a simple way to do that? For certain types there is nbytes method, but not for all ... so I am looking for a general way to list all objects I have made and their memory occupation.", "output": "Assuming that you are using ipython or jupyter, you will need to do a little bit of work to get a list all of the objects you have defined. That means taking everything available in globals() and filtering out objects that are modules, builtins, ipython objects, etc. Once you are sure you have those objects, then you can proceed to grabbing their sizes with sys.getsizeof. This can be summed up as follows: import sys # These are the usual ipython objects, including this one you are creating ipython_vars = ['In', 'Out', 'exit', 'quit', 'get_ipython', 'ipython_vars'] # Get a sorted list of the objects and their sizes sorted([(x, sys.getsizeof(globals().get(x))) for x in dir() if not x.startswith('_') and x not in sys.modules and x not in ipython_vars], key=lambda x: x[1], reverse=True) Please keep in mind that for python objects (those created with python's builtin functions), sys.getsizeof will be very accurate. But it can be a bit inaccurate on objects created using third-party libraries. Furthermore, please be mindful that sys.getsizeof adds an additional garbage collector overhead if the object is managed by the garbage collector. So, some things may look a bit heavier than they actually are. As a side note, numpy's .nbytes method can be somewhat misleading in that it does not include memory consumed by non-element attributes of the array object."} +{"question_id": 1367514, "score": 75, "creation_date": 1251895819, "tags": ["python", "decorator"], "instruction": "How to decorate a method inside a class?\n\nI am attempting to decorate a method inside a class but python is throwing an error. My class looks like this: from pageutils import formatHeader class myPage(object): def __init__(self): self.PageName = '' def createPage(self): pageHeader = self.createHeader() @formatHeader #<----- decorator def createHeader(self): return \"Page Header \",self.PageName if __name__==\"__main__\": page = myPage() page.PageName = 'My Page' page.createPage() pageutils.py: def formatHeader(fn): def wrapped(): return '
<div class=\"page_header\">'+fn()+'</div>
' return wrapped Python throws the following error self.createHeader() TypeError: wrapped() takes no arguments (1 given) Where am I goofing?", "output": "Python automatically passes the class instance as reference. (The self argument which is seen in all instance methods). You could do: def formatHeader(fn): def wrapped(self=None): return '
<div class=\"page_header\">'+fn(self)+'</div>
' return wrapped"} +{"question_id": 8416208, "score": 75, "creation_date": 1323265166, "tags": ["python", "unit-testing", "contextmanager"], "instruction": "In python, is there a good idiom for using context managers in setup/teardown\n\nI am finding that I am using plenty of context managers in Python. However, I have been testing a number of things using them, and I am often needing the following: class MyTestCase(unittest.TestCase): def testFirstThing(self): with GetResource() as resource: u = UnderTest(resource) u.doStuff() self.assertEqual(u.getSomething(), 'a value') def testSecondThing(self): with GetResource() as resource: u = UnderTest(resource) u.doOtherStuff() self.assertEqual(u.getSomething(), 'a value') When this gets to many tests, this is clearly going to get boring, so in the spirit of SPOT/DRY (single point of truth/dont repeat yourself), I'd want to refactor those bits into the test setUp() and tearDown() methods. However, trying to do that has lead to this ugliness: def setUp(self): self._resource = GetSlot() self._resource.__enter__() def tearDown(self): self._resource.__exit__(None, None, None) There must be a better way to do this. Ideally, in the setUp()/tearDown() without repetitive bits for each test method (I can see how repeating a decorator on each method could do it). Edit: Consider the undertest object to be internal, and the GetResource object to be a third party thing (which we aren't changing). I've renamed GetSlot to GetResource here\u2014this is more general than specific case\u2014where context managers are the way which the object is intended to go into a locked state and out.", "output": "Looks like this discussion is still relevant 10 years later! To add to @ncoghlan's excellent answer it looks like unittest.TestCase added this exact functionality via the enterContext helper method as of python 3.11! From the docs: enterContext(cm) Enter the supplied context manager. If successful, also add its __exit__() method as a cleanup function by addCleanup() and return the result of the __enter__() method. New in version 3.11. It looks like this precludes the need to manually addCleanup() to close the stack of context managers, as it's added when you provide the context manager to enterContext. So it seems like all that's needed nowadays is: def setUp(self): self._resource = self.enterContext(GetResource()) # if you need a reference to it in tests # self._resource implicitly released during cleanups after tearDown() (I guess unittest got tired of everyone flocking to pytest because of their helpful fixtures)"} +{"question_id": 16972501, "score": 75, "creation_date": 1370554057, "tags": ["python", "numpy"], "instruction": "Size of data type using NumPy\n\nIn NumPy, I can get the size (in bytes) of a particular data type by: datatype(...).itemsize or: datatype(...).nbytes For example: np.float32(5).itemsize # 4 np.float32(5).nbytes # 4 I have two questions. First, is there a way to get this information without creating an instance of the datatype? Second, what's the difference between itemsize and nbytes?", "output": "You need an instance of the dtype to get the itemsize, but you shouldn't need an instance of the ndarray. (As will become clear in a second, nbytes is a property of the array, not the dtype.) E.g. np.dtype(float).itemsize # 8 np.dtype(np.float32).itemsize # 4 np.dtype('|S10').itemsize # 10 As far as the difference between itemsize and nbytes, nbytes is just x.itemsize * x.size. E.g. 
np.arange(100).itemsize # 8 np.arange(100).nbytes # 800"} +{"question_id": 28372597, "score": 75, "creation_date": 1423248079, "tags": ["python", "scipy", "curve-fitting"], "instruction": "Python curve_fit with multiple independent variables\n\nPython's curve_fit calculates the best-fit parameters for a function with a single independent variable, but is there a way, using curve_fit or something else, to fit for a function with multiple independent variables? For example: def func(x, y, a, b, c): return log(a) + b*log(x) + c*log(y) where x and y are the independent variable and we would like to fit for a, b, and c.", "output": "You can pass curve_fit a multi-dimensional array for the independent variables, but then your func must accept the same thing. For example, calling this array X and unpacking it to x, y for clarity: import numpy as np from scipy.optimize import curve_fit def func(X, a, b, c): x,y = X return np.log(a) + b*np.log(x) + c*np.log(y) # some artificially noisy data to fit x = np.linspace(0.1,1.1,101) y = np.linspace(1.,2., 101) a, b, c = 10., 4., 6. z = func((x,y), a, b, c) * 1 + np.random.random(101) / 100 # initial guesses for a,b,c: p0 = 8., 2., 7. print(curve_fit(func, (x,y), z, p0)) Gives the fit: (array([ 9.99933937, 3.99710083, 6.00875164]), array([[ 1.75295644e-03, 9.34724308e-05, -2.90150983e-04], [ 9.34724308e-05, 5.09079478e-06, -1.53939905e-05], [ -2.90150983e-04, -1.53939905e-05, 4.84935731e-05]]))"} +{"question_id": 14672753, "score": 75, "creation_date": 1359896579, "tags": ["python", "flask"], "instruction": "Handling multiple requests in Flask\n\nMy Flask applications has to do quite a large calculation to fetch a certain page. While Flask is doing that function, another user cannot access the website, because Flask is busy with the large calculation. Is there any way that I can make my Flask application accept requests from multiple users?", "output": "Yes, deploy your application on a different WSGI server, see the Flask deployment options documentation. The server component that comes with Flask is really only meant for when you are developing your application; even though it can be configured to handle concurrent requests with app.run(threaded=True) (as of Flask 1.0 this is the default). The above document lists several options for servers that can handle concurrent requests and are far more robust and tunable."} +{"question_id": 49039436, "score": 75, "creation_date": 1519855760, "tags": ["python", "python-3.x", "python-import", "python-module", "python-packaging"], "instruction": "How to import a module from a different folder?\n\nI have a project which I want to structure like this: myproject \u251c\u2500\u2500 api \u2502 \u251c\u2500\u2500 __init__.py \u2502 \u2514\u2500\u2500 api.py \u251c\u2500\u2500 backend \u2502 \u251c\u2500\u2500 __init__.py \u2502 \u2514\u2500\u2500 backend.py \u251c\u2500\u2500 models \u2502 \u251c\u2500\u2500 __init__.py \u2502 \u2514\u2500\u2500 some_model.py \u2514\u2500\u2500 __init__.py Now, I want to import the module some_model.py in both api.py and backend.py. How do I properly do this? I tried: from models import some_model but that fails with ModuleNotFoundError: No module named 'models'. I also tried: from ..models import some_model which gave me ValueError: attempted relative import beyond top-level package. What am I doing wrong here? 
How can I import a file from a different directory, which is not a subdirectory?", "output": "Firstly, this import statement: from models import some_model should be namespaced: # in myproject/backend/backend.py or myproject/api/api.py from myproject.models import some_model Then you will need to get the directory which contains myproject, let's call this /path/to/parent, into the sys.path list. You can do this temporarily by setting an environment variable: export PYTHONPATH=/path/to/parent Or, preferably, you can do it by writing a pyproject.toml file and installing your package. Follow the PyPA packaging guide. After you have written your pyproject.toml file, from within the same directory, execute this to setup the correct entries in sys.path: pip install --editable ."} +{"question_id": 60466436, "score": 75, "creation_date": 1582988501, "tags": ["python", "performance"], "instruction": "Why is a.insert(0,0) much slower than a[0:0]=[0]?\n\nUsing a list's insert function is much slower than achieving the same effect using slice assignment: > python -m timeit -n 100000 -s \"a=[]\" \"a.insert(0,0)\" 100000 loops, best of 5: 19.2 usec per loop > python -m timeit -n 100000 -s \"a=[]\" \"a[0:0]=[0]\" 100000 loops, best of 5: 6.78 usec per loop (Note that a=[] is only the setup, so a starts empty but then grows to 100,000 elements.) At first I thought maybe it's the attribute lookup or function call overhead or so, but inserting near the end shows that that's negligible: > python -m timeit -n 100000 -s \"a=[]\" \"a.insert(-1,0)\" 100000 loops, best of 5: 79.1 nsec per loop Why is the presumably simpler dedicated \"insert single element\" function so much slower? I can also reproduce it at repl.it: from timeit import repeat for _ in range(3): for stmt in 'a.insert(0,0)', 'a[0:0]=[0]', 'a.insert(-1,0)': t = min(repeat(stmt, 'a=[]', number=10**5)) print('%.6f' % t, stmt) print() # Example output: # # 4.803514 a.insert(0,0) # 1.807832 a[0:0]=[0] # 0.012533 a.insert(-1,0) # # 4.967313 a.insert(0,0) # 1.821665 a[0:0]=[0] # 0.012738 a.insert(-1,0) # # 5.694100 a.insert(0,0) # 1.899940 a[0:0]=[0] # 0.012664 a.insert(-1,0) I use Python 3.8.1 32-bit on Windows 10 64-bit. repl.it uses Python 3.8.1 64-bit on Linux 64-bit.", "output": "I think it's probably just that they forgot to use memmove in list.insert. If you take a look at the code list.insert uses to shift elements, you can see it's just a manual loop: for (i = n; --i >= where; ) items[i+1] = items[i]; while list.__setitem__ on the slice assignment path uses memmove: memmove(&item[ihigh+d], &item[ihigh], (k - ihigh)*sizeof(PyObject *)); memmove typically has a lot of optimization put into it, such as taking advantage of SSE/AVX instructions."} +{"question_id": 17726954, "score": 74, "creation_date": 1374159347, "tags": ["python", "pytest"], "instruction": "py.test: how to get the current test's name from the setup method?\n\nI am using py.test and wonder if/how it is possible to retrieve the name of the currently executed test within the setup method that is invoked before running each test. Consider this code: class TestSomething(object): def setup(self): test_name = ... def teardown(self): pass def test_the_power(self): assert \"foo\" != \"bar\" def test_something_else(self): assert True Right before TestSomething.test_the_power becomes executed, I would like to have access to this name in setup as outlined in the code via test_name = ... so that test_name == \"TestSomething.test_the_power\". 
Actually, in setup, I allocate some resource for each test. In the end, looking at the resources that have been created by various unit tests, I would like to be able to see which one was created by which test. Best thing would be to just use the test name upon creation of the resource.", "output": "You can also do this using the Request Fixture like this: def test_name1(request): testname = request.node.name assert testname == 'test_name1'"} +{"question_id": 9841303, "score": 74, "creation_date": 1332514211, "tags": ["python"], "instruction": "Removing duplicate characters from a string\n\nHow can I remove duplicate characters from a string using Python? For example, let's say I have a string: foo = 'mppmt' How can I make the string: foo = 'mpt' NOTE: Order is not important", "output": "If order does not matter, you can use \"\".join(set(foo)) set() will create a set of unique letters in the string, and \"\".join() will join the letters back to a string in arbitrary order. If order does matter, you can use a dict instead of a set, which since Python 3.7 preserves the insertion order of the keys. (In the CPython implementation, this is already supported in Python 3.6 as an implementation detail.) foo = \"mppmt\" result = \"\".join(dict.fromkeys(foo)) resulting in the string \"mpt\". In earlier versions of Python, you can use collections.OrderedDict, which has been available starting from Python 2.7."} +{"question_id": 26615756, "score": 74, "creation_date": 1414520751, "tags": ["python", "python-requests"], "instruction": "Python requests module sends JSON string instead of x-www-form-urlencoded param string\n\nI was under the impression that POSTSs using x-www-form-urlencoded specifications should send a URL encoded param string in the body of the post. However, when I do this data = json.dumps({'param1': 'value1', 'param2': 'value2'}) Requests.post(url, data=data) The body of the request on the receiving end looks like this: {\"param1\": \"value1\", \"param2\": \"value2\"} But I was expecting to get this param1=value1¶m2=value2 How I can get Requests to send the data in the second form?", "output": "The reason you're getting JSON is because you're explicitly calling json.dumps to generate a JSON string. Just don't do that, and you won't get a JSON string. In other words, change your first line to this: data = {'param1': 'value1', 'param2': 'value2'} As the docs explain, if you pass a dict as the data value, it will be form-encoded, while if you pass a string, it will be sent as-is. For example, in one terminal window: $ nc -kl 8765 In another: $ python3 >>> import requests >>> d = {'spam': 20, 'eggs': 3} >>> requests.post(\"http://localhost:8765\", data=d) ^C >>> import json >>> j = json.dumps(d) >>> requests.post(\"http://localhost:8765\", data=j) ^C In the first terminal, you'll see that the first request body is this (and Content-Type application/x-www-form-urlencoded): spam=20&eggs=3 \u2026 while the second is this (and has no Content-Type): {\"spam\": 20, \"eggs\": 3}"} +{"question_id": 18643998, "score": 74, "creation_date": 1378407798, "tags": ["python", "django", "homebrew", "geodjango"], "instruction": "GeoDjango GEOSException error\n\nTrying to install a GeoDjango on my machine. I'm really new to Python and being brought into a project that has been a very tricky install for the other team members. 
I installed Python 2.7 and GEOS using brew, and running PSQL 9.2.4 but keep getting this error when I try to get the webserver running: __import__(name) File \"/Users/armynante/Desktop/uclass-files/uclass-env/lib/python2.7/site packages/django/contrib/gis/geometry/backend/geos.py\", line 1, in from django.contrib.gis.geos import ( File \"/Users/armynante/Desktop/uclass-files/uclass-env/lib/python2.7/site packages/django/contrib/gis/geos/__init__.py\", line 6, in from django.contrib.gis.geos.geometry import GEOSGeometry, wkt_regex, hex_regex File \"/Users/armynante/Desktop/uclass-files/uclass-env/lib/python2.7/site packages/django/contrib/gis/geos/geometry.py\", line 14, in from django.contrib.gis.geos.coordseq import GEOSCoordSeq File \"/Users/armynante/Desktop/uclass-files/uclass-env/lib/python2.7/site- packages/django/contrib/gis/geos/coordseq.py\", line 9, in from django.contrib.gis.geos.libgeos import CS_PTR File \"/Users/armynante/Desktop/uclass-files/uclass-env/lib/python2.7/site- packages/django/contrib/gis/geos/libgeos.py\", line 119, in _verinfo = geos_version_info() File \"/Users/armynante/Desktop/uclass-files/uclass-env/lib/python2.7/site packages/django/contrib/gis/geos/libgeos.py\", line 115, in geos_version_info if not m: raise GEOSException('Could not parse version info string \"%s\"' % ver) django.contrib.gis.geos.error.GEOSException: Could not parse version info string \"3.4.2-CAPI-1.8.2 r3921\" Cant seem to find anything relevant to this trace on SO or the web. I think it might be a regex failure? I'm currently trying to reinstall PSQL and GEOS to see if I can get it running. Here is my requirements file: django==1.4 psycopg2==2.4.4 py-bcrypt==0.4 python-memcached==1.48 south==0.7.3 # Debug Tools sqlparse==0.1.3 django-debug-toolbar==0.9.1 django-devserver==0.3.1 # Deployment fabric==1.4 # AWS # boto==2.1.1 django-storages==1.1.4 django-ses==0.4.1 # ECL http://packages.elmcitylabs.com/ecl_django-0.5.3.tar.gz#ecl_django http://packages.elmcitylabs.com/ecl_google-0.2.14.tar.gz#ecl_google # https://packages.elmcitylabs.com/ecl_tools-0.3.7.tar.gz#ecl_tools # https://packages.elmcitylabs.com/chargemaster-0.2.19.tar.gz # https://packages.elmcitylabs.com/ecl_facebook-0.3.12.tar.gz#ecl_facebook # https://packages.elmcitylabs.com/ecl_twitter-0.3.3.tar.gz#ecl_twitter # Search #https://github.com/elmcitylabs/django-haystack/tarball/issue-522#django-haystack -e git+https://github.com/toastdriven/django-haystack.git#egg=django-haystack pysolr==2.1.0-beta # whoosh==2.3.2 # Misc # PIL # django-shorturls==1.0.1 # suds==0.4 django-mptt sorl-thumbnail stripe pytz==2013b", "output": "This is my solution (obviously it is ugly, like my English, but works). The problem is that the versions string has an white space unwanted in the RegEx. The error says: GEOSException: Could not parse version info string \"3.4.2-CAPI-1.8.2 r3921\" And the geos_version_info warns: Regular expression should be able to parse version strings such as '3.0.0rc4-CAPI-1.3.3', '3.0.0-CAPI-1.4.1' or '3.4.0dev-CAPI-1.8.0' Edit this file: site-packages/django/contrib/gis/geos/libgeos.py Look for the function: geos_version_info And change this line: ver = geos_version().decode() With this line: ver = geos_version().decode().split(' ')[0] There is also another problem, where there is a whitespace at the end but no more information is provided. Such version also doesn't match version regular expression, so strip()-ping the version may be expected behaviour as a quick fix. 
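To see why the combination of split and strip works, here is a standalone sketch (plain Python, not the actual Django source) using the version string from the traceback above:

```python
# Clean the reported GEOS version string before regex matching, as the
# libgeos.py edit described above effectively does.
raw = "3.4.2-CAPI-1.8.2 r3921"          # string from the traceback above
cleaned = raw.strip().split(' ')[0]      # -> "3.4.2-CAPI-1.8.2"
print(cleaned)                           # now matches the version regex
```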
In my example it was: '3.8.0-CAPI-1.13.1 '"} +{"question_id": 2157035, "score": 74, "creation_date": 1264704460, "tags": ["python"], "instruction": "Accessing an attribute using a variable in Python\n\nHow do I reference this_prize.left or this_prize.right using a variable? from collections import namedtuple import random Prize = namedtuple(\"Prize\", [\"left\", \"right\"]) this_prize = Prize(\"FirstPrize\", \"SecondPrize\") if random.random() > .5: choice = \"left\" else: choice = \"right\" # retrieve the value of \"left\" or \"right\" depending on the choice print(\"You won\", this_prize.choice) AttributeError: 'Prize' object has no attribute 'choice'", "output": "The expression this_prize.choice is telling the interpreter that you want to access an attribute of this_prize with the name \"choice\". But this attribute does not exist in this_prize. What you actually want is to return the attribute of this_prize identified by the value of choice. So you just need to change your last line using the getattr() method... from collections import namedtuple import random Prize = namedtuple(\"Prize\", [\"left\", \"right\" ]) this_prize = Prize(\"FirstPrize\", \"SecondPrize\") if random.random() > .5: choice = \"left\" else: choice = \"right\" # retrieve the value of \"left\" or \"right\" depending on the choice print \"You won\", getattr(this_prize, choice)"} +{"question_id": 25577352, "score": 74, "creation_date": 1409353546, "tags": ["python", "pandas", "series", "cdf"], "instruction": "Plotting CDF of a pandas series in python\n\nIs there a way to do this? I cannot seem an easy way to interface pandas series with plotting a CDF (cumulative distribution function).", "output": "In case you are also interested in the values, not just the plot. import pandas as pd # If you are in jupyter %matplotlib inline This will always work (discrete and continuous distributions) # Define your series s = pd.Series([9, 5, 3, 5, 5, 4, 6, 5, 5, 8, 7], name = 'value') df = pd.DataFrame(s) # Get the frequency, PDF and CDF for each value in the series # Frequency stats_df = df \\ .groupby('value') \\ ['value'] \\ .agg('count') \\ .pipe(pd.DataFrame) \\ .rename(columns = {'value': 'frequency'}) # PDF stats_df['pdf'] = stats_df['frequency'] / sum(stats_df['frequency']) # CDF stats_df['cdf'] = stats_df['pdf'].cumsum() stats_df = stats_df.reset_index() stats_df # Plot the discrete Probability Mass Function and CDF. # Technically, the 'pdf label in the legend and the table the should be 'pmf' # (Probability Mass Function) since the distribution is discrete. # If you don't have too many values / usually discrete case stats_df.plot.bar(x = 'value', y = ['pdf', 'cdf'], grid = True) Alternative example with a sample drawn from a continuous distribution or you have a lot of individual values: # Define your series s = pd.Series(np.random.normal(loc = 10, scale = 0.1, size = 1000), name = 'value') # ... all the same calculation stuff to get the frequency, PDF, CDF # Plot stats_df.plot(x = 'value', y = ['pdf', 'cdf'], grid = True) For continuous distributions only Please note if it is very reasonable to make the assumption that there is only one occurence of each value in the sample (typically encountered in the case of continuous distributions) then the groupby() + agg('count') is not necessary (since the count is always 1). In this case, a percent rank can be used to get to the cdf directly. Use your best judgment when taking this kind of shortcut! 
:) # Define your series s = pd.Series(np.random.normal(loc = 10, scale = 0.1, size = 1000), name = 'value') df = pd.DataFrame(s) # Get to the CDF directly df['cdf'] = df.rank(method = 'average', pct = True) # Sort and plot df.sort_values('value').plot(x = 'value', y = 'cdf', grid = True)"} +{"question_id": 62691279, "score": 74, "creation_date": 1593675342, "tags": ["python", "pytorch", "huggingface-transformers", "huggingface-tokenizers"], "instruction": "How to disable TOKENIZERS_PARALLELISM=(true | false) warning?\n\nI use pytorch to train huggingface-transformers model, but every epoch, always output the warning: The current process just got forked. Disabling parallelism to avoid deadlocks... To disable this warning, please explicitly set TOKENIZERS_PARALLELISM=(true | false) How to disable this warning?", "output": "Set the environment variable to the string \"false\" either by TOKENIZERS_PARALLELISM=false in your shell or by: import os os.environ[\"TOKENIZERS_PARALLELISM\"] = \"false\" in the Python script"} +{"question_id": 42980704, "score": 74, "creation_date": 1490283562, "tags": ["python", "apache-spark", "dictionary", "pyspark", "apache-spark-sql"], "instruction": "PySpark create new column with mapping from a dict\n\nUsing Spark 1.6, I have a Spark DataFrame column (named let's say col1) with values A, B, C, DS, DNS, E, F, G and H. I want to create a new column (say col2) with the values from the dict here below. How do I map this? (e,g. 'A' needs to be mapped to 'S' etc.) dict = {'A': 'S', 'B': 'S', 'C': 'S', 'DS': 'S', 'DNS': 'S', 'E': 'NS', 'F': 'NS', 'G': 'NS', 'H': 'NS'}", "output": "Inefficient solution with UDF (version independent): from pyspark.sql.types import StringType from pyspark.sql.functions import udf def translate(mapping): def translate_(col): return mapping.get(col) return udf(translate_, StringType()) df = sc.parallelize([('DS', ), ('G', ), ('INVALID', )]).toDF(['key']) mapping = { 'A': 'S', 'B': 'S', 'C': 'S', 'DS': 'S', 'DNS': 'S', 'E': 'NS', 'F': 'NS', 'G': 'NS', 'H': 'NS'} df.withColumn(\"value\", translate(mapping)(\"key\")) with the result: +-------+-----+ | key|value| +-------+-----+ | DS| S| | G| NS| |INVALID| null| +-------+-----+ Much more efficient (Spark >= 2.0, Spark < 3.0) is to create a MapType literal: from pyspark.sql.functions import col, create_map, lit from itertools import chain mapping_expr = create_map([lit(x) for x in chain(*mapping.items())]) df.withColumn(\"value\", mapping_expr.getItem(col(\"key\"))) with the same result: +-------+-----+ | key|value| +-------+-----+ | DS| S| | G| NS| |INVALID| null| +-------+-----+ but more efficient execution plan: == Physical Plan == *Project [key#15, keys: [B,DNS,DS,F,E,H,C,G,A], values: [S,S,S,NS,NS,NS,S,NS,S][key#15] AS value#53] +- Scan ExistingRDD[key#15] compared to UDF version: == Physical Plan == *Project [key#15, pythonUDF0#61 AS value#57] +- BatchEvalPython [translate_(key#15)], [key#15, pythonUDF0#61] +- Scan ExistingRDD[key#15] In Spark >= 3.0 getItem should be replaced with __getitem__ ([]), i.e: from pyspark.sql.functions import col, create_map, lit from itertools import chain mapping_expr = create_map([lit(x) for x in chain(*mapping.items())]) df.withColumn(\"value\", mapping_expr[col(\"key\")])"} +{"question_id": 7929364, "score": 74, "creation_date": 1319805528, "tags": ["python", "mysql", "sql-injection"], "instruction": "Python: best practice and securest way to connect to MySQL and execute queries\n\nWhat is the safest way to run queries on MySQL? 
I am aware of the dangers involved with MySQL and SQL injection. However, I do not know how I should run my queries to prevent injection on the variables to which other users (webclients) can manipulate. I used to write my own escape function, but apparently this is \"not-done\". What should I use and how should I use it to query and do inserts safely on a MySQL database through python without risking MySQL injection?", "output": "To avoid injections, use execute with %s in place of each variable, then pass the value via a list or tuple as the second parameter of execute. Here is an example from the documentation: c=db.cursor() max_price=5 c.execute(\"\"\"SELECT spam, eggs, sausage FROM breakfast WHERE price < %s\"\"\", (max_price,)) Note that this is using a comma, not % (which would be a direct string substitution, not escaped). Don't do this: c.execute(\"\"\"SELECT spam, eggs, sausage FROM breakfast WHERE price < %s\"\"\" % (max_price,)) In addition, you must not use single quotes around the position holder ('%s') if the parameter is a string as the driver provides these."} +{"question_id": 52861946, "score": 74, "creation_date": 1539803109, "tags": ["python", "linux", "subprocess", "imagemagick"], "instruction": "ImageMagick not authorized to convert PDF to an image\n\nI have a program, in which I need to convert a PDF to an image using Image Magick. I do that using the subprocess package: cmd = 'magick convert -density 300 '+pdfFile+'['+str(rangeTuple[0])+'-'+str(rangeTuple[1])+'] -depth 8 '+'temp.tiff' #WINDOWS if(os.path.isfile('temp.tiff')): os.remove('temp.tiff') subprocess.call(cmd,shell=True) im = Image.open('temp.tiff') The error I got is: convert-im6.q16: not authorized `temp2.pdf' @ error/constitute.c/ReadImage/412. convert-im6.q16: no images defined `temp.tiff' @ error/convert.c/ConvertImageCommand/3258. Traceback (most recent call last): File \"UKExtraction2.py\", line 855, in doItAllUpper(\"A0\",\"UK5.csv\",\"temp\",59,70,\"box\",2,1000,firstPageCoordsUK,boxCoordUK,voterBoxCoordUK,internalBoxNumberCoordUK,externalBoxNumberCoordUK,addListInfoUK) File \"UKExtraction2.py\", line 776, in doItAllUpper doItAll(tempPDFName,outputCSV,2,pdfs,formatType,n_blocks,writeBlockSize,firstPageCoords,boxCoord,voterBoxCoord,internalBoxNumberCoord,externalBoxNumberCoord,addListInfo,pdfName) File \"UKExtraction2.py\", line 617, in doItAll mainProcess(pdfName,(0,noOfPages-1),formatType,n_blocks,outputCSV,writeBlockSize,firstPageCoords,boxCoord,voterBoxCoord,internalBoxNumberCoord,externalBoxNumberCoord,addListInfo,bigPDFName,basePages) File \"UKExtraction2.py\", line 542, in mainProcess im = Image.open('temp.tiff') File \"/home/rohit/.local/lib/python3.6/site-packages/PIL/Image.py\", line 2609, in open fp = builtins.open(filename, \"rb\") FileNotFoundError: [Errno 2] No such file or directory: 'temp.tiff' The most important of which is: convert-im6.q16: not authorized `temp2.pdf' @ error/constitute.c/ReadImage/412. I think this is because ImageMagick isn't authorized to access the PDF. What should be done now? I'm on a Linux server. Any help is appreciated.", "output": "emcconville is correct. More specifically edit the Imagemagick policy.xml file to uncomment this line: And change it from rights=\"none\" to rights=\"read|write\" This was a recent addition to the policy.xml file, I believe, due to a security flaw found in the Ghostscript delegate. I think that flaw has now been fixed in the current version of Ghostscript, which is 9.25. 
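The policy line referenced above was lost in formatting; on many Linux installations it is an entry like the following in /etc/ImageMagick-6/policy.xml (treat the exact path, domain and pattern as an assumption rather than a quote from the answer):

```xml
<!-- before: ImageMagick refuses to read or write PDFs -->
<policy domain="module" rights="none" pattern="PDF" />
<!-- after: grant read/write access for PDF handling -->
<policy domain="module" rights="read|write" pattern="PDF" />
```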
NOTE: On some systems the policy line will have domain=\"coder\" rather than domain=\"module\""} +{"question_id": 25466904, "score": 74, "creation_date": 1408832771, "tags": ["python", "flask", "wsgi"], "instruction": "Print raw HTTP request in Flask or WSGI\n\nI am debugging a microcontroller I've built which is writing raw HTTP requests line by line. I am using Flask for my backend and I would like to see the entire request as it appears in this format: GET / HTTP/1.1 Content-length: 123 User-agent: blah ... I know Flask is based on WSGI. Is there anyway to get this to work with Flask?", "output": "With flask you have access to the request object which contains all the HTTP details: from flask import request @app.route('/') def index(): print(request.headers)"} +{"question_id": 20503373, "score": 74, "creation_date": 1386703253, "tags": ["python", "datetime", "mocking", "pytest"], "instruction": "How to monkeypatch python's datetime.datetime.now with py.test?\n\nI need to test functions which uses datetime.datetime.now(). What is the easiest way to do this?", "output": "You need to monkeypatch datetime.now function. In example below, I'm creating fixture which I can re-use later in other tests: import datetime import pytest FAKE_TIME = datetime.datetime(2020, 12, 25, 17, 5, 55) @pytest.fixture def patch_datetime_now(monkeypatch): class mydatetime(datetime.datetime): @classmethod def now(cls): return FAKE_TIME monkeypatch.setattr(datetime, 'datetime', mydatetime) def test_patch_datetime(patch_datetime_now): assert datetime.datetime.now() == FAKE_TIME"} +{"question_id": 44026515, "score": 74, "creation_date": 1495028266, "tags": ["python", "redis"], "instruction": "Python-redis keys() returns list of bytes objects instead of strings\n\nI'm using the regular redis package in order to connect my Python code to my Redis server. As part of my code I check if a string object is existed in my Redis server keys. string = 'abcde' if string in redis.keys(): do something.. For some reasons, redis.keys() returns a list with bytes objects, such as [b'abcde'], while my string is, of course, a str object. I already tried to set charset, encoding and decode_responses in my redis generator, but it did not help. My goal is to insert the data as string ahead, and not iterate over the keys list and change each element to str() while checking it. Thanks ahead", "output": "You can configure the Redis client to automatically convert responses from bytes to strings using the decode_responses argument to the StrictRedis constructor: r = redis.StrictRedis('localhost', 6379, charset=\"utf-8\", decode_responses=True) Make sure you are consistent with the charset option between clients. Note You would be better off using the EXISTS command and restructuring your code like: string = 'abcde' if redis.exists(string): do something.. The KEYS operation returns every key in your Redis database and will cause serious performance degradation in production. As a side effect you avoid having to deal with the binary to string conversion."} +{"question_id": 16468717, "score": 74, "creation_date": 1368124484, "tags": ["python", "matrix", "numpy"], "instruction": "Iterating over Numpy matrix rows to apply a function each?\n\nI want to be able to iterate over the matrix to apply a function to each row. How can I do it for a Numpy matrix ?", "output": "You can use numpy.apply_along_axis(). 
Assuming that your array is 2D, you can use it like: import numpy as np myarray = np.array([[11, 12, 13], [21, 22, 23], [31, 32, 33]]) def myfunction(x): return x[0] + x[1]**2 + x[2]**3 print(np.apply_along_axis(myfunction, axis=1, arr=myarray)) #[ 2352 12672 36992]"} +{"question_id": 57381430, "score": 74, "creation_date": 1565113426, "tags": ["python", "python-3.x", "numpy", "tensorflow", "artificial-intelligence"], "instruction": "\"synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\" problem in TensorFlow\n\nI installed TensorFlow 1.10.1 but when I tried to import TensorFlow it said that I need TensorFlow version 1.10.0. Thus, I installed it and now I get the following warnings: >>> import tensorflow C:\\Users\\PC\\Anaconda3\\envs\\tut\\lib\\site-packages\\tensorflow\\python\\framework\\dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint8 = np.dtype([(\"qint8\", np.int8, 1)]) C:\\Users\\PC\\Anaconda3\\envs\\tut\\lib\\site-packages\\tensorflow\\python\\framework\\dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_quint8 = np.dtype([(\"quint8\", np.uint8, 1)]) C:\\Users\\PC\\Anaconda3\\envs\\tut\\lib\\site-packages\\tensorflow\\python\\framework\\dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint16 = np.dtype([(\"qint16\", np.int16, 1)]) C:\\Users\\PC\\Anaconda3\\envs\\tut\\lib\\site-packages\\tensorflow\\python\\framework\\dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_quint16 = np.dtype([(\"quint16\", np.uint16, 1)]) C:\\Users\\PC\\Anaconda3\\envs\\tut\\lib\\site-packages\\tensorflow\\python\\framework\\dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint32 = np.dtype([(\"qint32\", np.int32, 1)]) C:\\Users\\PC\\Anaconda3\\envs\\tut\\lib\\site-packages\\tensorflow\\python\\framework\\dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. np_resource = np.dtype([(\"resource\", np.ubyte, 1)]) C:\\Users\\PC\\Anaconda3\\envs\\tut\\lib\\site-packages\\tensorboard\\compat\\tensorflow_stub\\dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint8 = np.dtype([(\"qint8\", np.int8, 1)]) C:\\Users\\PC\\Anaconda3\\envs\\tut\\lib\\site-packages\\tensorboard\\compat\\tensorflow_stub\\dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. 
_np_quint8 = np.dtype([(\"quint8\", np.uint8, 1)]) C:\\Users\\PC\\Anaconda3\\envs\\tut\\lib\\site-packages\\tensorboard\\compat\\tensorflow_stub\\dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint16 = np.dtype([(\"qint16\", np.int16, 1)]) C:\\Users\\PC\\Anaconda3\\envs\\tut\\lib\\site-packages\\tensorboard\\compat\\tensorflow_stub\\dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_quint16 = np.dtype([(\"quint16\", np.uint16, 1)]) C:\\Users\\PC\\Anaconda3\\envs\\tut\\lib\\site-packages\\tensorboard\\compat\\tensorflow_stub\\dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint32 = np.dtype([(\"qint32\", np.int32, 1)]) C:\\Users\\PC\\Anaconda3\\envs\\tut\\lib\\site-packages\\tensorboard\\compat\\tensorflow_stub\\dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. np_resource = np.dtype([(\"resource\", np.ubyte, 1)])", "output": "It just a warning, not an error. It occurring because your current numpy libray version is not compatible with tensorflow version. You need to downgrade numpy version. tensorflow 1.10.0 has requirement numpy<=1.14.5,>=1.13.3, but you must have some higher version installed(this warning message occurs with newest numpy version 1.17.0)."} +{"question_id": 7019643, "score": 74, "creation_date": 1313023685, "tags": ["python", "inheritance", "properties", "overriding"], "instruction": "Overriding properties in python\n\nSo, I'm trying to figure out the best (most elegant with the least amount of code) way to allow overriding specific functions of a property (e.g., just the getter, just the setter, etc.) in python. I'm a fan of the following way of doing properties, due to the fact that all of their methods are encapsulated in the same indented block of code (it's easier to see where the functions dealing with one property stop and the functions dealing with the next begin): @apply def foo(): \"\"\"A foobar\"\"\" def fget(self): return self._foo def fset(self, val): self._foo = val return property(**locals()) However, if I want to inherit from a class that defines properties in this manner, and then, say, override the foo setter function, it seems tricky. I've done some searching and most of the answers I've found have been to define separate functions in the base class (e.g. getFoo and setFoo), explicitly create a property definition from them (e.g. foo = property(lambda x: x.getFoo(), lambda x, y: x.setFoo(y), lambda x: x.delFoo())), and then override getFoo, setFoo, and delFoo as needed. I dislike this solution because it means I have to define lambas for every single property, and then write out each function call (when before I could have just done property(**locals())). I also don't get the encapsulation that I had originally. 
Ideally, what I would like to be able to do would be something like this: class A(object): def __init__(self): self.foo = 8 @apply def foo(): \"\"\"A foobar\"\"\" def fget(self): return self._foo def fset(self, val): self._foo = val return property(**locals()) class ATimesTwo(A): @some_decorator def foo(): def fset(self, val): self._foo = val * 2 return something And then the output would look something like: >>> a = A() >>> a.foo 8 >>> b = ATimesTwo() >>> b.foo 16 Basically, ATimesTwo inherits the getter function from A but overrides the setter function. Does anybody know of a way to do this (in a manner that looks similar to the example above)? What function would the some_decorator look like, and what should the foo function return?", "output": "I'm sure you've heard this before, but apply has been deprecated for eight years, since Python 2.3. Don't use it. Your use of locals() is also contrary to the Zen of Python -- explicit is better than implicit. If you really like the increased indentation, there is no need to create a throwaway object, just do if True: @property def foo(self): return self._foo @foo.setter def foo(self, val): self._foo = val Which doesn't abuse locals, use apply, require creation of an extra object, or need a line afterwards with foo = foo() making it harder to see the end of the block. It works just as well for your old-fashioned way of using property -- just do foo = property(fget, fset) as normal. If you want to override a property in an arbitrary subclass, you can use a recipe like this. If the subclass knows where the property was defined, just do: class ATimesTwo(A): @A.foo.setter def foo(self, val): self._foo = val * 2"} +{"question_id": 13839554, "score": 74, "creation_date": 1355314442, "tags": ["python", "logging", "configuration", "python-logging"], "instruction": "How to change filehandle with Python logging on the fly with different classes and imports\n\nI cannot perform an on-the-fly logging fileHandle change. For example, I have 3 classes one.py import logging class One(): def __init__(self,txt=\"?\"): logging.debug(\"Hey, I'm the class One and I say: %s\" % txt) two.py import logging class Two(): def __init__(self,txt=\"?\"): logging.debug(\"Hey, I'm the class Two and I say: %s\" % txt) config.py import logging class Config(): def __init__(self,logfile=None): logging.debug(\"Reading config\") self.logfile(logfile) myapp from one import One from two import Two from config import Config import logging #Set default logging logging.basicConfig( level=logging.getLevelName(DEBUG), format='%(asctime)s - %(name)s - %(levelname)s - %(message)s', filename=None ) logging.info(\"Starting with stdout\") o=One(txt=\"STDOUT\") c=Config(logfile=\"/tmp/logfile\") # Here must be the code that change the logging configuration and set the filehandler t=One(txt=\"This must be on the file, not STDOUT\") If I try loggin.basicConfig() again, it doesn't work.", "output": "Indeed, logging.basicConfig does nothing if a handler has been set up already: This function does nothing if the root logger already has handlers configured, unless the keyword argument force is set to True. 
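A quick sketch of the force=True route for the question's Config class (Python 3.8+ only; the level, format and filename are just reused from the question), before the handler-swapping alternative described next:

```python
import logging

# Re-running basicConfig with force=True removes any handlers already
# attached to the root logger and then applies the new configuration.
logging.basicConfig(
    level=logging.DEBUG,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
    filename='/tmp/logfile',
    force=True,
)
logging.info("Now logging to /tmp/logfile")
```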
You'll need to either add force=True (requires Python 3.8 or newer), or, alternatively, replace the current handler on the root logger: import logging fileh = logging.FileHandler('/tmp/logfile', 'a') formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s') fileh.setFormatter(formatter) log = logging.getLogger() # root logger for hdlr in log.handlers[:]: # remove all old handlers log.removeHandler(hdlr) log.addHandler(fileh) # set the new handler See the Configuring Logging chapter in the Python Logging HOWTO."} +{"question_id": 43184491, "score": 74, "creation_date": 1491221948, "tags": ["python", "python-3.x", "pandas", "dataframe", "duplicates"], "instruction": "df.unique() on whole DataFrame based on a column\n\nI have a DataFrame df filled with rows and columns where there are duplicate Id's: Index Id Type 0 a1 A 1 a2 A 2 b1 B 3 b3 B 4 a1 A ... When I use: uniqueId = df[\"Id\"].unique() I get a list of unique IDs. How can I apply this filtering on the whole DataFrame such that it keeps the structure but that the duplicates (based on \"Id\") are removed?", "output": "It seems you need DataFrame.drop_duplicates with parameter subset which specify where are test duplicates: #keep first duplicate value df = df.drop_duplicates(subset=['Id']) print (df) Id Type Index 0 a1 A 1 a2 A 2 b1 B 3 b3 B #keep last duplicate value df = df.drop_duplicates(subset=['Id'], keep='last') print (df) Id Type Index 1 a2 A 2 b1 B 3 b3 B 4 a1 A #remove all duplicate values df = df.drop_duplicates(subset=['Id'], keep=False) print (df) Id Type Index 1 a2 A 2 b1 B 3 b3 B"} +{"question_id": 16423774, "score": 74, "creation_date": 1367943106, "tags": ["python", "numpy"], "instruction": "string representation of a numpy array with commas separating its elements\n\nI have a numpy array, for example: points = np.array([[-468.927, -11.299, 76.271, -536.723], [-429.379, -694.915, -214.689, 745.763], [ 0., 0., 0., 0. ]]) if I print it or turn it into a string with str() I get: print w_points [[-468.927 -11.299 76.271 -536.723] [-429.379 -694.915 -214.689 745.763] [ 0. 0. 0. 0. ]] I need to turn it into a string that prints with separating commas while keeping the 2D array structure, that is: [[-468.927, -11.299, 76.271, -536.723], [-429.379, -694.915, -214.689, 745.763], [ 0., 0., 0., 0. ]] Does anybody know an easy way of turning a numpy array to that form of string? I know that .tolist() adds the commas but the result loses the 2D structure.", "output": "Try using repr >>> import numpy as np >>> points = np.array([[-468.927, -11.299, 76.271, -536.723], ... [-429.379, -694.915, -214.689, 745.763], ... [ 0., 0., 0., 0. ]]) >>> print(repr(points)) array([[-468.927, -11.299, 76.271, -536.723], [-429.379, -694.915, -214.689, 745.763], [ 0. , 0. , 0. , 0. ]]) If you plan on using large numpy arrays, set np.set_printoptions(threshold=np.nan) first. Without it, the array representation will be truncated after about 1000 entries (by default). 
>>> arr = np.arange(1001) >>> print(repr(arr)) array([ 0, 1, 2, ..., 998, 999, 1000]) Of course, if you have arrays that large, this starts to become less useful and you should probably analyze the data some way other than just looking at it and there are better ways of persisting a numpy array than saving it's repr to a file..."} +{"question_id": 11204789, "score": 74, "creation_date": 1340704413, "tags": ["python", "python-2.5", "pep8"], "instruction": "How to properly use python's isinstance() to check if a variable is a number?\n\nI found some old Python code that was doing something like: if type(var) is type(1): ... As expected, pep8 complains about this recommending usage of isinstance(). Now, the problem is that the numbers module was added in Python 2.6 and I need to write code that works with Python 2.5+ So if isinstance(var, Numbers.number) is not a solution. Which would be the proper solution in this case?", "output": "In Python 2, you can use the types module: >>> import types >>> var = 1 >>> NumberTypes = (types.IntType, types.LongType, types.FloatType, types.ComplexType) >>> isinstance(var, NumberTypes) True Note the use of a tuple to test against multiple types. Under the hood, IntType is just an alias for int, etc.: >>> isinstance(var, (int, long, float, complex)) True The complex type requires that your python was compiled with support for complex numbers; if you want to guard for this use a try/except block: >>> try: ... NumberTypes = (types.IntType, types.LongType, types.FloatType, types.ComplexType) ... except AttributeError: ... # No support for complex numbers compiled ... NumberTypes = (types.IntType, types.LongType, types.FloatType) ... or if you just use the types directly: >>> try: ... NumberTypes = (int, long, float, complex) ... except NameError: ... # No support for complex numbers compiled ... NumberTypes = (int, long, float) ... In Python 3 types no longer has any standard type aliases, complex is always enabled and there is no longer a long vs int difference, so in Python 3 always use: NumberTypes = (int, float, complex) Last but not least, you can use the numbers.Numbers abstract base type (new in Python 2.6) to also support custom numeric types that don't derive directly from the above types: >>> import numbers >>> isinstance(var, numbers.Number) True This check also returns True for decimal.Decimal() and fractions.Fraction() objects. This module does make the assumption that the complex type is enabled; you'll get an import error if it is not."} +{"question_id": 34803467, "score": 74, "creation_date": 1452825231, "tags": ["python", "python-3.x", "ansible", "ansible-2.x"], "instruction": "Unexpected Exception: name 'basestring' is not defined when invoking ansible2\n\nI'm trying to execute ansible2 commands... 
When I do: ansible-playbook -vvv -i my/inventory my/playbook.yml I get: Unexpected Exception: name 'basestring' is not defined the full traceback was: Traceback (most recent call last): File \"/usr/local/bin/ansible-playbook\", line 85, in sys.exit(cli.run()) File \"/usr/local/lib/python3.4/site-packages/ansible/cli/playbook.py\", line 150, in run results = pbex.run() File \"/usr/local/lib/python3.4/site-packages/ansible/executor/playbook_executor.py\", line 87, in run self._tqm.load_callbacks() File \"/usr/local/lib/python3.4/site-packages/ansible/executor/task_queue_manager.py\", line 149, in load_callbacks elif isinstance(self._stdout_callback, basestring): NameError: name 'basestring' is not defined Here is ansible --version: ansible 2.0.0.2 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides And here is python --version Python 3.4.3", "output": "Ansible below version 2.5 requires Python 2.6 or 2.7 on the control host: Control Node Requirements basestring is no longer available in Python 3. From What\u2019s New In Python 3.0: The builtin basestring abstract type was removed. Use str instead. The str and bytes types don\u2019t have functionality enough in common to warrant a shared base class. The 2to3 tool (see below) replaces every occurrence of basestring with str. So the solution is to either upgrade Ansible or downgrade Python."} +{"question_id": 20648660, "score": 74, "creation_date": 1387333715, "tags": ["python", "beautifulsoup"], "instruction": "Python BeautifulSoup give multiple tags to findAll\n\nI'm looking for a way to use findAll to get two tags, in the order they appear on the page. Currently I have: import requests import BeautifulSoup def get_soup(url): request = requests.get(url) page = request.text soup = BeautifulSoup(page) get_tags = soup.findAll('hr' and 'strong') for each in get_tags: print each If I use that on a page with only 'em' or 'strong' in it then it will get me all of those tags, if I use on one with both it will get 'strong' tags. Is there a way to do this? My main concern is preserving the order in which the tags are found.", "output": "You could pass a list, to find any of the given tags: tags = soup.find_all(['hr', 'strong'])"} +{"question_id": 17323336, "score": 74, "creation_date": 1372257922, "tags": ["python", "ipython", "jupyter-notebook"], "instruction": "Automatically play sound in IPython notebook\n\nI often run long-running cells in my IPython notebook. I'd like the notebook to automatically beep or play a sound when the cell is finished executing. Is there some way to do this in iPython notebook, or maybe some command I can put at the end of a cell that will automatically play a sound? I'm using Chrome if that makes any difference.", "output": "TL;DR At the top of your notebook from IPython.display import Audio sound_file = './sound/beep.wav' sound_file should point to a file on your computer, or accessible from the internet. Then later, at the end of the long-running cell Audio(sound_file, autoplay=True) This method uses the Audio tag built into Newer versions of iPython/Jupyter. Note For Older Versions Older versions without the Audio tag can use the following method. 
Put this in a cell and run it before you want to play your sound: from IPython.display import HTML from base64 import b64encode path_to_audio = \"/path/to/snd/my-sound.mp3\" audio_type = \"mp3\" sound = open(path_to_audio, \"rb\").read() sound_encoded = b64encode(sound) sound_tag = \"\"\" \"\"\".format(sound_encoded, audio_type) play_beep = \"\"\" \"\"\" HTML(sound_tag) At the end of the cell you want to make a noise on completion put this: HTML(play_beep) How it works: It reads a file from the filesystem using iPython's built in open and read methods. Then it encodes this into base64. It then creates an audio tag with the ID beep and injects the base64 data into it. The final piece of setup creates a small script tag that plays the sound. This method should work in any browser that supports the HTML5 audio tag. Note: if you'd rather not display the audio controls in your notebook, just remove the controls attribute from the variable named sound_tag"} +{"question_id": 25698710, "score": 74, "creation_date": 1409994943, "tags": ["python", "replace", "pandas", "dataframe"], "instruction": "Replace all occurrences of a string in a pandas dataframe (Python)\n\nI have a pandas dataframe with about 20 columns. It is possible to replace all occurrences of a string (here a newline) by manually writing all column names: df['columnname1'] = df['columnname1'].str.replace(\"\\n\",\"
\") df['columnname2'] = df['columnname2'].str.replace(\"\\n\",\"
\") df['columnname3'] = df['columnname3'].str.replace(\"\\n\",\"
\") ... df['columnname20'] = df['columnname20'].str.replace(\"\\n\",\"
\") This unfortunately does not work: df = df.replace(\"\\n\",\"
\") Is there any other, more elegant solution?", "output": "You can use replace and pass the strings to find/replace as dictionary keys/items: df.replace({'\\n': '
'}, regex=True) For example: >>> df = pd.DataFrame({'a': ['1\\n', '2\\n', '3'], 'b': ['4\\n', '5', '6\\n']}) >>> df a b 0 1\\n 4\\n 1 2\\n 5 2 3 6\\n >>> df.replace({'\\n': '
'}, regex=True) a b 0 1
4
1 2
5 2 3 6
Note that this method returns a new DataFrame instance by default (it does not modify the original), so you'll need to either reassign the output: df = df.replace({'\\n': '
'}, regex=True) or specify inplace=True: df.replace({'\\n': '
'}, regex=True, inplace=True)"} +{"question_id": 39955521, "score": 74, "creation_date": 1476092725, "tags": ["python", "sqlalchemy", "pyramid", "pylons"], "instruction": "sqlalchemy existing database query\n\nI am using SQLAlchemy as ORM for a python project. I have created few models/schema and it is working fine. Now I need to query a existing MySQL database, no insert/update just the select statement. How can I create a wrapper around the tables of this existing database? I have briefly gone through the sqlalchemy docs and SO but couldn't find anything relevant. All suggest execute method, where I need to write the raw sql queries, while I want to use the SQLAlchemy query method in same way as I am using with the SA models. For example if the existing db has table name User then I want to query it using the dbsession ( only the select operation, probably with join)", "output": "You seem to have an impression that SQLAlchemy can only work with a database structure created by SQLAlchemy (probably using MetaData.create_all()) - this is not correct. SQLAlchemy can work perfectly with a pre-existing database, you just need to define your models to match database tables. One way to do that is to use reflection, as Ilja Everil\u00e4 suggests: from sqlalchemy import Table from sqlalchemy.orm import DeclarativeBase class Base(DeclarativeBase): pass class MyClass(Base): __table__ = Table('mytable', Base.metadata, autoload=True, autoload_with=some_engine) (which, in my opinion, would be totally fine for one-off scripts but may lead to incredibly frustrating bugs in a \"real\" application if there's a potential that the database structure may change over time) Another way is to simply define your models as usual taking care to define your models to match the database tables, which is not that difficult. The benefit of this approach is that you can map only a subset of database tables to you models and even only a subset of table columns to your model's fields. Suppose you have 10 tables in the database but only interested in users table from where you only need id, name and email fields: import sqlalchemy as sa from sqlalchemy.orm import DeclarativeBase class Base(DeclarativeBase): pass class User(Base): id = sa.Column(sa.Integer, primary_key=True) name = sa.Column(sa.String) email = sa.Column(sa.String) (note how we didn't need to define some details which are only needed to emit correct DDL, such as the length of the String fields or the fact that the email field has an index) SQLAlchemy will not emit INSERT/UPDATE queries unless you create or modify models in your code. If you want to ensure that your queries are read-only you may create a special user in the database and grant that user SELECT privileges only. Alternatively/in addition, you may also experiment with rolling back the transaction in your application code."} +{"question_id": 40119743, "score": 74, "creation_date": 1476833694, "tags": ["python", "numpy"], "instruction": "convert a grayscale image to a 3-channel image\n\nI want to convert a gray-scale image with shape (height,width) to a 3 channels image with shape (height,width,nchannels). The work is done with a for-loop, but there must be a neat way. Here is a piece code in program, can someone give a hint. please advice. 
30 if img.shape == (height,width): # if img is grayscale, expand 31 print \"convert 1-channel image to \", nchannels, \" image.\" 32 new_img = np.zeros((height,width,nchannels)) 33 for ch in range(nchannels): 34 for xx in range(height): 35 for yy in range(width): 36 new_img[xx,yy,ch] = img[xx,yy] 37 img = new_img", "output": "You can use np.stack to accomplish this much more concisely: img = np.array([[1, 2], [3, 4]]) stacked_img = np.stack((img,)*3, axis=-1) print(stacked_img) # array([[[1, 1, 1], # [2, 2, 2]], # [[3, 3, 3], # [4, 4, 4]]])"} +{"question_id": 53982871, "score": 74, "creation_date": 1546220723, "tags": ["python", "pandas", "dataframe", "parquet"], "instruction": "Pandas : Reading first n rows from parquet file?\n\nI have a parquet file and I want to read first n rows from the file into a pandas data frame. What I tried: df = pd.read_parquet(path= 'filepath', nrows = 10) It did not work and gave me error: TypeError: read_table() got an unexpected keyword argument 'nrows' I did try the skiprows argument as well but that also gave me same error. Alternatively, I can read the complete parquet file and filter the first n rows, but that will require more computations which I want to avoid. Is there any way to achieve it?", "output": "The accepted answer is out of date. It is now possible to read only the first few lines of a parquet file into pandas, though it is a bit messy and backend dependent. To read using PyArrow as the backend, follow below: from pyarrow.parquet import ParquetFile import pyarrow as pa pf = ParquetFile('file_name.pq') first_ten_rows = next(pf.iter_batches(batch_size = 10)) df = pa.Table.from_batches([first_ten_rows]).to_pandas() Change the line batch_size = 10 to match however many rows you want to read in."} +{"question_id": 8529390, "score": 74, "creation_date": 1324005803, "tags": ["python", "subprocess", "silent"], "instruction": "Is there a quiet version of subprocess.call?\n\nIs there a variant of subprocess.call that can run the command without printing to standard out, or a way to block out it's standard out messages?", "output": "Yes. Redirect its stdout to /dev/null. process = subprocess.call([\"my\", \"command\"], stdout=open(os.devnull, 'wb'))"} +{"question_id": 56288949, "score": 74, "creation_date": 1558687536, "tags": ["python", "pandas", "indexing"], "instruction": "How to access the last element in a Pandas series\n\nLet us consider the following data frame: import pandas as pd d = {'col1': [1, 2, 3], 'col2': [3, 4, 5]} df=pd.DataFrame(data=d) If I want to access the first element in pandas series df['col1'], I can simply go df['col1'][0]. But how can I access the last element in this series? 
I have tried df['col1'][-1] which returns the following error: KeyError: -1L I know that I could go for something like df['col1'][len(df)-1] but why is reverse indexing impossible here?", "output": "To select the last value you need Series.iloc or Series.iat, because df['col1'] returns a Series: print (df['col1'].iloc[-1]) 3 print (df['col1'].iat[-1]) 3 Or convert the Series to a numpy array and select the last element: print (df['col1'].values[-1]) 3 Or use DataFrame.iloc or DataFrame.iat - but then you need the position of the column, via Index.get_loc: print (df.iloc[-1, df.columns.get_loc('col1')]) 3 print (df.iat[-1, df.columns.get_loc('col1')]) 3 Or use the last value of the index (which must not be duplicated) and select by DataFrame.loc: print (df.loc[df.index[-1], 'col1']) 3"} {"question_id": 54470917, "score": 74, "creation_date": 1548978359, "tags": ["python", "pandas", "group-by"], "instruction": "Pandas GroupBy and select rows with the minimum value in a specific column\n\nI have a DataFrame with columns A, B, and C. For each value of A, I would like to select the row with the minimum value in column B. That is, from this: df = pd.DataFrame({'A': [1, 1, 1, 2, 2, 2], 'B': [4, 5, 2, 7, 4, 6], 'C': [3, 4, 10, 2, 4, 6]}) A B C 0 1 4 3 1 1 5 4 2 1 2 10 3 2 7 2 4 2 4 4 5 2 6 6 I would like to get: A B C 0 1 2 10 1 2 4 4 For the moment I am grouping by column A, then creating a value that indicates to me the rows I will keep: a = data.groupby('A').min() a['A'] = a.index to_keep = [str(x[0]) + str(x[1]) for x in a[['A', 'B']].values] data['id'] = data['A'].astype(str) + data['B'].astype('str') data[data['id'].isin(to_keep)] I am sure that there is a much more straightforward way to do this. I have seen many answers here that use MultiIndex, which I would prefer to avoid. Thank you for your help.", "output": "I feel like you're overthinking this.
Just use groupby and idxmin: df.loc[df.groupby('A').B.idxmin()] A B C 2 1 2 10 4 2 4 4 df.loc[df.groupby('A').B.idxmin()].reset_index(drop=True) A B C 0 1 2 10 1 2 4 4"} +{"question_id": 28668351, "score": 74, "creation_date": 1424674555, "tags": ["python", "django", "migrate"], "instruction": "Django: dependencies reference nonexistent parent node\n\nWhen I run the following command python manage.py migrate I receive this error from django so can't step forward in my practice: Traceback (most recent call last): File \"manage.py\", line 10, in execute_from_command_line(sys.argv) File \"/home/nikhil/testWeb-devEnv/local/lib/python2.7/site-packages/django/core/management/__init__.py\", line 385, in execute_from_command_line utility.execute() File \"/home/nikhil/testWeb-devEnv/local/lib/python2.7/site-packages/django/core/management/__init__.py\", line 377, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File \"/home/nikhil/testWeb-devEnv/local/lib/python2.7/site-packages/django/core/management/base.py\", line 288, in run_from_argv self.execute(*args, **options.__dict__) File \"/home/nikhil/testWeb-devEnv/local/lib/python2.7/site-packages/django/core/management/base.py\", line 338, in execute output = self.handle(*args, **options) File \"/home/nikhil/testWeb-devEnv/local/lib/python2.7/site-packages/django/core/management/commands/migrate.py\", line 63, in handle executor = MigrationExecutor(connection, self.migration_progress_callback) File \"/home/nikhil/testWeb-devEnv/local/lib/python2.7/site-packages/django/db/migrations/executor.py\", line 17, in __init__ self.loader = MigrationLoader(self.connection) File \"/home/nikhil/testWeb-devEnv/local/lib/python2.7/site-packages/django/db/migrations/loader.py\", line 48, in __init__ self.build_graph() File \"/home/nikhil/testWeb-devEnv/local/lib/python2.7/site-packages/django/db/migrations/loader.py\", line 241, in build_graph self.graph.add_dependency(migration, key, parent) File \"/home/nikhil/testWeb-devEnv/local/lib/python2.7/site-packages/django/db/migrations/graph.py\", line 42, in add_dependency raise KeyError(\"Migration %s dependencies reference nonexistent parent node %r\" % (migration, parent)) KeyError: u\"Migration testBolt.0001_initial dependencies reference nonexistent parent node (u'delivery_boy', u'0004_auto_20150221_2011')\" How do I solve this problem?", "output": "Solution - 1 Remove pyc files from your migrations folder. Solution - 2 Need to remove that reference from testBolt.0001_initial by editing migration file. Solution - 3 Remove the new changes from the models and run python manage.py migrate --fake Now again modify your models with new changes Run python manage.py makemigrations And then again run python manage.py migrate"} +{"question_id": 37895568, "score": 74, "creation_date": 1466243122, "tags": ["python", "tuples"], "instruction": "How to create a tuple of an empty tuple in Python?\n\nHow can I create a tuple consisting of just an empty tuple, i.e. (())? I have tried tuple(tuple()), tuple(tuple(tuple())), tuple([]) and tuple(tuple([])) which all gave me (). The reason that I use such a thing is as follows: Assume you have n bags with m items. To represent a list of items in a bag, I use a tuple of length n where each element of that tuple is a representative for a bag. A bag might be empty, which is labeled by (). 
Now, at some initial point, I have just one bag with empty items!", "output": "The empty tuple is () (or the more-verbose and slower tuple()), and a tuple with just one item (such as the integer 1), called a singleton (see here and here) is (1,). Therefore, the tuple containing only the empty tuple is ((),) Here are some results showing that works: >>> a=((),) >>> type(a) >>> len(a) 1 >>> a[0] () >>> type(a[0]) >>> len(a[0]) 0"} +{"question_id": 30765606, "score": 74, "creation_date": 1433964645, "tags": ["python", "python-3.4", "python-asyncio"], "instruction": "What's the correct way to clean up after an interrupted event loop?\n\nI have an event loop that runs some co-routines as part of a command line tool. The user may interrupt the tool with the usual Ctrl + C, at which point I want to clean up properly after the interrupted event loop. Here's what I tried. import asyncio @asyncio.coroutine def shleepy_time(seconds): print(\"Shleeping for {s} seconds...\".format(s=seconds)) yield from asyncio.sleep(seconds) if __name__ == '__main__': loop = asyncio.get_event_loop() # Side note: Apparently, async() will be deprecated in 3.4.4. # See: https://docs.python.org/3.4/library/asyncio-task.html#asyncio.async tasks = [ asyncio.async(shleepy_time(seconds=5)), asyncio.async(shleepy_time(seconds=10)) ] try: loop.run_until_complete(asyncio.gather(*tasks)) except KeyboardInterrupt as e: print(\"Caught keyboard interrupt. Canceling tasks...\") # This doesn't seem to be the correct solution. for t in tasks: t.cancel() finally: loop.close() Running this and hitting Ctrl + C yields: $ python3 asyncio-keyboardinterrupt-example.py Shleeping for 5 seconds... Shleeping for 10 seconds... ^CCaught keyboard interrupt. Canceling tasks... Task was destroyed but it is pending! task: wait_for= cb=[gather.._done_callback(1)() at /usr/local/Cellar/python3/3.4.3/Frameworks/Python.framework/Versions/3.4/lib/python3.4/asyncio/tasks.py:587]> Task was destroyed but it is pending! task: wait_for= cb=[gather.._done_callback(0)() at /usr/local/Cellar/python3/3.4.3/Frameworks/Python.framework/Versions/3.4/lib/python3.4/asyncio/tasks.py:587]> Clearly, I didn't clean up correctly. I thought perhaps calling cancel() on the tasks would be the way to do it. What's the correct way to clean up after an interrupted event loop?", "output": "If using Python 3.7+, using asyncio.run to execute your main function will take care of this, as pointed out in this answer. See below for an explanation of why just calling t.cancel() with the code in the question does not work, and a way to handle it that was relevant on older versions of Python, prior to the introduction of asyncio.run(). When you CTRL+C, the event loop gets stopped, so your calls to t.cancel() don't actually take effect. For the tasks to be cancelled, you need to start the loop back up again. Here's how you can handle it: import asyncio @asyncio.coroutine def shleepy_time(seconds): print(\"Shleeping for {s} seconds...\".format(s=seconds)) yield from asyncio.sleep(seconds) if __name__ == '__main__': loop = asyncio.get_event_loop() # Side note: Apparently, async() will be deprecated in 3.4.4. # See: https://docs.python.org/3.4/library/asyncio-task.html#asyncio.async tasks = asyncio.gather( asyncio.async(shleepy_time(seconds=5)), asyncio.async(shleepy_time(seconds=10)) ) try: loop.run_until_complete(tasks) except KeyboardInterrupt as e: print(\"Caught keyboard interrupt. 
Canceling tasks...\") tasks.cancel() loop.run_forever() tasks.exception() finally: loop.close() Once we catch KeyboardInterrupt, we call tasks.cancel() and then start the loop up again. run_forever will actually exit as soon as tasks gets cancelled (note that cancelling the Future returned by asyncio.gather also cancels all the Futures inside of it), because the interrupted loop.run_until_complete call added a done_callback to tasks that stops the loop. So, when we cancel tasks, that callback fires, and the loop stops. At that point we call tasks.exception, just to avoid getting a warning about not fetching the exception from the _GatheringFuture."} +{"question_id": 26640145, "score": 74, "creation_date": 1414614856, "tags": ["python", "pandas", "dataframe"], "instruction": "How do I get the name of the rows from the index of a data frame?\n\nConsider a data frame with row names that aren't a column of their own per se, such as the following: X Y Row 1 0 5 Row 2 8 1 Row 3 3 0 How would I extract the name of these rows as a list, if I have their index? For example, it would look something like: function_name(dataframe[indices]) > ['Row 1', 'Row 2']", "output": "df.index outputs the row names as pandas Index object. list(df.index) casts to a list. df.index['Row 2':'Row 5'] supports label slicing similar to columns."} +{"question_id": 38662296, "score": 74, "creation_date": 1469805629, "tags": ["python", "python-3.x", "string-formatting"], "instruction": "Unexpected '{' in field name when doing string formatting\n\nI'm trying to write a small script that will automate some PHP boilerplate that I need to write. It should write a copy of the string code to the output file with the various replacement fields filled in for each dict in the fields list. However, I'm getting the error: Traceback (most recent call last): File \"writefields.py\", line 43, in formatted = code.format(**field) ValueError: unexpected '{' in field name As far as I can tell, there are no extra braces in either the replacement fields or the dicts that should be causing issues, so any help would be appreciated. code = ''' // {label} add_filter( 'submit_job_form_fields', 'frontend_add_{fieldname}_field' ); function frontend_add_{fieldname}_field($fields) { $fields['job']['job_{fieldname}'] = array( 'label' => __('{label}', 'job_manager'), 'type' => 'text', 'required' => {required}, 'priority' => 7, 'placeholder' => '{placeholder}' ); return $fields; } add_filter( 'job_manager_job_listing_data_fields', 'admin_add_{fieldname}_field' ); function admin_add_{fieldname}_field( $fields ) { $fields['_job_{fieldname}'] = array( 'label' => __( '{label}', 'job_manager' ), 'type' => 'text', 'placeholder' => '{placeholder}', 'description' => '' ); return $fields; } ''' fields = [ { 'fieldname': 'salary', 'label': 'Salary ($)', 'required': 'true', 'placeholder': 'e.g. 20000', }, { 'fieldname': 'test', 'label': 'Test Field', 'required': 'true', 'placeholder': '', } ] with open('field-out.txt', 'w') as f: for field in fields: formatted = code.format(**field) f.write(formatted) f.write('\\n')", "output": "You need to double any { or } that are not part of a formatting placeholder. For example, you have: function admin_add_{fieldname}_field( $fields ) { [....] } in the string. The { at the end of the first line and } on the last are not part of a placeholder. Replace them with {{ and }} respectively: function admin_add_{fieldname}_field( $fields ) {{ [....] 
}} Doubling up those curly braces escapes them; the final output will contain single { and } characters again. In the full string you used, that'd be: code = ''' // {label} add_filter( 'submit_job_form_fields', 'frontend_add_{fieldname}_field' ); function frontend_add_{fieldname}_field($fields) {{ $fields['job']['job_{fieldname}'] = array( 'label' => __('{label}', 'job_manager'), 'type' => 'text', 'required' => {required}, 'priority' => 7, 'placeholder' => '{placeholder}' ); return $fields; }} add_filter( 'job_manager_job_listing_data_fields', 'admin_add_{fieldname}_field' ); function admin_add_{fieldname}_field( $fields ) {{ $fields['_job_{fieldname}'] = array( 'label' => __( '{label}', 'job_manager' ), 'type' => 'text', 'placeholder' => '{placeholder}', 'description' => '' ); return $fields; }} '''"} +{"question_id": 29959191, "score": 74, "creation_date": 1430369107, "tags": ["python", "json"], "instruction": "How to parse json file with c-style comments?\n\nI have a json file, such as the following: { \"author\":\"John\", \"desc\": \"If it is important to decode all valid JSON correctly \\ and speed isn't as important, you can use the built-in json module, \\ orsimplejson. They are basically the same but sometimes simplej \\ further along than the version of it that is included with \\ distribution.\" //\"birthday\": \"nothing\" //I comment this line } This file is auto created by another program. How do I parse it with Python?", "output": "I can not imagine a json file \"auto created by other program\" would contain comments inside. Because json spec defines no comment at all, and that is by design, so no json library would output a json file with comment. Those comments are usually added later, by a human. No exception in this case. The OP mentioned that in his post: //\"birthday\": \"nothing\" //I comment this line. So the real question should be, how do I properly comment some content in a json file, yet maintaining its compliance with spec and hence its compatibility with other json libraries? And the answer is, rename your field to another name. Example: { \"foo\": \"content for foo\", \"bar\": \"content for bar\" } can be changed into: { \"foo\": \"content for foo\", \"this_is_bar_but_been_commented_out\": \"content for bar\" } This will work just fine most of the time because the consumer will very likely ignore unexpected fields (but not always, it depends on your json file consumer's implementation. So YMMV.) UPDATE: Apparently some reader was unhappy because this answer does not give the \"solution\" they expect. Well, in fact, I did give a working solution, by implicitly linking to the JSON designer's quote: Douglas Crockford Public Apr 30, 2012 Comments in JSON I removed comments from JSON because I saw people were using them to hold parsing directives, a practice which would have destroyed interoperability. I know that the lack of comments makes some people sad, but it shouldn't. Suppose you are using JSON to keep configuration files, which you would like to annotate. Go ahead and insert all the comments you like. Then pipe it through JSMin before handing it to your JSON parser. So, yeah, go ahead to use JSMin. Just keep in mind that when you are heading towards \"using comments in JSON\", that is a conceptually uncharted territory. 
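If you do go down that road in Python, a very small pre-processing sketch along those lines could look like the following (deliberately naive: it only drops whole-line // comments and makes no attempt to handle // inside string values):

import json
import re

commented = '''{
    "author": "John"
    //"birthday": "nothing"   //a commented-out line
}'''

# Strip lines that contain only a // comment, then hand the rest to the strict parser.
stripped = re.sub(r'^\s*//.*$', '', commented, flags=re.MULTILINE)
data = json.loads(stripped)
print(data)   # {'author': 'John'}
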
There is no guarantee that whatever tools you choose would handle: inline [1,2,3,/* a comment */ 10], Python style [1, 2, 3] # a comment (which is a comment in Python but not in Javascript), INI style [1, 2, 3] ; a comment, ..., you get the idea. I would still suggest to NOT adding noncompliant comments in JSON in the first place."} +{"question_id": 24441132, "score": 74, "creation_date": 1403821063, "tags": ["python", "psycopg2"], "instruction": "Getting affected row count from psycopg2 connection.commit()\n\nCurrently, I have the following method to execute INSERT/UPDATE/DELETE statements using psycopg2 in Python: def exec_statement(_cxn, _stmt): try: db_crsr = _cxn.cursor() db_crsr.execute(_stmt) _cxn.commit() db_crsr.close() return True except: return False But what I would really like it to do, instead of bool, is return the row count affected by the transaction or -1 if the operation fails. Is there a way to get a number of rows affected by _cxn.commit()? E.g. for a single INSERT it would be always 1, for a DELETE or UPDATE, the number of rows affected by the statement etc.?", "output": "commit() can't be used to get the row count, but you can use the cursor to get that information after each execute call. You can use its rowcount attribute to get the number of rows affected for SELECT, INSERT, UPDATE and DELETE. i.e. db_crsr = _cxn.cursor() db_crsr.execute(_stmt) rowcount = db_crsr.rowcount _cxn.commit() db_crsr.close() return rowcount If you want to return the number of affected rows, I would recommend not catching any exceptions, since if the operation truly failed (say the query was malformed, or there was a FK constraint violation, etc.), an exception should be raised, and in that case the caller could catch that and behave as desired. (Or, if you want to centralize the exception handling, perhaps raise a custom MyPostgresException, or similar.) -1 can be returned in a non-failure case in certain situations (https://www.psycopg.org/docs/cursor.html#cursor.rowcount), so I would recommend against using that value as the failure indicator. If you really want to return a numerical value in the case of failure, perhaps returning a number like -10 would work (in the except block), since rowcount shouldn't ever return that."} +{"question_id": 57785471, "score": 74, "creation_date": 1567589451, "tags": ["python", "mypy"], "instruction": "Why does mypy think library imports are missing?\n\nWhen I run mypy it complains that modules cannot be found: sal@ahfang:~/workspace/ecs/cx-project-skeleton-repo/src/cx-example-function$ pipenv run python -m mypy . example_lambda.py:3: error: Cannot find module named 'aws_xray_sdk.core' But when trying to import that exact same module with the exact same Python interpreter, it seems that the module does exist and is importable. python Python 3.7.3 (default, Apr 3 2019, 05:39:12) [GCC 8.3.0] on linux Type \"help\", \"copyright\", \"credits\" or \"license\" for more information. >>> import aws_xray_sdk.core >>> Other than to force ignore the imports in the mypy.ini file, is there anything I should be doing to help mypy see importable modules that definitely do exist?", "output": "So, here's the crux of the issue: mypy does not try type-checking every single module you've imported. Instead, it only attempts to type-check modules that have explicitly opted-in to the typing ecosystem. 
Modules can opt-in to the typing ecosystem via two key mechanisms: Add type hints or stubs to their code, and include a file named py.typed within the package they distribute to PyPi (or any other package repository). The presence of this marker makes the package PEP-561-aware. The mypy docs also have more info about PEP-561-aware packages. Alternatively, add stubs to typeshed, the repository of type hints for the standard library and select 3rd party libraries. The aws_xray_sdk package has done neither of these things, so will be ignored by mypy. This is a bit unfortunate, so what can you do? The Missing imports section of the mypy docs has some detailed recommendations on what to do, but to summarize, you basically have three options which I'll list in order from least to most effort: Just silence the import by manually add # type: ignore comments to each import. You can also add the following section to your mypy config file to have this happen automatically: [mypy-aws_xray_sdk] ignore_missing_imports = True Now, anything you import from this module will be treated as being of type Any. Search around and see if anybody has created a third party stubs package for your library: basically, an unofficial (or sometimes semi-official) PEP-561-aware package that only contains type hints. For example, for django, there's django-stubs, for SqlAlchemy, there's sqlalchemy-stubs. Create your own stubs for this library and point to them via the mypy_path option in your mypy config file: mypy_path = my_stubs/aws_xray_sdk, my_stubs/some_other_library These stubs don't have to be complete, necessarily: you can get away with just adding annotations for the few things you're using. (And if they do end up becoming relatively complete, you perhaps look into contributing them back to the open-source community.) Now finally, you may be wondering why mypy behaves this way? Part of this is because it's not safe in the general case for mypy to just try finding and analyzing the module. Just blindly importing and using packages that are not type-hinting ready can sometimes result in odd type errors, or worse, can result in code incorrectly being marked as type-safe. That is, if you care about type-safety, it's better to be immediately notified that some package you're using has no type hints instead of mypy blindly inferring and smearing Anys all over your code. Mypy could give a better error message here though, at least most of the time. IMO the fact that it doesn't is largely an oversight. There's some discussion about this in https://github.com/python/mypy/issues/4542."} +{"question_id": 71102876, "score": 74, "creation_date": 1644771035, "tags": ["python", "ipython"], "instruction": "in ipython how do I accept and use an autocomplete suggestion?\n\nI'm using Python 3.8.9 with IPython 8.0.1 on macOS. When I type anything whatsoever, it displays a predicted suggestion based on past commands. Cool. However, how do I actually accept that suggestion? I tried the obvious: tab, which does not accept the suggestion, but rather opens up a menu with different suggestions, while the original suggestion is still there (see screenshot). I also tried space, and return, but both of those act as if the suggestion was never made. How the heck do I actually use the ipython autosuggestion? 
Or is tab supposed to work and something is wrong with my ipython build or something?", "output": "CTRL-E, CTRL-F, or Right Arrow Key https://ipython.readthedocs.io/en/8.13.2/config/shortcuts/index.html Alternatively, the End key, as suggested by Richard Berg (below)."} +{"question_id": 19855156, "score": 74, "creation_date": 1383901537, "tags": ["python", "pickle"], "instruction": "What's the exact usage of __reduce__ in Pickler\n\nI know that in order to be picklable, a class has to overwrite __reduce__ method, and it has to return string or tuple. How does this function work? What the exact usage of __reduce__? When will it been used?", "output": "When you try to pickle an object, there might be some properties that don't serialize well. One example of this is an open file handle. Pickle won't know how to handle the object and will throw an error. You can tell the pickle module how to handle these types of objects natively within a class directly. Lets see an example of an object which has a single property; an open file handle: import pickle class Test(object): def __init__(self, file_path=\"test1234567890.txt\"): # An open file in write mode self.some_file_i_have_opened = open(file_path, 'wb') my_test = Test() # Now, watch what happens when we try to pickle this object: pickle.dumps(my_test) It should fail and give a traceback: Traceback (most recent call last): File \"\", line 1, in --- snip snip a lot of lines --- File \"/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy_reg.py\", line 70, in _reduce_ex raise TypeError, \"can't pickle %s objects\" % base.__name__ TypeError: can't pickle file objects However, had we defined a __reduce__ method in our Test class, pickle would have known how to serialize this object: import pickle class Test(object): def __init__(self, file_path=\"test1234567890.txt\"): # Used later in __reduce__ self._file_name_we_opened = file_path # An open file in write mode self.some_file_i_have_opened = open(self._file_name_we_opened, 'wb') def __reduce__(self): # we return a tuple of class_name to call, # and optional parameters to pass when re-creating return (self.__class__, (self._file_name_we_opened, )) my_test = Test() saved_object = pickle.dumps(my_test) # Just print the representation of the string of the object, # because it contains newlines. print(repr(saved_object)) This should give you something like: \"c__main__\\nTest\\np0\\n(S'test1234567890.txt'\\np1\\ntp2\\nRp3\\n.\", which can be used to recreate the object with open file handles: print(vars(pickle.loads(saved_object))) In general, the __reduce__ method needs to return a tuple with at least two elements: A blank object class to call. In this case, self.__class__ A tuple of arguments to pass to the class constructor. In the example it's a single string, which is the path to the file to open. 
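As an aside, the tuple returned by __reduce__ may also carry a third element with extra state; here is a minimal sketch (the class and attribute names are made up for illustration) of how that state is restored after the object is re-created:

import pickle

class Counter(object):
    def __init__(self, name):
        self.name = name
        self.count = 0
    def __reduce__(self):
        # (callable, constructor args, extra state). Since no __setstate__ is
        # defined, the state dict is merged into the new object's __dict__.
        return (self.__class__, (self.name,), {'count': self.count})

c = Counter('hits')
c.count = 5
restored = pickle.loads(pickle.dumps(c))
print(restored.name, restored.count)   # hits 5
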
Consult the docs for a detailed explanation of what else the __reduce__ method can return."} +{"question_id": 45604688, "score": 74, "creation_date": 1502340402, "tags": ["python", "arrays", "numpy", "apply"], "instruction": "Apply function on each row (row-wise) of a NumPy array\n\nSo, I have the function - def function(x): x , y = vector return exp(((-x**2/200))-0.5*(y+0.05*(x**2) - 100*0.05)**2) and let's say that I would like to evaluate it at the following points (first column are the x-values and second column are the y-values) - array([[-1.56113514, 4.51759732], [-2.80261623, 5.068371 ], [ 0.7792729 , 6.0169462 ], [-1.35672858, 3.52517478], [-1.92074891, 5.79966161], [-2.79340321, 4.73430001], [-2.79655868, 5.05361163], [-2.13637747, 5.39255837], [ 0.17341809, 3.60918261], [-1.22712921, 4.95327158]]) i.e. I would like to pass the function the first row of values and evaluate, then the second row and evaluate etc. and then the final result would be an array of the values evaluated at these points (so, an array consisting of 10 values). So, for example, if the function was, say, a bivariate normal distribution - def function2(x): function2 = (mvnorm.pdf(x,[0,0],[[1,0],[0,1]])) return function2 and I passed the above values into this function, I would get - array([ 1.17738907e-05, 1.08383957e-04, 1.69855078e-04, 5.64757613e-06, 1.37432346e-05, 1.44032800e-04, 1.33426313e-05, 1.97822328e-06, 6.56121709e-08, 4.67076770e-05]) So basically, I am looking for a way to rewrite the function so that it can do this. Moreover, I would like to keep the function as a function of one variable only (i.e. only a function of x). Thank you for your help!", "output": "You can use np.apply_along_axis: np.apply_along_axis(function, 1, array) The first argument is the function, the second argument is the axis along which the function is to be applied. In your case, it is the first axis. The last argument is the array, of course. You should be warned, however, that apply_along_axis is only a convenience function, not a magic bullet. It has a severe speed limitation, since it just hides a loop. You should always try to vectorize your computation, where possible. Here's how I'd do this: v = array[:, 0] ** 2 # computing just once return np.exp((-v / 200) - 0.5 * (array[:, 1] + 0.05 * v - 5) ** 2)"} +{"question_id": 56665298, "score": 74, "creation_date": 1560939289, "tags": ["python", "python-3.x", "default-value", "python-dataclasses"], "instruction": "How to apply default value to Python dataclass field when None was passed?\n\nI need a class that will accept a number of parameters, I know that all parameters will be provided but some maybe passed as None in which case my class will have to provide default values. I want to setup a simple dataclass with a some default values like so: @dataclass class Specs1: a: str b: str = 'Bravo' c: str = 'Charlie' I would like to be able to get the default value for the second field but still set a value for the third one. 
I cannot do this with None because it is happily accepted as a value for my string: r1 = Specs1('Apple', None, 'Cherry') # Specs1(a='Apple', b=None, c='Cherry') I have come up with the following solution: @dataclass class Specs2: def_b: ClassVar = 'Bravo' def_c: ClassVar = 'Charlie' a: str b: str = def_b c: str = def_c def __post_init__(self): self.b = self.def_b if self.b is None else self.b self.c = self.def_c if self.c is None else self.c Which seems to behave as intended: r2 = Specs2('Apple', None, 'Cherry') # Specs2(a='Apple', b='Bravo', c='Cherry') However, I feel it is quite ugly and that I am maybe missing something here. My actual class will have more fields so it will only get uglier. The parameters passed to the class contain None and I do not have control over this aspect.", "output": "I know this is a little late, but inspired by MikeSchneeberger's answer I made a small adaptation to the __post_init__ function that allows you to keep the defaults in the standard format: from dataclasses import dataclass, fields def __post_init__(self): # Loop through the fields for field in fields(self): # If there is a default and the value of the field is none we can assign a value if not isinstance(field.default, dataclasses._MISSING_TYPE) and getattr(self, field.name) is None: setattr(self, field.name, field.default) Adding this to your dataclass should then ensure that the default values are enforced without requiring a new default class."} +{"question_id": 4172131, "score": 74, "creation_date": 1289645541, "tags": ["python", "list", "random", "performance"], "instruction": "Create random list of integers in Python\n\nI'd like to create a random list of integers for testing purposes. The distribution of the numbers is not important. The only thing that is counting is time. I know generating random numbers is a time-consuming task, but there must be a better way. Here's my current solution: import random import timeit # Random lists from [0-999] interval print [random.randint(0, 1000) for r in xrange(10)] # v1 print [random.choice([i for i in xrange(1000)]) for r in xrange(10)] # v2 # Measurement: t1 = timeit.Timer('[random.randint(0, 1000) for r in xrange(10000)]', 'import random') # v1 t2 = timeit.Timer('random.sample(range(1000), 10000)', 'import random') # v2 print t1.timeit(1000)/1000 print t2.timeit(1000)/1000 v2 is faster than v1, but it is not working on such a large scale. It gives the following error: ValueError: sample larger than population Is there a fast, efficient solution that works at that scale? Some results from the answer Andrew's: 0.000290962934494 gnibbler's: 0.0058455221653 KennyTM's: 0.00219276118279 NumPy came, saw, and conquered.", "output": "It is not entirely clear what you want, but I would use numpy.random.randint: import numpy.random as nprnd import timeit t1 = timeit.Timer('[random.randint(0, 1000) for r in xrange(10000)]', 'import random') # v1 ### Change v2 so that it picks numbers in (0, 10000) and thus runs... 
t2 = timeit.Timer('random.sample(range(10000), 10000)', 'import random') # v2 t3 = timeit.Timer('nprnd.randint(1000, size=10000)', 'import numpy.random as nprnd') # v3 print t1.timeit(1000)/1000 print t2.timeit(1000)/1000 print t3.timeit(1000)/1000 which gives on my machine: 0.0233682730198 0.00781716918945 0.000147947072983 Note that randint is very different from random.sample (in order for it to work in your case I had to change the 1,000 to 10,000 as one of the commentators pointed out -- if you really want them from 0 to 1,000 you could divide by 10). And if you really don't care what distribution you are getting then it is possible that you either don't understand your problem very well, or random numbers -- with apologies if that sounds rude..."} +{"question_id": 62019960, "score": 74, "creation_date": 1590488718, "tags": ["python"], "instruction": "Difference between pass statement and 3 dots(...) in python\n\nWhat's the difference between the pass statement: def function(): pass and 3 dots: def function(): ... Which way is better and faster to execute(CPython)?", "output": "pass has been in the language for a very long time and is just a no-op. It is designed to explicitly do nothing. ... is a token having the singleton value Ellipsis, similar to how None is a singleton value. Putting ... as your method body has the same effect as for example: def foo(): 1 The ... can be interpreted as a sentinel value where it makes sense from an API-design standpoint, e.g. if you overwrite __getitem__ to do something special if Ellipsis are passed, and then giving foo[...] special meaning. It is not specifically meant as a replacement for no-op stubs, though I have seen it being used that way and it doesn't hurt either"} +{"question_id": 16628329, "score": 74, "creation_date": 1368906380, "tags": ["python", "sqlite", "pandas", "hdf5"], "instruction": "HDF5 - concurrency, compression & I/O performance\n\nI have the following questions about HDF5 performance and concurrency: Does HDF5 support concurrent write access? Concurrency considerations aside, how is HDF5 performance in terms of I/O performance (does compression rates affect the performance)? Since I use HDF5 with Python, how does its performance compare to Sqlite? References: http://www.sqlite.org/faq.html#q5 Locking sqlite file on NFS filesystem possible? http://pandas.pydata.org/", "output": "Updated to use pandas 0.13.1 No. http://pandas.pydata.org/pandas-docs/dev/io.html#notes-caveats. There are various ways to do this, e.g. have your different threads/processes write out the computation results, then have a single process combine. depending the type of data you store, how you do it, and how you want to retrieve, HDF5 can offer vastly better performance. Storing in an HDFStore as a single array, float data, compressed (in other words, not storing it in a format that allows for querying), will be stored/read amazingly fast. Even storing in the table format (which slows down the write performance), will offer quite good write performance. You can look at this for some detailed comparisons (which is what HDFStore uses under the hood). http://www.pytables.org/, here's a nice picture: Since PyTables 2.3 the queries are now indexed, so performance is actually MUCH better than this. To answer your question, if you want any kind of performance, HDF5 is the way to go. 
Writing: In [14]: %timeit test_sql_write(df) 1 loops, best of 3: 6.24 s per loop In [15]: %timeit test_hdf_fixed_write(df) 1 loops, best of 3: 237 ms per loop In [16]: %timeit test_hdf_table_write(df) 1 loops, best of 3: 901 ms per loop In [17]: %timeit test_csv_write(df) 1 loops, best of 3: 3.44 s per loop Reading In [18]: %timeit test_sql_read() 1 loops, best of 3: 766 ms per loop In [19]: %timeit test_hdf_fixed_read() 10 loops, best of 3: 19.1 ms per loop In [20]: %timeit test_hdf_table_read() 10 loops, best of 3: 39 ms per loop In [22]: %timeit test_csv_read() 1 loops, best of 3: 620 ms per loop And here's the code import sqlite3 import os from pandas.io import sql In [3]: df = DataFrame(randn(1000000,2),columns=list('AB')) Int64Index: 1000000 entries, 0 to 999999 Data columns (total 2 columns): A 1000000 non-null values B 1000000 non-null values dtypes: float64(2) def test_sql_write(df): if os.path.exists('test.sql'): os.remove('test.sql') sql_db = sqlite3.connect('test.sql') sql.write_frame(df, name='test_table', con=sql_db) sql_db.close() def test_sql_read(): sql_db = sqlite3.connect('test.sql') sql.read_frame(\"select * from test_table\", sql_db) sql_db.close() def test_hdf_fixed_write(df): df.to_hdf('test_fixed.hdf','test',mode='w') def test_csv_read(): pd.read_csv('test.csv',index_col=0) def test_csv_write(df): df.to_csv('test.csv',mode='w') def test_hdf_fixed_read(): pd.read_hdf('test_fixed.hdf','test') def test_hdf_table_write(df): df.to_hdf('test_table.hdf','test',format='table',mode='w') def test_hdf_table_read(): pd.read_hdf('test_table.hdf','test') Of course YMMV."} +{"question_id": 1319074, "score": 73, "creation_date": 1251048650, "tags": ["python", "callback"], "instruction": "How can I provide a \"callback\" to an API?\n\nI was reading some module documentation and saw something I didn't understand, in the explanation of parameters for a method: callback - callback function which will be called with argument list equal to callbackargs+(result,) as soon as calculation is done callbackargs - additional arguments for callback function group - job group, is used when wait(group) is called to wait for jobs in a given group to finish How can I call the method and correctly supply arguments for these parameters? What are they used for, and how does this \"callback\" scheme work?", "output": "A callback is a function provided by the consumer of an API that the API can then turn around and invoke (calling you back). If I setup a Dr.'s appointment, I can give them my phone number, so they can call me the day before to confirm the appointment. A callback is like that, except instead of just being a phone number, it can be arbitrary instructions like \"send me an email at this address, and also call my secretary and have her put it in my calendar. Callbacks are often used in situations where an action is asynchronous. If you need to call a function, and immediately continue working, you can't sit there wait for its return value to let you know what happened, so you provide a callback. When the function is done completely its asynchronous work it will then invoke your callback with some predetermined arguments (usually some you supply, and some about the status and result of the asynchronous action you requested). If the Dr. is out of the office, or they are still working on the schedule, rather than having me wait on hold until he gets back, which could be several hours, we hang up, and once the appointment has been scheduled, they call me. 
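A rough sketch of that pattern in Python may help; every name below is invented purely for illustration, it is not the actual API from the question:

import threading

def submit_job(data, callback, callbackargs=()):
    # Pretend API: do the work on another thread and, when finished,
    # invoke the caller-supplied callback with callbackargs + (result,).
    def worker():
        result = sum(data)              # stand-in for a long calculation
        callback(*(callbackargs + (result,)))
    threading.Thread(target=worker).start()

def on_done(label, result):
    print(label, result)

submit_job([1, 2, 3], on_done, callbackargs=('job finished:',))   # prints: job finished: 6
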
In this specific case, the documented method will compute the result, put it together with any callbackargs specified, and call callback, passing it those values as the arguments."} +{"question_id": 36392510, "score": 73, "creation_date": 1459725961, "tags": ["python", "django"], "instruction": "Django download a file\n\nI'm quite new to using Django and I am trying to develop a website where the user is able to upload a number of excel files, these files are then stored in a media folder Webproject/project/media. def upload(request): if request.POST: form = FileForm(request.POST, request.FILES) if form.is_valid(): form.save() return render_to_response('project/upload_successful.html') else: form = FileForm() args = {} args.update(csrf(request)) args['form'] = form return render_to_response('project/create.html', args) The document is then displayed in a list along with any other document they have uploaded, which you can click into and it will displays basic info about them and the name of the excelfile they have uploaded. From here I want to be able to download the same excel file again using the link: Download Document My urls are urlpatterns = [ url(r'^$', ListView.as_view(queryset=Post.objects.all().order_by(\"-date\")[:25], template_name=\"project/project.html\")), url(r'^(?P\\d+)$', DetailView.as_view(model=Post, template_name=\"project/post.html\")), url(r'^upload/$', upload), url(r'^download/(?P.*)$', serve, {'document root': settings.MEDIA_ROOT}), ] + static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT) but I get the error, serve() got an unexpected keyword argument 'document root'. can anyone explain how to fix this? OR Explain how I can get the uploaded files to to be selected and served using def download(request): file_name = #get the filename of desired excel file path_to_file = #get the path of desired excel file response = HttpResponse(mimetype='application/force-download') response['Content-Disposition'] = 'attachment; filename=%s' % smart_str(file_name) response['X-Sendfile'] = smart_str(path_to_file) return response", "output": "You missed underscore in argument document_root. But it's bad idea to use serve in production. Use something like this instead: import os from django.conf import settings from django.http import HttpResponse, Http404 def download(request, path): file_path = os.path.join(settings.MEDIA_ROOT, path) if os.path.exists(file_path): with open(file_path, 'rb') as fh: response = HttpResponse(fh.read(), content_type=\"application/vnd.ms-excel\") response['Content-Disposition'] = 'inline; filename=' + os.path.basename(file_path) return response raise Http404"} +{"question_id": 30791550, "score": 73, "creation_date": 1434057139, "tags": ["python", "multithreading", "numpy"], "instruction": "Limit number of threads in numpy\n\nIt seems that my numpy library is using 4 threads, and setting OMP_NUM_THREADS=1 does not stop this. 
numpy.show_config() gives me these results: atlas_threads_info: libraries = ['lapack', 'ptf77blas', 'ptcblas', 'atlas'] library_dirs = ['/usr/lib64/atlas'] define_macros = [('ATLAS_INFO', '\"\\\\\"3.8.4\\\\\"\"')] language = f77 include_dirs = ['/usr/include'] blas_opt_info: libraries = ['ptf77blas', 'ptcblas', 'atlas'] library_dirs = ['/usr/lib64/atlas'] define_macros = [('ATLAS_INFO', '\"\\\\\"3.8.4\\\\\"\"')] language = c include_dirs = ['/usr/include'] atlas_blas_threads_info: libraries = ['ptf77blas', 'ptcblas', 'atlas'] library_dirs = ['/usr/lib64/atlas'] define_macros = [('ATLAS_INFO', '\"\\\\\"3.8.4\\\\\"\"')] language = c include_dirs = ['/usr/include'] openblas_info: NOT AVAILABLE lapack_opt_info: libraries = ['lapack', 'ptf77blas', 'ptcblas', 'atlas'] library_dirs = ['/usr/lib64/atlas'] define_macros = [('ATLAS_INFO', '\"\\\\\"3.8.4\\\\\"\"')] language = f77 include_dirs = ['/usr/include'] So I know it is using blas, but I can't figure out how to make it use 1 thread for matrix multiplication.", "output": "There are a few common multi CPU libraries that are used for numerical computations, including inside of NumPy. There are a few environment flags that you can set before running the script to limit the number of CPUS that they use. Try setting all of the following: export MKL_NUM_THREADS=1 export NUMEXPR_NUM_THREADS=1 export OMP_NUM_THREADS=1 Sometimes it's a bit tricky to see where exactly multithreading is introduced. Other answers show environment flags for other libraries. They may also work."} +{"question_id": 1950414, "score": 73, "creation_date": 1261535979, "tags": ["python"], "instruction": "What is the purpose of classmethod in this code?\n\nIn django.utils.tree.py: def _new_instance(cls, children=None, connector=None, negated=False): obj = Node(children, connector, negated) obj.__class__ = cls return obj _new_instance = classmethod(_new_instance) I don't know what classmethod does in this code sample. Can someone explain what it does and how to use it?", "output": "classmethod is a decorator, wrapping a function, and you can call the resulting object on a class or (equivalently) an instance thereof: >>> class x(object): ... def c1(*args): print 'c1', args ... c1 = classmethod(c1) ... @classmethod ... def c2(*args): print 'c2', args ... >>> inst = x() >>> x.c1() c1 (,) >>> x.c2() c2 (,) >>> inst.c1() c1 (,) >>> inst.c2() c2 (,) As you see, whether you define it directly or with decorator syntax, and whether you call it on the class or the instance, the classmethod always receives the class as its first argument. One of the main uses of classmethod is to define alternative constructors: >>> class y(object): ... def __init__(self, astring): ... self.s = astring ... @classmethod ... def fromlist(cls, alist): ... x = cls('') ... x.s = ','.join(str(s) for s in alist) ... return x ... def __repr__(self): ... return 'y(%r)' % self.s ... >>> y1 = y('xx') >>> y1 y('xx') >>> y2 = y.fromlist(range(3)) >>> y2 y('0,1,2') Now if you subclass y, the classmethod keeps working, e.g.: >>> class k(y): ... def __repr__(self): ... return 'k(%r)' % self.s.upper() ... 
>>> k1 = k.fromlist(['za','bu']) >>> k1 k('ZA,BU')"} {"question_id": 33053241, "score": 73, "creation_date": 1444474876, "tags": ["python", "sqlalchemy"], "instruction": "Sqlalchemy if table does not exist\n\nI wrote a module which creates an empty database file def create_database(): engine = create_engine(\"sqlite:///myexample.db\", echo=True) metadata = MetaData(engine) metadata.create_all() But in another function, I want to open the myexample.db database and create tables in it if it doesn't already have them. For example, the first such table I would create would be: Table(Variable_TableName, metadata, Column('Id', Integer, primary_key=True, nullable=False), Column('Date', Date), Column('Volume', Float)) (Since it is initially an empty database, it will have no tables in it, but subsequently, I can add more tables to it. That's what I'm trying to say.) Any suggestions?", "output": "I've managed to figure out what I intended to do. I used engine.dialect.has_table(engine, Variable_tableName) to check if the database has the table inside. If it doesn't, it will proceed to create the table in the database. Sample code: engine = create_engine(\"sqlite:///myexample.db\") # Access the DB Engine if not engine.dialect.has_table(engine, Variable_tableName): # If the table doesn't exist, create it. metadata = MetaData(engine) # Create a table with the appropriate Columns Table(Variable_tableName, metadata, Column('Id', Integer, primary_key=True, nullable=False), Column('Date', Date), Column('Country', String), Column('Brand', String), Column('Price', Float)) # Implement the creation metadata.create_all() This seems to be giving me what I'm looking for."}
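As a side note, on newer SQLAlchemy releases (1.4 and later) this kind of existence check is usually done through the runtime inspection API rather than engine.dialect.has_table; a minimal sketch, assuming a table named 'mytable':

from sqlalchemy import create_engine, inspect

engine = create_engine("sqlite:///myexample.db")
# Inspector.has_table performs the existence check on SQLAlchemy 1.4+.
if not inspect(engine).has_table("mytable"):
    print("mytable is missing - create it here")
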