51,878,354
Is there a built-in function that works like `zip()`, but fills the results so that the length of the resulting list is the length of the longest input, filling the list **from the left** with e.g. `None`?

There is already an [answer](https://stackoverflow.com/a/1277311/2648551) using [zip\_longest](https://docs.python.org/3/library/itertools.html#itertools.zip_longest) from the `itertools` module, and the corresponding [question](https://stackoverflow.com/q/1277278/2648551) is very similar to this one. But with `zip_longest` it seems that you can only fill missing data from the right.

Here is a possible use case, assuming names are stored only like this (it's just an example):

```
header = ["title", "firstname", "lastname"]

person_1 = ["Dr.", "Joe", "Doe"]
person_2 = ["Mary", "Poppins"]
person_3 = ["Smith"]
```

There is no other permutation like (`["Poppins", "Mary"]`, `["Poppins", "Dr", "Mary"]`) and so on. How can I get results like this using built-in functions?

```
>>> dict(magic_zip(header, person_1))
{'title': 'Dr.', 'lastname': 'Doe', 'firstname': 'Joe'}
>>> dict(magic_zip(header, person_2))
{'title': None, 'lastname': 'Poppins', 'firstname': 'Mary'}
>>> dict(magic_zip(header, person_3))
{'title': None, 'lastname': 'Smith', 'firstname': None}
```
2018/08/16
[ "https://Stackoverflow.com/questions/51878354", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2648551/" ]
Use **`zip_longest`**, but reverse the lists first. **Example**:

```
from itertools import zip_longest

header = ["title", "firstname", "lastname"]
person_1 = ["Dr.", "Joe", "Doe"]
person_2 = ["Mary", "Poppins"]
person_3 = ["Smith"]

print(dict(zip_longest(reversed(header), reversed(person_2))))
# {'lastname': 'Poppins', 'firstname': 'Mary', 'title': None}
```

Applied to your use cases:

```
>>> dict(zip_longest(reversed(header), reversed(person_1)))
{'title': 'Dr.', 'lastname': 'Doe', 'firstname': 'Joe'}
>>> dict(zip_longest(reversed(header), reversed(person_2)))
{'lastname': 'Poppins', 'firstname': 'Mary', 'title': None}
>>> dict(zip_longest(reversed(header), reversed(person_3)))
{'lastname': 'Smith', 'firstname': None, 'title': None}
```
The generic "magic zip" generator function with a variable number of args (which only uses lazy-evaluation functions and no python loops): ``` import itertools def magic_zip(*args): return itertools.zip_longest(*map(reversed,args)) ``` testing (of course in the case of a dict build, only 2 params are needed): ``` for p in (person_1,person_2,person_3): print(dict(magic_zip(header,p))) ``` result: ``` {'lastname': 'Doe', 'title': 'Dr.', 'firstname': 'Joe'} {'lastname': 'Poppins', 'title': None, 'firstname': 'Mary'} {'lastname': 'Smith', 'title': None, 'firstname': None} ```
Simply use `zip_longest` and read the arguments in the reverse direction:

```
In [20]: dict(zip_longest(header[::-1], person_1[::-1]))
Out[20]: {'lastname': 'Doe', 'firstname': 'Joe', 'title': 'Dr.'}

In [21]: dict(zip_longest(header[::-1], person_2[::-1]))
Out[21]: {'lastname': 'Poppins', 'firstname': 'Mary', 'title': None}

In [22]: dict(zip_longest(header[::-1], person_3[::-1]))
Out[22]: {'lastname': 'Smith', 'firstname': None, 'title': None}
```

Since the zip\* functions need to work on general iterables, they don't support filling "from the left": that would require exhausting the iterable first. Here we can simply flip things ourselves.
```
def magic_zip(*lists):
    # Left-pad each list with None up to the length of the longest list,
    # then zip the padded lists together.
    max_len = max(map(len, lists))
    return zip(*([None] * (max_len - len(l)) + l for l in lists))
```
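A quick sanity check of the padding approach above against the example data from the question:

```python
def magic_zip(*lists):
    # Left-pad each list with None up to the longest length, then zip.
    max_len = max(map(len, lists))
    return zip(*([None] * (max_len - len(l)) + l for l in lists))

header = ["title", "firstname", "lastname"]
person_3 = ["Smith"]

print(dict(magic_zip(header, person_3)))
# {'title': None, 'firstname': None, 'lastname': 'Smith'}
```

Note that, unlike the `zip_longest` answers, this one requires the inputs to be lists (it calls `len()` and concatenates), not arbitrary iterables.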
25,438,170
Input:

```
A B C
D E F
```

This file is NOT exclusively tab-delimited; some entries are space-delimited so that they look tab-delimited (which is annoying). I tried reading in the file with the `csv` module using the canonical tab-delimited option, hoping it wouldn't mind a few spaces (needless to say, my output came out botched with this code):

```
with open('file.txt') as f:
    input = csv.reader(f, delimiter='\t')
    for row in input:
        print row
```

I then tried replacing the second line with `csv.reader('\t'.join(f.split()))` to take advantage of [Remove whitespace in Python using string.whitespace](https://stackoverflow.com/questions/1898656/remove-whitespace-in-python-using-string-whitespace/1898835#1898835), but I got the error `AttributeError: 'file' object has no attribute 'split'`.

I also looked at [Can I import a CSV file and automatically infer the delimiter?](https://stackoverflow.com/questions/16312104/python-import-csv-file-delimiter-or), but there the OP imported either semicolon-delimited or comma-delimited files, not a file that is a random mixture of both kinds of delimiters.

Can the `csv` module handle reading files with a mix of delimiters, or should I try a different approach (e.g., not use the `csv` module)? I am hoping there is a way to read in a file with a mixture of delimiters and automatically turn it into a tab-delimited file.
2014/08/22
[ "https://Stackoverflow.com/questions/25438170", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3878253/" ]
Just use `.split()`:

```
csv='''\
A\tB\tC
D E F
'''

data=[]
for line in csv.splitlines():
    data.append(line.split())

print data
# [['A', 'B', 'C'], ['D', 'E', 'F']]
```

Or, more succinctly:

```
>>> [line.split() for line in csv.splitlines()]
[['A', 'B', 'C'], ['D', 'E', 'F']]
```

For a file, something like:

```
with open(fn, 'r') as fin:
    data=[line.split() for line in fin]
```

It works because [str.split()](https://docs.python.org/2/library/stdtypes.html#str.split) without arguments splits on any run of whitespace, even when the run is longer than one character or mixes tabs and spaces:

```
>>> '1\t\t\t2 3\t \t \t4'.split()
['1', '2', '3', '4']
```
Why not just roll your own splitter rather than using the `csv` module?

```
delimiters = [',', ' ', '\t']
# A sentinel string that should never appear in the data itself
unique = '[**This is a unique delimiter**]'

with open(fileName) as f:
    for l in f:
        for d in delimiters:
            l = unique.join(l.split(d))
        # Filter out the empty strings produced by runs of delimiters
        row = [field for field in l.strip().split(unique) if field]
```
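The standard-library `re` module gives the same effect in a single pass; this is an alternative to the sentinel approach above, shown on a made-up sample line:

```python
import re

# Made-up sample line mixing commas, a tab, and repeated blanks.
line = 'A,B\tC  D'

# Split on any run of commas, tabs, or blanks; drop empty fields.
row = [field for field in re.split(r'[,\t ]+', line.strip()) if field]
print(row)  # ['A', 'B', 'C', 'D']
```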
`.split()` is an easy and nice solution when consecutive, arbitrarily mixed tabs and blanks should count as one delimiter; however, it breaks down when a value contains a blank (enclosed in quote marks).

First, replace each tab in the text file with a single blank `' '`; this simplifies the situation to "consecutive runs of blanks as one delimiter". There is a good example of replacing a pattern throughout a file here: <https://www.safaribooksonline.com/library/view/python-cookbook/0596001673/ch04s04.html>

**Note 1:** Do NOT replace tabs with `''` (the empty string), because a delimiter may consist of tabs only.

**Note 2:** This approach does NOT work if a tab character (`\t`) appears inside a value enclosed in quote marks.

Then use Python's `csv` module with the delimiter set to `' '` (one blank) and `skipinitialspace=True` to ignore consecutive blanks.
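A minimal sketch of that two-step approach; the sample data is made up, with one quoted value that contains a blank:

```python
import csv
import io

# Made-up sample: mixed tabs and blanks, one quoted value containing a blank.
raw = 'A\tB  "C D"\nE F\tG\n'

# Step 1: turn every tab into a single blank.
normalized = raw.replace('\t', ' ')

# Step 2: parse with a one-blank delimiter; skipinitialspace collapses
# the extra blanks that follow each delimiter, and quoting keeps "C D" whole.
rows = list(csv.reader(io.StringIO(normalized), delimiter=' ', skipinitialspace=True))
print(rows)  # [['A', 'B', 'C D'], ['E', 'F', 'G']]
```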
46,964,509
I am following a tutorial on using selenium and python to make a web **scraper** for twitter, and I ran into this error:

```
File "C:\Python34\lib\site-packages\selenium\webdriver\chrome\webdriver.py", line 62, in __init__
    self.service.start()
  File "C:\Python34\lib\site-packages\selenium\webdriver\common\service.py", line 81, in start
    os.path.basename(self.path), self.start_error_message)
selenium.common.exceptions.WebDriverException: Message: 'chromedriver' executable needs to be in PATH. Please see https://sites.google.com/a/chromium.org/chromedriver/home
```

I went to the website specified in the error and downloaded the driver. Then I added it to PATH by going to System Properties > Advanced > Environment Variables > Path > New and adding the exe file. I tried again and I still got the error.
2017/10/26
[ "https://Stackoverflow.com/questions/46964509", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7922147/" ]
Another way is to download and unzip [chromedriver](https://chromedriver.storage.googleapis.com/index.html?path=2.33/) and put `chromedriver.exe` in `C:\Python27\Scripts`. Then you don't need to provide the path of the driver; just

```
driver = webdriver.Chrome()
```

will work.
If you take a look at your exception:

```
selenium.common.exceptions.WebDriverException: Message: 'chromedriver' executable needs to be in PATH. Please see https://sites.google.com/a/chromium.org/chromedriver/home
```

At the [indicated url](https://sites.google.com/a/chromium.org/chromedriver/home), you can see [Getting started with ChromeDriver on Desktop (Windows, Mac, Linux)](https://sites.google.com/a/chromium.org/chromedriver/getting-started), where it says:

> Any of these steps should do the trick:
>
> 1. include the ChromeDriver location in your PATH environment variable
> 2. (Java only) specify its location via the webdriver.chrome.driver system property (see sample below)
> 3. (Python only) include the path to ChromeDriver when instantiating webdriver.Chrome (see sample below)

If you are not able to include your ChromeDriver location in your PATH environment variable, you could try the third option:

```
import time
from selenium import webdriver

driver = webdriver.Chrome('/path/to/chromedriver')  # Optional argument; if not specified, selenium searches PATH.
driver.get('http://www.google.com')
```
4,663,024
Hey, I would like to be able to perform [this](https://stackoverflow.com/questions/638048/how-do-i-sum-the-first-value-in-each-tuple-in-a-list-of-tuples-in-python), but being selective about which items I sum up. Let's say, that same example, but only adding up the first number from the 3rd and 4th tuples.
2011/01/11
[ "https://Stackoverflow.com/questions/4663024", "https://Stackoverflow.com", "https://Stackoverflow.com/users/556344/" ]
Something like:

```
sum(int(tuple_list[i][0]) for i in range(3, 5))
```

`range(x, y)` generates the integers from x (included) to y (excluded) with a step of 1; `range(x, y, step)` does the same but increments by `step`. You can find the official documentation [here](http://docs.python.org/library/functions.html#range).

Or you can do:

```
sum(float(close[4]) for close in tickers[30:40])
```
If you want to limit by some property of each element, you can use [`filter()`](http://docs.python.org/library/functions.html#filter) before feeding the list to the code posted in your link. This lets you write whatever filter you want. It doesn't work for the exact example you gave (which selects by position), but it seemed like you were more interested in the general case.

```
sum(pair[0] for pair in filter(PREDICATE_FUNCTION_OR_LAMBDA, list_of_pairs))
```
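For instance, with a hypothetical predicate that keeps only pairs whose second element is even (the data and predicate here are illustrative, not from the question):

```python
# Sum the first elements of the pairs whose second element is even.
list_of_pairs = [(1, 2), (3, 5), (5, 6), (7, 9)]

total = sum(pair[0] for pair in filter(lambda p: p[1] % 2 == 0, list_of_pairs))
print(total)  # 1 + 5 = 6
```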
```
>>> l1
[(0, 2), (1, 3), (2, 4), (3, 5), (4, 6), (5, 7), (6, 8), (7, 9), (8, 10), (9, 11)]
>>> sum([el[0] for (nr, el) in enumerate(l1) if nr in [3, 4]])
7
>>>
```
I haven't seen an answer using `reduce` yet:

`reduce(lambda sumSoFar, (tuple0, tuple1): sumSoFar + tuple0, list, 0)`

In essence, sum is identical to `reduce(int.__add__, list, 0)`.

Edit: I didn't read the predicate part. Easily fixed, but probably not the best answer anymore:

```
predicate = lambda x: x == 2 or x == 4
reduce(lambda sumSoFar, (t0, t1): sumSoFar + (t0 if predicate(t0) else 0), list, 0)
```
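Tuple parameters in `lambda`, as used above, only work in Python 2. A Python 3 sketch of the same `reduce` idea, selecting the 3rd and 4th tuples by slicing instead of a predicate (sample data is illustrative):

```python
from functools import reduce

pairs = [(0, 2), (1, 3), (2, 4), (3, 5), (4, 6)]

# Sum the first elements of the tuples at indices 3 and 4.
total = reduce(lambda acc, pair: acc + pair[0], pairs[3:5], 0)
print(total)  # 3 + 4 = 7
```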
64,773,690
I'm new to python and I'm trying to use the census geocoding services API to geocode addresses and then convert the output to a dataframe. I've been able to read in my address file and I can see the output, but I can't figure out how to import it into a dataframe. I provided the code I used below as well as the contents of the address file. The output does not appear to be in JSON format, but rather CSV. I tried to import the output as I would a CSV file, but I couldn't figure out how to import the variable that way, and I couldn't figure out how to export the output to a CSV file that I could then import. The URL describing the API is <https://geocoding.geo.census.gov/geocode...es_API.pdf>

```
import requests
import pandas as pd
import json

url = 'https://geocoding.geo.census.gov/geocoder/geographies/addressbatch'
payload = {'benchmark': 'Public_AR_Current', 'vintage': 'Current_Current'}
files = {'addressFile': ('C:\PYTHON_CLASS\CSV\ADDRESS_SAMPLE.csv', open('C:\PYTHON_CLASS\CSV\ADDRESS_SAMPLE.csv', 'rb'), 'text/csv')}
response = requests.post(url, files=files, data=payload)
type(response)
print(response.text)
```

I tried the code below (among many other versions), which is how I would normally import a CSV file, but it generates the error message "Invalid file path or buffer object type: <class 'requests.models.Response'>":

```
df = pd.read_csv(response)
```

The contents of the address file I used to generate the geocoding:

```
id,address,city,state,zipcode
1,1600 Pennsylvania Avenue NW, Washington,DC,20500
2,4 S Market St,Boston,MA,02109
3,1200 Getty Center Drive,Los Angeles,CA,90049
4,1800 Congress Ave,Austin,TX,78701
5,One Caesars Palace Drive,Las Vegas,NV,89109
6,1060 West Addison,Chicago,IL,60613
7,One East 161st Street,Bronx,NY,10451
8,201 E Jefferson St,Phoenix,AZ,85004
9,600 N 1st Ave,Minneapolis,MN,55403
10,400 W Church St,Orlando,FL,32801
```

The output of `print(response.text)` is:

```
"1","1600 Pennsylvania Avenue NW, Washington, DC, 20500","Match","Non_Exact","1600 PENNSYLVANIA AVE NW, WASHINGTON, DC, 20006","-77.03535,38.898754","76225813","L","11","001","006202","1031"
"2","4 S Market St, Boston, MA, 02109","Match","Exact","4 S MARKET ST, BOSTON, MA, 02109","-71.05566,42.359936","85723841","R","25","025","030300","2017"
"3","1200 Getty Center Drive, Los Angeles, CA, 90049","Match","Exact","1200 GETTY CENTER DR, LOS ANGELES, CA, 90049","-118.47564,34.08857","142816014","L","06","037","262302","1005"
"4","1800 Congress Ave, Austin, TX, 78701","Match","Exact","1800 CONGRESS AVE, AUSTIN, TX, 78701","-97.73847,30.279745","63946318","L","48","453","000700","1007"
"5","One Caesars Palace Drive, Las Vegas, NV, 89109","No_Match"
"6","1060 West Addison, Chicago, IL, 60613","Match","Non_Exact","1060 W ADDISON ST, CHICAGO, IL, 60613","-87.65581,41.947227","111863716","R","17","031","061100","1014"
"7","One East 161st Street, Bronx, NY, 10451","No_Match"
"8","201 E Jefferson St, Phoenix, AZ, 85004","Match","Exact","201 E JEFFERSON ST, PHOENIX, AZ, 85004","-112.07113,33.44675","128300920","L","04","013","114100","1058"
"9","600 N 1st Ave, Minneapolis, MN, 55403","No_Match"
"id","address, city, state, zipcode","No_Match"
"10","400 W Church St, Orlando, FL, 32801","Match","Exact","400 W CHURCH ST, ORLANDO, FL, 32801","-81.38436,28.540176","94416807","L","12","095","010500","1002"
```

The raw value of `response.text` is the same data as a single string with `\n` separators.

When I tried

```
df = pd.read_csv(io.StringIO(response), sep=',', header=None, quoting=csv.QUOTE_ALL)
```

I got the error message

```
TypeError                                 Traceback (most recent call last)
<ipython-input-60-55e6c5ac54af> in <module>
----> 1 df = pd.read_csv(io.StringIO(response), sep=',', header=None, quoting=csv.QUOTE_ALL)

TypeError: initial_value must be str or None, not Response
```

When I tried

```
df = pd.read_csv(io.StringIO(response.replace('" "', '"\n"')), sep=',', header=None, quoting=csv.QUOTE_ALL)
```

I got

```
AttributeError                            Traceback (most recent call last)
<ipython-input-61-a92a7ffcf170> in <module>
----> 1 df = pd.read_csv(io.StringIO(response.replace('" "', '"\n"')), sep=',', header=None, quoting=csv.QUOTE_ALL)

AttributeError: 'Response' object has no attribute 'replace'
```
2020/11/10
[ "https://Stackoverflow.com/questions/64773690", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14614221/" ]
To address both the legend and palette issue at the same time. First you could convert the data frame into long format using `pivot_longer()`, then add a column that specifies the colour you want with the associated variable. You can map those colours using `scale_colour_manual()`. Not the most elegant solution but I found it useful when dealing with manually set palettes. ``` library(ggplot2) library(dplyr) library(tidyr) library(tibble) df <- data.frame(date = as.Date(c("2020-08-05","2020-08-06","2020-08-07","2020-08-08","2020-08-09","2020-08-10","2020-08-11","2020-08-12")), State.1_day=c(0.8,0.3,0.2,0.5,0.6,0.7,0.8,0.7), State.2_day=c(0.4,0.2,0.1,0.2,0.3,0.4,0.5,0.6), State.1_night=c(0.7,0.8,0.5,0.4,0.3,0.2,0.3,0.2), State.2_night=c(0.5,0.6,0.7,0.4,0.3,0.5,0.6,0.7)) line_colors_a <- RColorBrewer::brewer.pal(6, "Blues")[c(3,6)] line_colors_a line_colors_b <- RColorBrewer::brewer.pal(6, "Greens")[c(3,6)] line_colors_b line_colors <- c(line_colors_a,line_colors_b) df1 <- df %>% pivot_longer(-date) %>% mutate(colour = case_when( name == "State.1_day" ~ line_colors[1], name == "State.1_night" ~ line_colors[2], name == "State.2_day" ~ line_colors[3], name == "State.2_night" ~ line_colors[4] )) ggplot(df1, aes(x = date, y = value, colour = name)) + geom_line(size = 1) + scale_x_date(date_labels = "%Y-%m-%d") + scale_colour_manual(values = tibble::deframe(distinct(df1, colour, name))) + theme_bw() + labs(y = "% time", x = "Date") + theme(strip.text = element_text(face="bold", size=18), strip.background=element_rect(fill="white", colour="black",size=2), axis.title.x =element_text(margin = margin(t = 10, r = 0, b = 0, l = 0),size = 20), axis.title.y =element_text(margin = margin(t = 0, r = 10, b = 0, l = 0),size = 20), axis.text.x = element_text(angle = 70, hjust = 1,size = 15), axis.text.y = element_text(angle = 0, hjust = 0.5,size = 15), axis.line = element_line(), panel.grid.major= element_blank(), panel.grid.minor = element_blank(), legend.text=element_text(size=18), 
legend.title = element_text(size=19, face = "bold"), legend.key=element_blank(), legend.position = "top", panel.border = element_blank(), strip.placement = "outside") ``` [![enter image description here](https://i.stack.imgur.com/WS2sf.png)](https://i.stack.imgur.com/WS2sf.png)
Since @EJJ's reply did not work for some reason, I used a similar approach but using `melt()`. Here is the code and the plot: ``` colnames(df) <- c("date","Act_day","Rest_day","Act_night","Rest_night") df <- melt(df, id.vars=c("date")) colnames(df) <- c("date","State","value") Plot <- ggplot(df,aes(x = date, y = value, colour = State)) + geom_line(size = 1) + scale_x_date(labels = date_format("%Y-%m-%d")) + scale_color_discrete(name = "States", labels = c("Active_day", "Active_night", "Resting_day", "Resting_night")) + theme_bw() + labs(y = "% time", x = "Date") + theme(strip.text = element_text(face="bold", size=18), strip.background=element_rect(fill="white", colour="black",size=2), axis.title.x =element_text(margin = margin(t = 10, r = 0, b = 0, l = 0),size = 20), axis.title.y =element_text(margin = margin(t = 0, r = 10, b = 0, l = 0),size = 20), axis.text.x = element_text(angle = 70, hjust = 1,size = 15), axis.text.y = element_text(angle = 0, hjust = 0.5,size = 15), axis.line = element_line(), panel.grid.major= element_blank(), panel.grid.minor = element_blank(), legend.text=element_text(size=18), legend.title = element_text(size=19, face = "bold"), legend.key=element_blank(), legend.position = "top", panel.border = element_blank(), strip.placement = "outside") + scale_color_manual(values = c("Act_day" = line_colors[1], "Act_night" = line_colors[2], "Rest_day" = line_colors[3], "Rest_night" = line_colors[4])) Plot ``` [![enter image description here](https://i.stack.imgur.com/YD4qw.png)](https://i.stack.imgur.com/YD4qw.png)
26,345,185
I’m having trouble using python’s multiprocessing module. This is the first time I’ve tried using the module. I’ve tried simplifying my processing to the bare bones, but keep getting the same error. I’m using python 2.7.2, and Windows 7. The script I’m trying to run is called `learnmp.py`, and the error message says that the problem is that it can't find module `learnmp`. ``` import multiprocessing def doSomething(): """worker function""" print 'something' return if __name__ == '__main__': jobs = [] for i in range(2): p = multiprocessing.Process(target=doSomething) jobs.append(p) p.start() ``` The error is : ``` File “<string>”, line 1, in <module> File “C:\Python27\ArcGISx6410.1\lib\multiprocessing\forking.py”, line 373, in main prepare(preparation_data) File “C:\Python27\ArcGISx6410.1\lib\multiprocessing\forking.py”, line 482, in prepare file, path_name, etc = imp.find_module (main_name, dirs) ImportError: No module named learnmp ``` What’s causing the error, and how can I solve it? EDIT: I still don't know what was causing the error, but changing the file name eliminated it.
2014/10/13
[ "https://Stackoverflow.com/questions/26345185", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2241053/" ]
I know it's been a while, but I ran into this same error, also using the version of Python distributed with ArcGIS, and I've found a solution which at least worked in my case. The problem that I had was that I was calling my program name, Test.py, as test.py. Note the difference in case. ``` c:\python27\arcgisx6410.2\python.exe c:\temp\test.py c:\python27\arcgisx6410.2\python.exe c:\temp\Test.py ``` This isn't normally an issue if you're not using the multiprocessing library. However, when you write: ``` if __name__ == '__main__': ``` what appears to be happening is that the part of the program in main is being bound to the name of the python file. In my case that was test. However, there is no test, just Test. So although Windows will allow case-incorrect filenames in cmd, PowerShell, and in batch files, Python's multiprocessing library balks at this and throws a nasty series of errors. Hopefully this helps someone.
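The `if __name__ == '__main__':` guard that this behaviour revolves around can be sketched in a minimal, self-contained example (Python 3 syntax, unlike the asker's 2.7 script; the `multiprocessing.Queue` and the `run_workers` helper are illustrative additions, not part of the original code, added so the parent process can confirm the workers actually ran):

```python
import multiprocessing

def do_something(q):
    """Worker function: push a token so the parent can see it ran."""
    q.put('something')

def run_workers(n=2):
    # On Windows, multiprocessing spawns a fresh interpreter that re-imports
    # this file by name -- which is why the guard below (and a file name that
    # matches its import name exactly, including case) matters.
    q = multiprocessing.Queue()
    jobs = [multiprocessing.Process(target=do_something, args=(q,)) for _ in range(n)]
    for p in jobs:
        p.start()
    for p in jobs:
        p.join()
    return [q.get() for _ in range(n)]

if __name__ == '__main__':
    print(run_workers())
```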
Looks like you might be going down a rabbit-hole looking into `multiprocessing`. As the traceback shows, your python install is trying to look in the ArcGIS version of python before actually looking at your system install. My guess is that the version of python that ships with ArcGIS is slightly customized for some reason or another and can't find your python script. The question then becomes: > > Why is your Windows machine looking in ArcGIS for python? > > > Without looking at your machine at a slightly lower level I can't quite be sure, but if I had to guess, you probably added the ArcGIS directory to your `PATH` variable in front of the standard python directory, so it looks in ArcGIS first. If you move the ArcGIS path to the end of your `PATH` variable it should resolve the problem. Changing your `PATH` variable: <http://www.computerhope.com/issues/ch000549.htm>
26,345,185
I’m having trouble using python’s multiprocessing module. This is the first time I’ve tried using the module. I’ve tried simplifying my processing to the bare bones, but keep getting the same error. I’m using python 2.7.2, and Windows 7. The script I’m trying to run is called `learnmp.py`, and the error message says that the problem is that it can't find module `learnmp`. ``` import multiprocessing def doSomething(): """worker function""" print 'something' return if __name__ == '__main__': jobs = [] for i in range(2): p = multiprocessing.Process(target=doSomething) jobs.append(p) p.start() ``` The error is : ``` File “<string>”, line 1, in <module> File “C:\Python27\ArcGISx6410.1\lib\multiprocessing\forking.py”, line 373, in main prepare(preparation_data) File “C:\Python27\ArcGISx6410.1\lib\multiprocessing\forking.py”, line 482, in prepare file, path_name, etc = imp.find_module (main_name, dirs) ImportError: No module named learnmp ``` What’s causing the error, and how can I solve it? EDIT: I still don't know what was causing the error, but changing the file name eliminated it.
2014/10/13
[ "https://Stackoverflow.com/questions/26345185", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2241053/" ]
I know it's been a while, but I ran into this same error, also using the version of Python distributed with ArcGIS, and I've found a solution which at least worked in my case. The problem that I had was that I was calling my program name, Test.py, as test.py. Note the difference in case. ``` c:\python27\arcgisx6410.2\python.exe c:\temp\test.py c:\python27\arcgisx6410.2\python.exe c:\temp\Test.py ``` This isn't normally an issue if you're not using the multiprocessing library. However, when you write: ``` if __name__ == '__main__': ``` what appears to be happening is that the part of the program in main is being bound to the name of the python file. In my case that was test. However, there is no test, just Test. So although Windows will allow case-incorrect filenames in cmd, PowerShell, and in batch files, Python's multiprocessing library balks at this and throws a nasty series of errors. Hopefully this helps someone.
Microsoft Visual C++ 9.0 is required for some Python modules to work on Windows, so download the package below and it should work: `http://aka.ms/vcpython27` This package contains the compiler and the set of system headers necessary for producing binary wheels for Python 2.7 packages.
74,113,894
I have a request respond from api and it looks like this: ``` '224014@@@1;1=8.4=0;2=33=0;3=9.4=0@@@2;1=15=0;2=3.3=1;3=4.2=0;4=5.7=0;5=9.4=0;6=22=0@@@3;1=17=0;2=7.4=0;3=27=0@@@4;1=14=0;2=7.8=0;3=5.9=0;4=23=0;5=4.0=1' ``` I had splited them for your EASY READING with some explaination: ``` [1]The 6 digital numbers string mens UPDATE TIME. [2]It sets apart something like'@@@X'and the X means Race No. [3]For each race (after '@@@X'),there is a pattern for each horse. [4]For each horse,Horse_No,Odd & status are inside the pattern(eg:1=8.4=0)and they were connected using '=' [5]Number of races and number of horses are not certain(maybe more or less) (UPDATE TIME)'224014 (Race 1)@@@1;1=8.4=0;2=33=0;3=9.4=0 (Race 2)@@@2;1=15=0;2=3.3=1;3=4.2=0;4=5.7=0;5=9.4=0;6=22=0 (Race 3)@@@3;1=17=0;2=7.4=0;3=27=0 (Race 4)@@@4;1=14=0;2=7.8=0;3=5.9=0;4=23=0;5=4.0=1' ``` Expcet output using python (i guess regex is necessary): ``` [ {'Race_No':1,'Horse_No':1,"Odd":8.4,'status':0,'updatetime':224014}, {'Race_No':1,'Horse_No':2,"Odd":33,'status':0,'updatetime':224014}, {'Race_No':1,'Horse_No':3,"Odd":9.4,'status':0,'updatetime':224014}, {'Race_No':2,'Horse_No':1,"Odd":15,'status':0,'updatetime':224014}, {'Race_No':2,'Horse_No':2,"Odd":3.3,'status':1,'updatetime':224014}, {'Race_No':2,'Horse_No':3,"Odd":4.2,'status':0,'updatetime':224014}, {'Race_No':2,'Horse_No':4,"Odd":5.7,'status':0,'updatetime':224014}, {'Race_No':2,'Horse_No':5,"Odd":5.9,'status':0,'updatetime':224014}, {'Race_No':2,'Horse_No':6,"Odd":22,'status':0,'updatetime':224014}, {'Race_No':3,'Horse_No':1,"Odd":17,'status':0,'updatetime':224014}, {'Race_No':3,'Horse_No':2,"Odd":7.4,'status':0,'updatetime':224014}, {'Race_No':3,'Horse_No':3,"Odd":27,'status':0,'updatetime':224014}, {'Race_No':4,'Horse_No':1,"Odd":14,'status':0,'updatetime':224014}, {'Race_No':4,'Horse_No':2,"Odd":7.8,'status':0,'updatetime':224014}, {'Race_No':4,'Horse_No':3,"Odd":5.9,'status':0,'updatetime':224014}, 
{'Race_No':4,'Horse_No':4,"Odd":23,'status':0,'updatetime':224014}, {'Race_No':4,'Horse_No':5,"Odd":4.0,'status':1,'updatetime':224014} ] ```
2022/10/18
[ "https://Stackoverflow.com/questions/74113894", "https://Stackoverflow.com", "https://Stackoverflow.com/users/19998897/" ]
Firstly, `sum` is not actually a keyword, but it is the name of the built-in `sum()` function, so don't call any of your variables "sum" (the rewrite below uses `total` instead). To split the string, try:

```py
total = 0
sq = ""
for i in range(2, 1000 + 1, 2):
    total += i
    if i < 1000:
        sq = sq + str(i) + ", "
    else:
        sq = sq + str(i)
    if i % 40 == 0:
        sq += "\n"
print(sq, end="\n")
print("Sum of all even numbers within 1 and 1000 =", total)
```
#### solution ```py sum=0 for i in range(2,1001,2): sum+=i if i%20 == 2: print("\n{}".format(i),end="") # print new line per 20 numbers else: print(", {}".format(i),end="") print("\nSum of all even numbers within 1 and 1000 =",sum) ``` * Output: ```bash 2, 4, 6, 8, 10, 12, 14, 16, 18, 20 22, 24, 26, 28, 30, 32, 34, 36, 38, 40 ... 962, 964, 966, 968, 970, 972, 974, 976, 978, 980 982, 984, 986, 988, 990, 992, 994, 996, 998, 1000 Sum of all even numbers within 1 and 1000 = 250500 ``` --- #### another solution with better run time performance ```py print("".join(["\n"+str(i) if i%20==2 else ", "+str(i) for i in range(2,1001,2)])) print("\nSum of all even numbers within 1 and 1000 =",sum(range(2,1001,2))) ```
74,113,894
I have a request respond from api and it looks like this: ``` '224014@@@1;1=8.4=0;2=33=0;3=9.4=0@@@2;1=15=0;2=3.3=1;3=4.2=0;4=5.7=0;5=9.4=0;6=22=0@@@3;1=17=0;2=7.4=0;3=27=0@@@4;1=14=0;2=7.8=0;3=5.9=0;4=23=0;5=4.0=1' ``` I had splited them for your EASY READING with some explaination: ``` [1]The 6 digital numbers string mens UPDATE TIME. [2]It sets apart something like'@@@X'and the X means Race No. [3]For each race (after '@@@X'),there is a pattern for each horse. [4]For each horse,Horse_No,Odd & status are inside the pattern(eg:1=8.4=0)and they were connected using '=' [5]Number of races and number of horses are not certain(maybe more or less) (UPDATE TIME)'224014 (Race 1)@@@1;1=8.4=0;2=33=0;3=9.4=0 (Race 2)@@@2;1=15=0;2=3.3=1;3=4.2=0;4=5.7=0;5=9.4=0;6=22=0 (Race 3)@@@3;1=17=0;2=7.4=0;3=27=0 (Race 4)@@@4;1=14=0;2=7.8=0;3=5.9=0;4=23=0;5=4.0=1' ``` Expcet output using python (i guess regex is necessary): ``` [ {'Race_No':1,'Horse_No':1,"Odd":8.4,'status':0,'updatetime':224014}, {'Race_No':1,'Horse_No':2,"Odd":33,'status':0,'updatetime':224014}, {'Race_No':1,'Horse_No':3,"Odd":9.4,'status':0,'updatetime':224014}, {'Race_No':2,'Horse_No':1,"Odd":15,'status':0,'updatetime':224014}, {'Race_No':2,'Horse_No':2,"Odd":3.3,'status':1,'updatetime':224014}, {'Race_No':2,'Horse_No':3,"Odd":4.2,'status':0,'updatetime':224014}, {'Race_No':2,'Horse_No':4,"Odd":5.7,'status':0,'updatetime':224014}, {'Race_No':2,'Horse_No':5,"Odd":5.9,'status':0,'updatetime':224014}, {'Race_No':2,'Horse_No':6,"Odd":22,'status':0,'updatetime':224014}, {'Race_No':3,'Horse_No':1,"Odd":17,'status':0,'updatetime':224014}, {'Race_No':3,'Horse_No':2,"Odd":7.4,'status':0,'updatetime':224014}, {'Race_No':3,'Horse_No':3,"Odd":27,'status':0,'updatetime':224014}, {'Race_No':4,'Horse_No':1,"Odd":14,'status':0,'updatetime':224014}, {'Race_No':4,'Horse_No':2,"Odd":7.8,'status':0,'updatetime':224014}, {'Race_No':4,'Horse_No':3,"Odd":5.9,'status':0,'updatetime':224014}, 
{'Race_No':4,'Horse_No':4,"Odd":23,'status':0,'updatetime':224014}, {'Race_No':4,'Horse_No':5,"Odd":4.0,'status':1,'updatetime':224014} ] ```
2022/10/18
[ "https://Stackoverflow.com/questions/74113894", "https://Stackoverflow.com", "https://Stackoverflow.com/users/19998897/" ]
Firstly, `sum` is not actually a keyword, but it is the name of the built-in `sum()` function, so don't call any of your variables "sum" (the rewrite below uses `total` instead). To split the string, try:

```py
total = 0
sq = ""
for i in range(2, 1000 + 1, 2):
    total += i
    if i < 1000:
        sq = sq + str(i) + ", "
    else:
        sq = sq + str(i)
    if i % 40 == 0:
        sq += "\n"
print(sq, end="\n")
print("Sum of all even numbers within 1 and 1000 =", total)
```
Uisng `textwrap` [Inbuilt Library](https://docs.python.org/3/library/textwrap.html) ``` import textwrap import re sum=0 sq="" for i in range (0+2,1000+1,2): sum+=i if i<1000: sq=sq+str(i)+"," else: sq=sq+str(i) #print(sq, end="\n") print('\n'.join(textwrap.wrap(sq, 20)))#Mask n here print("Sum of all even numbers within 1 and 1000 =",sum) ``` #output ``` 2,4,6,8,10,12,14,16,18,20,22,24,26,28,30 ,32,34,36,38,40,42,44,46,48,50,52,54,56, 58,60,62,64,66,68,70,72,74,76,78,80,82,8 4,86,88,90,92,94,96,98,100,102,104,106,1 08,110,112,114,116,118,120,122,124,126,1 28,130,132,134,136,138,140,142,144,146,1 48,150,152,154,156,158,160,162,164,166,1 68,170,172,174,176,178,180,182,184,186,1 88,190,192,194,196,198,200,202,204,206,2 08,210,212,214,216,218,220,222,224,226,2 28,230,232,234,236,238,240,242,244,246,2 48,250,252,254,256,258,260,262,264,266,2 68,270,272,274,276,278,280,282,284,286,2 88,290,292,294,296,298,300,302,304,306,3 08,310,312,314,316,318,320,322,324,326,3 28,330,332,334,336,338,340,342,344,346,3 48,350,352,354,356,358,360,362,364,366,3 68,370,372,374,376,378,380,382,384,386,3 88,390,392,394,396,398,400,402,404,406,4 08,410,412,414,416,418,420,422,424,426,4 28,430,432,434,436,438,440,442,444,446,4 48,450,452,454,456,458,460,462,464,466,4 68,470,472,474,476,478,480,482,484,486,4 88,490,492,494,496,498,500,502,504,506,5 08,510,512,514,516,518,520,522,524,526,5 28,530,532,534,536,538,540,542,544,546,5 48,550,552,554,556,558,560,562,564,566,5 68,570,572,574,576,578,580,582,584,586,5 88,590,592,594,596,598,600,602,604,606,6 08,610,612,614,616,618,620,622,624,626,6 28,630,632,634,636,638,640,642,644,646,6 48,650,652,654,656,658,660,662,664,666,6 68,670,672,674,676,678,680,682,684,686,6 88,690,692,694,696,698,700,702,704,706,7 08,710,712,714,716,718,720,722,724,726,7 28,730,732,734,736,738,740,742,744,746,7 48,750,752,754,756,758,760,762,764,766,7 68,770,772,774,776,778,780,782,784,786,7 88,790,792,794,796,798,800,802,804,806,8 08,810,812,814,816,818,820,822,824,826,8 
28,830,832,834,836,838,840,842,844,846,8 48,850,852,854,856,858,860,862,864,866,8 68,870,872,874,876,878,880,882,884,886,8 88,890,892,894,896,898,900,902,904,906,9 08,910,912,914,916,918,920,922,924,926,9 28,930,932,934,936,938,940,942,944,946,9 48,950,952,954,956,958,960,962,964,966,9 68,970,972,974,976,978,980,982,984,986,9 88,990,992,994,996,998,1000 Sum of all even numbers within 1 and 1000 = 250500 ``` Also ``` import textwrap import re sum=0 sq="" for i in range (0+2,1000+1,2): sum+=i if i<1000: sq=sq+str(i)+"," else: sq=sq+str(i) #print(sq, end="\n") print (textwrap.fill(sq, 20)) print("Sum of all even numbers within 1 and 1000 =",sum) ``` #same output
74,113,894
I have a request respond from api and it looks like this: ``` '224014@@@1;1=8.4=0;2=33=0;3=9.4=0@@@2;1=15=0;2=3.3=1;3=4.2=0;4=5.7=0;5=9.4=0;6=22=0@@@3;1=17=0;2=7.4=0;3=27=0@@@4;1=14=0;2=7.8=0;3=5.9=0;4=23=0;5=4.0=1' ``` I had splited them for your EASY READING with some explaination: ``` [1]The 6 digital numbers string mens UPDATE TIME. [2]It sets apart something like'@@@X'and the X means Race No. [3]For each race (after '@@@X'),there is a pattern for each horse. [4]For each horse,Horse_No,Odd & status are inside the pattern(eg:1=8.4=0)and they were connected using '=' [5]Number of races and number of horses are not certain(maybe more or less) (UPDATE TIME)'224014 (Race 1)@@@1;1=8.4=0;2=33=0;3=9.4=0 (Race 2)@@@2;1=15=0;2=3.3=1;3=4.2=0;4=5.7=0;5=9.4=0;6=22=0 (Race 3)@@@3;1=17=0;2=7.4=0;3=27=0 (Race 4)@@@4;1=14=0;2=7.8=0;3=5.9=0;4=23=0;5=4.0=1' ``` Expcet output using python (i guess regex is necessary): ``` [ {'Race_No':1,'Horse_No':1,"Odd":8.4,'status':0,'updatetime':224014}, {'Race_No':1,'Horse_No':2,"Odd":33,'status':0,'updatetime':224014}, {'Race_No':1,'Horse_No':3,"Odd":9.4,'status':0,'updatetime':224014}, {'Race_No':2,'Horse_No':1,"Odd":15,'status':0,'updatetime':224014}, {'Race_No':2,'Horse_No':2,"Odd":3.3,'status':1,'updatetime':224014}, {'Race_No':2,'Horse_No':3,"Odd":4.2,'status':0,'updatetime':224014}, {'Race_No':2,'Horse_No':4,"Odd":5.7,'status':0,'updatetime':224014}, {'Race_No':2,'Horse_No':5,"Odd":5.9,'status':0,'updatetime':224014}, {'Race_No':2,'Horse_No':6,"Odd":22,'status':0,'updatetime':224014}, {'Race_No':3,'Horse_No':1,"Odd":17,'status':0,'updatetime':224014}, {'Race_No':3,'Horse_No':2,"Odd":7.4,'status':0,'updatetime':224014}, {'Race_No':3,'Horse_No':3,"Odd":27,'status':0,'updatetime':224014}, {'Race_No':4,'Horse_No':1,"Odd":14,'status':0,'updatetime':224014}, {'Race_No':4,'Horse_No':2,"Odd":7.8,'status':0,'updatetime':224014}, {'Race_No':4,'Horse_No':3,"Odd":5.9,'status':0,'updatetime':224014}, 
{'Race_No':4,'Horse_No':4,"Odd":23,'status':0,'updatetime':224014}, {'Race_No':4,'Horse_No':5,"Odd":4.0,'status':1,'updatetime':224014} ] ```
2022/10/18
[ "https://Stackoverflow.com/questions/74113894", "https://Stackoverflow.com", "https://Stackoverflow.com/users/19998897/" ]
Firstly, `sum` is not actually a keyword, but it is the name of the built-in `sum()` function, so don't call any of your variables "sum" (the rewrite below uses `total` instead). To split the string, try:

```py
total = 0
sq = ""
for i in range(2, 1000 + 1, 2):
    total += i
    if i < 1000:
        sq = sq + str(i) + ", "
    else:
        sq = sq + str(i)
    if i % 40 == 0:
        sq += "\n"
print(sq, end="\n")
print("Sum of all even numbers within 1 and 1000 =", total)
```
Try this, ``` lst = [i for i in range(2,1001,2)] for i in range(0, len(lst), 20): print(','.join(str(i) for i in lst[i:i+20])) print(f'Sum of all even numbers within 1 and 1000 : {sum(lst)}') ```
44,211,461
What is the fastest way to combine 100 CSV files with headers into one with the following setup: 1. The total size of files is 200 MB. (The size is reduced to make the computation time visible) 2. The files are located on an SSD with a maximum speed of 240 MB/s. 3. The CPU has 4 cores so multi-threading and multiple processes are allowed. 4. There exists only one node (important for Spark) 5. The available memory is 15 GB. So the files easily fit into memory. 6. The OS is Linux (Debian Jessie) 7. The computer is actually a n1-standard-4 instance in Google Cloud. (The detailed setup was included to make the scope of the question more specific. The changes were made according to [the feedback here](https://meta.stackoverflow.com/questions/349793/why-is-benchmarking-a-specific-task-in-multiple-languages-considered-too-broad)) File 1.csv: ``` a,b 1,2 ``` File 2.csv: ``` a,b 3,4 ``` Final out.csv: ``` a,b 1,2 3,4 ``` According to my benchmarks the fastest from all the proposed methods is pure python. Is there any faster method? 
**Benchmarks (Updated with the methods from comments and posts):** ``` Method Time pure python 0.298s sed 1.9s awk 2.5s R data.table 4.4s R data.table with colClasses 4.4s Spark 2 40.2s python pandas 1min 11.0s ``` Versions of tools: ``` sed 4.2.2 awk: mawk 1.3.3 Nov 1996 Python 3.6.1 Pandas 0.20.1 R 3.4.0 data.table 1.10.4 Spark 2.1.1 ``` **Code in Jupyter notebooks:** sed: ``` %%time !head temp/in/1.csv > temp/merged_sed.csv !sed 1d temp/in/*.csv >> temp/merged_sed.csv ``` Pure Python all binary read-write with undocumented behavior of "next": ``` %%time with open("temp/merged_pure_python2.csv","wb") as fout: # first file: with open("temp/in/1.csv", "rb") as f: fout.write(f.read()) # now the rest: for num in range(2,101): with open("temp/in/"+str(num)+".csv", "rb") as f: next(f) # skip the header fout.write(f.read()) ``` awk: ``` %%time !awk 'NR==1; FNR==1{{next}} 1' temp/in/*.csv > temp/merged_awk.csv ``` R data.table: ``` %%time %%R filenames <- paste0("temp/in/",list.files(path="temp/in/",pattern="*.csv")) files <- lapply(filenames, fread) merged_data <- rbindlist(files, use.names=F) fwrite(merged_data, file="temp/merged_R_fwrite.csv", row.names=FALSE) ``` R data.table with colClasses: ``` %%time %%R filenames <- paste0("temp/in/",list.files(path="temp/in/",pattern="*.csv")) files <- lapply(filenames, fread,colClasses=c( V1="integer", V2="integer", V3="integer", V4="integer", V5="integer", V6="integer", V7="integer", V8="integer", V9="integer", V10="integer")) merged_data <- rbindlist(files, use.names=F) fwrite(merged_data, file="temp/merged_R_fwrite.csv", row.names=FALSE) ``` Spark (pyspark): ``` %%time df = spark.read.format("csv").option("header", "true").load("temp/in/*.csv") df.coalesce(1).write.option("header", "true").csv("temp/merged_pyspark.csv") ``` Python pandas: ``` %%time import pandas as pd interesting_files = glob.glob("temp/in/*.csv") df_list = [] for filename in sorted(interesting_files): df_list.append(pd.read_csv(filename)) full_df = 
pd.concat(df_list) full_df.to_csv("temp/merged_pandas.csv", index=False) ``` Data was generated by: ``` %%R df=data.table(replicate(10,sample(0:9,100000,rep=TRUE))) for (i in 1:100){ write.csv(df,paste0("temp/in/",i,".csv"), row.names=FALSE) } ```
2017/05/26
[ "https://Stackoverflow.com/questions/44211461", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3596337/" ]
According to the benchmarks in the question the fastest method is pure Python with undocumented "next()" function behavior with binary files. The method was proposed by [Stefan Pochmann](https://stackoverflow.com/users/1672429/stefan-pochmann) Benchmarks: **Benchmarks (Updated with the methods from comments and posts):** ``` Method Time pure python 0.298s sed 1.9s awk 2.5s R data.table 4.4s R data.table with colClasses 4.4s Spark 2 40.2s python pandas 1min 11.0s ``` Versions of tools: ``` sed 4.2.2 awk: mawk 1.3.3 Nov 1996 Python 3.6.1 Pandas 0.20.1 R 3.4.0 data.table 1.10.4 Spark 2.1.1 ``` Pure Python code: ``` with open("temp/merged_pure_python2.csv","wb") as fout: # first file: with open("temp/in/1.csv", "rb") as f: fout.write(f.read()) # now the rest: for num in range(2,101): with open("temp/in/"+str(num)+".csv", "rb") as f: next(f) # skip the header fout.write(f.read()) ```
`sed` is probably the fastest. I would also propose an `awk` alternative ``` awk 'NR==1; FNR==1{next} 1' file* > output ``` prints the first line from the first file, then skips all other first lines from the rest of the files. Timings: I tried 10,000 lines long 100 files each around 200MB (not sure). Here is a worst timing on my server. ``` real 0m0.429s user 0m0.360s sys 0m0.068s ``` server specs (little monster) ``` $ lscpu Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian CPU(s): 12 On-line CPU(s) list: 0-11 Thread(s) per core: 1 Core(s) per socket: 6 Socket(s): 2 NUMA node(s): 1 Vendor ID: GenuineIntel CPU family: 6 Model: 63 Model name: Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz Stepping: 2 CPU MHz: 2394.345 BogoMIPS: 4789.86 Virtualization: VT-x L1d cache: 32K L1i cache: 32K L2 cache: 256K L3 cache: 15360K NUMA node0 CPU(s): 0-11 ```
42,544,150
I am using python-3.x, and I am trying to do mutation on a binary string that will flip one bit of the elements from 0 to 1 or 1 to 0 by random, I tried some methods but didn't work I don't know where is the problem: ``` x=[0, 0, 0, 0, 0] def mutation (x, muta): for i in range(len(x)): if random.random() < muta: x[i] = type(x[i])(not x[i]) return x, print (x) ``` The output for example should be x=[0, 0, 0, 1, 0] or x=[1, 0, 0, 0, 0] and so on.... Also, I tried this one: ``` MUTATION_RATE = 0.5 CHROMO_LEN = 6 def mutate(x): x = "" for i in range(CHROMO_LEN): if (random.random() < MUTATION_RATE): if (x[i] == 1): x += 0 else: x += 1 else: x += x[i] return x print(x) ``` please any suggestion or advice will be appreciated
2017/03/01
[ "https://Stackoverflow.com/questions/42544150", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7632116/" ]
If you want your object not to pass through other objects, you should use a collider with [**isTrigger**](https://docs.unity3d.com/ScriptReference/Collider-isTrigger.html) unchecked (isTrigger should be false) and handle the [OnCollisionEnter](https://docs.unity3d.com/ScriptReference/MonoBehaviour.OnCollisionEnter.html) event instead of OnTriggerEnter.
You can just use `Bounds` instead of making many colliders.
18,014,633
Consider the following simple python code: ``` f=open('raw1', 'r') i=1 for line in f: line1=line.split() for word in line1: print word, print '\n' ``` In the first for loop i.e "for line in f:", how does python know that I want to read a line and not a word or a character? The second loop is clearer as line1 is a list. So the second loop will iterate over the list elemnts.
2013/08/02
[ "https://Stackoverflow.com/questions/18014633", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2625987/" ]
Python has a notion of what are called "iterables". They're things that know how to let you traverse some data they hold. Some common iterables are lists, sets, dicts, pretty much every data structure. Files are no exception to this. The way things become iterable is by defining a method that returns an object with a `next` method. This `next` method is meant to be called repeatedly, returning the next piece of data each time. The `for foo in bar` loops are actually just calling the `next` method repeatedly behind the scenes. For files, the `next` method returns lines, that's it. It doesn't "know" that you want lines; it's just always going to return lines. The reason for this is that the common case for file traversal is by line, and if you want words,

```
for word in (word for line in f for word in line.split(' ')):
    ...
```

works just fine.
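A short sketch of the protocol described above, using `io.StringIO` as a stand-in for a real file so the example is self-contained:

```python
import io

# A StringIO object behaves like an open text file: iter() returns the
# object itself, and each next() call yields one line.
f = io.StringIO("first line\nsecond line\n")

it = iter(f)
print(it is f)        # a file-like object is its own iterator

print(next(it))       # 'first line\n'
print(next(it))       # 'second line\n'

# The words-from-lines pattern from the answer:
f2 = io.StringIO("a b\nc d\n")
words = [word for line in f2 for word in line.split()]
print(words)          # ['a', 'b', 'c', 'd']
```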
In Python, the **for..in** syntax is used over iterables (elements that can be iterated upon). For a file object, the iterator is the file itself. Please refer [here](http://docs.python.org/release/2.5.2/lib/bltin-file-objects.html) to the documentation of the **next()** method - excerpt pasted below:

> A file object is its own iterator, for example iter(f) returns f (unless f is closed). When a file is used as an iterator, typically in a for loop (for example, for line in f: print line), the next() method is called repeatedly. This method returns the next input line, or raises StopIteration when EOF is hit when the file is open for reading (behavior is undefined when the file is open for writing). In order to make a for loop the most efficient way of looping over the lines of a file (a very common operation), the next() method uses a hidden read-ahead buffer. As a consequence of using a read-ahead buffer, combining next() with other file methods (like readline()) does not work right. However, using seek() to reposition the file to an absolute position will flush the read-ahead buffer. New in version 2.3.
32,127,602
After instantiating a deck (`deck = Deck()`), calling `deck.show_deck()` just prints out "two of diamonds" 52 times. The 'copy' part is as per [this answer](https://stackoverflow.com/questions/2196956/add-an-object-to-a-python-list), but doesn't seem to help. Any suggestions? ``` import copy from card import Card class Deck: card_ranks = ['ace','king','queen','jack','ten','nine','eight','seven','six','five','four','three','two'] card_suites = ['clubs','hearts','spades','diamonds'] deck = [] def __init__(self): #create a deck of 52 cards for suite in Deck.card_suites: for rank in Deck.card_ranks: Deck.deck.append(copy.deepcopy(Card(card_rank=rank, card_suite=suite))) def show_deck(self): for item in Deck.deck: print item.get_name() ``` Card: ``` class Card: card_name = '' def __init__(self, card_rank, card_suite): self.card_rank = card_rank.lower() self.card_suite = card_suite.lower() Card.card_name = card_rank + " of " + card_suite def get_name(self): return Card.card_name ```
2015/08/20
[ "https://Stackoverflow.com/questions/32127602", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2372996/" ]
The problem here is that the `Card` class has a name variable which is shared with all instances of the `Card` class. When you have:

```
class Card:
    card_name = ''
```

this means that all `Card` objects will have the same name (`card_name`), which is almost surely not what you want. You have to make the name part of the instance instead, like so:

```
class Card:
    def __init__(self, card_rank, card_suite):
        self.card_rank = card_rank.lower()
        self.card_suite = card_suite.lower()
        self.card_name = card_rank + " of " + card_suite

    def get_name(self):
        return self.card_name
```

You will find that the `deepcopy` is not needed, nor was it ever needed, but it does show you that `deepcopy` will not allow you to keep different states of class variables. Further, I would recommend you change `Card` to have its own `__str__` method if you want to print it out:

```
class Card:
    def __init__(self, card_rank, card_suite):
        self.card_rank = card_rank.lower()
        self.card_suite = card_suite.lower()

    def __str__(self):
        return "{0} of {1}".format(self.card_rank, self.card_suite)
```

This uses the Python language itself to print the class and has the upside that your class will now work properly in print statements and in conversions to strings. So instead of:

```
print some_card.get_name()
```

you could do

```
print some_card
```
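The shared-versus-instance distinction described above can be demonstrated in a few lines (Python 3 syntax; a stripped-down hypothetical `Card`, not the asker's full class):

```python
class Card:
    card_name = ''   # class attribute: one value shared by ALL instances

a = Card()
b = Card()
Card.card_name = 'two of diamonds'    # overwrites the single shared value
print(a.card_name, '|', b.card_name)  # both instances now report 'two of diamonds'

class FixedCard:
    def __init__(self, name):
        self.card_name = name         # instance attribute: one value per object

c = FixedCard('ace of clubs')
d = FixedCard('king of hearts')
print(c.card_name, '|', d.card_name)  # each instance keeps its own name
```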
To expand on what shuttle87 said: ``` class Card: card_name = '' ``` makes `card_name` a static variable (shared between all instances of that class). Once you make the variable non-static (by using `self.card_name` in the `__init__` method) you won't have to worry about the copy part, as each instance of the card class will have its own unique name. On that note, the `deck` in Deck is also static in your code. ``` from card import Card class Deck: # these 2 can be static, they never change between copies of the deck class card_ranks = ['ace','king','queen','jack','ten','nine','eight','seven','six','five','four','three','two'] card_suites = ['clubs','hearts','spades','diamonds'] def __init__(self): # this shouldn't be static since you might want to shuffle them # or do other things that make them unique for each deck self.cards = [] for suite in Deck.card_suites: for rank in Deck.card_ranks: self.cards.append(Card(rank, suite)) def show_deck(self): for item in self.cards: print item ``` --- ``` class Card: def __init__(self, rank, suite): self.rank = rank self.suite = suite def __str__(self): return self.rank + ' of ' + self.suite ``` --- ``` #! python2 from deck import Deck def main(): deck = Deck() deck.show_deck() if __name__ == '__main__': main() ``` --- ``` ace of clubs king of clubs queen of clubs jack of clubs ... ```
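To make the shared-state behaviour both answers describe concrete, here is a minimal runnable sketch (the `SharedName`/`OwnName` classes are hypothetical illustrations, not from the question) contrasting a class attribute with an instance attribute:

```python
# Class attribute: one slot shared by every instance (like Card.card_name).
class SharedName:
    name = ''
    def __init__(self, name):
        # Rebinding the CLASS attribute affects all instances at once.
        SharedName.name = name

# Instance attribute: each object gets its own slot.
class OwnName:
    def __init__(self, name):
        self.name = name

a = SharedName('ace of clubs')
b = SharedName('two of diamonds')
print(a.name, '|', b.name)   # both show the last value written

c = OwnName('ace of clubs')
d = OwnName('two of diamonds')
print(c.name, '|', d.name)   # each keeps its own value
```

This is exactly why every card printed as "two of diamonds": each `Card()` overwrote the one shared `Card.card_name` slot.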
47,701,629
Is there a way to run an `ipython` like debug console in VC Code that would allow tab completion and other sort of things?
2017/12/07
[ "https://Stackoverflow.com/questions/47701629", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2058333/" ]
Meanwhile, I have become a big fan of [PDB++](https://pypi.org/project/pdbpp/) debugger for python. It works like the iPython CLI, so I think the question has become obsolete specifically for me, but still may have some value for others.
It seems this is a desired feature for VS Code but not yet implemented. See this post: <https://github.com/DonJayamanne/vscodeJupyter/issues/19> I'm trying to see if one could use the config file of VS Code to define an ipython debug configuration e.g.: `{ "name": "ipython", "type": "python", "request": "launch", "program": "${file}", "pythonPath": "/Users/tsando/anaconda3/bin/ipython" }` but so far no luck. You can see my post in the above link.
47,701,629
Is there a way to run an `ipython` like debug console in VC Code that would allow tab completion and other sort of things?
2017/12/07
[ "https://Stackoverflow.com/questions/47701629", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2058333/" ]
It seems this is a desired feature for VS Code but not yet implemented. See this post: <https://github.com/DonJayamanne/vscodeJupyter/issues/19> I'm trying to see if one could use the config file of VS Code to define an ipython debug configuration e.g.: `{ "name": "ipython", "type": "python", "request": "launch", "program": "${file}", "pythonPath": "/Users/tsando/anaconda3/bin/ipython" }` but so far no luck. You can see my post in the above link.
The VS Code debug console does allow for auto-completion. However, I am not sure if what you wanted was a way to trigger your code from an IPython shell; if so, you can start IPython like so: ``` python -m debugpy --listen 5678 `which ipython` ``` Now you can connect to this remote debugger from VS Code.
47,701,629
Is there a way to run an `ipython` like debug console in VC Code that would allow tab completion and other sort of things?
2017/12/07
[ "https://Stackoverflow.com/questions/47701629", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2058333/" ]
It seems this is a desired feature for VS Code but not yet implemented. See this post: <https://github.com/DonJayamanne/vscodeJupyter/issues/19> I'm trying to see if one could use the config file of VS Code to define an ipython debug configuration e.g.: `{ "name": "ipython", "type": "python", "request": "launch", "program": "${file}", "pythonPath": "/Users/tsando/anaconda3/bin/ipython" }` but so far no luck. You can see my post in the above link.
For tab completion you could install pyreadline3: ``` python -m pip install pyreadline3 ``` This is not necessary on Linux, but on Windows it is.
47,701,629
Is there a way to run an `ipython` like debug console in VC Code that would allow tab completion and other sort of things?
2017/12/07
[ "https://Stackoverflow.com/questions/47701629", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2058333/" ]
Meanwhile, I have become a big fan of [PDB++](https://pypi.org/project/pdbpp/) debugger for python. It works like the iPython CLI, so I think the question has become obsolete specifically for me, but still may have some value for others.
No, currently (unfortunately) not. Here's an ongoing thread about this on github. The issue has P1 status, so will hopefully be implemented soon: <https://github.com/microsoft/vscode-python/issues/6972>
47,701,629
Is there a way to run an `ipython` like debug console in VC Code that would allow tab completion and other sort of things?
2017/12/07
[ "https://Stackoverflow.com/questions/47701629", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2058333/" ]
No, currently (unfortunately) not. Here's an ongoing thread about this on github. The issue has P1 status, so will hopefully be implemented soon: <https://github.com/microsoft/vscode-python/issues/6972>
The VS Code debug console does allow for auto-completion. However, I am not sure if what you wanted was a way to trigger your code from an IPython shell; if so, you can start IPython like so: ``` python -m debugpy --listen 5678 `which ipython` ``` Now you can connect to this remote debugger from VS Code.
47,701,629
Is there a way to run an `ipython` like debug console in VC Code that would allow tab completion and other sort of things?
2017/12/07
[ "https://Stackoverflow.com/questions/47701629", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2058333/" ]
No, currently (unfortunately) not. Here's an ongoing thread about this on github. The issue has P1 status, so will hopefully be implemented soon: <https://github.com/microsoft/vscode-python/issues/6972>
For tab completion you could install pyreadline3: ``` python -m pip install pyreadline3 ``` This is not necessary on Linux, but on Windows it is.
47,701,629
Is there a way to run an `ipython` like debug console in VC Code that would allow tab completion and other sort of things?
2017/12/07
[ "https://Stackoverflow.com/questions/47701629", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2058333/" ]
Meanwhile, I have become a big fan of [PDB++](https://pypi.org/project/pdbpp/) debugger for python. It works like the iPython CLI, so I think the question has become obsolete specifically for me, but still may have some value for others.
The VS Code debug console does allow for auto-completion. However, I am not sure if what you wanted was a way to trigger your code from an IPython shell; if so, you can start IPython like so: ``` python -m debugpy --listen 5678 `which ipython` ``` Now you can connect to this remote debugger from VS Code.
47,701,629
Is there a way to run an `ipython` like debug console in VC Code that would allow tab completion and other sort of things?
2017/12/07
[ "https://Stackoverflow.com/questions/47701629", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2058333/" ]
Meanwhile, I have become a big fan of [PDB++](https://pypi.org/project/pdbpp/) debugger for python. It works like the iPython CLI, so I think the question has become obsolete specifically for me, but still may have some value for others.
For tab completion you could install pyreadline3: ``` python -m pip install pyreadline3 ``` This is not necessary on Linux, but on Windows it is.
57,689,479
I am converting pdfs to text and got this code off a previous post: [Extracting text from a PDF file using PDFMiner in python?](https://stackoverflow.com/questions/26494211/extracting-text-from-a-pdf-file-using-pdfminer-in-python) When I print(text) it has done exactly what I want, but then I need to save this to a text file, which is when I get the above error. The code follows exactly the first answer on the linked question. Then I: ``` text = convert_pdf_to_txt("GMCA ECON.pdf") file = open('GMCAECON.txt', 'w', 'utf-8') file.write(text) --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-5-ebc6b7708d93> in <module> ----> 1 file = open('GMCAECON.txt', 'w', 'utf-8') 2 file.write(text) TypeError: an integer is required (got type str) ``` I'm afraid it's probably something really simple but I can't figure it out. I want it to write the text to a text file with the same name, which I can then do further analysis on. Thanks.
2019/08/28
[ "https://Stackoverflow.com/questions/57689479", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11759292/" ]
The problem is your third argument. The third positional argument accepted by `open` is `buffering`, not `encoding`. Call `open` like this: ``` open('GMCAECON.txt', 'w', encoding='utf-8') ``` and your problem should go away.
When you do `file = open('GMCAECON.txt', 'w', 'utf-8')` you pass positional arguments to `open()`. The third argument you pass is `encoding`, but the third argument it expects is `buffering`. You need to pass `encoding` as a keyword argument, e.g. `file = open('GMCAECON.txt', 'w', encoding='utf-8')` Note that it's much better to use a `with` context manager: ``` with open('GMCAECON.txt', 'w', encoding='utf-8') as f: f.write(text) ```
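As a quick runnable check of the signature issue both answers describe (the file name here is a stand-in written to a temp directory, not the question's actual file):

```python
import os
import tempfile

# open()'s positional parameters are (file, mode, buffering, ...), so
# encoding only works as a keyword argument.
path = os.path.join(tempfile.mkdtemp(), 'demo.txt')

with open(path, 'w', encoding='utf-8') as f:
    f.write('héllo')

with open(path, encoding='utf-8') as f:
    text = f.read()

# Passing 'utf-8' positionally lands in the buffering slot, which must
# be an int, and raises the TypeError seen in the question.
error = ''
try:
    open(path, 'w', 'utf-8')
except TypeError as exc:
    error = str(exc)

print(text, '/', error)
```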
58,466,174
We would like to remove the key and the values from a YAML file using python, for example ``` - misc_props: - attribute: tmp-1 value: 1 - attribute: tmp-2 value: 604800 - attribute: tmp-3 value: 100 - attribute: tmp-4 value: 1209600 name: temp_key1 attr-1: 20 attr-2: 1 - misc_props: - attribute: tmp-1 value: 1 - attribute: tmp-2 value: 604800 - attribute: tmp-3 value: 100 - attribute: tmp-4 value: 1209600 name: temp_key2 atrr-1: 20 attr-2: 1 ``` From the above example we would like to delete the whole bunch of property and where key name matches the value, for example if we want to delete name: temp\_key2 the newly created dictionary after delete will be like below:- ``` - misc_props: - attribute: tmp-1 value: 1 - attribute: tmp-2 value: 604800 - attribute: tmp-3 value: 100 - attribute: tmp-4 value: 1209600 name: temp_key1 attr-1: 20 attr-2: 1 ```
2019/10/19
[ "https://Stackoverflow.com/questions/58466174", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5596456/" ]
It is not sufficient to delete a key-value pair to get your desired output. ``` import sys import ruamel.yaml yaml = ruamel.yaml.YAML() with open('input.yaml') as fp: data = yaml.load(fp) del data[1]['misc_props'] yaml.dump(data, sys.stdout) ``` as that gives: ``` - misc_props: - attribute: tmp-1 value: 1 - attribute: tmp-2 value: 604800 - attribute: tmp-3 value: 100 - attribute: tmp-4 value: 1209600 name: temp_key1 attr-1: 20 attr-2: 1 - name: temp_key2 atrr-1: 20 attr-2: 1 ``` What you need to do is delete one of the items of the sequence that is the root of the YAML structure: ``` del data[1] yaml.dump(data, sys.stdout) ``` which gives: ``` - misc_props: - attribute: tmp-1 value: 1 - attribute: tmp-2 value: 604800 - attribute: tmp-3 value: 100 - attribute: tmp-4 value: 1209600 name: temp_key1 attr-1: 20 attr-2: 1 ```
Did you try using the yaml module? ``` import yaml with open('./old.yaml') as file: old_yaml = yaml.full_load(file) #This is the part of the code which filters out the undesired keys new_yaml = filter(lambda x: x['name']!='temp_key2', old_yaml) with open('./new.yaml', 'w') as file: documents = yaml.dump(new_yaml, file) ```
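The filtering idea in the second answer works on any list of dicts; here is a tiny self-contained sketch with the YAML already loaded into plain Python data (the records are abbreviated from the question, so no YAML library is needed to run it):

```python
# Stand-in for yaml.full_load()'s result: a list of dicts, one per entry.
records = [
    {'name': 'temp_key1', 'attr-1': 20, 'attr-2': 1},
    {'name': 'temp_key2', 'attr-1': 20, 'attr-2': 1},
]

# Keep every record whose 'name' is not the one being deleted.
kept = [r for r in records if r['name'] != 'temp_key2']
print(kept)
```

Dumping `kept` back out with `yaml.dump` would then give the desired single-entry document.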
11,211,650
I'm using Python 2.7 on Windows and I am writing a script that uses both time and datetime modules. I've done this before, but python seems to be touchy about having both modules loaded and the methods I've used before don't seem to be working. Here are the different syntax I've used and the errors I am currently getting. First I tried: ``` from datetime import * from time import * ... checktime = datetime.today() - timedelta(days=int(2)) checktime = checktime.timetuple() ... filetimesecs = os.path.getmtime(webgatelogdir + '/' + fn) file = webgatelogdir + '/' + fn filetime = localtime(filetimesecs) ... else: time.sleep(60) ``` ERROR: `else: time.sleep(60) AttributeError: 'builtin_function_or_method' object has no attribute 'sleep'` Then I tried: ``` from datetime import * from time import * ... checktime = datetime.today() - timedelta(days=int(2)) checktime = checktime.timetuple() ... filetimesecs = os.path.getmtime(webgatelogdir + '/' + fn) file = webgatelogdir + '/' + fn filetime = localtime(filetimesecs) ... #else: time.sleep(60) # comment out time.sleep statement ``` and I got no errors, but no sleep delay either. Next I tried: ``` from datetime import * import time ... checktime = datetime.today() - timedelta(days=int(2)) checktime = checktime.timetuple() ... filetimesecs = os.path.getmtime(webgatelogdir + '/' + fn) file = webgatelogdir + '/' + fn filetime = localtime(filetimesecs) ... #else: time.sleep(60) # comment out time.sleep statement ``` ERROR: `filetime = localtime(filetimesecs) NameError: name 'localtime' is not defined` Another modification and I tried this: ``` import time import datetime ... checktime = datetime.today() - timedelta(days=int(2)) checktime = checktime.timetuple() ... filetimesecs = os.path.getmtime(webgatelogdir + '/' + fn) file = webgatelogdir + '/' + fn filetime = localtime(filetimesecs) ... 
#else: time.sleep(60) # comment out time.sleep statement ``` ERROR `checktime = datetime.today() - timedelta(days=int(2)) AttributeError: 'module' object has no attribute 'today'` Finally, I tried this: ``` import time from datetime import * ... checktime = datetime.today() - timedelta(days=int(2)) checktime = checktime.timetuple() ... filetimesecs = os.path.getmtime(webgatelogdir + '/' + fn) file = webgatelogdir + '/' + fn filetime = localtime(filetimesecs) ... #else: time.sleep(60) # comment out time.sleep statement ``` ERROR: `checktime = datetime.today() - timedelta(days=int(2)) AttributeError: 'module' object has no attribute 'today'` So I'm not sure how to get the two modules to play nicely. Or I need another method to put a delay in the script. Suggestions? Or pointers to mistakes that I made? Thanks.
2012/06/26
[ "https://Stackoverflow.com/questions/11211650", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1070061/" ]
You can use **as** while importing time. ``` import time as t from datetime import datetime ... t.sleep(2) ```
Don't use `from ... import *` – this is a convenience syntax for interactive use, and leads to confusion in scripts. Here's a version that should work: ``` import time import datetime ... checktime = datetime.datetime.today() - datetime.timedelta(days=int(2)) checktime = checktime.timetuple() ... filetimesecs = os.path.getmtime(webgatelogdir + '/' + fn) file = webgatelogdir + '/' + fn filetime = time.localtime(filetimesecs) ... #else: time.sleep(60) # comment out time.sleep statement ``` When importing the modules using `import <modulename>`, you of course need to use fully qualified names for all names in these modules.
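A stripped-down, runnable version of the fully-qualified approach above (the question's file handling is dropped so the sketch is self-contained):

```python
import time
import datetime

# With plain `import`, each module keeps its own namespace, so nothing
# exported by datetime can shadow the time module (or vice versa).
checktime = datetime.datetime.today() - datetime.timedelta(days=2)
struct = time.localtime(time.time())

time.sleep(0.01)  # time.sleep now resolves to the module function

print(checktime.timetuple().tm_year, struct.tm_year)
```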
11,211,650
I'm using Python 2.7 on Windows and I am writing a script that uses both time and datetime modules. I've done this before, but python seems to be touchy about having both modules loaded and the methods I've used before don't seem to be working. Here are the different syntax I've used and the errors I am currently getting. First I tried: ``` from datetime import * from time import * ... checktime = datetime.today() - timedelta(days=int(2)) checktime = checktime.timetuple() ... filetimesecs = os.path.getmtime(webgatelogdir + '/' + fn) file = webgatelogdir + '/' + fn filetime = localtime(filetimesecs) ... else: time.sleep(60) ``` ERROR: `else: time.sleep(60) AttributeError: 'builtin_function_or_method' object has no attribute 'sleep'` Then I tried: ``` from datetime import * from time import * ... checktime = datetime.today() - timedelta(days=int(2)) checktime = checktime.timetuple() ... filetimesecs = os.path.getmtime(webgatelogdir + '/' + fn) file = webgatelogdir + '/' + fn filetime = localtime(filetimesecs) ... #else: time.sleep(60) # comment out time.sleep statement ``` and I got no errors, but no sleep delay either. Next I tried: ``` from datetime import * import time ... checktime = datetime.today() - timedelta(days=int(2)) checktime = checktime.timetuple() ... filetimesecs = os.path.getmtime(webgatelogdir + '/' + fn) file = webgatelogdir + '/' + fn filetime = localtime(filetimesecs) ... #else: time.sleep(60) # comment out time.sleep statement ``` ERROR: `filetime = localtime(filetimesecs) NameError: name 'localtime' is not defined` Another modification and I tried this: ``` import time import datetime ... checktime = datetime.today() - timedelta(days=int(2)) checktime = checktime.timetuple() ... filetimesecs = os.path.getmtime(webgatelogdir + '/' + fn) file = webgatelogdir + '/' + fn filetime = localtime(filetimesecs) ... 
#else: time.sleep(60) # comment out time.sleep statement ``` ERROR `checktime = datetime.today() - timedelta(days=int(2)) AttributeError: 'module' object has no attribute 'today'` Finally, I tried this: ``` import time from datetime import * ... checktime = datetime.today() - timedelta(days=int(2)) checktime = checktime.timetuple() ... filetimesecs = os.path.getmtime(webgatelogdir + '/' + fn) file = webgatelogdir + '/' + fn filetime = localtime(filetimesecs) ... #else: time.sleep(60) # comment out time.sleep statement ``` ERROR: `checktime = datetime.today() - timedelta(days=int(2)) AttributeError: 'module' object has no attribute 'today'` So I'm not sure how to get the two modules to play nicely. Or I need another method to put a delay in the script. Suggestions? Or pointers to mistakes that I made? Thanks.
2012/06/26
[ "https://Stackoverflow.com/questions/11211650", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1070061/" ]
You can use **as** while importing time. ``` import time as t from datetime import datetime ... t.sleep(2) ```
These two modules define some functions/types with the same names. The best way is to import them explicitly and use what you need: ``` import datetime import time datetime.datetime.today() # Datetime object for today time.time() # Current time ``` More generally, you can't just expect to blindly switch between `from x import *` and `import x`. You need to look at the documentation for each library to decide what functions you want to use.
11,211,650
I'm using Python 2.7 on Windows and I am writing a script that uses both time and datetime modules. I've done this before, but python seems to be touchy about having both modules loaded and the methods I've used before don't seem to be working. Here are the different syntax I've used and the errors I am currently getting. First I tried: ``` from datetime import * from time import * ... checktime = datetime.today() - timedelta(days=int(2)) checktime = checktime.timetuple() ... filetimesecs = os.path.getmtime(webgatelogdir + '/' + fn) file = webgatelogdir + '/' + fn filetime = localtime(filetimesecs) ... else: time.sleep(60) ``` ERROR: `else: time.sleep(60) AttributeError: 'builtin_function_or_method' object has no attribute 'sleep'` Then I tried: ``` from datetime import * from time import * ... checktime = datetime.today() - timedelta(days=int(2)) checktime = checktime.timetuple() ... filetimesecs = os.path.getmtime(webgatelogdir + '/' + fn) file = webgatelogdir + '/' + fn filetime = localtime(filetimesecs) ... #else: time.sleep(60) # comment out time.sleep statement ``` and I got no errors, but no sleep delay either. Next I tried: ``` from datetime import * import time ... checktime = datetime.today() - timedelta(days=int(2)) checktime = checktime.timetuple() ... filetimesecs = os.path.getmtime(webgatelogdir + '/' + fn) file = webgatelogdir + '/' + fn filetime = localtime(filetimesecs) ... #else: time.sleep(60) # comment out time.sleep statement ``` ERROR: `filetime = localtime(filetimesecs) NameError: name 'localtime' is not defined` Another modification and I tried this: ``` import time import datetime ... checktime = datetime.today() - timedelta(days=int(2)) checktime = checktime.timetuple() ... filetimesecs = os.path.getmtime(webgatelogdir + '/' + fn) file = webgatelogdir + '/' + fn filetime = localtime(filetimesecs) ... 
#else: time.sleep(60) # comment out time.sleep statement ``` ERROR `checktime = datetime.today() - timedelta(days=int(2)) AttributeError: 'module' object has no attribute 'today'` Finally, I tried this: ``` import time from datetime import * ... checktime = datetime.today() - timedelta(days=int(2)) checktime = checktime.timetuple() ... filetimesecs = os.path.getmtime(webgatelogdir + '/' + fn) file = webgatelogdir + '/' + fn filetime = localtime(filetimesecs) ... #else: time.sleep(60) # comment out time.sleep statement ``` ERROR: `checktime = datetime.today() - timedelta(days=int(2)) AttributeError: 'module' object has no attribute 'today'` So I'm not sure how to get the two modules to play nicely. Or I need another method to put a delay in the script. Suggestions? Or pointers to mistakes that I made? Thanks.
2012/06/26
[ "https://Stackoverflow.com/questions/11211650", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1070061/" ]
Don't use `from ... import *` – this is a convenience syntax for interactive use, and leads to confusion in scripts. Here's a version that should work: ``` import time import datetime ... checktime = datetime.datetime.today() - datetime.timedelta(days=int(2)) checktime = checktime.timetuple() ... filetimesecs = os.path.getmtime(webgatelogdir + '/' + fn) file = webgatelogdir + '/' + fn filetime = time.localtime(filetimesecs) ... #else: time.sleep(60) # comment out time.sleep statement ``` When importing the modules using `import <modulename>`, you of course need to use fully qualified names for all names in these modules.
Never use imports of the form `from x import *` because you don't know what you'll be getting. In this case the second import is wiping out some symbols from the first import because they have the same name. Either use `import x` and qualify everything you use from that module with `x.y`, or import only selected items with `from x import y`.
11,211,650
I'm using Python 2.7 on Windows and I am writing a script that uses both time and datetime modules. I've done this before, but python seems to be touchy about having both modules loaded and the methods I've used before don't seem to be working. Here are the different syntax I've used and the errors I am currently getting. First I tried: ``` from datetime import * from time import * ... checktime = datetime.today() - timedelta(days=int(2)) checktime = checktime.timetuple() ... filetimesecs = os.path.getmtime(webgatelogdir + '/' + fn) file = webgatelogdir + '/' + fn filetime = localtime(filetimesecs) ... else: time.sleep(60) ``` ERROR: `else: time.sleep(60) AttributeError: 'builtin_function_or_method' object has no attribute 'sleep'` Then I tried: ``` from datetime import * from time import * ... checktime = datetime.today() - timedelta(days=int(2)) checktime = checktime.timetuple() ... filetimesecs = os.path.getmtime(webgatelogdir + '/' + fn) file = webgatelogdir + '/' + fn filetime = localtime(filetimesecs) ... #else: time.sleep(60) # comment out time.sleep statement ``` and I got no errors, but no sleep delay either. Next I tried: ``` from datetime import * import time ... checktime = datetime.today() - timedelta(days=int(2)) checktime = checktime.timetuple() ... filetimesecs = os.path.getmtime(webgatelogdir + '/' + fn) file = webgatelogdir + '/' + fn filetime = localtime(filetimesecs) ... #else: time.sleep(60) # comment out time.sleep statement ``` ERROR: `filetime = localtime(filetimesecs) NameError: name 'localtime' is not defined` Another modification and I tried this: ``` import time import datetime ... checktime = datetime.today() - timedelta(days=int(2)) checktime = checktime.timetuple() ... filetimesecs = os.path.getmtime(webgatelogdir + '/' + fn) file = webgatelogdir + '/' + fn filetime = localtime(filetimesecs) ... 
#else: time.sleep(60) # comment out time.sleep statement ``` ERROR `checktime = datetime.today() - timedelta(days=int(2)) AttributeError: 'module' object has no attribute 'today'` Finally, I tried this: ``` import time from datetime import * ... checktime = datetime.today() - timedelta(days=int(2)) checktime = checktime.timetuple() ... filetimesecs = os.path.getmtime(webgatelogdir + '/' + fn) file = webgatelogdir + '/' + fn filetime = localtime(filetimesecs) ... #else: time.sleep(60) # comment out time.sleep statement ``` ERROR: `checktime = datetime.today() - timedelta(days=int(2)) AttributeError: 'module' object has no attribute 'today'` So I'm not sure how to get the two modules to play nicely. Or I need another method to put a delay in the script. Suggestions? Or pointers to mistakes that I made? Thanks.
2012/06/26
[ "https://Stackoverflow.com/questions/11211650", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1070061/" ]
Don't use `from ... import *` – this is a convenience syntax for interactive use, and leads to confusion in scripts. Here's a version that should work: ``` import time import datetime ... checktime = datetime.datetime.today() - datetime.timedelta(days=int(2)) checktime = checktime.timetuple() ... filetimesecs = os.path.getmtime(webgatelogdir + '/' + fn) file = webgatelogdir + '/' + fn filetime = time.localtime(filetimesecs) ... #else: time.sleep(60) # comment out time.sleep statement ``` When importing the modules using `import <modulename>`, you of course need to use fully qualified names for all names in these modules.
There can be name conflicts when you just do import \*. I strongly recommend not to do that. ``` import time import datetime . . . . time.sleep(60) ``` You can also do the following if you don't want to prepend every function with `time.` or `datetime.` ``` from datetime import X, Y from time import Z, W X.something() ... etc ... ```
11,211,650
I'm using Python 2.7 on Windows and I am writing a script that uses both time and datetime modules. I've done this before, but python seems to be touchy about having both modules loaded and the methods I've used before don't seem to be working. Here are the different syntax I've used and the errors I am currently getting. First I tried: ``` from datetime import * from time import * ... checktime = datetime.today() - timedelta(days=int(2)) checktime = checktime.timetuple() ... filetimesecs = os.path.getmtime(webgatelogdir + '/' + fn) file = webgatelogdir + '/' + fn filetime = localtime(filetimesecs) ... else: time.sleep(60) ``` ERROR: `else: time.sleep(60) AttributeError: 'builtin_function_or_method' object has no attribute 'sleep'` Then I tried: ``` from datetime import * from time import * ... checktime = datetime.today() - timedelta(days=int(2)) checktime = checktime.timetuple() ... filetimesecs = os.path.getmtime(webgatelogdir + '/' + fn) file = webgatelogdir + '/' + fn filetime = localtime(filetimesecs) ... #else: time.sleep(60) # comment out time.sleep statement ``` and I got no errors, but no sleep delay either. Next I tried: ``` from datetime import * import time ... checktime = datetime.today() - timedelta(days=int(2)) checktime = checktime.timetuple() ... filetimesecs = os.path.getmtime(webgatelogdir + '/' + fn) file = webgatelogdir + '/' + fn filetime = localtime(filetimesecs) ... #else: time.sleep(60) # comment out time.sleep statement ``` ERROR: `filetime = localtime(filetimesecs) NameError: name 'localtime' is not defined` Another modification and I tried this: ``` import time import datetime ... checktime = datetime.today() - timedelta(days=int(2)) checktime = checktime.timetuple() ... filetimesecs = os.path.getmtime(webgatelogdir + '/' + fn) file = webgatelogdir + '/' + fn filetime = localtime(filetimesecs) ... 
#else: time.sleep(60) # comment out time.sleep statement ``` ERROR `checktime = datetime.today() - timedelta(days=int(2)) AttributeError: 'module' object has no attribute 'today'` Finally, I tried this: ``` import time from datetime import * ... checktime = datetime.today() - timedelta(days=int(2)) checktime = checktime.timetuple() ... filetimesecs = os.path.getmtime(webgatelogdir + '/' + fn) file = webgatelogdir + '/' + fn filetime = localtime(filetimesecs) ... #else: time.sleep(60) # comment out time.sleep statement ``` ERROR: `checktime = datetime.today() - timedelta(days=int(2)) AttributeError: 'module' object has no attribute 'today'` So I'm not sure how to get the two modules to play nicely. Or I need another method to put a delay in the script. Suggestions? Or pointers to mistakes that I made? Thanks.
2012/06/26
[ "https://Stackoverflow.com/questions/11211650", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1070061/" ]
These two modules define some functions/types with the same names. The best way is to import them explicitly and use what you need: ``` import datetime import time datetime.datetime.today() # Datetime object for today time.time() # Current time ``` More generally, you can't just expect to blindly switch between `from x import *` and `import x`. You need to look at the documentation for each library to decide what functions you want to use.
There can be name conflicts when you just do import \*. I strongly recommend not to do that. ``` import time import datetime . . . . time.sleep(60) ``` You can also do the following if you don't want to prepend every function with `time.` or `datetime.` ``` from datetime import X, Y from time import Z, W X.something() ... etc ... ```
11,211,650
I'm using Python 2.7 on Windows and I am writing a script that uses both time and datetime modules. I've done this before, but python seems to be touchy about having both modules loaded and the methods I've used before don't seem to be working. Here are the different syntax I've used and the errors I am currently getting. First I tried: ``` from datetime import * from time import * ... checktime = datetime.today() - timedelta(days=int(2)) checktime = checktime.timetuple() ... filetimesecs = os.path.getmtime(webgatelogdir + '/' + fn) file = webgatelogdir + '/' + fn filetime = localtime(filetimesecs) ... else: time.sleep(60) ``` ERROR: `else: time.sleep(60) AttributeError: 'builtin_function_or_method' object has no attribute 'sleep'` Then I tried: ``` from datetime import * from time import * ... checktime = datetime.today() - timedelta(days=int(2)) checktime = checktime.timetuple() ... filetimesecs = os.path.getmtime(webgatelogdir + '/' + fn) file = webgatelogdir + '/' + fn filetime = localtime(filetimesecs) ... #else: time.sleep(60) # comment out time.sleep statement ``` and I got no errors, but no sleep delay either. Next I tried: ``` from datetime import * import time ... checktime = datetime.today() - timedelta(days=int(2)) checktime = checktime.timetuple() ... filetimesecs = os.path.getmtime(webgatelogdir + '/' + fn) file = webgatelogdir + '/' + fn filetime = localtime(filetimesecs) ... #else: time.sleep(60) # comment out time.sleep statement ``` ERROR: `filetime = localtime(filetimesecs) NameError: name 'localtime' is not defined` Another modification and I tried this: ``` import time import datetime ... checktime = datetime.today() - timedelta(days=int(2)) checktime = checktime.timetuple() ... filetimesecs = os.path.getmtime(webgatelogdir + '/' + fn) file = webgatelogdir + '/' + fn filetime = localtime(filetimesecs) ... 
#else: time.sleep(60) # comment out time.sleep statement ``` ERROR `checktime = datetime.today() - timedelta(days=int(2)) AttributeError: 'module' object has no attribute 'today'` Finally, I tried this: ``` import time from datetime import * ... checktime = datetime.today() - timedelta(days=int(2)) checktime = checktime.timetuple() ... filetimesecs = os.path.getmtime(webgatelogdir + '/' + fn) file = webgatelogdir + '/' + fn filetime = localtime(filetimesecs) ... #else: time.sleep(60) # comment out time.sleep statement ``` ERROR: `checktime = datetime.today() - timedelta(days=int(2)) AttributeError: 'module' object has no attribute 'today'` So I'm not sure how to get the two modules to play nicely. Or I need another method to put a delay in the script. Suggestions? Or pointers to mistakes that I made? Thanks.
2012/06/26
[ "https://Stackoverflow.com/questions/11211650", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1070061/" ]
My guess is that you have conflicts because of your `from something import *`. Since `datetime` exports a `time` class, this could conflict with the `time` module. Conclusion: don't use `import *` ;-)
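A quick way to see the clash this answer describes. The sketch below reproduces the question's first failure mode: after the second star import, the name `time` is the `time.time()` function, not the module, which is exactly why `time.sleep(60)` raised `AttributeError: 'builtin_function_or_method' object has no attribute 'sleep'`.

```python
from datetime import *  # brings in the class `time` (datetime.time)
from time import *      # the time module's time() function now shadows it

# `time` is no longer a module, so time.sleep(...) cannot work:
print(callable(time))          # it's a bare function now
print(hasattr(time, "sleep"))  # no .sleep attribute on it
```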
### Instead of that you can make it simple ```py from datetime import * from time import * ```
11,211,650
I'm using Python 2.7 on Windows and I am writing a script that uses both time and datetime modules. I've done this before, but python seems to be touchy about having both modules loaded and the methods I've used before don't seem to be working. Here are the different syntax I've used and the errors I am currently getting. First I tried: ``` from datetime import * from time import * ... checktime = datetime.today() - timedelta(days=int(2)) checktime = checktime.timetuple() ... filetimesecs = os.path.getmtime(webgatelogdir + '/' + fn) file = webgatelogdir + '/' + fn filetime = localtime(filetimesecs) ... else: time.sleep(60) ``` ERROR: `else: time.sleep(60) AttributeError: 'builtin_function_or_method' object has no attribute 'sleep'` Then I tried: ``` from datetime import * from time import * ... checktime = datetime.today() - timedelta(days=int(2)) checktime = checktime.timetuple() ... filetimesecs = os.path.getmtime(webgatelogdir + '/' + fn) file = webgatelogdir + '/' + fn filetime = localtime(filetimesecs) ... #else: time.sleep(60) # comment out time.sleep statement ``` and I got no errors, but no sleep delay either. Next I tried: ``` from datetime import * import time ... checktime = datetime.today() - timedelta(days=int(2)) checktime = checktime.timetuple() ... filetimesecs = os.path.getmtime(webgatelogdir + '/' + fn) file = webgatelogdir + '/' + fn filetime = localtime(filetimesecs) ... #else: time.sleep(60) # comment out time.sleep statement ``` ERROR: `filetime = localtime(filetimesecs) NameError: name 'localtime' is not defined` Another modification and I tried this: ``` import time import datetime ... checktime = datetime.today() - timedelta(days=int(2)) checktime = checktime.timetuple() ... filetimesecs = os.path.getmtime(webgatelogdir + '/' + fn) file = webgatelogdir + '/' + fn filetime = localtime(filetimesecs) ... 
#else: time.sleep(60) # comment out time.sleep statement ``` ERROR `checktime = datetime.today() - timedelta(days=int(2)) AttributeError: 'module' object has no attribute 'today'` Finally, I tried this: ``` import time from datetime import * ... checktime = datetime.today() - timedelta(days=int(2)) checktime = checktime.timetuple() ... filetimesecs = os.path.getmtime(webgatelogdir + '/' + fn) file = webgatelogdir + '/' + fn filetime = localtime(filetimesecs) ... #else: time.sleep(60) # comment out time.sleep statement ``` ERROR: `checktime = datetime.today() - timedelta(days=int(2)) AttributeError: 'module' object has no attribute 'today'` So I'm not sure how to get the two modules to play nicely. Or I need another method to put a delay in the script. Suggestions? Or pointers to mistakes that I made? Thanks.
2012/06/26
[ "https://Stackoverflow.com/questions/11211650", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1070061/" ]
Never use imports of the form `from x import *` because you don't know what you'll be getting. In this case the second import is wiping out some symbols from the first import because they have the same name. Either use `import x` and qualify everything you use from that module with `x.y`, or import only selected items with `from x import y`.
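A minimal runnable sketch of the qualified style this answer recommends, reusing the question's variable names (the file paths and the surrounding loop are omitted):

```python
import time
from datetime import datetime, timedelta

checktime = datetime.today() - timedelta(days=2)
checktime = checktime.timetuple()

# `time` is unambiguously the module here, so both calls work:
filetime = time.localtime()
time.sleep(0.1)
print(checktime.tm_year <= filetime.tm_year)  # two days ago is never in a later year
```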
There can be name conflicts when you just do `import *`. I strongly recommend not to do that. ``` import time import datetime . . . . time.sleep(60) ``` You can also do the following if you don't want to prefix every function with `time.` or `datetime.` ``` from datetime import X, Y from time import Z, W X.something() ... etc ... ```
11,211,650
I'm using Python 2.7 on Windows and I am writing a script that uses both time and datetime modules. I've done this before, but python seems to be touchy about having both modules loaded and the methods I've used before don't seem to be working. Here are the different syntax I've used and the errors I am currently getting. First I tried: ``` from datetime import * from time import * ... checktime = datetime.today() - timedelta(days=int(2)) checktime = checktime.timetuple() ... filetimesecs = os.path.getmtime(webgatelogdir + '/' + fn) file = webgatelogdir + '/' + fn filetime = localtime(filetimesecs) ... else: time.sleep(60) ``` ERROR: `else: time.sleep(60) AttributeError: 'builtin_function_or_method' object has no attribute 'sleep'` Then I tried: ``` from datetime import * from time import * ... checktime = datetime.today() - timedelta(days=int(2)) checktime = checktime.timetuple() ... filetimesecs = os.path.getmtime(webgatelogdir + '/' + fn) file = webgatelogdir + '/' + fn filetime = localtime(filetimesecs) ... #else: time.sleep(60) # comment out time.sleep statement ``` and I got no errors, but no sleep delay either. Next I tried: ``` from datetime import * import time ... checktime = datetime.today() - timedelta(days=int(2)) checktime = checktime.timetuple() ... filetimesecs = os.path.getmtime(webgatelogdir + '/' + fn) file = webgatelogdir + '/' + fn filetime = localtime(filetimesecs) ... #else: time.sleep(60) # comment out time.sleep statement ``` ERROR: `filetime = localtime(filetimesecs) NameError: name 'localtime' is not defined` Another modification and I tried this: ``` import time import datetime ... checktime = datetime.today() - timedelta(days=int(2)) checktime = checktime.timetuple() ... filetimesecs = os.path.getmtime(webgatelogdir + '/' + fn) file = webgatelogdir + '/' + fn filetime = localtime(filetimesecs) ... 
#else: time.sleep(60) # comment out time.sleep statement ``` ERROR `checktime = datetime.today() - timedelta(days=int(2)) AttributeError: 'module' object has no attribute 'today'` Finally, I tried this: ``` import time from datetime import * ... checktime = datetime.today() - timedelta(days=int(2)) checktime = checktime.timetuple() ... filetimesecs = os.path.getmtime(webgatelogdir + '/' + fn) file = webgatelogdir + '/' + fn filetime = localtime(filetimesecs) ... #else: time.sleep(60) # comment out time.sleep statement ``` ERROR: `checktime = datetime.today() - timedelta(days=int(2)) AttributeError: 'module' object has no attribute 'today'` So I'm not sure how to get the two modules to play nicely. Or I need another method to put a delay in the script. Suggestions? Or pointers to mistakes that I made? Thanks.
2012/06/26
[ "https://Stackoverflow.com/questions/11211650", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1070061/" ]
Don't use `from ... import *` – this is a convenience syntax for interactive use, and leads to confusion in scripts. Here's a version that should work: ``` import time import datetime ... checktime = datetime.datetime.today() - datetime.timedelta(days=int(2)) checktime = checktime.timetuple() ... filetimesecs = os.path.getmtime(webgatelogdir + '/' + fn) file = webgatelogdir + '/' + fn filetime = time.localtime(filetimesecs) ... #else: time.sleep(60) # comment out time.sleep statement ``` When importing the modules using `import <modulename>`, you of course need to use fully qualified names for all names in these modules.
``` from time import * import time as t from datetime import * import datetime as dt secs = 69 print(dt.timedelta(seconds=secs)) now = datetime.now() # current time current_time = now.strftime("%H:%M:%S") print("Current Time =", current_time) # converting conversion = dt.timedelta(seconds=secs) print("Converted: ", conversion) # sleep function t.sleep(3) print("DONE!") ```
11,211,650
I'm using Python 2.7 on Windows and I am writing a script that uses both time and datetime modules. I've done this before, but python seems to be touchy about having both modules loaded and the methods I've used before don't seem to be working. Here are the different syntax I've used and the errors I am currently getting. First I tried: ``` from datetime import * from time import * ... checktime = datetime.today() - timedelta(days=int(2)) checktime = checktime.timetuple() ... filetimesecs = os.path.getmtime(webgatelogdir + '/' + fn) file = webgatelogdir + '/' + fn filetime = localtime(filetimesecs) ... else: time.sleep(60) ``` ERROR: `else: time.sleep(60) AttributeError: 'builtin_function_or_method' object has no attribute 'sleep'` Then I tried: ``` from datetime import * from time import * ... checktime = datetime.today() - timedelta(days=int(2)) checktime = checktime.timetuple() ... filetimesecs = os.path.getmtime(webgatelogdir + '/' + fn) file = webgatelogdir + '/' + fn filetime = localtime(filetimesecs) ... #else: time.sleep(60) # comment out time.sleep statement ``` and I got no errors, but no sleep delay either. Next I tried: ``` from datetime import * import time ... checktime = datetime.today() - timedelta(days=int(2)) checktime = checktime.timetuple() ... filetimesecs = os.path.getmtime(webgatelogdir + '/' + fn) file = webgatelogdir + '/' + fn filetime = localtime(filetimesecs) ... #else: time.sleep(60) # comment out time.sleep statement ``` ERROR: `filetime = localtime(filetimesecs) NameError: name 'localtime' is not defined` Another modification and I tried this: ``` import time import datetime ... checktime = datetime.today() - timedelta(days=int(2)) checktime = checktime.timetuple() ... filetimesecs = os.path.getmtime(webgatelogdir + '/' + fn) file = webgatelogdir + '/' + fn filetime = localtime(filetimesecs) ... 
#else: time.sleep(60) # comment out time.sleep statement ``` ERROR `checktime = datetime.today() - timedelta(days=int(2)) AttributeError: 'module' object has no attribute 'today'` Finally, I tried this: ``` import time from datetime import * ... checktime = datetime.today() - timedelta(days=int(2)) checktime = checktime.timetuple() ... filetimesecs = os.path.getmtime(webgatelogdir + '/' + fn) file = webgatelogdir + '/' + fn filetime = localtime(filetimesecs) ... #else: time.sleep(60) # comment out time.sleep statement ``` ERROR: `checktime = datetime.today() - timedelta(days=int(2)) AttributeError: 'module' object has no attribute 'today'` So I'm not sure how to get the two modules to play nicely. Or I need another method to put a delay in the script. Suggestions? Or pointers to mistakes that I made? Thanks.
2012/06/26
[ "https://Stackoverflow.com/questions/11211650", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1070061/" ]
You can use **as** while importing time. ``` import time as t from datetime import datetime ... t.sleep(2) ```
There can be name conflicts when you just do `import *`. I strongly recommend not to do that. ``` import time import datetime . . . . time.sleep(60) ``` You can also do the following if you don't want to prefix every function with `time.` or `datetime.` ``` from datetime import X, Y from time import Z, W X.something() ... etc ... ```
11,211,650
I'm using Python 2.7 on Windows and I am writing a script that uses both time and datetime modules. I've done this before, but python seems to be touchy about having both modules loaded and the methods I've used before don't seem to be working. Here are the different syntax I've used and the errors I am currently getting. First I tried: ``` from datetime import * from time import * ... checktime = datetime.today() - timedelta(days=int(2)) checktime = checktime.timetuple() ... filetimesecs = os.path.getmtime(webgatelogdir + '/' + fn) file = webgatelogdir + '/' + fn filetime = localtime(filetimesecs) ... else: time.sleep(60) ``` ERROR: `else: time.sleep(60) AttributeError: 'builtin_function_or_method' object has no attribute 'sleep'` Then I tried: ``` from datetime import * from time import * ... checktime = datetime.today() - timedelta(days=int(2)) checktime = checktime.timetuple() ... filetimesecs = os.path.getmtime(webgatelogdir + '/' + fn) file = webgatelogdir + '/' + fn filetime = localtime(filetimesecs) ... #else: time.sleep(60) # comment out time.sleep statement ``` and I got no errors, but no sleep delay either. Next I tried: ``` from datetime import * import time ... checktime = datetime.today() - timedelta(days=int(2)) checktime = checktime.timetuple() ... filetimesecs = os.path.getmtime(webgatelogdir + '/' + fn) file = webgatelogdir + '/' + fn filetime = localtime(filetimesecs) ... #else: time.sleep(60) # comment out time.sleep statement ``` ERROR: `filetime = localtime(filetimesecs) NameError: name 'localtime' is not defined` Another modification and I tried this: ``` import time import datetime ... checktime = datetime.today() - timedelta(days=int(2)) checktime = checktime.timetuple() ... filetimesecs = os.path.getmtime(webgatelogdir + '/' + fn) file = webgatelogdir + '/' + fn filetime = localtime(filetimesecs) ... 
#else: time.sleep(60) # comment out time.sleep statement ``` ERROR `checktime = datetime.today() - timedelta(days=int(2)) AttributeError: 'module' object has no attribute 'today'` Finally, I tried this: ``` import time from datetime import * ... checktime = datetime.today() - timedelta(days=int(2)) checktime = checktime.timetuple() ... filetimesecs = os.path.getmtime(webgatelogdir + '/' + fn) file = webgatelogdir + '/' + fn filetime = localtime(filetimesecs) ... #else: time.sleep(60) # comment out time.sleep statement ``` ERROR: `checktime = datetime.today() - timedelta(days=int(2)) AttributeError: 'module' object has no attribute 'today'` So I'm not sure how to get the two modules to play nicely. Or I need another method to put a delay in the script. Suggestions? Or pointers to mistakes that I made? Thanks.
2012/06/26
[ "https://Stackoverflow.com/questions/11211650", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1070061/" ]
Don't use `from ... import *` – this is a convenience syntax for interactive use, and leads to confusion in scripts. Here's a version that should work: ``` import time import datetime ... checktime = datetime.datetime.today() - datetime.timedelta(days=int(2)) checktime = checktime.timetuple() ... filetimesecs = os.path.getmtime(webgatelogdir + '/' + fn) file = webgatelogdir + '/' + fn filetime = time.localtime(filetimesecs) ... #else: time.sleep(60) # comment out time.sleep statement ``` When importing the modules using `import <modulename>`, you of course need to use fully qualified names for all names in these modules.
As everyone rightly mentioned in the comments above, this problem was due to: ``` from datetime import * ``` But I was facing the issue where I wrote this in a file and tried to run it, and since it wasn't working I removed that entire import statement from the file; but when I tried to run it again, it was still throwing the same error. That was surprising, as when a statement is not in the file at all, how could it cause an error? But after some debugging I realised this same statement was in some other interdependent file, and hence the error. **So, all I want to say is, please check all files in your project for this statement if the error persists, and replace them with the specific modules to be imported, like:** ``` from datetime import datetime, timedelta ``` Hope this helps!
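For completeness, a small runnable check that the specific-import form recommended above coexists happily with the `time` module (the variable name `cutoff` is hypothetical, standing in for the question's `checktime`):

```python
import time
from datetime import datetime, timedelta

cutoff = datetime.now() - timedelta(days=2)
time.sleep(0)                    # `time` is still the untouched module
print(cutoff < datetime.now())   # True: the cutoff lies in the past
```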
19,034,959
I need to install some modules for Python on Ubuntu Linux 12.04. I want pygame and livewires but I'm not sure how to install them. I have the py file for livewires, which has been specially edited (from a book I'm reading), and I want to install it but I'm not sure how to. I also want to install pygame.
2013/09/26
[ "https://Stackoverflow.com/questions/19034959", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2765940/" ]
There are two nice ways to install Python packages on Ubuntu (and similar Linux systems): ``` sudo apt-get install python-pygame ``` to use the Debian/Ubuntu package manager APT. This only works for packages that are shipped by Ubuntu, unless you change the APT configuration, and in particular there seems to be no PyGame package for Python 3. The other option is to use PIP, the Python package manager: ``` sudo apt-get install python3-pip ``` to install it, then ``` sudo pip3 install pygame ``` to fetch the PyGame package from [PyPI](https://pypi.python.org/pypi) and install it for Python 3. PIP has some limitations compared to APT, but it does always fetch the latest version of a package instead of the one that the Ubuntu packagers have chosen to ship. **EDIT**: to repeat what I said in the comment, `pip3` isn't in Ubuntu 12.04 yet. It can still be installed with ``` sudo apt-get install python3-setuptools sudo easy_install3 pip sudo apt-get purge python-pip ``` After this, `pip` is the Python 3 version of PIP, instead of `pip3`. The last command is just for safety; there might be a Python 2 PIP installed as `/usr/bin/pip`.
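Since the pitfall here is installing into one Python while running another, a quick sanity check from inside whichever interpreter you actually use can save confusion (works on both 2.7 and 3.x):

```python
import sys

# The interpreter that "owns" whatever pip/apt just installed for it:
print(sys.executable)
print("Python %d.%d" % sys.version_info[:2])
```

Run this with the same `python`/`python3` you pass to pip to confirm they match.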
You can use several approaches: 1 - Download the package by yourself. This is what I use the most. If the package follows the specifications, you should be able to install it by moving to its uncompressed folder and typing in the console: ``` python setup.py build python setup.py install ``` 2 - Use pip. Pip is pretty straightforward. In the console, you have to type: ``` pip install package_name ``` You can obtain pip here <https://pypi.python.org/pypi/pip> and install it with method 1 One thing to note: if you aren't using a virtualenv, you'll have to add sudo before those commands (not recommended)
19,034,959
I need to install some modules for Python on Ubuntu Linux 12.04. I want pygame and livewires but I'm not sure how to install them. I have the py file for livewires, which has been specially edited (from a book I'm reading), and I want to install it but I'm not sure how to. I also want to install pygame.
2013/09/26
[ "https://Stackoverflow.com/questions/19034959", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2765940/" ]
Try to install pip. ``` apt-get install python-pip pip install pygame ```
You can use several approaches: 1 - Download the package by yourself. This is what I use the most. If the package follows the specifications, you should be able to install it by moving to its uncompressed folder and typing in the console: ``` python setup.py build python setup.py install ``` 2 - Use pip. Pip is pretty straightforward. In the console, you have to type: ``` pip install package_name ``` You can obtain pip here <https://pypi.python.org/pypi/pip> and install it with method 1 One thing to note: if you aren't using a virtualenv, you'll have to add sudo before those commands (not recommended)
19,034,959
I need to install some modules for Python on Ubuntu Linux 12.04. I want pygame and livewires but I'm not sure how to install them. I have the py file for livewires, which has been specially edited (from a book I'm reading), and I want to install it but I'm not sure how to. I also want to install pygame.
2013/09/26
[ "https://Stackoverflow.com/questions/19034959", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2765940/" ]
You can use several approaches: 1 - Download the package by yourself. This is what I use the most. If the package follows the specifications, you should be able to install it by moving to its uncompressed folder and typing in the console: ``` python setup.py build python setup.py install ``` 2 - Use pip. Pip is pretty straightforward. In the console, you have to type: ``` pip install package_name ``` You can obtain pip here <https://pypi.python.org/pypi/pip> and install it with method 1 One thing to note: if you aren't using a virtualenv, you'll have to add sudo before those commands (not recommended)
``` curl -O http://python-distribute.org/distribute_setup.py sudo python distribute_setup.py sudo easy_install pygame ``` [Differences between distribute, distutils, setuptools and distutils2](https://stackoverflow.com/questions/6344076/differences-between-distribute-distutils-setuptools-and-distutils2)
19,034,959
I need to install some modules for Python on Ubuntu Linux 12.04. I want pygame and livewires but I'm not sure how to install them. I have the py file for livewires, which has been specially edited (from a book I'm reading), and I want to install it but I'm not sure how to. I also want to install pygame.
2013/09/26
[ "https://Stackoverflow.com/questions/19034959", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2765940/" ]
There are two nice ways to install Python packages on Ubuntu (and similar Linux systems): ``` sudo apt-get install python-pygame ``` to use the Debian/Ubuntu package manager APT. This only works for packages that are shipped by Ubuntu, unless you change the APT configuration, and in particular there seems to be no PyGame package for Python 3. The other option is to use PIP, the Python package manager: ``` sudo apt-get install python3-pip ``` to install it, then ``` sudo pip3 install pygame ``` to fetch the PyGame package from [PyPI](https://pypi.python.org/pypi) and install it for Python 3. PIP has some limitations compared to APT, but it does always fetch the latest version of a package instead of the one that the Ubuntu packagers have chosen to ship. **EDIT**: to repeat what I said in the comment, `pip3` isn't in Ubuntu 12.04 yet. It can still be installed with ``` sudo apt-get install python3-setuptools sudo easy_install3 pip sudo apt-get purge python-pip ``` After this, `pip` is the Python 3 version of PIP, instead of `pip3`. The last command is just for safety; there might be a Python 2 PIP installed as `/usr/bin/pip`.
Try to install pip. ``` apt-get install python-pip pip install pygame ```
19,034,959
I need to install some modules for Python on Ubuntu Linux 12.04. I want pygame and livewires but I'm not sure how to install them. I have the py file for livewires, which has been specially edited (from a book I'm reading), and I want to install it but I'm not sure how to. I also want to install pygame.
2013/09/26
[ "https://Stackoverflow.com/questions/19034959", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2765940/" ]
There are two nice ways to install Python packages on Ubuntu (and similar Linux systems): ``` sudo apt-get install python-pygame ``` to use the Debian/Ubuntu package manager APT. This only works for packages that are shipped by Ubuntu, unless you change the APT configuration, and in particular there seems to be no PyGame package for Python 3. The other option is to use PIP, the Python package manager: ``` sudo apt-get install python3-pip ``` to install it, then ``` sudo pip3 install pygame ``` to fetch the PyGame package from [PyPI](https://pypi.python.org/pypi) and install it for Python 3. PIP has some limitations compared to APT, but it does always fetch the latest version of a package instead of the one that the Ubuntu packagers have chosen to ship. **EDIT**: to repeat what I said in the comment, `pip3` isn't in Ubuntu 12.04 yet. It can still be installed with ``` sudo apt-get install python3-setuptools sudo easy_install3 pip sudo apt-get purge python-pip ``` After this, `pip` is the Python 3 version of PIP, instead of `pip3`. The last command is just for safety; there might be a Python 2 PIP installed as `/usr/bin/pip`.
``` curl -O http://python-distribute.org/distribute_setup.py sudo python distribute_setup.py sudo easy_install pygame ``` [Differences between distribute, distutils, setuptools and distutils2](https://stackoverflow.com/questions/6344076/differences-between-distribute-distutils-setuptools-and-distutils2)
19,034,959
I need to install some modules for Python on Ubuntu Linux 12.04. I want pygame and livewires but I'm not sure how to install them. I have the py file for livewires, which has been specially edited (from a book I'm reading), and I want to install it but I'm not sure how to. I also want to install pygame.
2013/09/26
[ "https://Stackoverflow.com/questions/19034959", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2765940/" ]
There are two nice ways to install Python packages on Ubuntu (and similar Linux systems): ``` sudo apt-get install python-pygame ``` to use the Debian/Ubuntu package manager APT. This only works for packages that are shipped by Ubuntu, unless you change the APT configuration, and in particular there seems to be no PyGame package for Python 3. The other option is to use PIP, the Python package manager: ``` sudo apt-get install python3-pip ``` to install it, then ``` sudo pip3 install pygame ``` to fetch the PyGame package from [PyPI](https://pypi.python.org/pypi) and install it for Python 3. PIP has some limitations compared to APT, but it does always fetch the latest version of a package instead of the one that the Ubuntu packagers have chosen to ship. **EDIT**: to repeat what I said in the comment, `pip3` isn't in Ubuntu 12.04 yet. It can still be installed with ``` sudo apt-get install python3-setuptools sudo easy_install3 pip sudo apt-get purge python-pip ``` After this, `pip` is the Python 3 version of PIP, instead of `pip3`. The last command is just for safety; there might be a Python 2 PIP installed as `/usr/bin/pip`.
It depends on the Ubuntu version and the IDE you are using. Ubuntu 15 and older come with Python 2.7 and Ubuntu 16.04 comes with both Python 2.7 and 3.5. Now based on the IDE you are using there are several ways to do this. Let's say you only installed Spyder from the Ubuntu app store or installed Jupyter. In other words you do not have a distribution like Anaconda or Enthought which install their own Python versions. This is important to pay attention to because once you are trying to install a package/library, you need to know which Python it is being installed to. Now assuming you just have an IDE that is connected to Ubuntu's default Python versions, you can use the terminal to install your packages: For Python 2.7 use ``` pip install libraryname ``` For Python 3.5 use ``` pip3 install libraryname ``` Sometimes, for reasons that I don't know, during the package installation process, Linux blocks access to the Python, so try these as well: ``` sudo apt install python-libraryname ``` and for Python 3.5 ``` sudo apt install python3-libraryname ``` These have helped me to install all the libraries that I need. Now, if you are using a distribution like Anaconda or Enthought, there is a good chance that the libraries that you are installing are not going to be added to the libraries that those distributions use. In order to install the libraries for these distributions, once you run the distribution, go to the ipython console and write ``` !pip install libraryname ``` In case of Enthought, it has its own Package Manager where it has most of the libraries you need and you can install them there without using pip or anything else.
19,034,959
I need to install some modules for Python on Ubuntu Linux 12.04. I want pygame and livewires but I'm not sure how to install them. I have the py file for livewires, which has been specially edited (from a book I'm reading), and I want to install it but I'm not sure how to. I also want to install pygame.
2013/09/26
[ "https://Stackoverflow.com/questions/19034959", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2765940/" ]
Try to install pip. ``` apt-get install python-pip pip install pygame ```
``` curl -O http://python-distribute.org/distribute_setup.py sudo python distribute_setup.py sudo easy_install pygame ``` [Differences between distribute, distutils, setuptools and distutils2](https://stackoverflow.com/questions/6344076/differences-between-distribute-distutils-setuptools-and-distutils2)
19,034,959
I need to install some modules for Python on Ubuntu Linux 12.04. I want pygame and livewires but I'm not sure how to install them. I have the py file for livewires, which has been specially edited (from a book I'm reading), and I want to install it but I'm not sure how to. I also want to install pygame.
2013/09/26
[ "https://Stackoverflow.com/questions/19034959", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2765940/" ]
Try to install pip. ``` apt-get install python-pip pip install pygame ```
It depends on the Ubuntu version and the IDE you are using. Ubuntu 15 and older come with Python 2.7 and Ubuntu 16.04 comes with both Python 2.7 and 3.5. Now based on the IDE you are using there are several ways to do this. Let's say you only installed Spyder from the Ubuntu app store or installed Jupyter. In other words you do not have a distribution like Anaconda or Enthought which install their own Python versions. This is important to pay attention to because once you are trying to install a package/library, you need to know which Python it is being installed to. Now assuming you just have an IDE that is connected to Ubuntu's default Python versions, you can use the terminal to install your packages: For Python 2.7 use ``` pip install libraryname ``` For Python 3.5 use ``` pip3 install libraryname ``` Sometimes, for reasons that I don't know, during the package installation process, Linux blocks access to the Python, so try these as well: ``` sudo apt install python-libraryname ``` and for Python 3.5 ``` sudo apt install python3-libraryname ``` These have helped me to install all the libraries that I need. Now, if you are using a distribution like Anaconda or Enthought, there is a good chance that the libraries that you are installing are not going to be added to the libraries that those distributions use. In order to install the libraries for these distributions, once you run the distribution, go to the ipython console and write ``` !pip install libraryname ``` In case of Enthought, it has its own Package Manager where it has most of the libraries you need and you can install them there without using pip or anything else.
19,034,959
I need to install some modules for Python on Ubuntu Linux 12.04. I want pygame and livewires but I'm not sure how to install them. I have the py file for livewires, which has been specially edited (from a book I'm reading), and I want to install it but I'm not sure how to. I also want to install pygame.
2013/09/26
[ "https://Stackoverflow.com/questions/19034959", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2765940/" ]
It depends on the Ubuntu version and the IDE you are using. Ubuntu 15 and older come with Python 2.7 and Ubuntu 16.04 comes with both Python 2.7 and 3.5. Now based on the IDE you are using there are several ways to do this. Let's say you only installed Spyder from the Ubuntu app store or installed Jupyter. In other words you do not have a distribution like Anaconda or Enthought which install their own Python versions. This is important to pay attention to because once you are trying to install a package/library, you need to know which Python it is being installed to. Now assuming you just have an IDE that is connected to Ubuntu's default Python versions, you can use the terminal to install your packages: For Python 2.7 use ``` pip install libraryname ``` For Python 3.5 use ``` pip3 install libraryname ``` Sometimes, for reasons that I don't know, during the package installation process, Linux blocks access to the Python, so try these as well: ``` sudo apt install python-libraryname ``` and for Python 3.5 ``` sudo apt install python3-libraryname ``` These have helped me to install all the libraries that I need. Now, if you are using a distribution like Anaconda or Enthought, there is a good chance that the libraries that you are installing are not going to be added to the libraries that those distributions use. In order to install the libraries for these distributions, once you run the distribution, go to the ipython console and write ``` !pip install libraryname ``` In case of Enthought, it has its own Package Manager where it has most of the libraries you need and you can install them there without using pip or anything else.
``` curl -O http://python-distribute.org/distribute_setup.py sudo python distribute_setup.py sudo easy_install pygame ``` [Differences between distribute, distutils, setuptools and distutils2](https://stackoverflow.com/questions/6344076/differences-between-distribute-distutils-setuptools-and-distutils2)
54,396,228
I am trying to build a chat app with Django but when I try to run it I get this error ``` No application configured for scope type 'websocket' ``` my routing.py file is ``` from channels.auth import AuthMiddlewareStack from channels.routing import ProtocolTypeRouter , URLRouter import chat.routing application = ProtocolTypeRouter({ # (http->django views is added by default) 'websocket':AuthMiddlewareStack( URLRouter( chat.routing.websocket_urlpatterns ) ), }) ``` my settings.py is ``` ASGI_APPLICATION = 'mychat.routing.application' CHANNEL_LAYERS = { 'default': { 'BACKEND': 'channels_redis.core.RedisChannelLayer', 'CONFIG': { "hosts": [('127.0.0.1', 6379)], }, }, } ``` when I open my URL in 2 tabs I should be able to see the messages that I posted in the first tab appeared in the 2nd tab but I am getting an error ``` [Failure instance: Traceback: <class 'ValueError'>: No application configured for scope type 'websocket' /home/vaibhav/.local/lib/python3.6/site-packages/autobahn/websocket/protocol.py:2801:processHandshake /home/vaibhav/.local/lib/python3.6/site-packages/txaio/tx.py:429:as_future /home/vaibhav/.local/lib/python3.6/site-packages/twisted/internet/defer.py:151:maybeDeferred /home/vaibhav/.local/lib/python3.6/site-packages/daphne/ws_protocol.py:82:onConnect --- <exception caught here> --- /home/vaibhav/.local/lib/python3.6/site-packages/twisted/internet/defer.py:151:maybeDeferred /home/vaibhav/.local/lib/python3.6/site-packages/daphne/server.py:198:create_application /home/vaibhav/.local/lib/python3.6/site-packages/channels/staticfiles.py:41:__call__ /home/vaibhav/.local/lib/python3.6/site-packages/channels/routing.py:61:__call__ ] WebSocket DISCONNECT /ws/chat/lobby/ [127.0.0.1:34724] ``` I couldn't find a duplicate of this question on stackoverflow
2019/01/28
[ "https://Stackoverflow.com/questions/54396228", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10974783/" ]
Your XPath can be more specific; I would suggest you take an incremental approach. First try: ``` driver.find_element_by_xpath('//*[@id="form1"]//div[@class="screen-group-content"]') ``` If the above finds the element, try: ``` driver.find_element_by_xpath('//*[@id="form1"]//div[@class="screen-group-content"]//table[@class="asureTable"]') ``` If that succeeds too, you can get the rows and data by index on the above XPath. Also, do check for any frames in the upper hierarchy of the HTML snippet that has been attached in your post.
Did you try using regular expressions? Using **Selenium**:

```
import re
from selenium import webdriver

n = webdriver.Firefox()  # or webdriver.Chrome()
n.get(your_url)
html_source_code = str(n.page_source)

# Using a regular expression:
# the element that you want to fetch/collect
# will be inside of the 'values' variable
values = re.findall(r'title=\"View Check Detail\"\>(.+)\</td>', html_source_code)
```

**Update:** If the content is inside of an **iframe**, using **selenium + Chrome driver** you can do this:

```
from selenium import webdriver
from selenium.webdriver.chrome import options

o = options.Options()
o.headless = True
n = webdriver.Chrome(options=o)
n.get(your_url)

links = n.find_elements_by_tag_name("iframe")
outer = [e.get_attribute("src") for e in links]
# In the best case outer will be a list of strings;
# each element of outer contains the value of the src attribute.

# Pick the correct element inside of outer
n.get(correct_outer_element)
# This loads the 'new' HTML.
# Build a new xpath and fetch the data!
```
54,396,228
I am trying to build a chat app with Django but when I try to run it I get this error ``` No application configured for scope type 'websocket' ``` my routing.py file is ``` from channels.auth import AuthMiddlewareStack from channels.routing import ProtocolTypeRouter , URLRouter import chat.routing application = ProtocolTypeRouter({ # (http->django views is added by default) 'websocket':AuthMiddlewareStack( URLRouter( chat.routing.websocket_urlpatterns ) ), }) ``` my settings.py is ``` ASGI_APPLICATION = 'mychat.routing.application' CHANNEL_LAYERS = { 'default': { 'BACKEND': 'channels_redis.core.RedisChannelLayer', 'CONFIG': { "hosts": [('127.0.0.1', 6379)], }, }, } ``` when I open my URL in 2 tabs I should be able to see the messages that I posted in the first tab appeared in the 2nd tab but I am getting an error ``` [Failure instance: Traceback: <class 'ValueError'>: No application configured for scope type 'websocket' /home/vaibhav/.local/lib/python3.6/site-packages/autobahn/websocket/protocol.py:2801:processHandshake /home/vaibhav/.local/lib/python3.6/site-packages/txaio/tx.py:429:as_future /home/vaibhav/.local/lib/python3.6/site-packages/twisted/internet/defer.py:151:maybeDeferred /home/vaibhav/.local/lib/python3.6/site-packages/daphne/ws_protocol.py:82:onConnect --- <exception caught here> --- /home/vaibhav/.local/lib/python3.6/site-packages/twisted/internet/defer.py:151:maybeDeferred /home/vaibhav/.local/lib/python3.6/site-packages/daphne/server.py:198:create_application /home/vaibhav/.local/lib/python3.6/site-packages/channels/staticfiles.py:41:__call__ /home/vaibhav/.local/lib/python3.6/site-packages/channels/routing.py:61:__call__ ] WebSocket DISCONNECT /ws/chat/lobby/ [127.0.0.1:34724] ``` I couldn't find a duplicate of this question on stackoverflow
2019/01/28
[ "https://Stackoverflow.com/questions/54396228", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10974783/" ]
Your XPath can be more specific; I would suggest you take an incremental approach. First try: ``` driver.find_element_by_xpath('//*[@id="form1"]//div[@class="screen-group-content"]') ``` If the above finds the element, try: ``` driver.find_element_by_xpath('//*[@id="form1"]//div[@class="screen-group-content"]//table[@class="asureTable"]') ``` If that succeeds too, you can get the rows and data by index on the above XPath. Also, do check for any frames in the upper hierarchy of the HTML snippet that has been attached in your post.
The table is in an iFrame. You have to select it. Following [this](https://stackoverflow.com/questions/48656659/how-can-i-parse-table-data-from-website-using-selenium), I edited the code as follows (with the imports the aliases need):

```
from selenium.webdriver.common.by import By as wdBy
from selenium.webdriver.support import expected_conditions as eConds
from selenium.webdriver.support.ui import WebDriverWait

wait = WebDriverWait(driver, 10)
wait.until(eConds.frame_to_be_available_and_switch_to_it((wdBy.CSS_SELECTOR, "iframe[id='hr2oScreen']:nth-of-type(1)")))
for table in wait.until(eConds.presence_of_all_elements_located((wdBy.CSS_SELECTOR, "table tr")))[1:]:
    data = [item.text for item in table.find_elements_by_css_selector("th,td")]
    print(data)
```

Thanks Pooja for giving me pointers on how to determine the text wasn't there.
54,396,228
I am trying to build a chat app with Django but when I try to run it I get this error ``` No application configured for scope type 'websocket' ``` my routing.py file is ``` from channels.auth import AuthMiddlewareStack from channels.routing import ProtocolTypeRouter , URLRouter import chat.routing application = ProtocolTypeRouter({ # (http->django views is added by default) 'websocket':AuthMiddlewareStack( URLRouter( chat.routing.websocket_urlpatterns ) ), }) ``` my settings.py is ``` ASGI_APPLICATION = 'mychat.routing.application' CHANNEL_LAYERS = { 'default': { 'BACKEND': 'channels_redis.core.RedisChannelLayer', 'CONFIG': { "hosts": [('127.0.0.1', 6379)], }, }, } ``` when I open my URL in 2 tabs I should be able to see the messages that I posted in the first tab appeared in the 2nd tab but I am getting an error ``` [Failure instance: Traceback: <class 'ValueError'>: No application configured for scope type 'websocket' /home/vaibhav/.local/lib/python3.6/site-packages/autobahn/websocket/protocol.py:2801:processHandshake /home/vaibhav/.local/lib/python3.6/site-packages/txaio/tx.py:429:as_future /home/vaibhav/.local/lib/python3.6/site-packages/twisted/internet/defer.py:151:maybeDeferred /home/vaibhav/.local/lib/python3.6/site-packages/daphne/ws_protocol.py:82:onConnect --- <exception caught here> --- /home/vaibhav/.local/lib/python3.6/site-packages/twisted/internet/defer.py:151:maybeDeferred /home/vaibhav/.local/lib/python3.6/site-packages/daphne/server.py:198:create_application /home/vaibhav/.local/lib/python3.6/site-packages/channels/staticfiles.py:41:__call__ /home/vaibhav/.local/lib/python3.6/site-packages/channels/routing.py:61:__call__ ] WebSocket DISCONNECT /ws/chat/lobby/ [127.0.0.1:34724] ``` I couldn't find a duplicate of this question on stackoverflow
2019/01/28
[ "https://Stackoverflow.com/questions/54396228", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10974783/" ]
The table is in an iFrame. You have to select it. Following [this](https://stackoverflow.com/questions/48656659/how-can-i-parse-table-data-from-website-using-selenium), I edited the code as follows (with the imports the aliases need):

```
from selenium.webdriver.common.by import By as wdBy
from selenium.webdriver.support import expected_conditions as eConds
from selenium.webdriver.support.ui import WebDriverWait

wait = WebDriverWait(driver, 10)
wait.until(eConds.frame_to_be_available_and_switch_to_it((wdBy.CSS_SELECTOR, "iframe[id='hr2oScreen']:nth-of-type(1)")))
for table in wait.until(eConds.presence_of_all_elements_located((wdBy.CSS_SELECTOR, "table tr")))[1:]:
    data = [item.text for item in table.find_elements_by_css_selector("th,td")]
    print(data)
```

Thanks Pooja for giving me pointers on how to determine the text wasn't there.
Did you try using regular expressions? Using **Selenium**:

```
import re
from selenium import webdriver

n = webdriver.Firefox()  # or webdriver.Chrome()
n.get(your_url)
html_source_code = str(n.page_source)

# Using a regular expression:
# the element that you want to fetch/collect
# will be inside of the 'values' variable
values = re.findall(r'title=\"View Check Detail\"\>(.+)\</td>', html_source_code)
```

**Update:** If the content is inside of an **iframe**, using **selenium + Chrome driver** you can do this:

```
from selenium import webdriver
from selenium.webdriver.chrome import options

o = options.Options()
o.headless = True
n = webdriver.Chrome(options=o)
n.get(your_url)

links = n.find_elements_by_tag_name("iframe")
outer = [e.get_attribute("src") for e in links]
# In the best case outer will be a list of strings;
# each element of outer contains the value of the src attribute.

# Pick the correct element inside of outer
n.get(correct_outer_element)
# This loads the 'new' HTML.
# Build a new xpath and fetch the data!
```
21,318,968
I have a textfield in a database that contains the results of a python `json.dumps(list_instance)` operation. As such, the internal fields have a `u'` prefix, and break the browser's `JSON.parse()` function. An example of the JSON string is ``` "density": "{u'Penobscot': 40.75222856500098, u'Sagadahoc': 122.27083333333333, u'Lincoln': 67.97977755308392, u'Kennebec': 123.12237174095878, u'Waldo': 48.02117802779616, u'Cumberland': 288.9285325791363, u'Piscataquis': 3.9373586457405247, u'Hancock': 30.698239582715903, u'Washington': 12.368718341168325, u'Aroostook': 10.827378163074039, u'York': 183.47612497543722, u'Franklin': 16.89330963710371, u'Oxford': 25.171240748402518, u'Somerset': 12.425648288323485, u'Knox': 108.48302300109529, u'Androscoggin': 208.75502815768303}" ``` What I'd like to do is replace those occurrences of `u'` with a `'`(single-quote). I've tried ``` function renderValues(data){ var pop = JSON.parse(data.density.replace(/u'/g, "'")); } ``` but I'm always getting a `unexpected token '` exception. Since many of the possible key fields may contain a `u`, it is not feasable to just delete that character. How can I find all instances of `u'` and replace with `'` without getting the exception?
2014/01/23
[ "https://Stackoverflow.com/questions/21318968", "https://Stackoverflow.com", "https://Stackoverflow.com/users/214892/" ]
Updated solution: `replace(/u'/g, "'"));` => `replace(/u'(?=[^:]+')/g, "'"));`. Tested with the following: `"{u'Penobscot': 40.75222856500098, u'Sagadahoc': 122.27083333333333, u'Lincoln': 67.97977755308392, u'Kennebec': 123.12237174095878, u'Waldo': 48.02117802779616, u'Cumberland': 288.9285325791363, u'Piscataquis': 3.9373586457405247, u'Hancock': 30.698239582715903, u'Timbuktu': 12.368718341168325, u'Aroostook': 10.827378163074039, u'York': 183.47612497543722, u'Franklin': 16.89330963710371, u'Oxford': 25.171240748402518, u'Somerset': 12.425648288323485, u'Knox': 108.48302300109529, u'Androscoggin': 208.75502815768303}".replace(/u'(?=[^:]+')/g, "'");` results in: `"{'Penobscot': 40.75222856500098, 'Sagadahoc': 122.27083333333333, 'Lincoln': 67.97977755308392, 'Kennebec': 123.12237174095878, 'Waldo': 48.02117802779616, 'Cumberland': 288.9285325791363, 'Piscataquis': 3.9373586457405247, 'Hancock': 30.698239582715903, 'Timbuktu': 12.368718341168325, 'Aroostook': 10.827378163074039, 'York': 183.47612497543722, 'Franklin': 16.89330963710371, 'Oxford': 25.171240748402518, 'Somerset': 12.425648288323485, 'Knox': 108.48302300109529, 'Androscoggin': 208.75502815768303}"`
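The same lookahead also works if the cleanup is done server-side in Python before the string ever reaches the browser. A minimal sketch using `re.sub` with the pattern above (the sample dict is shortened here for brevity):

```python
import re

s = "{u'Penobscot': 40.75, u'York': 183.47}"

# Replace u' only when it is followed by a key-like run of non-colon
# characters ending in a quote, mirroring /u'(?=[^:]+')/g from above.
fixed = re.sub(r"u'(?=[^:]+')", "'", s)
print(fixed)  # {'Penobscot': 40.75, 'York': 183.47}
```

The lookahead keeps the replacement from touching any `u` that is not acting as a unicode prefix on a key.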
A little late to the answer, but if there is no way to change or access the server response, try: ``` var strExample = "{'att1': u'something with u'}"; strExample = strExample.replace(/u'([},])/g, "ç'$1").replace(/u'/g, "'").replace(/ç'/g, "u'"); /* "{'att1': 'something with u'}" */ ``` The first replace handles the `u'` at the trailing part of a value by swapping it for the `ç'` marker (the capture group keeps the closing `}` or `,`), the second removes the `u` from the Python unicode prefixes, and the last one restores `ç'` back to `u'` like the original.
21,318,968
I have a textfield in a database that contains the results of a python `json.dumps(list_instance)` operation. As such, the internal fields have a `u'` prefix, and break the browser's `JSON.parse()` function. An example of the JSON string is ``` "density": "{u'Penobscot': 40.75222856500098, u'Sagadahoc': 122.27083333333333, u'Lincoln': 67.97977755308392, u'Kennebec': 123.12237174095878, u'Waldo': 48.02117802779616, u'Cumberland': 288.9285325791363, u'Piscataquis': 3.9373586457405247, u'Hancock': 30.698239582715903, u'Washington': 12.368718341168325, u'Aroostook': 10.827378163074039, u'York': 183.47612497543722, u'Franklin': 16.89330963710371, u'Oxford': 25.171240748402518, u'Somerset': 12.425648288323485, u'Knox': 108.48302300109529, u'Androscoggin': 208.75502815768303}" ``` What I'd like to do is replace those occurrences of `u'` with a `'`(single-quote). I've tried ``` function renderValues(data){ var pop = JSON.parse(data.density.replace(/u'/g, "'")); } ``` but I'm always getting a `unexpected token '` exception. Since many of the possible key fields may contain a `u`, it is not feasable to just delete that character. How can I find all instances of `u'` and replace with `'` without getting the exception?
2014/01/23
[ "https://Stackoverflow.com/questions/21318968", "https://Stackoverflow.com", "https://Stackoverflow.com/users/214892/" ]
Updated solution: `replace(/u'/g, "'"));` => `replace(/u'(?=[^:]+')/g, "'"));`. Tested with the following: `"{u'Penobscot': 40.75222856500098, u'Sagadahoc': 122.27083333333333, u'Lincoln': 67.97977755308392, u'Kennebec': 123.12237174095878, u'Waldo': 48.02117802779616, u'Cumberland': 288.9285325791363, u'Piscataquis': 3.9373586457405247, u'Hancock': 30.698239582715903, u'Timbuktu': 12.368718341168325, u'Aroostook': 10.827378163074039, u'York': 183.47612497543722, u'Franklin': 16.89330963710371, u'Oxford': 25.171240748402518, u'Somerset': 12.425648288323485, u'Knox': 108.48302300109529, u'Androscoggin': 208.75502815768303}".replace(/u'(?=[^:]+')/g, "'");` results in: `"{'Penobscot': 40.75222856500098, 'Sagadahoc': 122.27083333333333, 'Lincoln': 67.97977755308392, 'Kennebec': 123.12237174095878, 'Waldo': 48.02117802779616, 'Cumberland': 288.9285325791363, 'Piscataquis': 3.9373586457405247, 'Hancock': 30.698239582715903, 'Timbuktu': 12.368718341168325, 'Aroostook': 10.827378163074039, 'York': 183.47612497543722, 'Franklin': 16.89330963710371, 'Oxford': 25.171240748402518, 'Somerset': 12.425648288323485, 'Knox': 108.48302300109529, 'Androscoggin': 208.75502815768303}"`
I had a similar issue and made this regex that found all of the u's even if the values had them too. ``` replace(/(?!\s|:)((u)(?='))/g, "") ``` The accepted answer, I found, missed these occurrences. I know the OP's doesn't have 'u' for the values and only for keys but thought this may be useful too :)
21,318,968
I have a textfield in a database that contains the results of a python `json.dumps(list_instance)` operation. As such, the internal fields have a `u'` prefix, and break the browser's `JSON.parse()` function. An example of the JSON string is ``` "density": "{u'Penobscot': 40.75222856500098, u'Sagadahoc': 122.27083333333333, u'Lincoln': 67.97977755308392, u'Kennebec': 123.12237174095878, u'Waldo': 48.02117802779616, u'Cumberland': 288.9285325791363, u'Piscataquis': 3.9373586457405247, u'Hancock': 30.698239582715903, u'Washington': 12.368718341168325, u'Aroostook': 10.827378163074039, u'York': 183.47612497543722, u'Franklin': 16.89330963710371, u'Oxford': 25.171240748402518, u'Somerset': 12.425648288323485, u'Knox': 108.48302300109529, u'Androscoggin': 208.75502815768303}" ``` What I'd like to do is replace those occurrences of `u'` with a `'`(single-quote). I've tried ``` function renderValues(data){ var pop = JSON.parse(data.density.replace(/u'/g, "'")); } ``` but I'm always getting a `unexpected token '` exception. Since many of the possible key fields may contain a `u`, it is not feasable to just delete that character. How can I find all instances of `u'` and replace with `'` without getting the exception?
2014/01/23
[ "https://Stackoverflow.com/questions/21318968", "https://Stackoverflow.com", "https://Stackoverflow.com/users/214892/" ]
The accepted solution is wrong. Your code fails because that string is not valid JSON, and fixing the pseudo-JSON string by replacing characters is also wrong. What you have to do is fix the Python code that is producing that broken JSON string, which I am pretty sure contains a str() or unicode() call where there should be none. What you have as a value for the key "density" is a string instead of a dictionary, and therefore Python returns you something like the following: ``` {"density": "a string that looks like JSON but is in fact a string representation of a dictionary"} ``` The function `json.dumps` will always return valid JSON strings. Fix that and you will not have to hack around with filthy string replacements or whatever. **EDIT** Check the following snippet out. There you can see that the u'...' is just the Python readable representation of a unicode object, and has nothing whatsoever to do with JSON serialization.

```
>>> import json
>>> data = {u'name': u'Manuel', u'age': 26}
>>> print data
{u'age': 26, u'name': u'Manuel'}  # this is the python representation of a dictionary
>>> print json.dumps(data)
{"age": 26, "name": "Manuel"}  # this is a valid JSON string
```

That JSON is not properly formed, as easy as that.
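To make the difference concrete, here is a short sketch of the two serialization paths (hypothetical data; note that on Python 3 the `u` prefixes disappear from `str(d)`, but the value is still a string rather than an object, so the client-side problem remains):

```python
import json

d = {'Penobscot': 40.75, 'York': 183.47}

broken = json.dumps({"density": str(d)})   # serializes a *string* representation
correct = json.dumps({"density": d})       # serializes the dictionary itself

print(json.loads(broken)["density"])            # a plain string the browser cannot index
print(json.loads(correct)["density"]["York"])   # 183.47
```

With the second form, `JSON.parse` on the client receives a real nested object and no string surgery is needed.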
A little late to the answer, but if there is no way to change or access the server response, try: ``` var strExample = "{'att1': u'something with u'}"; strExample = strExample.replace(/u'([},])/g, "ç'$1").replace(/u'/g, "'").replace(/ç'/g, "u'"); /* "{'att1': 'something with u'}" */ ``` The first replace handles the `u'` at the trailing part of a value by swapping it for the `ç'` marker (the capture group keeps the closing `}` or `,`), the second removes the `u` from the Python unicode prefixes, and the last one restores `ç'` back to `u'` like the original.
21,318,968
I have a textfield in a database that contains the results of a python `json.dumps(list_instance)` operation. As such, the internal fields have a `u'` prefix, and break the browser's `JSON.parse()` function. An example of the JSON string is ``` "density": "{u'Penobscot': 40.75222856500098, u'Sagadahoc': 122.27083333333333, u'Lincoln': 67.97977755308392, u'Kennebec': 123.12237174095878, u'Waldo': 48.02117802779616, u'Cumberland': 288.9285325791363, u'Piscataquis': 3.9373586457405247, u'Hancock': 30.698239582715903, u'Washington': 12.368718341168325, u'Aroostook': 10.827378163074039, u'York': 183.47612497543722, u'Franklin': 16.89330963710371, u'Oxford': 25.171240748402518, u'Somerset': 12.425648288323485, u'Knox': 108.48302300109529, u'Androscoggin': 208.75502815768303}" ``` What I'd like to do is replace those occurrences of `u'` with a `'`(single-quote). I've tried ``` function renderValues(data){ var pop = JSON.parse(data.density.replace(/u'/g, "'")); } ``` but I'm always getting a `unexpected token '` exception. Since many of the possible key fields may contain a `u`, it is not feasable to just delete that character. How can I find all instances of `u'` and replace with `'` without getting the exception?
2014/01/23
[ "https://Stackoverflow.com/questions/21318968", "https://Stackoverflow.com", "https://Stackoverflow.com/users/214892/" ]
The accepted solution is wrong. Your code fails because that string is not valid JSON, and fixing the pseudo-JSON string by replacing characters is also wrong. What you have to do is fix the Python code that is producing that broken JSON string, which I am pretty sure contains a str() or unicode() call where there should be none. What you have as a value for the key "density" is a string instead of a dictionary, and therefore Python returns you something like the following: ``` {"density": "a string that looks like JSON but is in fact a string representation of a dictionary"} ``` The function `json.dumps` will always return valid JSON strings. Fix that and you will not have to hack around with filthy string replacements or whatever. **EDIT** Check the following snippet out. There you can see that the u'...' is just the Python readable representation of a unicode object, and has nothing whatsoever to do with JSON serialization.

```
>>> import json
>>> data = {u'name': u'Manuel', u'age': 26}
>>> print data
{u'age': 26, u'name': u'Manuel'}  # this is the python representation of a dictionary
>>> print json.dumps(data)
{"age": 26, "name": "Manuel"}  # this is a valid JSON string
```

That JSON is not properly formed, as easy as that.
I had a similar issue and made this regex that found all of the u's even if the values had them too. ``` replace(/(?!\s|:)((u)(?='))/g, "") ``` The accepted answer, I found, missed these occurrences. I know the OP's doesn't have 'u' for the values and only for keys but thought this may be useful too :)
20,332,359
Im trying to use python's default logging module in a multiprocessing scenario. I've read: 1. [Python MultiProcess, Logging, Various Classes](https://stackoverflow.com/questions/17582155/python-multiprocess-logging-various-classes) 2. [Logging using multiprocessing](https://stackoverflow.com/questions/10665090/logging-using-multiprocessing) and other multiple posts about multiprocessing, logging, python classes and such. After all this reading I've came to this piece of code I cannot make it properly run which uses python's logutils QueueHandler: ``` import sys import logging from logging import INFO from multiprocessing import Process, Queue as mpQueue import threading import time from logutils.queue import QueueListener, QueueHandler class Worker(Process): def __init__(self, n, q): super(Worker, self).__init__() self.n = n self.queue = q self.qh = QueueHandler(self.queue) self.root = logging.getLogger() self.root.addHandler(self.qh) self.root.setLevel(logging.DEBUG) self.logger = logging.getLogger("W%i"%self.n) def run(self): self.logger.info("Worker %i Starting"%self.n) for i in xrange(10): self.logger.log(INFO, "testing %i"%i) self.logger.log(INFO, "Completed %i"%self.n) def listener_process(queue): while True: try: record = queue.get() if record is None: break logger = logging.getLogger(record.name) logger.handle(record) except (KeyboardInterrupt, SystemExit): raise except: import sys, traceback print >> sys.stderr, 'Whoops! 
Problem:' traceback.print_exc(file=sys.stderr) if __name__ == "__main__": mpq = mpQueue(-1) root = logging.getLogger() h = logging.StreamHandler() f = logging.Formatter('%(asctime)s %(processName)-10s %(name)s %(levelname)-8s %(message)s') h.setFormatter(f) root.addHandler(h) l = logging.getLogger("Test") l.setLevel(logging.DEBUG) listener = Process(target=listener_process, args=(mpq,)) listener.start() workers=[] for i in xrange(1): worker = Worker(i, mpq) worker.daemon = True worker.start() workers.append(worker) for worker in workers: worker.join() mpq.put_nowait(None) listener.join() for i in xrange(10): l.info("testing %i"%i) print "Finish" ``` If the code is executed, the output somehow repeats lines like: ``` 2013-12-02 16:44:46,002 Worker-2 W0 INFO Worker 0 Starting 2013-12-02 16:44:46,003 Worker-2 W0 INFO testing 0 2013-12-02 16:44:46,003 Worker-2 W0 INFO testing 1 2013-12-02 16:44:46,003 Worker-2 W0 INFO testing 2 2013-12-02 16:44:46,002 Worker-2 W0 INFO Worker 0 Starting 2013-12-02 16:44:46,003 Worker-2 W0 INFO testing 3 2013-12-02 16:44:46,003 Worker-2 W0 INFO testing 0 2013-12-02 16:44:46,003 Worker-2 W0 INFO testing 1 2013-12-02 16:44:46,003 Worker-2 W0 INFO testing 4 2013-12-02 16:44:46,003 Worker-2 W0 INFO testing 2 2013-12-02 16:44:46,003 Worker-2 W0 INFO testing 3 2013-12-02 16:44:46,003 Worker-2 W0 INFO testing 5 2013-12-02 16:44:46,003 Worker-2 W0 INFO testing 4 2013-12-02 16:44:46,003 Worker-2 W0 INFO testing 6 2013-12-02 16:44:46,003 Worker-2 W0 INFO testing 5 2013-12-02 16:44:46,004 Worker-2 W0 INFO testing 7 2013-12-02 16:44:46,003 Worker-2 W0 INFO testing 6 2013-12-02 16:44:46,004 Worker-2 W0 INFO testing 8 2013-12-02 16:44:46,004 Worker-2 W0 INFO testing 7 2013-12-02 16:44:46,004 Worker-2 W0 INFO testing 9 2013-12-02 16:44:46,004 Worker-2 W0 INFO testing 8 2013-12-02 16:44:46,004 Worker-2 W0 INFO Completed 0 2013-12-02 16:44:46,004 Worker-2 W0 INFO testing 9 2013-12-02 16:44:46,004 Worker-2 W0 INFO Completed 0 2013-12-02 16:44:46,005 
MainProcess Test INFO testing 0 2013-12-02 16:44:46,005 MainProcess Test INFO testing 1 2013-12-02 16:44:46,005 MainProcess Test INFO testing 2 2013-12-02 16:44:46,005 MainProcess Test INFO testing 3 2013-12-02 16:44:46,005 MainProcess Test INFO testing 4 2013-12-02 16:44:46,005 MainProcess Test INFO testing 5 2013-12-02 16:44:46,006 MainProcess Test INFO testing 6 2013-12-02 16:44:46,006 MainProcess Test INFO testing 7 2013-12-02 16:44:46,006 MainProcess Test INFO testing 8 2013-12-02 16:44:46,006 MainProcess Test INFO testing 9 Finish ``` In other questios it's suggested that the handler gets added more than once, but, as you can see, I only add the streamhanlder once in the **main** method. I've already tested embedding the **main** method into a class with the same result. EDIT: as @max suggested (or what I believe he said) I've modified the code of the worker class as: ``` class Worker(Process): root = logging.getLogger() qh = None def __init__(self, n, q): super(Worker, self).__init__() self.n = n self.queue = q if not self.qh: Worker.qh = QueueHandler(self.queue) Worker.root.addHandler(self.qh) Worker.root.setLevel(logging.DEBUG) self.logger = logging.getLogger("W%i"%self.n) print self.root.handlers def run(self): self.logger.info("Worker %i Starting"%self.n) for i in xrange(10): self.logger.log(INFO, "testing %i"%i) self.logger.log(INFO, "Completed %i"%self.n) ``` With the same results, Now the queue handler is not added again and again but still there are duplicate log entries, even with just one worker. EDIT2: I've changed the code a little bit. I changed the listener process and now use a QueueListener (that's what I intended in the begining anyway), moved the main code to a class. 
``` import sys import logging from logging import INFO from multiprocessing import Process, Queue as mpQueue import threading import time from logutils.queue import QueueListener, QueueHandler root = logging.getLogger() added_qh = False class Worker(Process): def __init__(self, logconf, n, qh): super(Worker, self).__init__() self.n = n self.logconf = logconf # global root global added_qh if not added_qh: added_qh = True root.addHandler(qh) root.setLevel(logging.DEBUG) self.logger = logging.getLogger("W%i"%self.n) #print root.handlers def run(self): self.logger.info("Worker %i Starting"%self.n) for i in xrange(10): self.logger.log(INFO, "testing %i"%i) self.logger.log(INFO, "Completed %i"%self.n) class Main(object): def __init__(self): pass def start(self): mpq = mpQueue(-1) qh = QueueHandler(mpq) h = logging.StreamHandler() ql = QueueListener(mpq, h) #h.setFormatter(f) root.addHandler(qh) l = logging.getLogger("Test") l.setLevel(logging.DEBUG) workers=[] for i in xrange(15): worker = Worker(logconf, i, qh) worker.daemon = True worker.start() workers.append(worker) for worker in workers: print "joining worker: {}".format(worker) worker.join() mpq.put_nowait(None) ql.start() # listener.join() for i in xrange(10): l.info("testing %i"%i) if __name__ == "__main__": x = Main() x.start() time.sleep(10) print "Finish" ``` Now it **mostly** works until I reach a certain number of workers (~15) when for some reason the Main class get blocked in de join and the rest of the workers do nothing.
2013/12/02
[ "https://Stackoverflow.com/questions/20332359", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3057996/" ]
I'm coming late, so you probably don't need the answer anymore. The problem comes from the fact that you already have a handler set in your main process, and in your worker you are adding another one. This means that in your worker process, two handlers are in fact managing your data: the one pushing the log to the queue, and the one writing to the stream. You can fix this simply by adding an extra line `self.root.handlers = []` to your code. From your original code, the `__init__` method of the worker would look like this:

```
def __init__(self, n, q):
    super(Worker, self).__init__()
    self.n = n
    self.queue = q
    self.qh = QueueHandler(self.queue)
    self.root = logging.getLogger()
    self.root.handlers = []
    self.root.addHandler(self.qh)
    self.root.setLevel(logging.DEBUG)
    self.logger = logging.getLogger("W%i"%self.n)
```

The output now looks like this:

```
python workers.py
2016-05-12 10:07:02,971 Worker-2   W0 INFO Worker 0 Starting
2016-05-12 10:07:02,972 Worker-2   W0 INFO testing 0
2016-05-12 10:07:02,973 Worker-2   W0 INFO testing 1
2016-05-12 10:07:02,973 Worker-2   W0 INFO testing 2
2016-05-12 10:07:02,973 Worker-2   W0 INFO testing 3
2016-05-12 10:07:02,973 Worker-2   W0 INFO testing 4
2016-05-12 10:07:02,973 Worker-2   W0 INFO testing 5
2016-05-12 10:07:02,973 Worker-2   W0 INFO testing 6
2016-05-12 10:07:02,973 Worker-2   W0 INFO testing 7
2016-05-12 10:07:02,973 Worker-2   W0 INFO testing 8
2016-05-12 10:07:02,973 Worker-2   W0 INFO testing 9
2016-05-12 10:07:02,973 Worker-2   W0 INFO Completed 0
Finish
```
All your `Worker`s share the same root logger object (obtained in `Worker.__init__` -- the `getLogger` call always returns the same logger). However, every time you create a `Worker`, you add a handler (`QueueHandler`) to that logger. So if you create 10 Workers, you will have 10 (identical) handlers on your root logger, which means output gets repeated 10 times. Instead, you should make the logger a module attribute rather than an instance attribute, and configure it once at the module level -- not at the class level. (actually, loggers should be configured once at the *program* level)
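A quick self-contained sketch of that effect: attaching a second handler to the same logger object, as each `Worker.__init__` effectively did, makes every record come out twice (the logger name here is arbitrary):

```python
import io
import logging

stream = io.StringIO()
logger = logging.getLogger("dup-demo")
logger.propagate = False  # keep the demo output away from the root logger

# Each call attaches another handler object to the same logger instance.
for _ in range(2):
    logger.addHandler(logging.StreamHandler(stream))

logger.warning("hello")
print(stream.getvalue().count("hello"))  # 2
```

With ten workers each adding a `QueueHandler` to the shared root logger, the same mechanism multiplies every record ten times.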
20,332,359
Im trying to use python's default logging module in a multiprocessing scenario. I've read: 1. [Python MultiProcess, Logging, Various Classes](https://stackoverflow.com/questions/17582155/python-multiprocess-logging-various-classes) 2. [Logging using multiprocessing](https://stackoverflow.com/questions/10665090/logging-using-multiprocessing) and other multiple posts about multiprocessing, logging, python classes and such. After all this reading I've came to this piece of code I cannot make it properly run which uses python's logutils QueueHandler: ``` import sys import logging from logging import INFO from multiprocessing import Process, Queue as mpQueue import threading import time from logutils.queue import QueueListener, QueueHandler class Worker(Process): def __init__(self, n, q): super(Worker, self).__init__() self.n = n self.queue = q self.qh = QueueHandler(self.queue) self.root = logging.getLogger() self.root.addHandler(self.qh) self.root.setLevel(logging.DEBUG) self.logger = logging.getLogger("W%i"%self.n) def run(self): self.logger.info("Worker %i Starting"%self.n) for i in xrange(10): self.logger.log(INFO, "testing %i"%i) self.logger.log(INFO, "Completed %i"%self.n) def listener_process(queue): while True: try: record = queue.get() if record is None: break logger = logging.getLogger(record.name) logger.handle(record) except (KeyboardInterrupt, SystemExit): raise except: import sys, traceback print >> sys.stderr, 'Whoops! 
Problem:' traceback.print_exc(file=sys.stderr) if __name__ == "__main__": mpq = mpQueue(-1) root = logging.getLogger() h = logging.StreamHandler() f = logging.Formatter('%(asctime)s %(processName)-10s %(name)s %(levelname)-8s %(message)s') h.setFormatter(f) root.addHandler(h) l = logging.getLogger("Test") l.setLevel(logging.DEBUG) listener = Process(target=listener_process, args=(mpq,)) listener.start() workers=[] for i in xrange(1): worker = Worker(i, mpq) worker.daemon = True worker.start() workers.append(worker) for worker in workers: worker.join() mpq.put_nowait(None) listener.join() for i in xrange(10): l.info("testing %i"%i) print "Finish" ``` If the code is executed, the output somehow repeats lines like: ``` 2013-12-02 16:44:46,002 Worker-2 W0 INFO Worker 0 Starting 2013-12-02 16:44:46,003 Worker-2 W0 INFO testing 0 2013-12-02 16:44:46,003 Worker-2 W0 INFO testing 1 2013-12-02 16:44:46,003 Worker-2 W0 INFO testing 2 2013-12-02 16:44:46,002 Worker-2 W0 INFO Worker 0 Starting 2013-12-02 16:44:46,003 Worker-2 W0 INFO testing 3 2013-12-02 16:44:46,003 Worker-2 W0 INFO testing 0 2013-12-02 16:44:46,003 Worker-2 W0 INFO testing 1 2013-12-02 16:44:46,003 Worker-2 W0 INFO testing 4 2013-12-02 16:44:46,003 Worker-2 W0 INFO testing 2 2013-12-02 16:44:46,003 Worker-2 W0 INFO testing 3 2013-12-02 16:44:46,003 Worker-2 W0 INFO testing 5 2013-12-02 16:44:46,003 Worker-2 W0 INFO testing 4 2013-12-02 16:44:46,003 Worker-2 W0 INFO testing 6 2013-12-02 16:44:46,003 Worker-2 W0 INFO testing 5 2013-12-02 16:44:46,004 Worker-2 W0 INFO testing 7 2013-12-02 16:44:46,003 Worker-2 W0 INFO testing 6 2013-12-02 16:44:46,004 Worker-2 W0 INFO testing 8 2013-12-02 16:44:46,004 Worker-2 W0 INFO testing 7 2013-12-02 16:44:46,004 Worker-2 W0 INFO testing 9 2013-12-02 16:44:46,004 Worker-2 W0 INFO testing 8 2013-12-02 16:44:46,004 Worker-2 W0 INFO Completed 0 2013-12-02 16:44:46,004 Worker-2 W0 INFO testing 9 2013-12-02 16:44:46,004 Worker-2 W0 INFO Completed 0 2013-12-02 16:44:46,005 
MainProcess Test INFO testing 0 2013-12-02 16:44:46,005 MainProcess Test INFO testing 1 2013-12-02 16:44:46,005 MainProcess Test INFO testing 2 2013-12-02 16:44:46,005 MainProcess Test INFO testing 3 2013-12-02 16:44:46,005 MainProcess Test INFO testing 4 2013-12-02 16:44:46,005 MainProcess Test INFO testing 5 2013-12-02 16:44:46,006 MainProcess Test INFO testing 6 2013-12-02 16:44:46,006 MainProcess Test INFO testing 7 2013-12-02 16:44:46,006 MainProcess Test INFO testing 8 2013-12-02 16:44:46,006 MainProcess Test INFO testing 9 Finish ``` In other questions it's suggested that the handler gets added more than once, but, as you can see, I only add the StreamHandler once in the **main** method. I've already tested embedding the **main** method into a class with the same result. EDIT: as @max suggested (or what I believe he said) I've modified the code of the worker class as: ``` class Worker(Process): root = logging.getLogger() qh = None def __init__(self, n, q): super(Worker, self).__init__() self.n = n self.queue = q if not self.qh: Worker.qh = QueueHandler(self.queue) Worker.root.addHandler(self.qh) Worker.root.setLevel(logging.DEBUG) self.logger = logging.getLogger("W%i"%self.n) print self.root.handlers def run(self): self.logger.info("Worker %i Starting"%self.n) for i in xrange(10): self.logger.log(INFO, "testing %i"%i) self.logger.log(INFO, "Completed %i"%self.n) ``` With the same results. Now the queue handler is not added again and again, but still there are duplicate log entries, even with just one worker. EDIT2: I've changed the code a little bit. I changed the listener process and now use a QueueListener (that's what I intended in the beginning anyway), moved the main code to a class.
``` import sys import logging from logging import INFO from multiprocessing import Process, Queue as mpQueue import threading import time from logutils.queue import QueueListener, QueueHandler root = logging.getLogger() added_qh = False class Worker(Process): def __init__(self, logconf, n, qh): super(Worker, self).__init__() self.n = n self.logconf = logconf # global root global added_qh if not added_qh: added_qh = True root.addHandler(qh) root.setLevel(logging.DEBUG) self.logger = logging.getLogger("W%i"%self.n) #print root.handlers def run(self): self.logger.info("Worker %i Starting"%self.n) for i in xrange(10): self.logger.log(INFO, "testing %i"%i) self.logger.log(INFO, "Completed %i"%self.n) class Main(object): def __init__(self): pass def start(self): mpq = mpQueue(-1) qh = QueueHandler(mpq) h = logging.StreamHandler() ql = QueueListener(mpq, h) #h.setFormatter(f) root.addHandler(qh) l = logging.getLogger("Test") l.setLevel(logging.DEBUG) workers=[] for i in xrange(15): worker = Worker(logconf, i, qh) worker.daemon = True worker.start() workers.append(worker) for worker in workers: print "joining worker: {}".format(worker) worker.join() mpq.put_nowait(None) ql.start() # listener.join() for i in xrange(10): l.info("testing %i"%i) if __name__ == "__main__": x = Main() x.start() time.sleep(10) print "Finish" ``` Now it **mostly** works until I reach a certain number of workers (~15), when for some reason the Main class gets blocked in the join and the rest of the workers do nothing.
2013/12/02
[ "https://Stackoverflow.com/questions/20332359", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3057996/" ]
I figured out a pretty simple workaround using monkeypatching. It probably isn't robust and I am not an expert with the logging module, but it seemed like the best solution for my situation. After trying some code-changes (to enable passing in an existing logger, from `multiprocess.get_logger()`) I didn't like how much the code was changing and came up with a quick (well it would have been, had I done this in the first place) easy to read hack/workaround: (working example, complete with multiprocessing pool) ``` import logging import multiprocessing class FakeLogger(object): def __init__(self, q): self.q = q def info(self, item): self.q.put('INFO - {}'.format(item)) def debug(self, item): self.q.put('DEBUG - {}'.format(item)) def critical(self, item): self.q.put('CRITICAL - {}'.format(item)) def warning(self, item): self.q.put('WARNING - {}'.format(item)) def some_other_func_that_gets_logger_and_logs(num): # notice the name gets discarded # of course you can easily add this to your FakeLogger class local_logger = logging.getLogger('local') local_logger.info('Hey I am logging this: {} and working on it to make this {}!'.format(num, num*2)) local_logger.debug('hmm, something may need debugging here') return num*2 def func_to_parallelize(data_chunk): # unpack our args the_num, logger_q = data_chunk # since we're now in a new process, let's monkeypatch the logging module logging.getLogger = lambda name=None: FakeLogger(logger_q) # now do the actual work that happens to log stuff too new_num = some_other_func_that_gets_logger_and_logs(the_num) return (the_num, new_num) if __name__ == '__main__': multiprocessing.freeze_support() m = multiprocessing.Manager() logger_q = m.Queue() # we have to pass our data to be parallel-processed # we also need to pass the Queue object so we can retrieve the logs parallelable_data = [(1, logger_q), (2, logger_q)] # set up a pool of processes so we can take advantage of multiple CPU cores pool_size = multiprocessing.cpu_count() * 2 pool
= multiprocessing.Pool(processes=pool_size, maxtasksperchild=4) worker_output = pool.map(func_to_parallelize, parallelable_data) pool.close() # no more tasks pool.join() # wrap up current tasks # get the contents of our FakeLogger object while not logger_q.empty(): print logger_q.get() print 'worker output contained: {}'.format(worker_output) ``` Of course this is probably not going to cover the whole gamut of `logging` usage, but I think the concept is simple enough here to get working quickly and relatively painlessly. And it should be easy to modify (for example the lambda func discards a possible prefix that can be passed into `getLogger`).
All your `Worker`s share the same root logger object (obtained in `Worker.__init__` -- the `getLogger` call always returns the same logger). However, every time you create a `Worker`, you add a handler (`QueueHandler`) to that logger. So if you create 10 Workers, you will have 10 (identical) handlers on your root logger, which means output gets repeated 10 times. Instead, you should make the logger a module attribute rather than an instance attribute, and configure it once at the module level -- not at the class level. (actually, loggers should be configured once at the *program* level)
20,332,359
I'm trying to use python's default logging module in a multiprocessing scenario. I've read: 1. [Python MultiProcess, Logging, Various Classes](https://stackoverflow.com/questions/17582155/python-multiprocess-logging-various-classes) 2. [Logging using multiprocessing](https://stackoverflow.com/questions/10665090/logging-using-multiprocessing) and multiple other posts about multiprocessing, logging, python classes and such. After all this reading I've come to this piece of code, which uses python's logutils QueueHandler, that I cannot make run properly: ``` import sys import logging from logging import INFO from multiprocessing import Process, Queue as mpQueue import threading import time from logutils.queue import QueueListener, QueueHandler class Worker(Process): def __init__(self, n, q): super(Worker, self).__init__() self.n = n self.queue = q self.qh = QueueHandler(self.queue) self.root = logging.getLogger() self.root.addHandler(self.qh) self.root.setLevel(logging.DEBUG) self.logger = logging.getLogger("W%i"%self.n) def run(self): self.logger.info("Worker %i Starting"%self.n) for i in xrange(10): self.logger.log(INFO, "testing %i"%i) self.logger.log(INFO, "Completed %i"%self.n) def listener_process(queue): while True: try: record = queue.get() if record is None: break logger = logging.getLogger(record.name) logger.handle(record) except (KeyboardInterrupt, SystemExit): raise except: import sys, traceback print >> sys.stderr, 'Whoops!
Problem:' traceback.print_exc(file=sys.stderr) if __name__ == "__main__": mpq = mpQueue(-1) root = logging.getLogger() h = logging.StreamHandler() f = logging.Formatter('%(asctime)s %(processName)-10s %(name)s %(levelname)-8s %(message)s') h.setFormatter(f) root.addHandler(h) l = logging.getLogger("Test") l.setLevel(logging.DEBUG) listener = Process(target=listener_process, args=(mpq,)) listener.start() workers=[] for i in xrange(1): worker = Worker(i, mpq) worker.daemon = True worker.start() workers.append(worker) for worker in workers: worker.join() mpq.put_nowait(None) listener.join() for i in xrange(10): l.info("testing %i"%i) print "Finish" ``` If the code is executed, the output somehow repeats lines like: ``` 2013-12-02 16:44:46,002 Worker-2 W0 INFO Worker 0 Starting 2013-12-02 16:44:46,003 Worker-2 W0 INFO testing 0 2013-12-02 16:44:46,003 Worker-2 W0 INFO testing 1 2013-12-02 16:44:46,003 Worker-2 W0 INFO testing 2 2013-12-02 16:44:46,002 Worker-2 W0 INFO Worker 0 Starting 2013-12-02 16:44:46,003 Worker-2 W0 INFO testing 3 2013-12-02 16:44:46,003 Worker-2 W0 INFO testing 0 2013-12-02 16:44:46,003 Worker-2 W0 INFO testing 1 2013-12-02 16:44:46,003 Worker-2 W0 INFO testing 4 2013-12-02 16:44:46,003 Worker-2 W0 INFO testing 2 2013-12-02 16:44:46,003 Worker-2 W0 INFO testing 3 2013-12-02 16:44:46,003 Worker-2 W0 INFO testing 5 2013-12-02 16:44:46,003 Worker-2 W0 INFO testing 4 2013-12-02 16:44:46,003 Worker-2 W0 INFO testing 6 2013-12-02 16:44:46,003 Worker-2 W0 INFO testing 5 2013-12-02 16:44:46,004 Worker-2 W0 INFO testing 7 2013-12-02 16:44:46,003 Worker-2 W0 INFO testing 6 2013-12-02 16:44:46,004 Worker-2 W0 INFO testing 8 2013-12-02 16:44:46,004 Worker-2 W0 INFO testing 7 2013-12-02 16:44:46,004 Worker-2 W0 INFO testing 9 2013-12-02 16:44:46,004 Worker-2 W0 INFO testing 8 2013-12-02 16:44:46,004 Worker-2 W0 INFO Completed 0 2013-12-02 16:44:46,004 Worker-2 W0 INFO testing 9 2013-12-02 16:44:46,004 Worker-2 W0 INFO Completed 0 2013-12-02 16:44:46,005 
MainProcess Test INFO testing 0 2013-12-02 16:44:46,005 MainProcess Test INFO testing 1 2013-12-02 16:44:46,005 MainProcess Test INFO testing 2 2013-12-02 16:44:46,005 MainProcess Test INFO testing 3 2013-12-02 16:44:46,005 MainProcess Test INFO testing 4 2013-12-02 16:44:46,005 MainProcess Test INFO testing 5 2013-12-02 16:44:46,006 MainProcess Test INFO testing 6 2013-12-02 16:44:46,006 MainProcess Test INFO testing 7 2013-12-02 16:44:46,006 MainProcess Test INFO testing 8 2013-12-02 16:44:46,006 MainProcess Test INFO testing 9 Finish ``` In other questions it's suggested that the handler gets added more than once, but, as you can see, I only add the StreamHandler once in the **main** method. I've already tested embedding the **main** method into a class with the same result. EDIT: as @max suggested (or what I believe he said) I've modified the code of the worker class as: ``` class Worker(Process): root = logging.getLogger() qh = None def __init__(self, n, q): super(Worker, self).__init__() self.n = n self.queue = q if not self.qh: Worker.qh = QueueHandler(self.queue) Worker.root.addHandler(self.qh) Worker.root.setLevel(logging.DEBUG) self.logger = logging.getLogger("W%i"%self.n) print self.root.handlers def run(self): self.logger.info("Worker %i Starting"%self.n) for i in xrange(10): self.logger.log(INFO, "testing %i"%i) self.logger.log(INFO, "Completed %i"%self.n) ``` With the same results. Now the queue handler is not added again and again, but still there are duplicate log entries, even with just one worker. EDIT2: I've changed the code a little bit. I changed the listener process and now use a QueueListener (that's what I intended in the beginning anyway), moved the main code to a class.
``` import sys import logging from logging import INFO from multiprocessing import Process, Queue as mpQueue import threading import time from logutils.queue import QueueListener, QueueHandler root = logging.getLogger() added_qh = False class Worker(Process): def __init__(self, logconf, n, qh): super(Worker, self).__init__() self.n = n self.logconf = logconf # global root global added_qh if not added_qh: added_qh = True root.addHandler(qh) root.setLevel(logging.DEBUG) self.logger = logging.getLogger("W%i"%self.n) #print root.handlers def run(self): self.logger.info("Worker %i Starting"%self.n) for i in xrange(10): self.logger.log(INFO, "testing %i"%i) self.logger.log(INFO, "Completed %i"%self.n) class Main(object): def __init__(self): pass def start(self): mpq = mpQueue(-1) qh = QueueHandler(mpq) h = logging.StreamHandler() ql = QueueListener(mpq, h) #h.setFormatter(f) root.addHandler(qh) l = logging.getLogger("Test") l.setLevel(logging.DEBUG) workers=[] for i in xrange(15): worker = Worker(logconf, i, qh) worker.daemon = True worker.start() workers.append(worker) for worker in workers: print "joining worker: {}".format(worker) worker.join() mpq.put_nowait(None) ql.start() # listener.join() for i in xrange(10): l.info("testing %i"%i) if __name__ == "__main__": x = Main() x.start() time.sleep(10) print "Finish" ``` Now it **mostly** works until I reach a certain number of workers (~15), when for some reason the Main class gets blocked in the join and the rest of the workers do nothing.
2013/12/02
[ "https://Stackoverflow.com/questions/20332359", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3057996/" ]
I'm coming late, so you probably don't need the answer anymore. The problem comes from the fact that you already have a handler set in your main process, and in your worker you are adding another one. This means that in your worker process, two handlers are in fact managing your data: one pushing the log to the queue, and one writing to the stream. You can fix this simply by adding an extra line `self.root.handlers = []` to your code. From your original code, the `__init__` method of the worker would look like this: ``` def __init__(self, n, q): super(Worker, self).__init__() self.n = n self.queue = q self.qh = QueueHandler(self.queue) self.root = logging.getLogger() self.root.handlers = [] self.root.addHandler(self.qh) self.root.setLevel(logging.DEBUG) self.logger = logging.getLogger("W%i"%self.n) ``` The output now looks like this: ``` python workers.py 2016-05-12 10:07:02,971 Worker-2 W0 INFO Worker 0 Starting 2016-05-12 10:07:02,972 Worker-2 W0 INFO testing 0 2016-05-12 10:07:02,973 Worker-2 W0 INFO testing 1 2016-05-12 10:07:02,973 Worker-2 W0 INFO testing 2 2016-05-12 10:07:02,973 Worker-2 W0 INFO testing 3 2016-05-12 10:07:02,973 Worker-2 W0 INFO testing 4 2016-05-12 10:07:02,973 Worker-2 W0 INFO testing 5 2016-05-12 10:07:02,973 Worker-2 W0 INFO testing 6 2016-05-12 10:07:02,973 Worker-2 W0 INFO testing 7 2016-05-12 10:07:02,973 Worker-2 W0 INFO testing 8 2016-05-12 10:07:02,973 Worker-2 W0 INFO testing 9 2016-05-12 10:07:02,973 Worker-2 W0 INFO Completed 0 Finish ```
I figured out a pretty simple workaround using monkeypatching. It probably isn't robust and I am not an expert with the logging module, but it seemed like the best solution for my situation. After trying some code-changes (to enable passing in an existing logger, from `multiprocess.get_logger()`) I didn't like how much the code was changing and came up with a quick (well it would have been, had I done this in the first place) easy to read hack/workaround: (working example, complete with multiprocessing pool) ``` import logging import multiprocessing class FakeLogger(object): def __init__(self, q): self.q = q def info(self, item): self.q.put('INFO - {}'.format(item)) def debug(self, item): self.q.put('DEBUG - {}'.format(item)) def critical(self, item): self.q.put('CRITICAL - {}'.format(item)) def warning(self, item): self.q.put('WARNING - {}'.format(item)) def some_other_func_that_gets_logger_and_logs(num): # notice the name gets discarded # of course you can easily add this to your FakeLogger class local_logger = logging.getLogger('local') local_logger.info('Hey I am logging this: {} and working on it to make this {}!'.format(num, num*2)) local_logger.debug('hmm, something may need debugging here') return num*2 def func_to_parallelize(data_chunk): # unpack our args the_num, logger_q = data_chunk # since we're now in a new process, let's monkeypatch the logging module logging.getLogger = lambda name=None: FakeLogger(logger_q) # now do the actual work that happens to log stuff too new_num = some_other_func_that_gets_logger_and_logs(the_num) return (the_num, new_num) if __name__ == '__main__': multiprocessing.freeze_support() m = multiprocessing.Manager() logger_q = m.Queue() # we have to pass our data to be parallel-processed # we also need to pass the Queue object so we can retrieve the logs parallelable_data = [(1, logger_q), (2, logger_q)] # set up a pool of processes so we can take advantage of multiple CPU cores pool_size = multiprocessing.cpu_count() * 2 pool
= multiprocessing.Pool(processes=pool_size, maxtasksperchild=4) worker_output = pool.map(func_to_parallelize, parallelable_data) pool.close() # no more tasks pool.join() # wrap up current tasks # get the contents of our FakeLogger object while not logger_q.empty(): print logger_q.get() print 'worker output contained: {}'.format(worker_output) ``` Of course this is probably not going to cover the whole gamut of `logging` usage, but I think the concept is simple enough here to get working quickly and relatively painlessly. And it should be easy to modify (for example the lambda func discards a possible prefix that can be passed into `getLogger`).
32,209,155
I'm working through <http://www.mypythonquiz.com>, and [question #45](http://www.mypythonquiz.com/question.php?qid=255) asks for the output of the following code: ``` confusion = {} confusion[1] = 1 confusion['1'] = 2 confusion[1.0] = 4 sum = 0 for k in confusion: sum += confusion[k] print sum ``` The output is `6`, since the key `1.0` replaces `1`. This feels a bit dangerous to me; is this ever a useful language feature?
2015/08/25
[ "https://Stackoverflow.com/questions/32209155", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2401331/" ]
First of all: the behaviour is documented explicitly in the docs for the [hash](https://docs.python.org/3.5/library/functions.html#hash) function: > > **`hash(object)`** > > > Return the hash value of the object (if it has one). Hash values are > integers. They are used to quickly compare dictionary keys during a > dictionary lookup. **Numeric values that compare equal have the same > hash value (even if they are of different types, as is the case for `1` > and `1.0`).** > > > Secondly, a limitation of hashing is pointed out in the docs for [`object.__hash__`](https://docs.python.org/3.5/reference/datamodel.html#object.__hash__) > > **`object.__hash__(self)`** > > > Called by built-in function [`hash()`](https://docs.python.org/3.5/library/functions.html#hash) and for operations on members of > hashed collections including `set`, `frozenset`, and `dict. __hash__()` > should return an integer. **The only required property is that objects > which compare equal have the same hash value;** > > > This is not unique to python. Java has the same caveat: if you implement `hashCode` then, in order for things to work correctly, you **must** implement it in such a way that: `x.equals(y)` implies `x.hashCode() == y.hashCode()`. So, python decided that `1.0 == 1` holds, hence it's *forced* to provide an implementation for `hash` such that `hash(1.0) == hash(1)`. The side effect is that `1.0` and `1` act exactly in the same way as `dict` keys, hence the behaviour. In other words the behaviour in itself doesn't have to be used or useful in any way. **It is necessary**. Without that behaviour there would be cases where you could accidentally overwrite a different key. If we had `1.0 == 1` but `hash(1.0) != hash(1)` we could still have a *collision*. And if `1.0` and `1` collide, the `dict` will use equality to be sure whether they are the same key or not and *kaboom* the value gets overwritten even if you intended them to be different. 
The only way to avoid this would be to have `1.0 != 1`, so that the `dict` is able to distinguish between them even in case of collision. But it was deemed more important to have `1.0 == 1` than to avoid the behaviour you are seeing, since you practically never use `float`s and `int`s as dictionary keys anyway. Since python tries to hide the distinction between numbers by automatically converting them when needed (e.g. `1/2 -> 0.5`) it makes sense that this behaviour is reflected even in such circumstances. It's more consistent with the rest of python. --- This behaviour would appear in *any* implementation where the matching of the keys is at least partially (as in a hash map) based on comparisons. For example if a `dict` was implemented using a red-black tree or another kind of balanced BST, when the key `1.0` is looked up the comparisons with other keys would return the same results as for `1` and so they would still act in the same way. Hash maps require even more care because of the fact that it's the value of the hash that is used to find the entry of the key and comparisons are done only afterwards. So breaking the rule presented above means you'd introduce a bug that's quite hard to spot, because at times the `dict` may seem to work as you'd expect, and at other times, when the size changes, it would start to behave incorrectly. --- Note that there *would* be a way to fix this: have a separate hash map/BST for each type inserted in the dictionary. In this way there couldn't be any collisions between objects of different type, and how `==` compares wouldn't matter when the arguments have different types. However this would complicate the implementation; it would probably also be inefficient, since hash maps have to keep quite a few free locations in order to have O(1) access times. If they become too full, performance decreases.
Having multiple hash maps means wasting more space, and you'd also need to first choose which hash map to look at before even starting the actual lookup of the key. If you used BSTs you'd first have to look up the type and then perform a second lookup. So if you are going to use many types you'd end up with twice the work (and the lookup would take O(log n) instead of O(1)).
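The invariant described above can be checked directly; a quick illustrative snippet:

```python
from decimal import Decimal
from fractions import Fraction

# Numbers that compare equal hash equal, whatever their type...
assert 1 == 1.0 == Fraction(1) == Decimal(1)
assert hash(1) == hash(1.0) == hash(Fraction(1)) == hash(Decimal(1))

# ...so they necessarily address the same dict slot: the later
# assignment overwrites the value stored under the earlier key.
d = {}
d[1] = 'int'
d[1.0] = 'float'
assert d == {1: 'float'} and len(d) == 1
```

Without the hash half of the invariant, the two assignments could land in different buckets and the overwrite would happen only on some collisions, which is exactly the hard-to-spot bug discussed above.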
Frankly, the opposite is dangerous! `1 == 1.0`, so it's easy to imagine that if they pointed to different keys and you tried to access them based on a computed number, you'd likely run into trouble, because the ambiguity is hard to track down. Dynamic typing means that the value is more important than the technical type of something, since the type is malleable (which *is* a very useful feature), so treating `int`s and `float`s of the same value as distinct keys adds unnecessary semantics that will only lead to confusion.
32,209,155
I'm working through <http://www.mypythonquiz.com>, and [question #45](http://www.mypythonquiz.com/question.php?qid=255) asks for the output of the following code: ``` confusion = {} confusion[1] = 1 confusion['1'] = 2 confusion[1.0] = 4 sum = 0 for k in confusion: sum += confusion[k] print sum ``` The output is `6`, since the key `1.0` replaces `1`. This feels a bit dangerous to me; is this ever a useful language feature?
2015/08/25
[ "https://Stackoverflow.com/questions/32209155", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2401331/" ]
In python: ``` 1==1.0 True ``` This is because of implicit casting. However: ``` 1 is 1.0 False ``` I can see why automatic casting between `float` and `int` is handy. It is relatively safe to cast `int` into `float`, and yet there are other languages (e.g. Go) that stay away from implicit casting. It is actually a language design decision and a matter of taste more than one of functionality.
I agree with others that it makes sense to treat `1` and `1.0` as the same in this context. Even if Python did treat them differently, it would probably be a bad idea to try to use `1` and `1.0` as distinct keys for a dictionary. On the other hand -- I have trouble thinking of a natural use-case for using `1.0` as an alias for `1` in the context of keys. The problem is that either the key is literal or it is computed. If it is a literal key then why not just use `1` rather than `1.0`? If it is a computed key -- round off error could muck things up: ``` >>> d = {} >>> d[1] = 5 >>> d[1.0] 5 >>> x = sum(0.01 for i in range(100)) #conceptually this is 1.0 >>> d[x] Traceback (most recent call last): File "<pyshell#12>", line 1, in <module> d[x] KeyError: 1.0000000000000007 ``` So I would say that, generally speaking, the answer to your question "is this ever a useful language feature?" is "No, probably not."
32,209,155
I'm working through <http://www.mypythonquiz.com>, and [question #45](http://www.mypythonquiz.com/question.php?qid=255) asks for the output of the following code: ``` confusion = {} confusion[1] = 1 confusion['1'] = 2 confusion[1.0] = 4 sum = 0 for k in confusion: sum += confusion[k] print sum ``` The output is `6`, since the key `1.0` replaces `1`. This feels a bit dangerous to me; is this ever a useful language feature?
2015/08/25
[ "https://Stackoverflow.com/questions/32209155", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2401331/" ]
You should consider that the `dict` aims at storing data depending on the logical numeric value, not on how you represent it. The difference between `int`s and `float`s is indeed just an implementation detail and not conceptual. Ideally the only number type should be an arbitrary-precision number with unbounded accuracy, even below unity... this is however hard to implement without getting into trouble... but maybe that will be the only future numeric type for Python. So while having different types for technical reasons, Python tries to hide these implementation details, and `int`->`float` conversion is automatic. It would be much more surprising if in a Python program `if x == 1: ...` wasn't going to be taken when `x` is a `float` with value 1. Note also that in Python 3 the value of `1/2` is `0.5` (the division of two integers) and that the types `long` and non-unicode string have been dropped, in the same attempt to hide implementation details.
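A tiny illustration of that last point (assuming Python 3 for the division behaviour):

```python
x = 1.0

# The int/float distinction is hidden in comparisons, so the branch
# is taken even though x is a float:
taken = False
if x == 1:
    taken = True
assert taken

# Python 3 pushes in the same direction: dividing two ints yields a float.
assert 1 / 2 == 0.5
assert type(1 / 2) is float
```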
In python: ``` 1==1.0 True ``` This is because of implicit casting. However: ``` 1 is 1.0 False ``` I can see why automatic casting between `float` and `int` is handy. It is relatively safe to cast `int` into `float`, and yet there are other languages (e.g. Go) that stay away from implicit casting. It is actually a language design decision and a matter of taste more than one of functionality.
32,209,155
I'm working through <http://www.mypythonquiz.com>, and [question #45](http://www.mypythonquiz.com/question.php?qid=255) asks for the output of the following code: ``` confusion = {} confusion[1] = 1 confusion['1'] = 2 confusion[1.0] = 4 sum = 0 for k in confusion: sum += confusion[k] print sum ``` The output is `6`, since the key `1.0` replaces `1`. This feels a bit dangerous to me; is this ever a useful language feature?
2015/08/25
[ "https://Stackoverflow.com/questions/32209155", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2401331/" ]
Dictionaries are implemented with a hash table. To look up something in a hash table, you start at the position indicated by the hash value, then search different locations until you find a key value that's equal or an empty bucket. If you have two key values that compare equal but have different hashes, you may get inconsistent results depending on whether the other key value was in the searched locations or not. This would become more likely, for example, as the table fills up. This is something you want to avoid. It appears that the Python developers had this in mind, since the built-in `hash` function returns the same hash for equivalent numeric values, no matter if those values are `int` or `float`. Note that this extends to other numeric types: `False` is equal to `0` and `True` is equal to `1`. Even `fractions.Fraction` and `decimal.Decimal` uphold this property. The requirement that if `a == b` then `hash(a) == hash(b)` is documented in the definition of [`object.__hash__()`](https://docs.python.org/2/reference/datamodel.html#object.__hash__): > > Called by built-in function `hash()` and for operations on members of hashed collections including `set`, `frozenset`, and `dict`. `__hash__()` should return an integer. The only required property is that objects which compare equal have the same hash value; it is advised to somehow mix together (e.g. using exclusive or) the hash values for the components of the object that also play a part in comparison of objects. > > > **TL;DR:** a dictionary would break if keys that compared equal did not map to the same value.
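That consistency is easy to observe across the built-in numeric types (illustrative snippet):

```python
d = {0: 'zero', 1: 'one'}

# A lookup starts from the slot given by hash(key); because numbers that
# compare equal also hash equal, any equal-comparing key finds the entry.
assert d[0.0] == 'zero'
assert d[False] == 'zero'   # False == 0 and hash(False) == hash(0)
assert d[1.0] == 'one'
assert d[True] == 'one'     # True == 1 and hash(True) == hash(1)

assert hash(False) == hash(0) == hash(0.0)
assert hash(True) == hash(1) == hash(1.0)
```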
Frankly, the opposite is dangerous! `1 == 1.0`, so it's easy to imagine that if they pointed to different keys and you tried to access them based on a computed number, you'd likely run into trouble, because the ambiguity is hard to track down. Dynamic typing means that the value is more important than the technical type of something, since the type is malleable (which *is* a very useful feature), so treating `int`s and `float`s of the same value as distinct keys adds unnecessary semantics that will only lead to confusion.
32,209,155
I'm working through <http://www.mypythonquiz.com>, and [question #45](http://www.mypythonquiz.com/question.php?qid=255) asks for the output of the following code: ``` confusion = {} confusion[1] = 1 confusion['1'] = 2 confusion[1.0] = 4 sum = 0 for k in confusion: sum += confusion[k] print sum ``` The output is `6`, since the key `1.0` replaces `1`. This feels a bit dangerous to me; is this ever a useful language feature?
2015/08/25
[ "https://Stackoverflow.com/questions/32209155", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2401331/" ]
You should consider that the `dict` aims at storing data depending on the logical numeric value, not on how you represented it. The difference between `int`s and `float`s is just an implementation detail, not a conceptual one. Ideally, the only number type would be an arbitrary-precision number with unbounded accuracy, even sub-unity... this is however hard to implement without getting into trouble... but maybe that will one day be the only numeric type for Python. So while Python has different types for technical reasons, it tries to hide these implementation details, and `int`->`float` conversion is automatic. It would be much more surprising if, in a Python program, `if x == 1: ...` weren't taken when `x` is a `float` with value 1. Note also that in Python 3 the value of `1/2` is `0.5` (the division of two integers), and that the `long` and non-Unicode string types have been dropped in the same attempt to hide implementation details.
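A small sketch of what this hiding of the int/float distinction looks like in practice:

```python
x = 1.0
if x == 1:                       # an int literal happily matches a float value
    print("taken")

print(1 / 2)                     # 0.5 -- true division blurs the int/float line
print({1: "int", 1.0: "float"})  # {1: 'float'}: one logical key, last value wins
```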
Dictionaries are implemented with a hash table. To look up something in a hash table, you start at the position indicated by the hash value, then search different locations until you find a key value that's equal or an empty bucket. If you have two key values that compare equal but have different hashes, you may get inconsistent results depending on whether the other key value was in the searched locations or not. For example this would be more likely as the table gets full. This is something you want to avoid. It appears that the Python developers had this in mind, since the built-in `hash` function returns the same hash for equivalent numeric values, no matter if those values are `int` or `float`. Note that this extends to other numeric types, `False` is equal to `0` and `True` is equal to `1`. Even `fractions.Fraction` and `decimal.Decimal` uphold this property. The requirement that if `a == b` then `hash(a) == hash(b)` is documented in the definition of [`object.__hash__()`](https://docs.python.org/2/reference/datamodel.html#object.__hash__): > > Called by built-in function `hash()` and for operations on members of hashed collections including `set`, `frozenset`, and `dict`. `__hash__()` should return an integer. The only required property is that objects which compare equal have the same hash value; it is advised to somehow mix together (e.g. using exclusive or) the hash values for the components of the object that also play a part in comparison of objects. > > > **TL;DR:** a dictionary would break if keys that compared equal did not map to the same value.
32,209,155
I'm working through <http://www.mypythonquiz.com>, and [question #45](http://www.mypythonquiz.com/question.php?qid=255) asks for the output of the following code: ``` confusion = {} confusion[1] = 1 confusion['1'] = 2 confusion[1.0] = 4 sum = 0 for k in confusion: sum += confusion[k] print sum ``` The output is `6`, since the key `1.0` replaces `1`. This feels a bit dangerous to me, is this ever a useful language feature?
2015/08/25
[ "https://Stackoverflow.com/questions/32209155", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2401331/" ]
In Python: ``` 1 == 1.0 True ``` This is because of implicit casting. However: ``` 1 is 1.0 False ``` I can see why automatic casting between `float` and `int` is handy; it is relatively safe to cast an `int` into a `float`, and yet there are other languages (e.g. Go) that stay away from implicit casting. It is actually a language design decision and a matter of taste more than a difference in functionality.
Frankly, the opposite would be dangerous! `1 == 1.0`, so it is easy to imagine that if they pointed to different keys and you tried to access entries via a computed number, you would likely run into trouble, because the ambiguity would be hard to track down. Dynamic typing means that the value is more important than the technical type of something, since the type is malleable (which *is* a very useful feature), so distinguishing `int`s and `float`s of the same value as distinct keys is unnecessary semantics that will only lead to confusion.
32,209,155
I'm working through <http://www.mypythonquiz.com>, and [question #45](http://www.mypythonquiz.com/question.php?qid=255) asks for the output of the following code: ``` confusion = {} confusion[1] = 1 confusion['1'] = 2 confusion[1.0] = 4 sum = 0 for k in confusion: sum += confusion[k] print sum ``` The output is `6`, since the key `1.0` replaces `1`. This feels a bit dangerous to me, is this ever a useful language feature?
2015/08/25
[ "https://Stackoverflow.com/questions/32209155", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2401331/" ]
Dictionaries are implemented with a hash table. To look up something in a hash table, you start at the position indicated by the hash value, then search different locations until you find a key value that's equal or an empty bucket. If you have two key values that compare equal but have different hashes, you may get inconsistent results depending on whether the other key value was in the searched locations or not. For example this would be more likely as the table gets full. This is something you want to avoid. It appears that the Python developers had this in mind, since the built-in `hash` function returns the same hash for equivalent numeric values, no matter if those values are `int` or `float`. Note that this extends to other numeric types, `False` is equal to `0` and `True` is equal to `1`. Even `fractions.Fraction` and `decimal.Decimal` uphold this property. The requirement that if `a == b` then `hash(a) == hash(b)` is documented in the definition of [`object.__hash__()`](https://docs.python.org/2/reference/datamodel.html#object.__hash__): > > Called by built-in function `hash()` and for operations on members of hashed collections including `set`, `frozenset`, and `dict`. `__hash__()` should return an integer. The only required property is that objects which compare equal have the same hash value; it is advised to somehow mix together (e.g. using exclusive or) the hash values for the components of the object that also play a part in comparison of objects. > > > **TL;DR:** a dictionary would break if keys that compared equal did not map to the same value.
I agree with others that it makes sense to treat `1` and `1.0` as the same in this context. Even if Python did treat them differently, it would probably be a bad idea to try to use `1` and `1.0` as distinct keys for a dictionary. On the other hand -- I have trouble thinking of a natural use-case for using `1.0` as an alias for `1` in the context of keys. The problem is that either the key is literal or it is computed. If it is a literal key then why not just use `1` rather than `1.0`? If it is a computed key -- round off error could muck things up: ``` >>> d = {} >>> d[1] = 5 >>> d[1.0] 5 >>> x = sum(0.01 for i in range(100)) #conceptually this is 1.0 >>> d[x] Traceback (most recent call last): File "<pyshell#12>", line 1, in <module> d[x] KeyError: 1.0000000000000007 ``` So I would say that, generally speaking, the answer to your question "is this ever a useful language feature?" is "No, probably not."
32,209,155
I'm working through <http://www.mypythonquiz.com>, and [question #45](http://www.mypythonquiz.com/question.php?qid=255) asks for the output of the following code: ``` confusion = {} confusion[1] = 1 confusion['1'] = 2 confusion[1.0] = 4 sum = 0 for k in confusion: sum += confusion[k] print sum ``` The output is `6`, since the key `1.0` replaces `1`. This feels a bit dangerous to me, is this ever a useful language feature?
2015/08/25
[ "https://Stackoverflow.com/questions/32209155", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2401331/" ]
First of all: the behaviour is documented explicitly in the docs for the [hash](https://docs.python.org/3.5/library/functions.html#hash) function: > > **`hash(object)`** > > > Return the hash value of the object (if it has one). Hash values are > integers. They are used to quickly compare dictionary keys during a > dictionary lookup. **Numeric values that compare equal have the same > hash value (even if they are of different types, as is the case for `1` > and `1.0`).** > > > Secondly, a limitation of hashing is pointed out in the docs for [`object.__hash__`](https://docs.python.org/3.5/reference/datamodel.html#object.__hash__) > > **`object.__hash__(self)`** > > > Called by built-in function [`hash()`](https://docs.python.org/3.5/library/functions.html#hash) and for operations on members of > hashed collections including `set`, `frozenset`, and `dict. __hash__()` > should return an integer. **The only required property is that objects > which compare equal have the same hash value;** > > > This is not unique to python. Java has the same caveat: if you implement `hashCode` then, in order for things to work correctly, you **must** implement it in such a way that: `x.equals(y)` implies `x.hashCode() == y.hashCode()`. So, python decided that `1.0 == 1` holds, hence it's *forced* to provide an implementation for `hash` such that `hash(1.0) == hash(1)`. The side effect is that `1.0` and `1` act exactly in the same way as `dict` keys, hence the behaviour. In other words the behaviour in itself doesn't have to be used or useful in any way. **It is necessary**. Without that behaviour there would be cases where you could accidentally overwrite a different key. If we had `1.0 == 1` but `hash(1.0) != hash(1)` we could still have a *collision*. And if `1.0` and `1` collide, the `dict` will use equality to be sure whether they are the same key or not and *kaboom* the value gets overwritten even if you intended them to be different. 
The only way to avoid this would be to have `1.0 != 1`, so that the `dict` could distinguish between them even in case of collision. But it was deemed more important to have `1.0 == 1` than to avoid the behaviour you are seeing, since you practically never use `float`s and `int`s as dictionary keys anyway. Since Python tries to hide the distinction between numbers by automatically converting them when needed (e.g. `1/2 -> 0.5`), it makes sense that this behaviour is reflected even in such circumstances. It's more consistent with the rest of Python. --- This behaviour would appear in *any* implementation where the matching of the keys is at least partially (as in a hash map) based on comparisons. For example, if a `dict` were implemented using a red-black tree or another kind of balanced BST, when the key `1.0` is looked up the comparisons with other keys would return the same results as for `1`, and so they would still act in the same way. Hash maps require even more care because it's the value of the hash that is used to find the entry of the key, and comparisons are done only afterwards. So breaking the rule presented above means you'd introduce a bug that's quite hard to spot, because at times the `dict` may seem to work as you'd expect, and at other times, when the size changes, it would start to behave incorrectly. --- Note that there *would* be a way to fix this: have a separate hash map/BST for each type inserted in the dictionary. In this way there couldn't be any collisions between objects of different types, and how `==` compares wouldn't matter when the arguments have different types. However this would complicate the implementation; it would probably also be inefficient, since hash maps have to keep quite a few free locations in order to have O(1) access times. If they become too full, performance decreases.
Having multiple hash maps means wasting more space, and you'd also need to first choose which hash map to look at before even starting the actual lookup of the key. If you used BSTs, you'd first have to look up the type and then perform a second lookup. So if you use many types you'd end up with twice the work (and the lookup would take O(log n) instead of O(1)).
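To see concretely why the `a == b` implies `hash(a) == hash(b)` rule is necessary, here is a deliberately broken key class (a hypothetical `BadKey`, written just for this demonstration) whose hash violates the contract. Lookups then silently miss a key that compares equal:

```python
class BadKey:
    """Compares equal to an int but violates the hash contract on purpose."""
    def __init__(self, n):
        self.n = n

    def __eq__(self, other):
        return self.n == other

    def __hash__(self):
        return hash(self.n) + 1  # a == b no longer implies hash(a) == hash(b)

d = {1: "value"}
print(BadKey(1) == 1)   # True  -- the two keys compare equal...
print(BadKey(1) in d)   # False -- ...but the wrong hash probes the wrong bucket
```

The dict probes the table slot indicated by the (wrong) hash, finds it empty, and concludes the key is absent without ever calling `==` against the stored key `1`.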
You should consider that the `dict` aims at storing data depending on the logical numeric value, not on how you represented it. The difference between `int`s and `float`s is indeed just an implementation detail and not conceptual. Ideally the only number type should be an arbitrary precision number with unbounded accuracy even sub-unity... this is however hard to implement without getting into troubles... but may be that will be the only future numeric type for Python. So while having different types for technical reasons Python tries to hide these implementation details and `int`->`float` conversion is automatic. It would be much more surprising if in a Python program `if x == 1: ...` wasn't going to be taken when `x` is a `float` with value 1. Note that also with Python 3 the value of `1/2` is `0.5` (the division of two integers) and that the types `long` and non-unicode string have been dropped with the same attempt to hide implementation details.
32,209,155
I'm working through <http://www.mypythonquiz.com>, and [question #45](http://www.mypythonquiz.com/question.php?qid=255) asks for the output of the following code: ``` confusion = {} confusion[1] = 1 confusion['1'] = 2 confusion[1.0] = 4 sum = 0 for k in confusion: sum += confusion[k] print sum ``` The output is `6`, since the key `1.0` replaces `1`. This feels a bit dangerous to me, is this ever a useful language feature?
2015/08/25
[ "https://Stackoverflow.com/questions/32209155", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2401331/" ]
First of all: the behaviour is documented explicitly in the docs for the [hash](https://docs.python.org/3.5/library/functions.html#hash) function: > > **`hash(object)`** > > > Return the hash value of the object (if it has one). Hash values are > integers. They are used to quickly compare dictionary keys during a > dictionary lookup. **Numeric values that compare equal have the same > hash value (even if they are of different types, as is the case for `1` > and `1.0`).** > > > Secondly, a limitation of hashing is pointed out in the docs for [`object.__hash__`](https://docs.python.org/3.5/reference/datamodel.html#object.__hash__) > > **`object.__hash__(self)`** > > > Called by built-in function [`hash()`](https://docs.python.org/3.5/library/functions.html#hash) and for operations on members of > hashed collections including `set`, `frozenset`, and `dict. __hash__()` > should return an integer. **The only required property is that objects > which compare equal have the same hash value;** > > > This is not unique to python. Java has the same caveat: if you implement `hashCode` then, in order for things to work correctly, you **must** implement it in such a way that: `x.equals(y)` implies `x.hashCode() == y.hashCode()`. So, python decided that `1.0 == 1` holds, hence it's *forced* to provide an implementation for `hash` such that `hash(1.0) == hash(1)`. The side effect is that `1.0` and `1` act exactly in the same way as `dict` keys, hence the behaviour. In other words the behaviour in itself doesn't have to be used or useful in any way. **It is necessary**. Without that behaviour there would be cases where you could accidentally overwrite a different key. If we had `1.0 == 1` but `hash(1.0) != hash(1)` we could still have a *collision*. And if `1.0` and `1` collide, the `dict` will use equality to be sure whether they are the same key or not and *kaboom* the value gets overwritten even if you intended them to be different. 
The only way to avoid this would be to have `1.0 != 1`, so that the `dict` could distinguish between them even in case of collision. But it was deemed more important to have `1.0 == 1` than to avoid the behaviour you are seeing, since you practically never use `float`s and `int`s as dictionary keys anyway. Since Python tries to hide the distinction between numbers by automatically converting them when needed (e.g. `1/2 -> 0.5`), it makes sense that this behaviour is reflected even in such circumstances. It's more consistent with the rest of Python. --- This behaviour would appear in *any* implementation where the matching of the keys is at least partially (as in a hash map) based on comparisons. For example, if a `dict` were implemented using a red-black tree or another kind of balanced BST, when the key `1.0` is looked up the comparisons with other keys would return the same results as for `1`, and so they would still act in the same way. Hash maps require even more care because it's the value of the hash that is used to find the entry of the key, and comparisons are done only afterwards. So breaking the rule presented above means you'd introduce a bug that's quite hard to spot, because at times the `dict` may seem to work as you'd expect, and at other times, when the size changes, it would start to behave incorrectly. --- Note that there *would* be a way to fix this: have a separate hash map/BST for each type inserted in the dictionary. In this way there couldn't be any collisions between objects of different types, and how `==` compares wouldn't matter when the arguments have different types. However this would complicate the implementation; it would probably also be inefficient, since hash maps have to keep quite a few free locations in order to have O(1) access times. If they become too full, performance decreases.
Having multiple hash maps means wasting more space, and you'd also need to first choose which hash map to look at before even starting the actual lookup of the key. If you used BSTs, you'd first have to look up the type and then perform a second lookup. So if you use many types you'd end up with twice the work (and the lookup would take O(log n) instead of O(1)).
Dictionaries are implemented with a hash table. To look up something in a hash table, you start at the position indicated by the hash value, then search different locations until you find a key value that's equal or an empty bucket. If you have two key values that compare equal but have different hashes, you may get inconsistent results depending on whether the other key value was in the searched locations or not. For example this would be more likely as the table gets full. This is something you want to avoid. It appears that the Python developers had this in mind, since the built-in `hash` function returns the same hash for equivalent numeric values, no matter if those values are `int` or `float`. Note that this extends to other numeric types, `False` is equal to `0` and `True` is equal to `1`. Even `fractions.Fraction` and `decimal.Decimal` uphold this property. The requirement that if `a == b` then `hash(a) == hash(b)` is documented in the definition of [`object.__hash__()`](https://docs.python.org/2/reference/datamodel.html#object.__hash__): > > Called by built-in function `hash()` and for operations on members of hashed collections including `set`, `frozenset`, and `dict`. `__hash__()` should return an integer. The only required property is that objects which compare equal have the same hash value; it is advised to somehow mix together (e.g. using exclusive or) the hash values for the components of the object that also play a part in comparison of objects. > > > **TL;DR:** a dictionary would break if keys that compared equal did not map to the same value.
32,209,155
I'm working through <http://www.mypythonquiz.com>, and [question #45](http://www.mypythonquiz.com/question.php?qid=255) asks for the output of the following code: ``` confusion = {} confusion[1] = 1 confusion['1'] = 2 confusion[1.0] = 4 sum = 0 for k in confusion: sum += confusion[k] print sum ``` The output is `6`, since the key `1.0` replaces `1`. This feels a bit dangerous to me, is this ever a useful language feature?
2015/08/25
[ "https://Stackoverflow.com/questions/32209155", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2401331/" ]
First of all: the behaviour is documented explicitly in the docs for the [hash](https://docs.python.org/3.5/library/functions.html#hash) function: > > **`hash(object)`** > > > Return the hash value of the object (if it has one). Hash values are > integers. They are used to quickly compare dictionary keys during a > dictionary lookup. **Numeric values that compare equal have the same > hash value (even if they are of different types, as is the case for `1` > and `1.0`).** > > > Secondly, a limitation of hashing is pointed out in the docs for [`object.__hash__`](https://docs.python.org/3.5/reference/datamodel.html#object.__hash__) > > **`object.__hash__(self)`** > > > Called by built-in function [`hash()`](https://docs.python.org/3.5/library/functions.html#hash) and for operations on members of > hashed collections including `set`, `frozenset`, and `dict. __hash__()` > should return an integer. **The only required property is that objects > which compare equal have the same hash value;** > > > This is not unique to python. Java has the same caveat: if you implement `hashCode` then, in order for things to work correctly, you **must** implement it in such a way that: `x.equals(y)` implies `x.hashCode() == y.hashCode()`. So, python decided that `1.0 == 1` holds, hence it's *forced* to provide an implementation for `hash` such that `hash(1.0) == hash(1)`. The side effect is that `1.0` and `1` act exactly in the same way as `dict` keys, hence the behaviour. In other words the behaviour in itself doesn't have to be used or useful in any way. **It is necessary**. Without that behaviour there would be cases where you could accidentally overwrite a different key. If we had `1.0 == 1` but `hash(1.0) != hash(1)` we could still have a *collision*. And if `1.0` and `1` collide, the `dict` will use equality to be sure whether they are the same key or not and *kaboom* the value gets overwritten even if you intended them to be different. 
The only way to avoid this would be to have `1.0 != 1`, so that the `dict` could distinguish between them even in case of collision. But it was deemed more important to have `1.0 == 1` than to avoid the behaviour you are seeing, since you practically never use `float`s and `int`s as dictionary keys anyway. Since Python tries to hide the distinction between numbers by automatically converting them when needed (e.g. `1/2 -> 0.5`), it makes sense that this behaviour is reflected even in such circumstances. It's more consistent with the rest of Python. --- This behaviour would appear in *any* implementation where the matching of the keys is at least partially (as in a hash map) based on comparisons. For example, if a `dict` were implemented using a red-black tree or another kind of balanced BST, when the key `1.0` is looked up the comparisons with other keys would return the same results as for `1`, and so they would still act in the same way. Hash maps require even more care because it's the value of the hash that is used to find the entry of the key, and comparisons are done only afterwards. So breaking the rule presented above means you'd introduce a bug that's quite hard to spot, because at times the `dict` may seem to work as you'd expect, and at other times, when the size changes, it would start to behave incorrectly. --- Note that there *would* be a way to fix this: have a separate hash map/BST for each type inserted in the dictionary. In this way there couldn't be any collisions between objects of different types, and how `==` compares wouldn't matter when the arguments have different types. However this would complicate the implementation; it would probably also be inefficient, since hash maps have to keep quite a few free locations in order to have O(1) access times. If they become too full, performance decreases.
Having multiple hash maps means wasting more space, and you'd also need to first choose which hash map to look at before even starting the actual lookup of the key. If you used BSTs, you'd first have to look up the type and then perform a second lookup. So if you use many types you'd end up with twice the work (and the lookup would take O(log n) instead of O(1)).
I agree with others that it makes sense to treat `1` and `1.0` as the same in this context. Even if Python did treat them differently, it would probably be a bad idea to try to use `1` and `1.0` as distinct keys for a dictionary. On the other hand -- I have trouble thinking of a natural use-case for using `1.0` as an alias for `1` in the context of keys. The problem is that either the key is literal or it is computed. If it is a literal key then why not just use `1` rather than `1.0`? If it is a computed key -- round off error could muck things up: ``` >>> d = {} >>> d[1] = 5 >>> d[1.0] 5 >>> x = sum(0.01 for i in range(100)) #conceptually this is 1.0 >>> d[x] Traceback (most recent call last): File "<pyshell#12>", line 1, in <module> d[x] KeyError: 1.0000000000000007 ``` So I would say that, generally speaking, the answer to your question "is this ever a useful language feature?" is "No, probably not."
8,758,354
I've been using Python 3 for some months and I would like to create some GUIs. Does anyone know a good GUI Python GUI framework I could use for this? I don't want to use [TkInter](http://wiki.python.org/moin/TkInter) because I don't think it's very good. I also don't want to use [PyQt](http://wiki.python.org/moin/PyQt) due to its licensing requirements in a commercial application.
2012/01/06
[ "https://Stackoverflow.com/questions/8758354", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1114830/" ]
Hmmm... hard to believe that Qt is forbidden for commercial use, as it has been created by some of the most important companies in the world: <http://qt.nokia.com/> Go for PyQt ;)
PySide might be the best bet for you: <http://www.pyside.org/> It is basically Qt but under the LGPL license, which means you can use it in your commercial application.
8,758,354
I've been using Python 3 for some months and I would like to create some GUIs. Does anyone know a good GUI Python GUI framework I could use for this? I don't want to use [TkInter](http://wiki.python.org/moin/TkInter) because I don't think it's very good. I also don't want to use [PyQt](http://wiki.python.org/moin/PyQt) due to its licensing requirements in a commercial application.
2012/01/06
[ "https://Stackoverflow.com/questions/8758354", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1114830/" ]
First of all, I suggest you stay with Python 2.x if you want to develop commercial products **at this moment**, because it is still the most widely available version of Python. Currently, Ubuntu ships with 2.7.2, and OS X Lion with 2.7.2 too. Regarding PyQt, you can use Nokia's re-implementation of it, [PySide](http://pyside.org "PySide"). It is under the LGPL, so yes, you can create commercial products. Moreover, Qt itself has also transitioned to the LGPL. See the [Qt License](http://qt.nokia.com/products/licensing) here. Update: Additionally, support for Python 3.x is still under development for many GUI frameworks, PySide included.
Hmmm... hard to believe that Qt is forbidden for commercial use, as it has been created by some of the most important companies in the world: <http://qt.nokia.com/> Go for PyQt ;)
8,758,354
I've been using Python 3 for some months and I would like to create some GUIs. Does anyone know a good GUI Python GUI framework I could use for this? I don't want to use [TkInter](http://wiki.python.org/moin/TkInter) because I don't think it's very good. I also don't want to use [PyQt](http://wiki.python.org/moin/PyQt) due to its licensing requirements in a commercial application.
2012/01/06
[ "https://Stackoverflow.com/questions/8758354", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1114830/" ]
Hmmm... hard to believe that Qt is forbidden for commercial use, as it has been created by some of the most important companies in the world: <http://qt.nokia.com/> Go for PyQt ;)
Well, if you feel Qt is not suitable (though that's hard to believe), you could switch to wxPython. It too has a good learning curve and can satisfy your commercial needs.
8,758,354
I've been using Python 3 for some months and I would like to create some GUIs. Does anyone know a good GUI Python GUI framework I could use for this? I don't want to use [TkInter](http://wiki.python.org/moin/TkInter) because I don't think it's very good. I also don't want to use [PyQt](http://wiki.python.org/moin/PyQt) due to its licensing requirements in a commercial application.
2012/01/06
[ "https://Stackoverflow.com/questions/8758354", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1114830/" ]
First of all, I suggest you stay with Python 2.x if you want to develop commercial products **at this moment**, because it is still the most widely available version of Python. Currently, Ubuntu ships with 2.7.2, and OS X Lion with 2.7.2 too. Regarding PyQt, you can use Nokia's re-implementation of it, [PySide](http://pyside.org "PySide"). It is under the LGPL, so yes, you can create commercial products. Moreover, Qt itself has also transitioned to the LGPL. See the [Qt License](http://qt.nokia.com/products/licensing) here. Update: Additionally, support for Python 3.x is still under development for many GUI frameworks, PySide included.
PySide might be the best bet for you: <http://www.pyside.org/> It is basically Qt but under the LGPL license, which means you can use it in your commercial application.
8,758,354
I've been using Python 3 for some months and I would like to create some GUIs. Does anyone know a good GUI Python GUI framework I could use for this? I don't want to use [TkInter](http://wiki.python.org/moin/TkInter) because I don't think it's very good. I also don't want to use [PyQt](http://wiki.python.org/moin/PyQt) due to its licensing requirements in a commercial application.
2012/01/06
[ "https://Stackoverflow.com/questions/8758354", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1114830/" ]
You probably mean that **PyQt** can only be used for GPL projects. However, the equivalent [PySide](http://www.pyside.org "PySide") Python bindings for QT are LGPL, like QT itself, so you *can* use those; unfortunately, they only support Python 2.5/7 at the moment. If you don't mind being cross-platform, you can fall back on the win32api stuff (bleh), or go the hybrid way with [Jython](http://www.jython.org "Jython") (which supports Swing as well as any other Java-based toolkit) or IronPython (which uses .Net).
PySide might be the best bet for you: <http://www.pyside.org/> It is basically Qt but under the LGPL license, which means you can use it in your commercial application.
8,758,354
I've been using Python 3 for some months and I would like to create some GUIs. Does anyone know a good GUI Python GUI framework I could use for this? I don't want to use [TkInter](http://wiki.python.org/moin/TkInter) because I don't think it's very good. I also don't want to use [PyQt](http://wiki.python.org/moin/PyQt) due to its licensing requirements in a commercial application.
2012/01/06
[ "https://Stackoverflow.com/questions/8758354", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1114830/" ]
First of all, I suggest you stay with Python 2.x if you want to develop commercial products **at this moment**, because it is still the most widely available version of Python. Currently, Ubuntu ships with 2.7.2, and OS X Lion with 2.7.2 too. Regarding PyQt, you can use Nokia's re-implementation of it, [PySide](http://pyside.org "PySide"). It is under the LGPL, so yes, you can create commercial products. Moreover, Qt itself has also transitioned to the LGPL. See the [Qt License](http://qt.nokia.com/products/licensing) here. Update: Additionally, support for Python 3.x is still under development for many GUI frameworks, PySide included.
You probably mean that **PyQt** can only be used for GPL projects. However, the equivalent [PySide](http://www.pyside.org "PySide") Python bindings for QT are LGPL, like QT itself, so you *can* use those; unfortunately, they only support Python 2.5/7 at the moment. If you don't mind being cross-platform, you can fall back on the win32api stuff (bleh), or go the hybrid way with [Jython](http://www.jython.org "Jython") (which supports Swing as well as any other Java-based toolkit) or IronPython (which uses .Net).
8,758,354
I've been using Python 3 for some months and I would like to create some GUIs. Does anyone know a good Python GUI framework I could use for this? I don't want to use [TkInter](http://wiki.python.org/moin/TkInter) because I don't think it's very good. I also don't want to use [PyQt](http://wiki.python.org/moin/PyQt) due to its licensing requirements in a commercial application.
2012/01/06
[ "https://Stackoverflow.com/questions/8758354", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1114830/" ]
First of all, I suggest you stay with Python 2.x if you want to develop commercial products **at this moment**, because it is still the most widely available version of Python: currently, Ubuntu ships with 2.7.2 and OS X Lion with 2.7.2, too. Regarding PyQt, you can use Nokia's re-implementation of it, [PySide](http://pyside.org "PySide"). It is under the LGPL, so yes, you can create commercial products. Moreover, Qt also transitioned to the LGPL; see the [QT License](http://qt.nokia.com/products/licensing) here. Update: additionally, support for Python 3.x is still under development for many GUI frameworks, PySide included.
Well, if you feel Qt is not suitable (though that's hard to believe), you could switch to wxPython. It too has a good learning curve and can satisfy your commercial needs.
8,758,354
I've been using Python 3 for some months and I would like to create some GUIs. Does anyone know a good Python GUI framework I could use for this? I don't want to use [TkInter](http://wiki.python.org/moin/TkInter) because I don't think it's very good. I also don't want to use [PyQt](http://wiki.python.org/moin/PyQt) due to its licensing requirements in a commercial application.
2012/01/06
[ "https://Stackoverflow.com/questions/8758354", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1114830/" ]
You probably mean that **PyQt** can only be used for GPL projects. However, the equivalent [PySide](http://www.pyside.org "PySide") Python bindings for Qt are LGPL, like Qt itself, so you *can* use those; unfortunately, they only support Python 2.5/2.7 at the moment. If you don't mind giving up cross-platform support, you can fall back on the win32api stuff (bleh), or go the hybrid way with [Jython](http://www.jython.org "Jython") (which supports Swing as well as any other Java-based toolkit) or IronPython (which uses .NET).
Well, if you feel Qt is not suitable (though that's hard to believe), you could switch to wxPython. It too has a good learning curve and can satisfy your commercial needs.
68,935,814
I would like to know how to run the following cURL request using python (I'm working in Jupyter notebook): ``` curl -i -X GET "https://graph.facebook.com/{graph-api-version}/oauth/access_token? grant_type=fb_exchange_token& client_id={app-id}& client_secret={app-secret}& fb_exchange_token={your-access-token}" ``` I've seen some similar questions and answers suggesting using "requests.get", but I am a complete python newbie and am not sure how to structure the syntax for whole request including the id, secret and token elements. Any help would be really appreciated. Thanks!
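A sketch of how the cURL request above translates to Python. The API version (`v11.0`) and credential values below are hypothetical stand-ins I've substituted for the curly-brace placeholders; replace them with your real values. The commented lines show the equivalent call with the third-party `requests` package the question mentions:

```python
from urllib.parse import urlencode

# Hypothetical values standing in for the cURL placeholders.
base = "https://graph.facebook.com/v11.0/oauth/access_token"  # {graph-api-version}
params = {
    "grant_type": "fb_exchange_token",
    "client_id": "APP_ID",                # {app-id}
    "client_secret": "APP_SECRET",        # {app-secret}
    "fb_exchange_token": "ACCESS_TOKEN",  # {your-access-token}
}

# Build the full URL by URL-encoding the query parameters.
url = base + "?" + urlencode(params)
print(url)

# With the third-party requests package, the same call is simply:
# import requests
# response = requests.get(base, params=params)  # requests encodes params itself
# data = response.json()
```

Passing the parameters as a dict means you never have to hand-assemble (or hand-escape) the `?a=b&c=d` part of the URL.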
2021/08/26
[ "https://Stackoverflow.com/questions/68935814", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16757711/" ]
`decode(String)` returns a `byte[]`, you need to convert that to a string using a `String` constructor and not the `toString()` method: ```java byte[] bytes = java.util.Base64.getDecoder().decode(encodedstring); String s = new String(bytes, java.nio.charset.StandardCharsets.UTF_8); ```
It looks like you need the MIME decoder: ```java java.util.Base64.Decoder decoder = java.util.Base64.getMimeDecoder(); // Decoding MIME encoded message String dStr = new String(decoder.decode(encodedstring)); System.out.println("Decoded message: "+dStr); ```
58,706,091
I am using a cplex .dll file in Python to solve a well-formulated LP problem with the pulp solver. Here is the code (`model` is a pulp object created using the pulp library):

```
import pulp
a = pulp.solvers.CPLEX_CMD("cplex dll file location")
a.actualSolve(model)
```

When I run `a.actualSolve(model)` I get the following error from subprocess.py:

```
OSError: [WinError 193] %1 is not a valid Win32 application
```

I tried with Python 32-bit and 64-bit but couldn't solve it. I expect the cplex dll file to solve my formulated optimization model and give me a solution for all the variables.
2019/11/05
[ "https://Stackoverflow.com/questions/58706091", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6369726/" ]
Like the error says, you need to put the closing curly brace on the same line as the subsequent block after the `else`: ``` if (err.status === 'not found') { cb({ statusCode: 404 }) return } else { // <--- now } is on same line as { cb({ statusCode: 500 }) return } ``` From an example from [the docs](https://standardjs.com/rules-en.html) on Standard linting: > > Keep else statements on the same line as their curly braces. > > > eslint: brace-style > > > > ``` > // ✓ ok > if (condition) { > // ... > } else { > // ... > } > > // ✗ avoid > if (condition) { > // ... > } > else { > // ... > } > > ``` > >
**Use the format below when you face the above error with TypeScript ESLint.** ``` if (Logic1) { //content1 } else if (Logic2) { //content2 } else if (Logic3) { //content3 } else { //content4 } ```
6,369,697
When I run `python manage.py shell`, I can print out the python path ``` >>> import sys >>> sys.path ``` What should I type to introspect all my django settings ?
2011/06/16
[ "https://Stackoverflow.com/questions/6369697", "https://Stackoverflow.com", "https://Stackoverflow.com/users/450278/" ]
``` from django.conf import settings dir(settings) ``` and then pick whichever attribute `dir(settings)` showed you: ``` settings.name ``` where `name` is the attribute you are interested in. Alternatively: ``` settings.__dict__ ``` prints all the settings, but it also prints the module's standard attributes, which may somewhat clutter the output.
To show all django settings (including default settings not specified in your local settings file): ``` from django.conf import settings dir(settings) ```
6,369,697
When I run `python manage.py shell`, I can print out the python path ``` >>> import sys >>> sys.path ``` What should I type to introspect all my django settings ?
2011/06/16
[ "https://Stackoverflow.com/questions/6369697", "https://Stackoverflow.com", "https://Stackoverflow.com/users/450278/" ]
``` from django.conf import settings dir(settings) ``` and then pick whichever attribute `dir(settings)` showed you: ``` settings.name ``` where `name` is the attribute you are interested in. Alternatively: ``` settings.__dict__ ``` prints all the settings, but it also prints the module's standard attributes, which may somewhat clutter the output.
In case a newbie stumbles upon this question wanting to be spoon fed the way to print out the values for all settings: ``` def show_settings(): from django.conf import settings for name in dir(settings): print(name, getattr(settings, name)) ```
6,369,697
When I run `python manage.py shell`, I can print out the python path ``` >>> import sys >>> sys.path ``` What should I type to introspect all my django settings ?
2011/06/16
[ "https://Stackoverflow.com/questions/6369697", "https://Stackoverflow.com", "https://Stackoverflow.com/users/450278/" ]
``` from django.conf import settings dir(settings) ``` and then pick whichever attribute `dir(settings)` showed you: ``` settings.name ``` where `name` is the attribute you are interested in. Alternatively: ``` settings.__dict__ ``` prints all the settings, but it also prints the module's standard attributes, which may somewhat clutter the output.
I know that this is an old question, but with current versions of django (1.6+), you can accomplish this from the command line the following way: ``` python manage.py diffsettings --all ``` The result will show all of the settings, including the defaults (denoted by ### in front of the setting name).
6,369,697
When I run `python manage.py shell`, I can print out the python path ``` >>> import sys >>> sys.path ``` What should I type to introspect all my django settings ?
2011/06/16
[ "https://Stackoverflow.com/questions/6369697", "https://Stackoverflow.com", "https://Stackoverflow.com/users/450278/" ]
``` from django.conf import settings dir(settings) ``` and then pick whichever attribute `dir(settings)` showed you: ``` settings.name ``` where `name` is the attribute you are interested in. Alternatively: ``` settings.__dict__ ``` prints all the settings, but it also prints the module's standard attributes, which may somewhat clutter the output.
In your shell, you can call Django's built-in [diffsettings](https://docs.djangoproject.com/en/2.1/ref/django-admin/#diffsettings): ``` from django.core.management.commands import diffsettings output = diffsettings.Command().handle(default=None, output="hash", all=False) ```
6,369,697
When I run `python manage.py shell`, I can print out the python path ``` >>> import sys >>> sys.path ``` What should I type to introspect all my django settings ?
2011/06/16
[ "https://Stackoverflow.com/questions/6369697", "https://Stackoverflow.com", "https://Stackoverflow.com/users/450278/" ]
I know that this is an old question, but with current versions of django (1.6+), you can accomplish this from the command line the following way: ``` python manage.py diffsettings --all ``` The result will show all of the settings, including the defaults (denoted by ### in front of the setting name).
To show all django settings (including default settings not specified in your local settings file): ``` from django.conf import settings dir(settings) ```
6,369,697
When I run `python manage.py shell`, I can print out the python path ``` >>> import sys >>> sys.path ``` What should I type to introspect all my django settings ?
2011/06/16
[ "https://Stackoverflow.com/questions/6369697", "https://Stackoverflow.com", "https://Stackoverflow.com/users/450278/" ]
To show all django settings (including default settings not specified in your local settings file): ``` from django.conf import settings dir(settings) ```
In your shell, you can call Django's built-in [diffsettings](https://docs.djangoproject.com/en/2.1/ref/django-admin/#diffsettings): ``` from django.core.management.commands import diffsettings output = diffsettings.Command().handle(default=None, output="hash", all=False) ```
6,369,697
When I run `python manage.py shell`, I can print out the python path ``` >>> import sys >>> sys.path ``` What should I type to introspect all my django settings ?
2011/06/16
[ "https://Stackoverflow.com/questions/6369697", "https://Stackoverflow.com", "https://Stackoverflow.com/users/450278/" ]
I know that this is an old question, but with current versions of django (1.6+), you can accomplish this from the command line the following way: ``` python manage.py diffsettings --all ``` The result will show all of the settings, including the defaults (denoted by ### in front of the setting name).
In case a newbie stumbles upon this question wanting to be spoon fed the way to print out the values for all settings: ``` def show_settings(): from django.conf import settings for name in dir(settings): print(name, getattr(settings, name)) ```
6,369,697
When I run `python manage.py shell`, I can print out the python path ``` >>> import sys >>> sys.path ``` What should I type to introspect all my django settings ?
2011/06/16
[ "https://Stackoverflow.com/questions/6369697", "https://Stackoverflow.com", "https://Stackoverflow.com/users/450278/" ]
In case a newbie stumbles upon this question wanting to be spoon fed the way to print out the values for all settings: ``` def show_settings(): from django.conf import settings for name in dir(settings): print(name, getattr(settings, name)) ```
In your shell, you can call Django's built-in [diffsettings](https://docs.djangoproject.com/en/2.1/ref/django-admin/#diffsettings): ``` from django.core.management.commands import diffsettings output = diffsettings.Command().handle(default=None, output="hash", all=False) ```
6,369,697
When I run `python manage.py shell`, I can print out the python path ``` >>> import sys >>> sys.path ``` What should I type to introspect all my django settings ?
2011/06/16
[ "https://Stackoverflow.com/questions/6369697", "https://Stackoverflow.com", "https://Stackoverflow.com/users/450278/" ]
I know that this is an old question, but with current versions of django (1.6+), you can accomplish this from the command line the following way: ``` python manage.py diffsettings --all ``` The result will show all of the settings, including the defaults (denoted by ### in front of the setting name).
In your shell, you can call Django's built-in [diffsettings](https://docs.djangoproject.com/en/2.1/ref/django-admin/#diffsettings): ``` from django.core.management.commands import diffsettings output = diffsettings.Command().handle(default=None, output="hash", all=False) ```
8,114,826
Hi I'm working on converting perl to python for something to do. I've been looking at some code on hash tables in perl and I've come across a line of code that I really don't know how it does what it does in python. I know that it shifts the bit strings of page by 1 ``` %page_table = (); #page table is a hash of hashes %page_table_entry = ( #page table entry structure "dirty", 0, #0/1 boolean "referenced", 0, #0/1 boolean "valid", 0, #0/1 boolean "frame_no", -1, #-1 indicates an "x", i.e. the page isn't in ram "page", 0 #used for aging algorithm. 8 bit string.); @ram = ((-1) x $num_frames); ``` Could someone please give me an idea on how this would be represented in python? I've got the definitions of the hash tables done, they're just there as references as to what I'm doing. Thanks for any help that you can give me. ``` for($i=0; $i<@ram; $i++){ $page_table{$ram[$i]}->{page} = $page_table{$ram[$i]}->{page} >> 1;} ```
2011/11/13
[ "https://Stackoverflow.com/questions/8114826", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1044593/" ]
The only thing confusing is that page table is a hash of hashes. $page\_table{$v} contains a hashref to a hash that contains a key 'page' whose value is an integer. The loop bitshifts that integer but is not very clear perl code. Simpler would be: ``` foreach my $v (@ram) { $page_table{$v}->{page} >>= 1; } ``` Now the translation to python should be obvious: ``` for v in ram: page_table[v][page] >>= 1 ```
Woof! No wonder you want to try Python! Yes, Python can do this because Python dictionaries (what you'd call hashes in Perl) can contain other arrays or dictionaries without doing references to them. However, I **highly** suggest that you look into moving into object oriented programming. After looking at that assignment statement of yours, I had to lie down for a bit. I can't imagine trying to maintain and write an entire program like that. Whenever you have to do a hash that contains an array, or an array of arrays, or a hash of hashes, you should be looking into using object oriented code. Object oriented code can prevent you from making all the sorts of errors that happen when you do that type of stuff. And, it can make your code much more readable -- even Perl code. Take a look at the [Python Tutorial](http://docs.python.org/tutorial/) and take a look at the [Perl Object Oriented Tutorial](http://perldoc.perl.org/perlboot.html) and learn a bit about object oriented programming. This is especially true in Python which was written from the ground up to be object oriented.
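As a minimal sketch of the nesting described above (key names borrowed from the question's page-table entry; the values and frame numbers are made up), a Python dict can hold other dicts directly, no references required:

```python
# A dict of dicts: each frame number maps to its own page-table entry.
page_table = {
    0: {"dirty": 0, "referenced": 0, "valid": 1, "frame_no": 0, "page": 0b10000000},
    3: {"dirty": 1, "referenced": 0, "valid": 1, "frame_no": 3, "page": 0b01000000},
}

# Aging: shift every entry's 8-bit reference string right by one,
# the same operation as the Perl loop in the question.
for entry in page_table.values():
    entry["page"] >>= 1

print(page_table[0]["page"], page_table[3]["page"])  # 64 32
```

Compare this with the Perl version's `$page_table{$v}->{page}`: the Python indexing `page_table[v]["page"]` needs no dereferencing arrow because the inner dict is stored directly.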
8,114,826
Hi I'm working on converting perl to python for something to do. I've been looking at some code on hash tables in perl and I've come across a line of code that I really don't know how it does what it does in python. I know that it shifts the bit strings of page by 1 ``` %page_table = (); #page table is a hash of hashes %page_table_entry = ( #page table entry structure "dirty", 0, #0/1 boolean "referenced", 0, #0/1 boolean "valid", 0, #0/1 boolean "frame_no", -1, #-1 indicates an "x", i.e. the page isn't in ram "page", 0 #used for aging algorithm. 8 bit string.); @ram = ((-1) x $num_frames); ``` Could someone please give me an idea on how this would be represented in python? I've got the definitions of the hash tables done, they're just there as references as to what I'm doing. Thanks for any help that you can give me. ``` for($i=0; $i<@ram; $i++){ $page_table{$ram[$i]}->{page} = $page_table{$ram[$i]}->{page} >> 1;} ```
2011/11/13
[ "https://Stackoverflow.com/questions/8114826", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1044593/" ]
The only thing confusing is that page table is a hash of hashes. $page\_table{$v} contains a hashref to a hash that contains a key 'page' whose value is an integer. The loop bitshifts that integer but is not very clear perl code. Simpler would be: ``` foreach my $v (@ram) { $page_table{$v}->{page} >>= 1; } ``` Now the translation to python should be obvious: ``` for v in ram: page_table[v][page] >>= 1 ```
Here is what my [Pythonizer](https://github.com/snoopyjc/pythonizer) generates for that code: ``` #!/usr/bin/env python3 # Generated by "pythonizer -mV q8114826.pl" v0.974 run by snoopyjc on Thu Apr 21 23:35:38 2022 import perllib, builtins _str = lambda s: "" if s is None else str(s) perllib.init_package("main") num_frames = 0 builtins.__PACKAGE__ = "main" page_table = {} # page table is a hash of hashes page_table_entry = {"dirty": 0, "referenced": 0, "valid": 0, "frame_no": -1, "page": 0} # page table entry structure # 0/1 boolean # 0/1 boolean # 0/1 boolean # -1 indicates an "x", i.e. the page isn't in ram # used for aging algorithm. 8 bit string. ram = [(-1) for _ in range(num_frames)] for i in range(0, len(ram)): page_table[_str(ram[i])]["page"] = perllib.num(page_table.get(_str(ram[i])).get("page")) >> 1 ```
8,114,826
Hi I'm working on converting perl to python for something to do. I've been looking at some code on hash tables in perl and I've come across a line of code that I really don't know how it does what it does in python. I know that it shifts the bit strings of page by 1 ``` %page_table = (); #page table is a hash of hashes %page_table_entry = ( #page table entry structure "dirty", 0, #0/1 boolean "referenced", 0, #0/1 boolean "valid", 0, #0/1 boolean "frame_no", -1, #-1 indicates an "x", i.e. the page isn't in ram "page", 0 #used for aging algorithm. 8 bit string.); @ram = ((-1) x $num_frames); ``` Could someone please give me an idea on how this would be represented in python? I've got the definitions of the hash tables done, they're just there as references as to what I'm doing. Thanks for any help that you can give me. ``` for($i=0; $i<@ram; $i++){ $page_table{$ram[$i]}->{page} = $page_table{$ram[$i]}->{page} >> 1;} ```
2011/11/13
[ "https://Stackoverflow.com/questions/8114826", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1044593/" ]
Here is what my [Pythonizer](https://github.com/snoopyjc/pythonizer) generates for that code: ``` #!/usr/bin/env python3 # Generated by "pythonizer -mV q8114826.pl" v0.974 run by snoopyjc on Thu Apr 21 23:35:38 2022 import perllib, builtins _str = lambda s: "" if s is None else str(s) perllib.init_package("main") num_frames = 0 builtins.__PACKAGE__ = "main" page_table = {} # page table is a hash of hashes page_table_entry = {"dirty": 0, "referenced": 0, "valid": 0, "frame_no": -1, "page": 0} # page table entry structure # 0/1 boolean # 0/1 boolean # 0/1 boolean # -1 indicates an "x", i.e. the page isn't in ram # used for aging algorithm. 8 bit string. ram = [(-1) for _ in range(num_frames)] for i in range(0, len(ram)): page_table[_str(ram[i])]["page"] = perllib.num(page_table.get(_str(ram[i])).get("page")) >> 1 ```
Woof! No wonder you want to try Python! Yes, Python can do this because Python dictionaries (what you'd call hashes in Perl) can contain other arrays or dictionaries without doing references to them. However, I **highly** suggest that you look into moving into object oriented programming. After looking at that assignment statement of yours, I had to lie down for a bit. I can't imagine trying to maintain and write an entire program like that. Whenever you have to do a hash that contains an array, or an array of arrays, or a hash of hashes, you should be looking into using object oriented code. Object oriented code can prevent you from making all the sorts of errors that happen when you do that type of stuff. And, it can make your code much more readable -- even Perl code. Take a look at the [Python Tutorial](http://docs.python.org/tutorial/) and take a look at the [Perl Object Oriented Tutorial](http://perldoc.perl.org/perlboot.html) and learn a bit about object oriented programming. This is especially true in Python which was written from the ground up to be object oriented.
62,393,428
Drivers available with me (python shell):

```
In [2]: pyodbc.drivers()
Out[2]: ['SQL Server']
```

Code in settings.py (Django):

```
# Database
# https://docs.djangoproject.com/en/2.2/ref/settings/#databases
DATABASES = {
    'default': {
        'ENGINE': 'sql_server.pyodbc',
        'NAME': 'dbname',
        'HOST': 'ansqlserver.database.windows.net',
        'USER': 'test',
        'PASSWORD': 'Password',
        'OPTIONS': {
            'driver': 'SQL Server',
        }
    }
}
```

ERROR: Trying to connect to Microsoft SQL Server, I get the error below:

```
File "C:\Local\Programs\Python\Python37\lib\site-packages\sql_server\pyodbc\base.py", line 314, in get_new_connection
    timeout=timeout)
django.db.utils.OperationalError: ('08001', '[08001] [Microsoft][ODBC SQL Server Driver]Neither DSN nor SERVER keyword supplied (0) (SQLDriverConnect); [08001] [Microsoft][ODBC SQL Server Driver]Invalid connection string attribute (0)')
```
2020/06/15
[ "https://Stackoverflow.com/questions/62393428", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13458554/" ]
Though the documentation suggests using the framework `SearchView`, I've always found that the support/androidx `SearchView` plays nicer with the library components – e.g., `AppCompatActivity`, `MaterialToolbar`, etc. – though I'm not sure exactly what causes these little glitches. Indeed, using `androidx.appcompat.widget.SearchView` here in lieu of `android.widget.SearchView` for the `actionViewClass` got rid of that misplaced search icon upon expanding. However, the `AutoCompleteTextView` inside the `SearchView` still has a similar search icon as a hint because it's not ending up with the right style. I initially expected that setting the `Toolbar` as the support `ActionBar` would've integrated that with the other relevant styles for the children, but it seems `SearchView`'s style, for some reason, is normally set with a `ThemeOverlay.*.ActionBar` on the `<*Toolbar>` acting as the `ActionBar`. Though most sources seem to indicate that the various `ThemeOverlay.*.ActionBar` styles only adjust the `colorControlNormal` attribute, they actually set the `searchViewStyle` to the appropriate `Widget.*.SearchView.ActionBar` value, too, so it's doubly important that we add a proper overlay. For example, in keeping with changing to the `androidx` version: ```xml <com.google.android.material.appbar.MaterialToolbar android:id="@+id/toolbar" android:theme="@style/ThemeOverlay.MaterialComponents.Dark.ActionBar" ... /> ``` This could also work by setting that as the `actionBarTheme` in your `Activity`'s theme instead, but be warned that it can be overridden by attributes on the `<*Toolbar>` itself, like it would be in the given setup by `style="@style/Widget.MaterialComponents.Toolbar.Primary"`. If you're not using Material Components, `ThemeOverlay.AppCompat` styles are available as well. And if you're using only platform classes, similar styles are available in the system namespace; e.g., `@android:style/ThemeOverlay.Material.Dark.ActionBar`. 
--- The initial revision of this answer removed that hint icon manually, as at the time I was unaware of how exactly the given setup was failing. It shouldn't be necessary to do that now, but if you'd like to customize this further, that example simply replaced the menu `<item>`'s `app:actionViewClass` attribute with an `app:actionLayout` pointing to this layout: ```xml <androidx.appcompat.widget.SearchView xmlns:android="http://schemas.android.com/apk/res/android" xmlns:app="http://schemas.android.com/apk/res-auto" android:id="@+id/search_view" android:layout_width="match_parent" android:layout_height="wrap_content" app:searchHintIcon="@null" /> ``` The `searchHintIcon` setting is all that was needed for the example here, but you can set whatever applicable `SearchView` attributes you'd like. If you're going this route, it might be preferable to set `style="@style/Widget.AppCompat.SearchView.ActionBar"`, which includes the `searchHintIcon` setting, and ensures the correct overall style for the `SearchView`, as suggested by Artem Mostyaev in comments below.
The above method did not work for me. I don't know why, but I tried this instead and it succeeded. Refer to the search hint icon through the SearchView and set its visibility to GONE: ``` ImageView icon = (ImageView) mSearchView.findViewById(androidx.appcompat.R.id.search_mag_icon); icon.setVisibility(View.GONE); ``` And then add this line: ``` mSearchView.setIconified(false); ```
28,986,131
I need to load 1460 files into a list, from a folder with 163.360 files. I use the following python code to do this: ``` import os import glob Directory = 'C:\\Users\\Nicolai\\Desktop\\sealev\\dkss_all' stationName = '20002' filenames = glob.glob("dkss."+stationName+"*") ``` This has been running fine so far, but today when I booted my machine and ran the code it was just stuck on the last line. I tried to reboot, and it didn't help, in the end I just let it run, went to lunch break, came back and it was finished. It took 45 minutes. Now when I run it it takes less than a second, what is going on? Is this a cache thing? How can I prevent having to wait 45 minutes again? Any explanations would be much appreciated.
2015/03/11
[ "https://Stackoverflow.com/questions/28986131", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1972356/" ]
Presuming that `ls` on that same directory is just as slow, you can't reduce the total time needed for the directory listing operation. Filesystems are slow sometimes (which is why, yes, the operating system *does* cache directory entries). However, there actually *is* something you can do in your Python code: You can operate on filenames as they come in, rather than waiting for the entire result to finish before the rest of your code even starts. Unfortunately, this is functionality not present in the standard library, meaning you need to call C functions. See [Ben Hoyt's scandir module](https://github.com/benhoyt/scandir) for an implementation of this. See also [this StackOverflow question, describing the problem](http://stackoverflow.com/questions/4403598/list-files-in-a-folder-as-a-stream-to-begin-process-immediately). Using scandir might look something like the following: ``` prefix = 'dkss.%s.' % stationName for direntry in scandir(path='.'): if direntry.name.startswith(prefix): pass # do whatever work you want with this file here. ```
Yes, it is a caching thing. Your harddisk is a slow peripheral, reading 163.360 filenames from it can take some time. Yes, your operating system caches that kind of information for you. Python has to wait for that information to be loaded before it can filter out the matching filenames. You don't have to wait all that time again until your operating system decides to use the memory caching the directory information for something else, or you restart the computer. Since you rebooted your computer, the information was no longer cached.
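A rough way to observe the effect described above (a sketch; the absolute numbers depend entirely on your disk, the directory size, and what the OS already has cached):

```python
import glob
import time

def timed_glob(pattern):
    """Run a glob and return (match_count, elapsed_seconds)."""
    start = time.perf_counter()
    names = glob.glob(pattern)
    return len(names), time.perf_counter() - start

# The first scan may have to hit the disk; the repeat is usually served
# from the OS's directory-entry cache and finishes much faster.
count_cold, t_cold = timed_glob("*")
count_warm, t_warm = timed_glob("*")
print(count_cold, t_cold, t_warm)
```

On a directory with ~160,000 entries the gap between the cold and warm timings can be dramatic, which is exactly what the 45-minute-then-instant behaviour in the question shows.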
21,869,675
``` list_ = [(1, 'a'), (2, 'b'), (3, 'c')] item1 = 1 item2 = 'c' #hypothetical: assert list_.index_by_first_value(item1) == 0 assert list_.index_by_second_value(item2) == 2 ``` What would be the fastest way to emulate the `index_by_first/second_value` method in python? If you don't understand what's going on; if you have a list of tuples (as is contained in `list_`), how would you go about finding the index of a tuple with the first/second value of the tuple being the element you want to index? --- My best guess would be this: ``` [i[0] for i in list_].index(item1) [i[1] for i in list_].index(item2) ``` But I'm interested in seeing what you guys will come up with. Any ideas?
2014/02/19
[ "https://Stackoverflow.com/questions/21869675", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3002473/" ]
At first, I thought along [the same lines as Nick T](https://stackoverflow.com/a/21869852/418413). Your method is fine if the number of tuples (N) is short. But of course a linear search is O(N). As the number of tuples increases, the time increases directly with it. You can get O(1) lookup time with a dict mapping the zeroth element of each tuple to its index: ``` {el[0]:idx for idx,el in enumerate(list_)} ``` But the cost of converting the list to a dict may be too high! Here are my results: ``` >>> from timeit import timeit as t >>> t('[i[0] for i in list_].index(1)', "import random;list_=[(i,'a') for i in range(10)]; random.shuffle(list_)") 1.557116985321045 >>> t('[i[0] for i in list_].index(1)', "import random;list_=[(i,'a') for i in range(100)]; random.shuffle(list_)") 7.415766954421997 >>> t('{el[0]:idx for idx,el in enumerate(list_)}[1]', "import random;list_=[(i,'a') for i in range(10)]; random.shuffle(list_)") 2.1753010749816895 >>> t('{el[0]:idx for idx,el in enumerate(list_)}[1]', "import random;list_=[(i,'a') for i in range(100)]; random.shuffle(list_)") 15.062835216522217 ``` So the list-to-dict conversion is killing any benefit we get from having the O(1) lookups. But just to prove that the dict is really fast if we can avoid doing the conversion more than once: ``` >>> t('dict_[1]', "import random;list_=[(i,'a') for i in range(10)];random.shuffle(list_);dict_={el[0]:idx for idx,el in enumerate(list_)}") 0.050583839416503906 >>> t('dict_[1]', "import random;list_=[(i,'a') for i in range(100)];random.shuffle(list_);dict_={el[0]:idx for idx,el in enumerate(list_)}") 0.05001211166381836 >>> t('dict_[1]', "import random;list_=[(i,'a') for i in range(1000)];random.shuffle(list_);dict_={el[0]:idx for idx,el in enumerate(list_)}") 0.050894975662231445 ```
Searching a list is O(n). Convert it to a dictionary, then lookups take O(1). ``` >>> list_ = [(1, 'a'), (2, 'b'), (3, 'c')] >>> dict(list_) {1: 'a', 2: 'b', 3: 'c'} >>> dict((k, v) for v, k in list_) {'a': 1, 'c': 3, 'b': 2} ``` If you want the original index you could enumerate it: ``` >>> dict((kv[0], (i, kv[1])) for i, kv in enumerate(list_)) {1: (0, 'a'), 2: (1, 'b'), 3: (2, 'c')} >> dict((kv[1], (i, kv[0])) for i, kv in enumerate(list_)) {'a': (0, 1), 'c': (2, 3), 'b': (1, 2)} ```
21,869,675
``` list_ = [(1, 'a'), (2, 'b'), (3, 'c')] item1 = 1 item2 = 'c' #hypothetical: assert list_.index_by_first_value(item1) == 0 assert list_.index_by_second_value(item2) == 2 ``` What would be the fastest way to emulate the `index_by_first/second_value` method in python? If you don't understand what's going on; if you have a list of tuples (as is contained in `list_`), how would you go about finding the index of a tuple with the first/second value of the tuple being the element you want to index? --- My best guess would be this: ``` [i[0] for i in list_].index(item1) [i[1] for i in list_].index(item2) ``` But I'm interested in seeing what you guys will come up with. Any ideas?
2014/02/19
[ "https://Stackoverflow.com/questions/21869675", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3002473/" ]
Searching a list is O(n). Convert it to a dictionary, then lookups take O(1). ``` >>> list_ = [(1, 'a'), (2, 'b'), (3, 'c')] >>> dict(list_) {1: 'a', 2: 'b', 3: 'c'} >>> dict((k, v) for v, k in list_) {'a': 1, 'c': 3, 'b': 2} ``` If you want the original index you could enumerate it: ``` >>> dict((kv[0], (i, kv[1])) for i, kv in enumerate(list_)) {1: (0, 'a'), 2: (1, 'b'), 3: (2, 'c')} >> dict((kv[1], (i, kv[0])) for i, kv in enumerate(list_)) {'a': (0, 1), 'c': (2, 3), 'b': (1, 2)} ```
@Nick T I think some time is wasted enumerating the list and then converting it to a dictionary, so even if it is an O(1) lookup for a dict, creating the dict in the first place is too costly to consider it a viable option for large lists. This is the test I used to determine it: ``` import time l = [(i, chr(i)) for i in range(1000000)] def test1(): t1 = time.time() ([i[0] for i in l].index(10872)) t2 = time.time() return t2 - t1 def test2(): t1 = time.time() (dict((kv[0], (i, kv[1])) for i, kv in enumerate(l))[10872][0]) t2 = time.time() return t2 - t1 def test3(): sum1 = [] sum2 = [] for i in range(1000): sum1.append(test1()) sum2.append(test2()) print(sum(sum1)/1000) print(sum(sum2)/1000) test3() ``` EDIT: Haha Kojiro, you beat me to it!
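As a side note, `timeit` handles the repetition and averaging that the `time.time()` wrapper above does by hand; a minimal sketch of the same comparison (sizes reduced so it runs quickly — the absolute numbers will vary by machine, but both statements pay an O(N) pass per call):

```python
import timeit

setup = "l = [(i, chr(i)) for i in range(10000)]"

# Linear scan per call vs. building the dict fresh per call:
# both do O(N) work each time, so neither wins by an order of magnitude.
linear = timeit.timeit("[t[0] for t in l].index(9999)", setup=setup, number=100)
build = timeit.timeit("{t[0]: i for i, t in enumerate(l)}[9999]", setup=setup, number=100)
print(linear, build)
```

Only reusing a prebuilt dict across many lookups changes the picture, as the other answer's last set of timings shows.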
21,869,675
``` list_ = [(1, 'a'), (2, 'b'), (3, 'c')] item1 = 1 item2 = 'c' #hypothetical: assert list_.index_by_first_value(item1) == 0 assert list_.index_by_second_value(item2) == 2 ``` What would be the fastest way to emulate the `index_by_first/second_value` method in python? If you don't understand what's going on; if you have a list of tuples (as is contained in `list_`), how would you go about finding the index of a tuple with the first/second value of the tuple being the element you want to index? --- My best guess would be this: ``` [i[0] for i in list_].index(item1) [i[1] for i in list_].index(item2) ``` But I'm interested in seeing what you guys will come up with. Any ideas?
2014/02/19
[ "https://Stackoverflow.com/questions/21869675", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3002473/" ]
EDIT: Just kidding. As the lists grow longer it looks like the manual `for` loop takes less time. Updated to generate random lists via kojiro's method: Just some timing tests for your information while maintaining lists. The good thing about preserving list form versus a dictionary is that it's extensible to tuples of any length. ``` import timeit from operator import itemgetter import random list_= [('a', i) for i in range(10)] random.shuffle(list_) def a(): return [i[1] for i in list_].index(1) def b(): return zip(*list_)[1].index(1) def c(): return map(itemgetter(1), list_).index(1) def d(): for index, value in enumerate(list_): if 1 == value[1]: return index ``` With `timeit`: ``` C:\Users\Jesse\Desktop>python -m timeit -s "import test" "test.a()" 1000000 loops, best of 3: 1.21 usec per loop C:\Users\Jesse\Desktop>python -m timeit -s "import test" "test.b()" 1000000 loops, best of 3: 1.2 usec per loop C:\Users\Jesse\Desktop>python -m timeit -s "import test" "test.c()" 1000000 loops, best of 3: 1.45 usec per loop C:\Users\Jesse\Desktop>python -m timeit -s "import test" "test.d()" 1000000 loops, best of 3: 0.922 usec per loop ```
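One portability caveat, given here as a hedged sketch rather than a change to the benchmarks above: `b()` and `c()` rely on Python 2, where `zip` and `map` return lists. Under Python 3 they return iterators, so the results must be materialized before indexing:

```python
import random
from operator import itemgetter

list_ = [('a', i) for i in range(10)]
random.shuffle(list_)

def b3():
    # Python 3: zip() yields an iterator, so wrap it in list() before subscripting.
    return list(zip(*list_))[1].index(1)

def c3():
    # Likewise map() yields an iterator in Python 3.
    return list(map(itemgetter(1), list_)).index(1)

assert list_[b3()][1] == 1
assert b3() == c3()
```

The extra `list()` call adds a little overhead, so Python 3 timings for these two variants will differ somewhat from the Python 2 numbers shown above.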
Searching a list is O(n). Convert it to a dictionary, then lookups take O(1). ``` >>> list_ = [(1, 'a'), (2, 'b'), (3, 'c')] >>> dict(list_) {1: 'a', 2: 'b', 3: 'c'} >>> dict((k, v) for v, k in list_) {'a': 1, 'c': 3, 'b': 2} ``` If you want the original index you could enumerate it: ``` >>> dict((kv[0], (i, kv[1])) for i, kv in enumerate(list_)) {1: (0, 'a'), 2: (1, 'b'), 3: (2, 'c')} >>> dict((kv[1], (i, kv[0])) for i, kv in enumerate(list_)) {'a': (0, 1), 'c': (2, 3), 'b': (1, 2)} ```
21,869,675
``` list_ = [(1, 'a'), (2, 'b'), (3, 'c')] item1 = 1 item2 = 'c' #hypothetical: assert list_.index_by_first_value(item1) == 0 assert list_.index_by_second_value(item2) == 2 ``` What would be the fastest way to emulate the `index_by_first/second_value` method in python? If you don't understand what's going on; if you have a list of tuples (as is contained in `list_`), how would you go about finding the index of a tuple with the first/second value of the tuple being the element you want to index? --- My best guess would be this: ``` [i[0] for i in list_].index(item1) [i[1] for i in list_].index(item2) ``` But I'm interested in seeing what you guys will come up with. Any ideas?
2014/02/19
[ "https://Stackoverflow.com/questions/21869675", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3002473/" ]
At first, I thought along [the same lines as Nick T](https://stackoverflow.com/a/21869852/418413). Your method is fine if the number of tuples (N) is small. But of course a linear search is O(N). As the number of tuples increases, the time increases directly with it. You can get O(1) lookup time with a dict mapping the zeroth element of each tuple to its index: ``` {el[0]:idx for idx,el in enumerate(list_)} ``` But the cost of converting the list to a dict may be too high! Here are my results: ``` >>> from timeit import timeit as t >>> t('[i[0] for i in list_].index(1)', "import random;list_=[(i,'a') for i in range(10)]; random.shuffle(list_)") 1.557116985321045 >>> t('[i[0] for i in list_].index(1)', "import random;list_=[(i,'a') for i in range(100)]; random.shuffle(list_)") 7.415766954421997 >>> t('{el[0]:idx for idx,el in enumerate(list_)}[1]', "import random;list_=[(i,'a') for i in range(10)]; random.shuffle(list_)") 2.1753010749816895 >>> t('{el[0]:idx for idx,el in enumerate(list_)}[1]', "import random;list_=[(i,'a') for i in range(100)]; random.shuffle(list_)") 15.062835216522217 ``` So the list-to-dict conversion is killing any benefit we get from having the O(1) lookups. But just to prove that the dict is really fast if we can avoid doing the conversion more than once: ``` >>> t('dict_[1]', "import random;list_=[(i,'a') for i in range(10)];random.shuffle(list_);dict_={el[0]:idx for idx,el in enumerate(list_)}") 0.050583839416503906 >>> t('dict_[1]', "import random;list_=[(i,'a') for i in range(100)];random.shuffle(list_);dict_={el[0]:idx for idx,el in enumerate(list_)}") 0.05001211166381836 >>> t('dict_[1]', "import random;list_=[(i,'a') for i in range(1000)];random.shuffle(list_);dict_={el[0]:idx for idx,el in enumerate(list_)}") 0.050894975662231445 ```
@Nick T I think some time is wasted enumerating the list and then converting it to a dictionary, so even if it is an O(1) lookup for a dict, creating the dict in the first place is too costly to consider it a viable option for large lists. This is the test I used to determine it: ``` import time l = [(i, chr(i)) for i in range(1000000)] def test1(): t1 = time.time() ([i[0] for i in l].index(10872)) t2 = time.time() return t2 - t1 def test2(): t1 = time.time() (dict((kv[0], (i, kv[1])) for i, kv in enumerate(l))[10872][0]) t2 = time.time() return t2 - t1 def test3(): sum1 = [] sum2 = [] for i in range(1000): sum1.append(test1()) sum2.append(test2()) print(sum(sum1)/1000) print(sum(sum2)/1000) test3() ``` EDIT: Haha Kojiro, you beat me to it!