Dataset schema (one record per group of fields below):

- qid — int64 (46k to 74.7M)
- question — string (54 to 37.8k chars)
- date — string (10 chars)
- metadata — sequence (3 items)
- response_j — string (17 to 26k chars)
- response_k — string (26 to 26k chars)
37,422,530
Working my way through a beginner's Python book, and there are two fairly simple things I don't understand; I was hoping someone here might be able to help. The example in the book uses regular expressions to take in email addresses and phone numbers from the clipboard and output them to the console. The code looks like this: ``` #! python3 # phoneAndEmail.py - Finds phone numbers and email addresses on the clipboard. import pyperclip, re # Create phone regex. phoneRegex = re.compile(r'''( (\d{3}|\(\d{3}\))? #[1] area code (\s|-|\.)? #[2] separator (\d{3}) #[3] first 3 digits (\s|-|\.) #[4] separator (\d{4}) #[5] last 4 digits (\s*(ext|x|ext.)\s*(\d{2,5}))? #[6] extension )''', re.VERBOSE) # Create email regex. emailRegex = re.compile(r'''( [a-zA-Z0-9._%+-]+ @ [a-zA-Z0-9.-]+ (\.[a-zA-Z]{2,4}) )''', re.VERBOSE) # Find matches in clipboard text. text = str(pyperclip.paste()) matches = [] for groups in phoneRegex.findall(text): phoneNum = '-'.join([groups[1], groups[3], groups[5]]) if groups[8] != '': phoneNum += ' x' + groups[8] matches.append(phoneNum) for groups in emailRegex.findall(text): matches.append(groups[0]) # Copy results to the clipboard. if len(matches) > 0: pyperclip.copy('\n'.join(matches)) print('Copied to Clipboard:') print('\n'.join(matches)) else: print('No phone numbers or email addresses found') ``` Okay, so firstly, I don't really understand the phoneRegex object. The book mentions that adding parentheses will create groups in the regular expression. If that's the case, are my assumed index values in the comments wrong, and should there really be two groups at the index marked one? Or if they're correct, what does groups[7,8] refer to in the matching loop below for phone numbers? Secondly, why does the emailRegex use a mixture of lists and tuples, while the phoneRegex uses mainly tuples? **Edit 1** Thanks for the answers so far, they've been helpful. Still kind of confused on the first part though. Should there be eight indexes like rock321987's answer or nine like sweaver2112's? **Edit 2** Answered, thank you.
2016/05/24
[ "https://Stackoverflow.com/questions/37422530", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5195054/" ]
Every opening `(` marks the beginning of a capture group, and you can nest them: ``` ( #[1] around whole pattern (\d{3}|\(\d{3}\))? #[2] area code (\s|-|\.)? #[3] separator (\d{3}) #[4] first 3 digits (\s|-|\.) #[5] separator (\d{4}) #[6] last 4 digits (\s*(ext|x|ext.)\s*(\d{2,5}))? #[7,8,9] extension ) ``` You should use [named groups](https://docs.python.org/2/howto/regex.html#non-capturing-and-named-groups) here — `(?P<groupname>pattern)` in Python's syntax — along with clustering-only parens `(?:pattern)` that don't capture anything. And remember, you should capture quantified constructs, not quantify captured constructs: ``` (?P<areacode>(?:\d{3}|\(\d{3}\))?) (?P<separator>(?:\s|-|\.)?) (?P<exchange>\d{3}) (?P<separator2>\s|-|\.) (?P<lastfour>\d{4}) (?P<extension>(?:\s*(?:ext|x|ext.)\s*(?:\d{2,5}))?) ```
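A quick, self-contained check of the group numbering above (and of the `(?P<name>...)` spelling Python uses for named groups):

```python
import re

# Same structure as the book's pattern: the outer parens are group 1,
# and the extension part contributes groups 7, 8 and 9.
phoneRegex = re.compile(r'''(
    (\d{3}|\(\d{3}\))?              # [2] area code
    (\s|-|\.)?                      # [3] separator
    (\d{3})                         # [4] first 3 digits
    (\s|-|\.)                       # [5] separator
    (\d{4})                         # [6] last 4 digits
    (\s*(ext|x|ext.)\s*(\d{2,5}))?  # [7,8,9] extension
    )''', re.VERBOSE)

print(phoneRegex.groups)                   # 9 capture groups in total
m = phoneRegex.search('Call 415-555-1234 x99 today')
print(m.group(4), m.group(6), m.group(9))  # 555 1234 99

# Python spells named groups (?P<name>...), and groupdict() collects them:
m2 = re.search(r'(?P<exchange>\d{3})-(?P<lastfour>\d{4})', '555-1234')
print(m2.groupdict())                      # {'exchange': '555', 'lastfour': '1234'}
```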
For the first part of your question, see sweaver2112's answer. For the second part: they both use character classes (`[...]`) and groups (`(...)`). In regex, `\d` is the same as `[0-9]`; it's just easier to write. In the same vein they could have used `\w`, which is shorthand for `[a-zA-Z0-9_]`, but that wouldn't account for special characters like `.` and `-`, so it's a little clearer to spell out `[a-zA-Z0-9.-]`.
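A quick check of those equivalences (note that `\w` also matches digits and underscore):

```python
import re

s = 'user42@example.com'
# \d and [0-9] match exactly the same characters
print(re.findall(r'\d', s) == re.findall(r'[0-9]', s))   # True
# \w is shorthand for [a-zA-Z0-9_]; it does not match '.' or '-'
print(re.findall(r'\w', '.a-1_'))                        # ['a', '1', '_']
```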
14,074,149
I'm having a bit of difficulty figuring out what my next steps should be. I am using tastypie to create an API for my web application. From another application, specifically ifbyphone.com, I am receiving a POST with no headers that looks something like this: ``` post data:http://myapp.com/api/ callerid=1&someid=2&number=3&result=Answered&phoneid=4 ``` Now, I see in my server logs that this is hitting my server. But tastypie is complaining about the format of the POST. > > {"error\_message": "The format indicated 'application/x-www-form-urlencoded' had no available deserialization method. Please check your `formats` and `content_types` on your Serializer.", "traceback": "Traceback (most recent call last):\n\n > File \"/usr/local/lib/python2.7/dist-packages/tastypie/resources.py\" > > > I also receive the same message when I try to POST raw data using curl, which I "believe" is the same basic process used by ifbyphone's POST method: ``` curl -X POST --data 'callerid=1&someid=2&number=3&duration=4&phoneid=5' http://myapp.com/api/ ``` So, assuming my problem is actually what is specified in the error message, and there is no deserialization method, how would I go about writing one?
**Update**: With some help from this commit ( <https://github.com/toastdriven/django-tastypie/commit/7c5ea699ff6a5e8ba0788f23446fa3ac31f1b8bf> ) I've been playing around with writing my own serializer, copying the basic framework from the documentation ( <https://django-tastypie.readthedocs.org/en/latest/serialization.html#implementing-your-own-serializer> ) ``` import urlparse from tastypie.serializers import Serializer class urlencodeSerializer(Serializer): formats = ['json', 'jsonp', 'xml', 'yaml', 'html', 'plist', 'urlencode'] content_types = { 'json': 'application/json', 'jsonp': 'text/javascript', 'xml': 'application/xml', 'yaml': 'text/yaml', 'html': 'text/html', 'plist': 'application/x-plist', 'urlencode': 'application/x-www-form-urlencoded', } def from_urlencode(self, data,options=None): """ handles basic formencoded url posts """ qs = dict((k, v if len(v)>1 else v[0] ) for k, v in urlparse.parse_qs(data).iteritems()) return qs def to_urlencode(self,content): pass ```
2012/12/28
[ "https://Stackoverflow.com/questions/14074149", "https://Stackoverflow.com", "https://Stackoverflow.com/users/170352/" ]
This worked as expected once I edited my resource model to actually use the serializer class I created. This was not clear in the documentation. ``` import urlparse from tastypie.serializers import Serializer from tastypie.resources import ModelResource class urlencodeSerializer(Serializer): formats = ['json', 'jsonp', 'xml', 'yaml', 'html', 'plist', 'urlencode'] content_types = { 'json': 'application/json', 'jsonp': 'text/javascript', 'xml': 'application/xml', 'yaml': 'text/yaml', 'html': 'text/html', 'plist': 'application/x-plist', 'urlencode': 'application/x-www-form-urlencoded', } def from_urlencode(self, data,options=None): """ handles basic formencoded url posts """ qs = dict((k, v if len(v)>1 else v[0] ) for k, v in urlparse.parse_qs(data).iteritems()) return qs def to_urlencode(self,content): pass class MyModelResource(ModelResource): class Meta: ... serializer = urlencodeSerializer() # IMPORTANT ```
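The heart of `from_urlencode` is just `parse_qs`, which returns every value as a list; the comprehension unwraps one-item lists. A Python 3 sketch of the same behavior (the answer's code is Python 2, where the module is `urlparse`):

```python
from urllib.parse import parse_qs

def from_urlencode(data):
    """Handle a basic form-encoded POST body (Python 3 equivalent)."""
    return {k: v if len(v) > 1 else v[0]
            for k, v in parse_qs(data).items()}

body = 'callerid=1&someid=2&number=3&result=Answered&phoneid=4'
print(from_urlencode(body))
# {'callerid': '1', 'someid': '2', 'number': '3', 'result': 'Answered', 'phoneid': '4'}

print(from_urlencode('tag=a&tag=b'))   # repeated keys stay lists: {'tag': ['a', 'b']}
```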
I would add a modification to the from\_urlencode mentioned in Brandon Bertelsen's post to work better with international characters: ``` def from_urlencode(self, data, options=None): """ handles basic formencoded url posts """ qs = {} for k, v in urlparse.parse_qs(data).iteritems(): value = v if len(v)>1 else v[0] value = value.encode("latin-1").decode('utf-8') qs[k] = value return qs ``` I'm not sure if this is because of an environmental reason on my side, but I found that when using the following string "ÁáÄäÅåÉéÍíÑñÓóÖöÚúÜü" and the original function, I ran into some problems. When this string gets URL encoded, it turns into: "%C3%81%C3%A1%C3%84%C3%A4%C3%85%C3%A5%C3%89%C3%A9%C3%8D%C3%AD%C3%91%C3%B1%C3%93%C3%B3%C3%96%C3%B6%C3%9A%C3%BA%C3%9C%C3%BC" When this gets URL decoded, we have: u'\xc3\x81\xc3\xa1\xc3\x84\xc3\xa4\xc3\x85\xc3\xa5\xc3\x89\xc3\xa9\xc3\x8d\xc3\xad\xc3\x91\xc3\xb1\xc3\x93\xc3\xb3\xc3\x96\xc3\xb6\xc3\x9a\xc3\xba\xc3\x9c\xc3\xbc' The problem here is that this string appears to be unicode, but it actually isn't, so the above string gets converted to: "ÃáÃäÃÃ¥ÃéÃíÃñÃóÃÃ" I found that if I interpreted the URL decoded value as latin-1, and then decoded it for UTF-8, I got the correct original string.
65,934,494
I have three boolean arrays: shift\_list, shift\_assignment, work. shift\_list: rows represent shifts, columns represent time. shift\_assignment: rows represent employees, columns represent shifts. work: rows represent employees, columns represent time. **I want to change the values in work by changing the values in shift\_assignment, for example:** if I set shift\_assignment[0,2]==1 then work's row e0 should be [0,0,1,1,1,0,0], and that [0,0,1,1,1,0,0] row should come from shift\_list's row s2. My purpose is to control the work array through shift\_assignment, and the values of work must come from shift\_list. Sorry for my English! [![enter image description here](https://i.stack.imgur.com/IPlXM.png)](https://i.stack.imgur.com/IPlXM.png) [![enter image description here](https://i.stack.imgur.com/70FeC.png)](https://i.stack.imgur.com/70FeC.png) [![enter image description here](https://i.stack.imgur.com/w1RJX.png)](https://i.stack.imgur.com/w1RJX.png) ```py import numpy as np from ortools.sat.python import cp_model model = cp_model.CpModel() solver = cp_model.CpSolver() shift_list=[[1,1,1,0,0,0,0], [0,1,1,1,0,0,0], [0,0,1,1,1,0,0], [0,0,0,1,1,1,0], [0,0,0,0,1,1,1]] shift_assignment={} for i in range(5): for j in range(5): shift_assignment[i,j] = model.NewBoolVar("shifts(%i,%i)" % (i,j)) work={} for i in range(5): for j in range(7): work[i,j] = model.NewBoolVar("work(%i,%i)" % (i,j)) for i in range(5): model.Add(sum(shift_assignment[i,j] for j in range(5))==1) for i in range(5): model.Add(how can i do?).OnlyEnforceIf(shift_assignment[i,j]) model.Add(shift_assignment[0,2]==1) model.Add(shift_assignment[1,1]==1) model.Add(shift_assignment[2,3]==1) model.Add(shift_assignment[3,4]==1) model.Add(shift_assignment[4,0]==1) res=np.zeros([5,7]) status = solver.Solve(model) print("status:",status) for i in range(5): for j in range(7): res[i,j]=solver.Value(work[i,j]) print(res) ```
2021/01/28
[ "https://Stackoverflow.com/questions/65934494", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13863269/" ]
Thanks to @Laurent Perron! ``` import numpy as np from ortools.sat.python import cp_model model = cp_model.CpModel() solver = cp_model.CpSolver() shift_list=[[1,1,1,0,0,0,0], [0,1,1,1,0,0,0], [0,0,1,1,1,0,0], [0,0,0,1,1,1,0], [0,0,0,0,1,1,1]] num_emp = 5 num_shift=5 num_time = 7 work={} shift_assignment={} for e in range(num_emp): for s in range(num_shift): shift_assignment[e,s] = model.NewBoolVar("shifts(%i,%i)" % (e,s)) for e in range(num_emp): for t in range(num_time): work[e,t] = model.NewBoolVar("work(%i,%i)" % (e,t)) for e in range(num_emp): model.Add(sum(shift_assignment[e,s] for s in range(num_shift))==1) for e in range(num_emp): for s in range(num_shift): and_ls=[] or_ls=[] for t in range(num_time): if shift_list[s][t]: and_ls.append(work[e,t]) or_ls.append(work[e,t].Not()) else: and_ls.append(work[e,t].Not()) or_ls.append(work[e,t]) or_ls.append(shift_assignment[e,s]) model.AddBoolAnd(and_ls).OnlyEnforceIf(shift_assignment[e,s]) model.AddBoolOr(or_ls) model.Add(shift_assignment[0,2]==1) model.Add(shift_assignment[1,1]==1) model.Add(shift_assignment[2,3]==1) model.Add(shift_assignment[3,4]==1) model.Add(shift_assignment[4,0]==1) status = solver.Solve(model) print("status:",status) res=np.zeros([num_emp,num_time]) for e in range(num_emp): for t in range(num_time): res[e,t]=solver.Value(work[e,t]) print(res) ``` [![enter image description here](https://i.stack.imgur.com/zZU2H.png)](https://i.stack.imgur.com/zZU2H.png)
Basically you need a set of implications. Looking only at the first worker: work = [w0, w1, w2, w3, w4, w5, w6] shift = [s0, s1, s2, s3, s4] ``` shift_list=[[1,1,1,0,0,0,0], [0,1,1,1,0,0,0], [0,0,1,1,1,0,0], [0,0,0,1,1,1,0], [0,0,0,0,1,1,1]] ``` so ``` w0 <=> s0 w1 <=> or(s0, s1) w2 <=> or(s0, s1, s2) w3 <=> or(s1, s2, s3) w4 <=> or(s2, s3, s4) w5 <=> or(s3, s4) w6 <=> s4 ``` where you encode `l0 <=> or(l1, ..., ln)` by writing ``` # l0 implies or(l1, .., ln) or(l0.Not(), l1, .., ln) # or(l1, .., ln) implies l0 forall i in 1..n: implication(li, l0) ```
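The clause encoding of `l0 <=> or(l1, ..., ln)` can be sanity-checked outside CP-SAT with plain Python booleans, by enumerating every assignment:

```python
from itertools import product

def clauses_hold(l0, ls):
    """The encoding from the answer, read as propositional clauses."""
    # l0 implies or(l1..ln):  or(not l0, l1, ..., ln)
    c1 = (not l0) or any(ls)
    # or(l1..ln) implies l0:  for each i, li implies l0
    c2 = all((not li) or l0 for li in ls)
    return c1 and c2

# The clauses are satisfied exactly when l0 == or(l1..ln).
for l0, l1, l2, l3 in product([False, True], repeat=4):
    assert clauses_hold(l0, [l1, l2, l3]) == (l0 == (l1 or l2 or l3))
print('equivalence encoding verified')
```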
30,893,843
I have the same issue as the OP in [How to import or include data structures (e.g. a dict) into a Python file from a separate file](https://stackoverflow.com/questions/2132985/how-to-import-or-include-data-structures-e-g-a-dict-into-a-python-file-from-a), but for some reason I'm unable to get it working. My setup is as follows: file1.py: ``` TMP_DATA_FILE = {'a':'val1', 'b':'val2'} ``` file2.py: ``` from file1 import TMP_DATA_FILE var = 'a' print(TMP_DATA_FILE[var]) ``` When I run the script from the command line, it says string indices must be integers. When I do `type(TMP_DATA_FILE)`, I get class 'str'. I tried to convert this to a dict to be able to use dict operations, but couldn't get it working. If I write `print(TMP_DATA_FILE.get(var))`, PyCharm (which I'm developing in) lists dict operations like get(), keys(), items(), fromkeys() etc., but when I run the program from the command line it says 'str' object has no attribute 'get'. When I do `print(TMP_DATA_FILE)` it just prints 'val1'; it doesn't list 'a', 'b', 'val2'. However, the same script runs without any issues from PyCharm. It's only when I run the script from the command line, as a separate interpreter process, that it gives those errors. I'm not sure whether it's PyCharm causing the errors or whether I'm doing something wrong. Originally I had only one key:value pair in the dict variable and it worked; when I added a new key:value pair it started giving those errors. I've also tried using `ast.literal_eval` & `eval`; neither of them worked. Not sure where I'm going wrong. Any help would be highly appreciated. Thanks.
2015/06/17
[ "https://Stackoverflow.com/questions/30893843", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3149936/" ]
There are two ways you can access the variable `TMP_DATA_FILE` in file `file1.py`: ``` import file1 var = 'a' print(file1.TMP_DATA_FILE[var]) ``` or: ``` from file1 import TMP_DATA_FILE var = 'a' print(TMP_DATA_FILE[var]) ``` This assumes `file1.py` is in a directory on the Python search path, or in the same directory as the file importing it. Check [this answer](https://stackoverflow.com/questions/3144089/expand-python-search-path-to-other-source#answer-3144107) about the python search path.
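When `from file1 import TMP_DATA_FILE` hands you a string instead of a dict, a frequent culprit is a *different* `file1` earlier on `sys.path` shadowing yours; `importlib.util.find_spec` shows which file would win (demonstrated with a stdlib module here, since `file1` exists only in the question's project):

```python
import importlib.util

# Reports where an import would resolve from, without importing it.
# For the question, replace 'json' with 'file1' and check the path.
spec = importlib.util.find_spec('json')
print(spec.origin)   # filesystem path of the module that would be imported
```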
You're calling it the wrong way. It should be like this: ``` print file1.TMP_DATA_FILE[var] ``` (Note this requires `import file1` rather than `from file1 import ...`.)
30,893,843
I have the same issue as the OP in [How to import or include data structures (e.g. a dict) into a Python file from a separate file](https://stackoverflow.com/questions/2132985/how-to-import-or-include-data-structures-e-g-a-dict-into-a-python-file-from-a), but for some reason I'm unable to get it working. My setup is as follows: file1.py: ``` TMP_DATA_FILE = {'a':'val1', 'b':'val2'} ``` file2.py: ``` from file1 import TMP_DATA_FILE var = 'a' print(TMP_DATA_FILE[var]) ``` When I run the script from the command line, it says string indices must be integers. When I do `type(TMP_DATA_FILE)`, I get class 'str'. I tried to convert this to a dict to be able to use dict operations, but couldn't get it working. If I write `print(TMP_DATA_FILE.get(var))`, PyCharm (which I'm developing in) lists dict operations like get(), keys(), items(), fromkeys() etc., but when I run the program from the command line it says 'str' object has no attribute 'get'. When I do `print(TMP_DATA_FILE)` it just prints 'val1'; it doesn't list 'a', 'b', 'val2'. However, the same script runs without any issues from PyCharm. It's only when I run the script from the command line, as a separate interpreter process, that it gives those errors. I'm not sure whether it's PyCharm causing the errors or whether I'm doing something wrong. Originally I had only one key:value pair in the dict variable and it worked; when I added a new key:value pair it started giving those errors. I've also tried using `ast.literal_eval` & `eval`; neither of them worked. Not sure where I'm going wrong. Any help would be highly appreciated. Thanks.
2015/06/17
[ "https://Stackoverflow.com/questions/30893843", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3149936/" ]
If you have packages created in your project, then you have to import the file relative to the project root. For example: if you have a folder `Z` which contains two folders `A` and `B`, and the two files `file1.py` and `file2.py` are present in the `A` and `B` folders respectively, then you have to import it this way ``` import Z.A.file1 print(Z.A.file1.TMP_DATA_FILE) ```
You're calling it the wrong way. It should be like this: ``` print file1.TMP_DATA_FILE[var] ``` (Note this requires `import file1` rather than `from file1 import ...`.)
30,893,843
I have the same issue as the OP in [How to import or include data structures (e.g. a dict) into a Python file from a separate file](https://stackoverflow.com/questions/2132985/how-to-import-or-include-data-structures-e-g-a-dict-into-a-python-file-from-a), but for some reason I'm unable to get it working. My setup is as follows: file1.py: ``` TMP_DATA_FILE = {'a':'val1', 'b':'val2'} ``` file2.py: ``` from file1 import TMP_DATA_FILE var = 'a' print(TMP_DATA_FILE[var]) ``` When I run the script from the command line, it says string indices must be integers. When I do `type(TMP_DATA_FILE)`, I get class 'str'. I tried to convert this to a dict to be able to use dict operations, but couldn't get it working. If I write `print(TMP_DATA_FILE.get(var))`, PyCharm (which I'm developing in) lists dict operations like get(), keys(), items(), fromkeys() etc., but when I run the program from the command line it says 'str' object has no attribute 'get'. When I do `print(TMP_DATA_FILE)` it just prints 'val1'; it doesn't list 'a', 'b', 'val2'. However, the same script runs without any issues from PyCharm. It's only when I run the script from the command line, as a separate interpreter process, that it gives those errors. I'm not sure whether it's PyCharm causing the errors or whether I'm doing something wrong. Originally I had only one key:value pair in the dict variable and it worked; when I added a new key:value pair it started giving those errors. I've also tried using `ast.literal_eval` & `eval`; neither of them worked. Not sure where I'm going wrong. Any help would be highly appreciated. Thanks.
2015/06/17
[ "https://Stackoverflow.com/questions/30893843", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3149936/" ]
There are two ways you can access the variable `TMP_DATA_FILE` in file `file1.py`: ``` import file1 var = 'a' print(file1.TMP_DATA_FILE[var]) ``` or: ``` from file1 import TMP_DATA_FILE var = 'a' print(TMP_DATA_FILE[var]) ``` This assumes `file1.py` is in a directory on the Python search path, or in the same directory as the file importing it. Check [this answer](https://stackoverflow.com/questions/3144089/expand-python-search-path-to-other-source#answer-3144107) about the python search path.
Correct variant: ``` import file1 var = 'a' print(file1.TMP_DATA_FILE[var]) ``` or ``` from file1 import TMP_DATA_FILE var = 'a' print(TMP_DATA_FILE[var]) ```
30,893,843
I have the same issue as the OP in [How to import or include data structures (e.g. a dict) into a Python file from a separate file](https://stackoverflow.com/questions/2132985/how-to-import-or-include-data-structures-e-g-a-dict-into-a-python-file-from-a), but for some reason I'm unable to get it working. My setup is as follows: file1.py: ``` TMP_DATA_FILE = {'a':'val1', 'b':'val2'} ``` file2.py: ``` from file1 import TMP_DATA_FILE var = 'a' print(TMP_DATA_FILE[var]) ``` When I run the script from the command line, it says string indices must be integers. When I do `type(TMP_DATA_FILE)`, I get class 'str'. I tried to convert this to a dict to be able to use dict operations, but couldn't get it working. If I write `print(TMP_DATA_FILE.get(var))`, PyCharm (which I'm developing in) lists dict operations like get(), keys(), items(), fromkeys() etc., but when I run the program from the command line it says 'str' object has no attribute 'get'. When I do `print(TMP_DATA_FILE)` it just prints 'val1'; it doesn't list 'a', 'b', 'val2'. However, the same script runs without any issues from PyCharm. It's only when I run the script from the command line, as a separate interpreter process, that it gives those errors. I'm not sure whether it's PyCharm causing the errors or whether I'm doing something wrong. Originally I had only one key:value pair in the dict variable and it worked; when I added a new key:value pair it started giving those errors. I've also tried using `ast.literal_eval` & `eval`; neither of them worked. Not sure where I'm going wrong. Any help would be highly appreciated. Thanks.
2015/06/17
[ "https://Stackoverflow.com/questions/30893843", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3149936/" ]
If you have packages created in your project, then you have to import the file relative to the project root. For example: if you have a folder `Z` which contains two folders `A` and `B`, and the two files `file1.py` and `file2.py` are present in the `A` and `B` folders respectively, then you have to import it this way ``` import Z.A.file1 print(Z.A.file1.TMP_DATA_FILE) ```
Correct variant: ``` import file1 var = 'a' print(file1.TMP_DATA_FILE[var]) ``` or ``` from file1 import TMP_DATA_FILE var = 'a' print(TMP_DATA_FILE[var]) ```
30,893,843
I have the same issue as the OP in [How to import or include data structures (e.g. a dict) into a Python file from a separate file](https://stackoverflow.com/questions/2132985/how-to-import-or-include-data-structures-e-g-a-dict-into-a-python-file-from-a), but for some reason I'm unable to get it working. My setup is as follows: file1.py: ``` TMP_DATA_FILE = {'a':'val1', 'b':'val2'} ``` file2.py: ``` from file1 import TMP_DATA_FILE var = 'a' print(TMP_DATA_FILE[var]) ``` When I run the script from the command line, it says string indices must be integers. When I do `type(TMP_DATA_FILE)`, I get class 'str'. I tried to convert this to a dict to be able to use dict operations, but couldn't get it working. If I write `print(TMP_DATA_FILE.get(var))`, PyCharm (which I'm developing in) lists dict operations like get(), keys(), items(), fromkeys() etc., but when I run the program from the command line it says 'str' object has no attribute 'get'. When I do `print(TMP_DATA_FILE)` it just prints 'val1'; it doesn't list 'a', 'b', 'val2'. However, the same script runs without any issues from PyCharm. It's only when I run the script from the command line, as a separate interpreter process, that it gives those errors. I'm not sure whether it's PyCharm causing the errors or whether I'm doing something wrong. Originally I had only one key:value pair in the dict variable and it worked; when I added a new key:value pair it started giving those errors. I've also tried using `ast.literal_eval` & `eval`; neither of them worked. Not sure where I'm going wrong. Any help would be highly appreciated. Thanks.
2015/06/17
[ "https://Stackoverflow.com/questions/30893843", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3149936/" ]
There are two ways you can access the variable `TMP_DATA_FILE` in file `file1.py`: ``` import file1 var = 'a' print(file1.TMP_DATA_FILE[var]) ``` or: ``` from file1 import TMP_DATA_FILE var = 'a' print(TMP_DATA_FILE[var]) ``` This assumes `file1.py` is in a directory on the Python search path, or in the same directory as the file importing it. Check [this answer](https://stackoverflow.com/questions/3144089/expand-python-search-path-to-other-source#answer-3144107) about the python search path.
If you have packages created in your project, then you have to import the file relative to the project root. For example: if you have a folder `Z` which contains two folders `A` and `B`, and the two files `file1.py` and `file2.py` are present in the `A` and `B` folders respectively, then you have to import it this way ``` import Z.A.file1 print(Z.A.file1.TMP_DATA_FILE) ```
60,992,072
I have a mini-program that can read text files and turn simple phrases into Python code; it has a lexer, a parser, everything. I managed to make it play sound using "winsound", but for some reason it only plays the sound as long as the function has not returned. The specific part of the code looks like this: ``` winsound.PlaySound(self.master.files.get(args[1]), winsound.SND_ASYNC | winsound.SND_LOOP | winsound.SND_NODEFAULT) time.sleep(10) return True ``` I used the time.sleep(10) just to experiment when the sound didn't play, and what I noticed is that it plays UNTIL the "return True" line occurs, so with time.sleep(10) the music plays for only 10 seconds. My question is: how can I make this play function work without the music stopping whenever the function returns? **Edit**: I made it so the function returns True or False so that the superclass that manages all the commands will know whether each command ran successfully or not. **Note**: This is just the small portion of the code that is relevant to this question. If you suspect there's more to see in the code to understand my problem, please write it in the comments :)
2020/04/02
[ "https://Stackoverflow.com/questions/60992072", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12420682/" ]
Add `update` to your `ChangeNotifierProxyProvider` and change `build` to `create`. ``` ChangeNotifierProxyProvider<MyModel, MyChangeNotifier>( create: (_) => MyChangeNotifier(), update: (_, myModel, myNotifier) => myNotifier ..update(myModel), child: ... ); ``` See: <https://github.com/rrousselGit/provider/blob/master/README.md#ProxyProvider> and <https://pub.dev/documentation/provider/latest/provider/ChangeNotifierProxyProvider-class.html> Edit: Try this ``` ChangeNotifierProxyProvider<Auth, Products>( create: (c) => Products(Provider.of<Auth>(c, listen: false).token), update: (_, auth, products) => products.authToken = auth.token, ), ```
You can use it like this: ``` ListView.builder( physics: NeverScrollableScrollPhysics(), scrollDirection: Axis.vertical, itemCount: rrr.length, itemBuilder: (ctx, index) => ChangeNotifierProvider.value( value: rrr[index], child: ChildItem()), ), ``` Information about the provider content is in `ChildItem()`
16,903,936
How can I change the location of the .vim folder and the .vimrc file so that I can use two (or more) independent versions of vim? Is there a way to configure that while compiling vim from source? (Maybe an entry in feature.h?) Why do I want to do such a thing?: I have to work on projects that use python2 as well as python3, therefore I want to have two independent vim setups with different plugins, configurations etc. Moreover, one version has to be compiled with +python, the other with +python3.
2013/06/03
[ "https://Stackoverflow.com/questions/16903936", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2344834/" ]
You can influence which `~/.vimrc` is used via the `-u vimrc-file` command-line argument. Since this is the first initialization, you can then influence from where plugins are loaded (i.e. the `.vim` location) by modifying `'runtimepath'` in there. Note that for editing Python files of different versions, those settings (like indent, completion sources, etc.) are taken from *filetype* plugins which are sourced for every buffer separately, so it should be possible to even edit both Python 2 and 3 in the same Vim instance. (Unless you have some badly written plugins that define global stuff.) So for that, some sort of per-buffer configuration (some `:autocmd`s on the project path, or some more elaborate solution (search for *localrc* plugins or questions about *project vimrc* here) might suffice already. Also note that when the Python interpreter (which you'd only need for Python-based plugins and some interactive `:py` commands, not for editing Python) is compiled in with *dynamic linking* (which is the default at least on Windows), you can have both Python 2 **and** 3 support in the same Vim binary.
I think the easiest solution would be just to let pathogen handle your runtimepath for you. `pathogen#infect()` can take paths that specify different directories that you can use for your bundle directory. So if your `.vim` directory would look like this ``` .vim/ autoload/ pathogen.vim bundle_python2/ <plugins> bundle_python3/ <other plugins> ``` Then inside one of your vimrc for python 2 specific stuff you would have ``` call pathogen#infect('bundle_python2/{}') ``` and for python 3 specific stuff you would have ``` call pathogen#infect('bundle_python3/{}') ``` Since each plugin folder is really just a `.vim` folder, you can place your python specific configuration in a folder of the corresponding bundle and pretend it's a `.vim`. This structure also has the added benefit that you can change both configurations at the same time if you feel like it, by putting common stuff in `.vim` normally. You can also pass multiple bundle directories to pathogen so you can share plugins without duplicating files; you just do this by passing multiple paths: `pathogen#infect('bundle/{}', 'bundle_python3/{}')`. After this is all done you can just create aliases for vim to call the correct vimrc file.
16,903,936
How can I change the location of the .vim folder and the .vimrc file so that I can use two (or more) independent versions of vim? Is there a way to configure that while compiling vim from source? (Maybe an entry in feature.h?) Why do I want to do such a thing?: I have to work on projects that use python2 as well as python3, therefore I want to have two independent vim setups with different plugins, configurations etc. Moreover, one version has to be compiled with +python, the other with +python3.
2013/06/03
[ "https://Stackoverflow.com/questions/16903936", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2344834/" ]
You can influence which `~/.vimrc` is used via the `-u vimrc-file` command-line argument. Since this is the first initialization, you can then influence from where plugins are loaded (i.e. the `.vim` location) by modifying `'runtimepath'` in there. Note that for editing Python files of different versions, those settings (like indent, completion sources, etc.) are taken from *filetype* plugins which are sourced for every buffer separately, so it should be possible to even edit both Python 2 and 3 in the same Vim instance. (Unless you have some badly written plugins that define global stuff.) So for that, some sort of per-buffer configuration (some `:autocmd`s on the project path, or some more elaborate solution (search for *localrc* plugins or questions about *project vimrc* here) might suffice already. Also note that when the Python interpreter (which you'd only need for Python-based plugins and some interactive `:py` commands, not for editing Python) is compiled in with *dynamic linking* (which is the default at least on Windows), you can have both Python 2 **and** 3 support in the same Vim binary.
Note: I don't really recommend doing this. If you really, really want to recompile vim so that it uses a different vimrc and a different configuration directory, take a look at `src/feature.h`. Search this file for `USR_VIMRC_FILE`, uncomment it, and place the name of your vimrc there. This will change the default vimrc file. It should look something like this ``` #define USR_VIMRC_FILE "~/path/to/vimrc" ``` Inside `src/os_unix.h` or `src/os_mac.h`, search for `DFLT_RUNTIMEPATH` and change all instances of `~/.vim` to whatever folder you want. This sets the default runtime path that vim searches for settings.
16,903,936
How can I change the location of the .vim folder and the .vimrc file so that I can use two (or more) independent versions of vim? Is there a way to configure that while compiling vim from source? (Maybe an entry in feature.h?) Why do I want to do such a thing?: I have to work on projects that use python2 as well as python3, therefore I want to have two independent vim setups with different plugins, configurations etc. Moreover, one version has to be compiled with +python, the other with +python3.
2013/06/03
[ "https://Stackoverflow.com/questions/16903936", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2344834/" ]
You can influence which `~/.vimrc` is used via the `-u vimrc-file` command-line argument. Since this is the first initialization, you can then influence from where plugins are loaded (i.e. the `.vim` location) by modifying `'runtimepath'` in there. Note that for editing Python files of different versions, those settings (like indent, completion sources, etc.) are taken from *filetype* plugins which are sourced for every buffer separately, so it should be possible to even edit both Python 2 and 3 in the same Vim instance. (Unless you have some badly written plugins that define global stuff.) So for that, some sort of per-buffer configuration (some `:autocmd`s on the project path, or some more elaborate solution (search for *localrc* plugins or questions about *project vimrc* here) might suffice already. Also note that when the Python interpreter (which you'd only need for Python-based plugins and some interactive `:py` commands, not for editing Python) is compiled in with *dynamic linking* (which is the default at least on Windows), you can have both Python 2 **and** 3 support in the same Vim binary.
I found a way to do this! You can just create a fake $HOME, whose contents are simply the `.vim` folder and `.vimrc`. Then, when running vim, set the HOME environment variable to that folder, and change it back on `VimEnter`. Run vim with: ``` OLD_HOME="$HOME" HOME="$FAKE_HOME" vim ``` Add this to your `.vimrc`: ``` autocmd VimEnter * let $HOME = $OLD_HOME ``` On Windows you can use ``` let $HOME = $HOMEDRIVE.$HOMEPATH ``` instead, no need to store the old home. This works, but anything that reads $HOME in your vimrc or in plugins during startup will see the fake value, which might affect them somehow. So far I haven't seen a problem.
16,130,549
I've got an internet site running on Tornado, with video features (convert, cut, merge). The video processing takes quite long, so I want to move it to another Python process and keep the Tornado process as light as possible. I use MongoDB for common db functionality, synchronously, as the db will stay light.
2013/04/21
[ "https://Stackoverflow.com/questions/16130549", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1538095/" ]
There are several options: * [jQuery UI](http://jqueryui.com/) * [YUI](http://yuilibrary.com/) * [ninjaui](http://ninjaui.com/)
Use [kendo UI](http://www.kendoui.com/) Comprehensive HTML5/JavaScript framework for modern web and mobile app development Kendo UI is everything professional developers need to build HTML5 sites and mobile apps. Today, productivity of an average HTML/jQuery developer is hampered by assembling a Frankenstein framework of disparate JavaScript libraries and plug-ins. Kendo UI has it all: rich jQuery-based widgets, a simple and consistent programming interface, a rock-solid DataSource, validation, internationalization, a MVVM framework, themes, templates and the list goes on. WEB DEMOS are [here](http://demos.kendoui.com/web/overview/index.html) Stack Overflow questions about Kendo UI are [here](https://stackoverflow.com/questions/tagged/kendo-ui)
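For the Tornado question above, one common pattern is to push the long-running video work into a worker process so the IOLoop stays responsive. A minimal sketch using `concurrent.futures`; here `convert_video` is a hypothetical stand-in for the real convert/cut/merge work:

```python
from concurrent.futures import ProcessPoolExecutor

def convert_video(path):
    # Hypothetical stand-in for the real (slow) video conversion.
    return path + ".converted"

if __name__ == "__main__":
    executor = ProcessPoolExecutor(max_workers=2)
    # Inside a Tornado coroutine handler you would await the future so the
    # IOLoop stays free while the worker process does the heavy lifting:
    #     result = await IOLoop.current().run_in_executor(executor, convert_video, path)
    future = executor.submit(convert_video, "clip.mp4")
    print(future.result())
    executor.shutdown()
```

This keeps the Tornado process light, as asked; the synchronous MongoDB access can stay in the web process since it is cheap.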
38,888,714
What is the Python syntax to insert a line break after every occurrence of the character "X"? The code below gave me a "'list' object has no attribute 'split'" error ``` for myItem in myList.split('X'): myString = myString.join(myItem.replace('X','X\n')) ```
2016/08/11
[ "https://Stackoverflow.com/questions/38888714", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6284097/" ]
**Python 3.X** ``` myString.translate({ord('X'):'X\n'}) ``` `translate()` accepts a dict, so you can replace more than one distinct character at a time. Why `translate()` over `replace()`? Check [translate vs replace](https://stackoverflow.com/questions/31143290/python-str-translate-vs-str-replace) **Python 2.7** In Python 2, `string.maketrans()` requires both arguments to have the same length, so it cannot map one character to `'X\n'`; use `myString.replace('X', 'X\n')` there instead.
A list has no `split` method (as the error says). Assuming `myList` is a list of strings and you want to replace `'X'` with `'X\n'` in each one of them, you can use a list comprehension: ``` new_list = [string.replace('X', 'X\n') for string in myList] ```
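A quick sanity check that the `replace` and `translate` approaches from the two answers above agree (Python 3):

```python
s = "1X2X3X"
via_replace = s.replace("X", "X\n")             # str.replace approach
via_translate = s.translate({ord("X"): "X\n"})  # str.translate approach
assert via_replace == via_translate == "1X\n2X\n3X\n"
print(via_replace)
```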
38,888,714
What is the Python syntax to insert a line break after every occurrence of the character "X"? The code below gave me a "'list' object has no attribute 'split'" error ``` for myItem in myList.split('X'): myString = myString.join(myItem.replace('X','X\n')) ```
2016/08/11
[ "https://Stackoverflow.com/questions/38888714", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6284097/" ]
``` myString = '1X2X3X' print(myString.replace('X', 'X\n')) ```
**Python 3.X** ``` myString.translate({ord('X'):'X\n'}) ``` `translate()` accepts a dict, so you can replace more than one distinct character at a time. Why `translate()` over `replace()`? Check [translate vs replace](https://stackoverflow.com/questions/31143290/python-str-translate-vs-str-replace) **Python 2.7** In Python 2, `string.maketrans()` requires both arguments to have the same length, so it cannot map one character to `'X\n'`; use `myString.replace('X', 'X\n')` there instead.
38,888,714
What is the Python syntax to insert a line break after every occurrence of the character "X"? The code below gave me a "'list' object has no attribute 'split'" error ``` for myItem in myList.split('X'): myString = myString.join(myItem.replace('X','X\n')) ```
2016/08/11
[ "https://Stackoverflow.com/questions/38888714", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6284097/" ]
**Python 3.X** ``` myString.translate({ord('X'):'X\n'}) ``` `translate()` accepts a dict, so you can replace more than one distinct character at a time. Why `translate()` over `replace()`? Check [translate vs replace](https://stackoverflow.com/questions/31143290/python-str-translate-vs-str-replace) **Python 2.7** In Python 2, `string.maketrans()` requires both arguments to have the same length, so it cannot map one character to `'X\n'`; use `myString.replace('X', 'X\n')` there instead.
Based on your question details, it sounds like the most suitable tool is str.replace, as suggested by @DeepSpace. @levi's answer is also applicable, but may be overkill here. I will add an even more powerful tool - regex. It is slower and harder to grasp, but worth considering in case what you actually need is not a literal "X" -> "X\n" substitution but something more complex: ``` import re result_string = re.sub("X", "X\n", original_string) ``` For more details: <https://docs.python.org/2/library/re.html#re.sub>
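To illustrate when the regex answer above earns its keep over a plain `replace`: insert the break only when "X" is followed by a digit. The lookahead `(?=\d)` checks the digit without consuming it, so the trailing "X" is left alone:

```python
import re

s = "1X2X3X"
# Replace only an "X" that is immediately followed by a digit;
# the final "X" has no digit after it, so it keeps no newline.
result = re.sub(r"X(?=\d)", "X\n", s)
print(result)
```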
38,888,714
What is the Python syntax to insert a line break after every occurrence of the character "X"? The code below gave me a "'list' object has no attribute 'split'" error ``` for myItem in myList.split('X'): myString = myString.join(myItem.replace('X','X\n')) ```
2016/08/11
[ "https://Stackoverflow.com/questions/38888714", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6284097/" ]
``` myString = '1X2X3X' print(myString.replace('X', 'X\n')) ```
A list has no `split` method (as the error says). Assuming `myList` is a list of strings and you want to replace `'X'` with `'X\n'` in each one of them, you can use a list comprehension: ``` new_list = [string.replace('X', 'X\n') for string in myList] ```
38,888,714
What is the Python syntax to insert a line break after every occurrence of the character "X"? The code below gave me a "'list' object has no attribute 'split'" error ``` for myItem in myList.split('X'): myString = myString.join(myItem.replace('X','X\n')) ```
2016/08/11
[ "https://Stackoverflow.com/questions/38888714", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6284097/" ]
You can simply replace "X" with "X\n": ``` myString.replace("X","X\n") ```
A list has no `split` method (as the error says). Assuming `myList` is a list of strings and you want to replace `'X'` with `'X\n'` in each one of them, you can use a list comprehension: ``` new_list = [string.replace('X', 'X\n') for string in myList] ```
38,888,714
What is the Python syntax to insert a line break after every occurrence of the character "X"? The code below gave me a "'list' object has no attribute 'split'" error ``` for myItem in myList.split('X'): myString = myString.join(myItem.replace('X','X\n')) ```
2016/08/11
[ "https://Stackoverflow.com/questions/38888714", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6284097/" ]
A list has no `split` method (as the error says). Assuming `myList` is a list of strings and you want to replace `'X'` with `'X\n'` in each one of them, you can use a list comprehension: ``` new_list = [string.replace('X', 'X\n') for string in myList] ```
Based on your question details, it sounds like the most suitable tool is str.replace, as suggested by @DeepSpace. @levi's answer is also applicable, but may be overkill here. I will add an even more powerful tool - regex. It is slower and harder to grasp, but worth considering in case what you actually need is not a literal "X" -> "X\n" substitution but something more complex: ``` import re result_string = re.sub("X", "X\n", original_string) ``` For more details: <https://docs.python.org/2/library/re.html#re.sub>
38,888,714
What is the Python syntax to insert a line break after every occurrence of the character "X"? The code below gave me a "'list' object has no attribute 'split'" error ``` for myItem in myList.split('X'): myString = myString.join(myItem.replace('X','X\n')) ```
2016/08/11
[ "https://Stackoverflow.com/questions/38888714", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6284097/" ]
``` myString = '1X2X3X' print(myString.replace('X', 'X\n')) ```
You can simply replace "X" with "X\n": ``` myString.replace("X","X\n") ```
38,888,714
What is the Python syntax to insert a line break after every occurrence of the character "X"? The code below gave me a "'list' object has no attribute 'split'" error ``` for myItem in myList.split('X'): myString = myString.join(myItem.replace('X','X\n')) ```
2016/08/11
[ "https://Stackoverflow.com/questions/38888714", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6284097/" ]
``` myString = '1X2X3X' print(myString.replace('X', 'X\n')) ```
Based on your question details, it sounds like the most suitable tool is str.replace, as suggested by @DeepSpace. @levi's answer is also applicable, but may be overkill here. I will add an even more powerful tool - regex. It is slower and harder to grasp, but worth considering in case what you actually need is not a literal "X" -> "X\n" substitution but something more complex: ``` import re result_string = re.sub("X", "X\n", original_string) ``` For more details: <https://docs.python.org/2/library/re.html#re.sub>
38,888,714
What is the Python syntax to insert a line break after every occurrence of the character "X"? The code below gave me a "'list' object has no attribute 'split'" error ``` for myItem in myList.split('X'): myString = myString.join(myItem.replace('X','X\n')) ```
2016/08/11
[ "https://Stackoverflow.com/questions/38888714", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6284097/" ]
You can simply replace "X" with "X\n": ``` myString.replace("X","X\n") ```
Based on your question details, it sounds like the most suitable tool is str.replace, as suggested by @DeepSpace. @levi's answer is also applicable, but may be overkill here. I will add an even more powerful tool - regex. It is slower and harder to grasp, but worth considering in case what you actually need is not a literal "X" -> "X\n" substitution but something more complex: ``` import re result_string = re.sub("X", "X\n", original_string) ``` For more details: <https://docs.python.org/2/library/re.html#re.sub>
72,432,540
As you can see, "python --version" shows Python 3.10.4 but the interpreter shows Python 3.7.3 [![enter image description here](https://i.stack.imgur.com/RUqlc.png)](https://i.stack.imgur.com/RUqlc.png) How can I change the environment in VSCode?
2022/05/30
[ "https://Stackoverflow.com/questions/72432540", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16776924/" ]
If you click on the interpreter version being used by VSCode, you should be able to select different versions across your device. [![Interpreter version](https://i.stack.imgur.com/6tWBe.png)](https://i.stack.imgur.com/6tWBe.png)
Selecting the interpreter in VSCode: <https://code.visualstudio.com/docs/python/environments#_work-with-python-interpreters> To run `streamlit` in `vscode`: Open the `launch.json` file of your project. Copy the following: ``` { "configurations": [ { "name": "Python:Streamlit", "type": "python", "request": "launch", "module": "streamlit", "args": [ "run", "${file}" ] } ] } ```
72,432,540
As you can see, "python --version" shows Python 3.10.4 but the interpreter shows Python 3.7.3 [![enter image description here](https://i.stack.imgur.com/RUqlc.png)](https://i.stack.imgur.com/RUqlc.png) How can I change the environment in VSCode?
2022/05/30
[ "https://Stackoverflow.com/questions/72432540", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16776924/" ]
If you click on the interpreter version being used by VSCode, you should be able to select different versions across your device. [![Interpreter version](https://i.stack.imgur.com/6tWBe.png)](https://i.stack.imgur.com/6tWBe.png)
Add the following to your settings.json (Ctrl+Shift+P, "Preferences: Open Settings (JSON)"): ``` "terminal.integrated.env.osx": { "PATH": "" } ```
70,971,382
I want to compare two files and display the differences and the missing records in both files. Based on suggestions on this forum, I found awk is the fastest way to do it. Comparison is to be done based on composite key - match\_key and issuer\_grid\_id **Code:** ``` BEGIN { FS="[= ]" } { match(" "$0,/ match_key="[^"]+"/) key = substr($0,RSTART,RLENGTH) } NR==FNR { file1[key] = $0 next } { if ( key in file1 ) { nf = split(file1[key],tmp) for (i=1; i<nf; i+=2) { f1[key,tmp[i]] = tmp[i+1] } msg = sep = "" for (i=1; i<NF; i+=2) { if ( $(i+1) != f1[key,$i] ) { msg = msg sep OFS ARGV[1] "." $i "=" f1[key,$i] OFS FILENAME "." $i "=" $(i+1) sep = "," } } if ( msg != "" ) { print "Mismatch in row " FNR msg } delete file1[key] } else { file2[key] = $0 } } END { for (key in file1) { print "In file1 only:", key, file1[key] } for (key in file2) { print "In file2 only:", key, file2[key] } } ``` **file1:** ``` period="2021-02-28" book_base_ent_cd="U0028" intra_group_acc_scope="Issuer is not part of the reporting group" frbrnc_stts="Not forborne or renegotiated" src_prfrmng_stts="KC10.1" dflt_stts_issr="Not in default" src_dflt_stts_issr="KC10.1" dflt_stts_instrmnt="Not in default" src_mes_accntng_clssfctn="AMC" prdntl_prtfl="Non-trading book" imprmnt_stts="Stage 1 (IFRS)" src_imprmnt_stts="1" imprmnt_assssmnt_mthd="Collectively assessed" unit_measure="USD" issuer_grid_id="2" match_key="PLCHS252SA20" period="2021-02-28" book_base_ent_cd="U0027" intra_group_acc_scope="Issuer is not part of the reporting group" frbrnc_stts="Not forborne or renegotiated" src_prfrmng_stts="KC10.1" dflt_stts_issr="Not in default" src_dflt_stts_issr="KC10.1" dflt_stts_instrmnt="Not in default" src_mes_accntng_clssfctn="AMC" prdntl_prtfl="Non-trading book" imprmnt_stts="Stage 1 (IFRS)" src_imprmnt_stts="1" imprmnt_assssmnt_mthd="Collectively assessed" unit_measure="EUR" issuer_grid_id="3" match_key="PLCHS252SA20" period="2021-02-28" book_base_ent_cd="U0027" intra_group_acc_scope="Issuer is not part of 
the reporting group" frbrnc_stts="Not forborne or renegotiated" src_prfrmng_stts="KC10.1" dflt_stts_issr="Not in default" src_dflt_stts_issr="KC10.1" dflt_stts_instrmnt="Not in default" src_mes_accntng_clssfctn="AMC" prdntl_prtfl="Non-trading book" imprmnt_stts="Stage 1 (IFRS)" src_imprmnt_stts="1" imprmnt_assssmnt_mthd="Collectively assessed" unit_measure="EUR" issuer_grid_id="2" match_key="PLCHS252SA22" period="2021-02-28" book_base_ent_cd="U0027" intra_group_acc_scope="Issuer is not part of the reporting group" frbrnc_stts="Not forborne or renegotiated" src_prfrmng_stts="KC10.1" dflt_stts_issr="Not in default" src_dflt_stts_issr="KC10.1" dflt_stts_instrmnt="Not in default" src_mes_accntng_clssfctn="AMC" prdntl_prtfl="Non-trading book" imprmnt_stts="Stage 1 (IFRS)" src_imprmnt_stts="1" imprmnt_assssmnt_mthd="Collectively assessed" unit_measure="EUR" issuer_grid_id="2" match_key="PLCHS252SA21" ``` **file2:** ``` period="2021-02-28" book_base_ent_cd="U0027" intra_group_acc_scope="Issuer is not part of the reporting group" frbrnc_stts="Not forborne or renegotiated" src_prfrmng_stts="KC10.1" dflt_stts_issr="Not in default" src_dflt_stts_issr="KC10.1" dflt_stts_instrmnt="Not in default" src_mes_accntng_clssfctn="AMC" prdntl_prtfl="Non-trading book" imprmnt_stts="Stage 1 (IFRS)" src_imprmnt_stts="1" imprmnt_assssmnt_mthd="Collectively assessed" unit_measure="EUR" issuer_grid_id="3" match_key="PLCHS252SA20" period="2021-02-28" book_base_ent_cd="U0027" intra_group_acc_scope="Issuer is not part of the reporting group" frbrnc_stts="Not forborne or renegotiated" src_prfrmng_stts="KC10.1" dflt_stts_issr="Not in default" src_dflt_stts_issr="KC10.1" dflt_stts_instrmnt="Not in default" src_mes_accntng_clssfctn="AMC" prdntl_prtfl="Non-trading book" imprmnt_stts="Stage 1 (IFRS)" src_imprmnt_stts="1" imprmnt_assssmnt_mthd="Collectively assessed" unit_measure="EUR" issuer_grid_id="2" match_key="PLCHS252SA20" period="2021-02-28" book_base_ent_cd="U0027" intra_group_acc_scope="Issuer 
is not part of the reporting group" frbrnc_stts="Not forborne or renegotiated" src_prfrmng_stts="KC10.1" dflt_stts_issr="Not in default" src_dflt_stts_issr="KC10.1" dflt_stts_instrmnt="Not in default" src_mes_accntng_clssfctn="AMC" prdntl_prtfl="Non-trading book" imprmnt_stts="Stage 1 (IFRS)" src_imprmnt_stts="1" imprmnt_assssmnt_mthd="Collectively assessed" unit_measure="EUR" issuer_grid_id="2" match_key="PLCHS252SA23" period="2021-02-28" book_base_ent_cd="U0027" intra_group_acc_scope="Issuer is not part of the reporting group" frbrnc_stts="Not forborne or renegotiated" src_prfrmng_stts="KC10.1" dflt_stts_issr="Not in default" src_dflt_stts_issr="KC10.1" dflt_stts_instrmnt="Not in default" src_mes_accntng_clssfctn="AMC" prdntl_prtfl="Non-trading book" imprmnt_stts="Stage 1 (IFRS)" src_imprmnt_stts="1" imprmnt_assssmnt_mthd="Collectively assessed" unit_measure="EUR" issuer_grid_id="2" match_key="PLCHS252SA21" ``` **file 3 (it has only one row but number of fields are more)** ``` period="2021-02-28" book_base_ent_cd="U0027" other_inst_ident="PLCHS258Q463" rep_nom_curr="PLN" reporting_basis="Unit" src_instr_class="Debt" mat_date="2026-08-25" nom_curr="PLN" primary_asset_class="Bond" seniority_type="931" security_status="alive" issuer_name="CUST38677608" intra_group_prud_scope="Issuer is not part of the reporting group" intra_group_acc_scope="Issuer is not part of the reporting group" frbrnc_stts="Not forborne or renegotiated" src_frbrnc_stts="NOFRBRNRNGT" prfrmng_stts="Performing" src_prfrmng_stts="KC10.1" dflt_stts_issr="Not in default" src_dflt_stts_issr="KC10.1" dflt_stts_instrmnt="Not in default" src_mes_accntng_clssfctn="AMC" prdntl_prtfl="Non-trading book" imprmnt_stts="Stage 1 (IFRS)" src_imprmnt_stts="1" imprmnt_assssmnt_mthd="Collectively assessed" src_imprmnt_assssmnt_mthd="COLLECTIVE" accmltd_imprmnt="78.54" accmltd_chngs_fv_cr="0" expsr_vl="0" unit_measure="EUR" unit_measure_nv="EUR" crryng_amnt="24565.13" issuer_grid_id="38677608" 
match_key="PLCHS258Q463" ``` **Expected output:** ``` In file1 only : issuer_grid_id="2" match_key="PLCHS252SA22" In file2 only : issuer_grid_id="2" match_key="PLCHS252SA23" Mismatch for issuer_grid_id="2" match_key="PLCHS252SA20" : file1.book_base_ent_cd="U0028" file2.book_base_ent_cd="U0027", file1.unit_measure="USD" file2.unit_measure="EUR" ``` **Actual Output** ``` awk -f compare.awk file1 file2 Mismatch in row 1 for file1.issuer_grid_id="2" file2.issuer_grid_id="3", file1.match_key="PLCHS252SA21" file2.match_key="PLCHS252SA20" In file2 only: period="2021-02-28" book_base_ent_cd="U0027" intra_group_acc_scope="Issuer is not part of the reporting group" frbrnc_stts="Not forborne or renegotiated" src_prfrmng_stts="KC10.1" dflt_stts_issr="Not in default" src_dflt_stts_issr="KC10.1" dflt_stts_instrmnt="Not in default" src_mes_accntng_clssfctn="AMC" prdntl_prtfl="Non-trading book" imprmnt_stts="Stage 1 (IFRS)" src_imprmnt_stts="1" imprmnt_assssmnt_mthd="Collectively assessed" unit_measure="EUR" issuer_grid_id="2" match_key="PLCHS252SA21" ``` I am not able to find a way to do the multifield comparison. Any suggestion is appreciated. I also tagged python, in case there is a faster way to do it there. Best Regards.
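Since the question also tags python: a minimal sketch (not the full file comparison) of parsing one such line into a dict and building the composite key, assuming every field looks like key="value" with no escaped quotes inside values:

```python
import re

def parse_record(line):
    # Each field looks like key="value"; capture both parts into a dict.
    return dict(re.findall(r'(\w+)="([^"]*)"', line))

rec = parse_record('unit_measure="USD" issuer_grid_id="2" match_key="PLCHS252SA20"')
# Composite key for matching rows across the two files:
key = (rec["issuer_grid_id"], rec["match_key"])
print(key)
```

With records keyed this way, the "only in file1" / "only in file2" / "mismatch" logic reduces to dict-key set operations plus a field-by-field compare of the shared keys.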
2022/02/03
[ "https://Stackoverflow.com/questions/70971382", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17742463/" ]
Just tweak the setting of `key` at the top to use whatever set of fields you want, and the printing of the mismatch message to be `from key ... key` instead of `from line ... FNR`: ``` $ cat tst.awk BEGIN { FS="[= ]" } { match(" "$0,/ issuer_grid_id="[^"]+"/) key = substr($0,RSTART,RLENGTH) match(" "$0,/ match_key="[^"]+"/) key = key substr($0,RSTART,RLENGTH) } NR==FNR { file1[key] = $0 next } { if ( key in file1 ) { nf = split(file1[key],tmp) for (i=1; i<nf; i+=2) { f1[key,tmp[i]] = tmp[i+1] } msg = sep = "" for (i=1; i<NF; i+=2) { if ( $(i+1) != f1[key,$i] ) { msg = msg sep OFS ARGV[1] "." $i "=" f1[key,$i] OFS FILENAME "." $i "=" $(i+1) sep = "," } } if ( msg != "" ) { print "Mismatch for key " key msg } delete file1[key] } else { file2[key] = $0 } } END { for (key in file1) { print "In file1 only:", key, file1[key] } for (key in file2) { print "In file2 only:", key, file2[key] } } ``` ``` $ awk -f tst.awk file1 file2 Mismatch for key issuer_grid_id="2" match_key="PLCHS252SA20" file1.book_base_ent_cd="U0028" file2.book_base_ent_cd="U0027", file1.unit_measure="USD" file2.unit_measure="EUR" In file1 only: issuer_grid_id="2" match_key="PLCHS252SA22" period="2021-02-28" book_base_ent_cd="U0027" intra_group_acc_scope="Issuer is not part of the reporting group" frbrnc_stts="Not forborne or renegotiated" src_prfrmng_stts="KC10.1" dflt_stts_issr="Not in default" src_dflt_stts_issr="KC10.1" dflt_stts_instrmnt="Not in default" src_mes_accntng_clssfctn="AMC" prdntl_prtfl="Non-trading book" imprmnt_stts="Stage 1 (IFRS)" src_imprmnt_stts="1" imprmnt_assssmnt_mthd="Collectively assessed" unit_measure="EUR" issuer_grid_id="2" match_key="PLCHS252SA22" In file2 only: issuer_grid_id="2" match_key="PLCHS252SA23" period="2021-02-28" book_base_ent_cd="U0027" intra_group_acc_scope="Issuer is not part of the reporting group" frbrnc_stts="Not forborne or renegotiated" src_prfrmng_stts="KC10.1" dflt_stts_issr="Not in default" src_dflt_stts_issr="KC10.1" dflt_stts_instrmnt="Not in 
default" src_mes_accntng_clssfctn="AMC" prdntl_prtfl="Non-trading book" imprmnt_stts="Stage 1 (IFRS)" src_imprmnt_stts="1" imprmnt_assssmnt_mthd="Collectively assessed" unit_measure="EUR" issuer_grid_id="2" match_key="PLCHS252SA23" ```
You can use ruby sets: ``` $ cat tst.rb def f2h(fn) data={} File.open(fn){|fh| fh. each_line{|line| h=line.scan(/(\w+)="([^"]+)"/).to_h k=h.slice("issuer_grid_id", "match_key"). map{|k,v| "#{k}=#{v}"}.join(", ") data[k]=h} } data end f1=f2h(ARGV[0]) f2=f2h(ARGV[1]) mis=Hash.new { |hash, key| hash[key] = [] } (f2.keys & f1.keys).each{|k| f1[k].each{|ks,v| template="#{ks}: #{ARGV[0]}.#{f1[k][ks]}, #{ARGV[1]}.#{f2[k][ks]}" mis[k] << template if f1[k][ks]!=f2[k][ks]}} mis.each{|k,v| puts "Mismatch for key #{k} #{v.join(" ")}"} f1only=(f1.keys-f2.keys).join(", ") f2only=(f2.keys-f1.keys).join(", ") puts "Only in #{ARGV[0]}: #{f1only}\nOnly in #{ARGV[1]}: #{f2only}" ``` Then calling like so: ``` ruby tst.rb file1 file2 ``` Prints: ``` Mismatch for key issuer_grid_id=2, match_key=PLCHS252SA20 book_base_ent_cd: file1.U0028, file2.U0027 unit_measure: file1.USD, file2.EUR Only in file1: issuer_grid_id=2, match_key=PLCHS252SA22 Only in file2: issuer_grid_id=2, match_key=PLCHS252SA23 ``` (If you want quotes around the values, they are easily added.) It works because ruby support set arithmetic on arrays (this is from the ruby interactive shell): ``` irb(main):033:0> arr1=[1,2,3,4] => [1, 2, 3, 4] irb(main):034:0> arr2=[2,3,4,5] => [2, 3, 4, 5] irb(main):035:0> arr1-arr2 => [1] # only in arr1 irb(main):036:0> arr2-arr1 => [5] # only in arr2 irb(main):037:0> arr1 & arr2 => [2, 3, 4] # common between arr1 and arr2 ``` Since we are using `(f2.keys & f1.keys)` we are guaranteed to only be looping over shared keys. 
It therefore works just fine with your example `file3`: ``` $ ruby tst.rb file1 file3 Only in file1: issuer_grid_id=2, match_key=PLCHS252SA20, issuer_grid_id=3, match_key=PLCHS252SA20, issuer_grid_id=2, match_key=PLCHS252SA22, issuer_grid_id=2, match_key=PLCHS252SA21 Only in file3: issuer_grid_id=38677608, match_key=PLCHS258Q463 ``` Since Python also has sets, this is easily written in Python too: ``` import re def f2h(fn): di={} k1, k2="issuer_grid_id", "match_key" with open(fn) as f: for line in f: matches=dict(re.findall(r'(\w+)="([^"]+)"', line)) di[f"{k1}={matches[k1]}, {k2}={matches[k2]}"]=matches return di f1=f2h(fn1) f2=f2h(fn2) mis={} for k in set(f1.keys()) & set(f2.keys()): for ks,v in f1[k].items(): if f1[k][ks]!=f2[k][ks]: mis.setdefault(k, []).append( f"{ks}: {fn1}.{f1[k][ks]}, {fn2}.{f2[k][ks]}") for k,v in mis.items(): print(f"Mismatch for key {k} {' '.join(v)}") print(f"Only in {fn1}: {';'.join(set(f1.keys())-f2.keys())}") print(f"Only in {fn2}: {';'.join(set(f2.keys())-f1.keys())}") ``` While `awk` does not support sets, the set operations `and` and `minus` are trivial to write with associative arrays. 
Which then allows a `GNU awk` version of this same method: ``` function set_and(a1, a2, a3) { delete a3 for (e in a1) if (e in a2) a3[e] } function set_minus(a1, a2, a3) { delete a3 for (e in a1) if (!(e in a2)) a3[e] } function proc_line(s, data) { delete data # this is the only GNU specific portion and easily rewritten for POSIX patsplit(s,matches,/\w+="[^"]+"/) for (m in matches) { split(matches[m],kv, /=/) data[kv[1]]=kv[2] } } { proc_line($0, data) key=sprintf("issuer_grid_id=%s, match_key=%s", data["issuer_grid_id"], data["match_key"]) } FNR==NR{a1[key]=$0} FNR<NR{a2[key]=$0} END{ set_and(a1,a2, a3) for (key in a3) { ft=sprintf("Mismatch for key %s ", key) proc_line(a1[key],d1) proc_line(a2[key],d2) for (sk in d1) if (d1[sk]!=d2[sk]) { printf("%s %s %s.%s; %s.%s", ft, sk, ARGV[1], d1[sk], ARGV[2], d2[sk]) ft="" } if (ft=="") print "" } set_minus(a1,a2, a3) for (e in a3) printf("In %s only: %s\n", ARGV[1], e) set_minus(a2,a1, a3) for (e in a3) printf("In %s only: %s\n", ARGV[2], e) } ``` This works the same as the Ruby and Python version and also supports the third file example. Good luck!
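For reference, the same set arithmetic shown in the irb session above, in Python (which the Python port relies on via `set`):

```python
a1 = {1, 2, 3, 4}
a2 = {2, 3, 4, 5}
assert a1 - a2 == {1}        # only in a1
assert a2 - a1 == {5}        # only in a2
assert a1 & a2 == {2, 3, 4}  # common to both
print("set arithmetic checks pass")
```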
72,337,348
I would like to get all text separated by double quotes and commas using python Beautifulsoup. The sample has no class or ids. Could use the div with "Information:" for parent like this: ``` try: test_var = soup.find(text='Information:').find_next('ul').find_next('li') for li in test_var.find_all: test_var = print(li.text, end="," except: test_var = '' ``` Sample: ``` <body> <div>Information:</div> <ul> <li>Text 1</li> <li>Text 2</li> <li>Text 3</li> </ul> </body> ``` The end result should be like this: "Text 1", "Text 2", "Text 3" Thank you.
2022/05/22
[ "https://Stackoverflow.com/questions/72337348", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2615887/" ]
Just use the [:not](https://api.jquery.com/not-selector/) selector like this: ```js $('.one:not([data-id="two"])').on('click', function() { $('.A').show(); }); $("[data-id='two'].one").on('click', function() { $('.B').show(); }); ``` ```css .one {width: 50px;margin: 10px;padding: 10px 0;text-align: center;outline: 1px solid black} .A, .B {display: none;background: yellow;width: 50px;margin: 10px;padding: 10px 0;text-align: center;outline: 1px solid black} ``` ```html <script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script> <div class="one">one</div> <div data-id="two" class="one">two one</div> <div class="A">A</div> <div class="B">B</div> ```
Change it to accept one or the other when any `$('.one')` is clicked: ``` $('.one').on('click', function() { if ($(this).data('id')) { $('.B').show(); } else { $('.A').show(); } }); ``` ```js if ($(this).data('id')) {... // if the `data-id` has a value ex. "2", then it is true ``` ```js $('.one').on('click', function() { if ($(this).data('id')) { $('.B').show(); } else { $('.A').show(); } }); ``` ```css .one { width: 50px; margin: 10px; padding: 10px 0; text-align: center; outline: 1px solid black } .A, .B { display: none; background: yellow; width: 50px; margin: 10px; padding: 10px 0; text-align: center; outline: 1px solid black } ``` ```html <script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script> <div class="one">one</div> <div data-id="two" class="one">two one</div> <div class="A">A</div> <div class="B">B</div> ```
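For the BeautifulSoup question above, a stdlib-only sketch of the same traversal (assuming the markup stays as simple as the sample; with bs4 installed you would do the equivalent with `find('div', string='Information:')`, `find_next('ul')` and `find_all('li')`):

```python
from html.parser import HTMLParser

class LiCollector(HTMLParser):
    """Collects the text content of every <li> in the document."""
    def __init__(self):
        super().__init__()
        self.in_li = False
        self.items = []
    def handle_starttag(self, tag, attrs):
        if tag == "li":
            self.in_li = True
    def handle_endtag(self, tag):
        if tag == "li":
            self.in_li = False
    def handle_data(self, data):
        if self.in_li and data.strip():
            self.items.append(data.strip())

html = """<body>
<div>Information:</div>
<ul>
 <li>Text 1</li>
 <li>Text 2</li>
 <li>Text 3</li>
</ul>
</body>"""

parser = LiCollector()
parser.feed(html)
# Join with double quotes and commas, as the question asks.
result = ", ".join(f'"{item}"' for item in parser.items)
print(result)
```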
72,337,348
I would like to get all text separated by double quotes and commas using python Beautifulsoup. The sample has no class or ids. Could use the div with "Information:" for parent like this: ``` try: test_var = soup.find(text='Information:').find_next('ul').find_next('li') for li in test_var.find_all: test_var = print(li.text, end="," except: test_var = '' ``` Sample: ``` <body> <div>Information:</div> <ul> <li>Text 1</li> <li>Text 2</li> <li>Text 3</li> </ul> </body> ``` The end result should be like this: "Text 1", "Text 2", "Text 3" Thank you.
2022/05/22
[ "https://Stackoverflow.com/questions/72337348", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2615887/" ]
Change it to accept one or the other when any `$('.one')` is clicked: ``` $('.one').on('click', function() { if ($(this).data('id')) { $('.B').show(); } else { $('.A').show(); } }); ``` ```js if ($(this).data('id')) {... // if the `data-id` has a value ex. "2", then it is true ``` ```js $('.one').on('click', function() { if ($(this).data('id')) { $('.B').show(); } else { $('.A').show(); } }); ``` ```css .one { width: 50px; margin: 10px; padding: 10px 0; text-align: center; outline: 1px solid black } .A, .B { display: none; background: yellow; width: 50px; margin: 10px; padding: 10px 0; text-align: center; outline: 1px solid black } ``` ```html <script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script> <div class="one">one</div> <div data-id="two" class="one">two one</div> <div class="A">A</div> <div class="B">B</div> ```
JQuery selector equals to `querySelectorAll`, while `querySelector` selects only the first element it finds: ```js document.querySelector(".one").onclick = function() { $('.A').show(); } $("[data-id='two'].one").on('click', function() { $('.B').show(); }); ``` ```css .one { width: 50px; margin: 10px; padding: 10px 0; text-align: center; outline: 1px solid black } .A, .B { display: none; background: yellow; width: 50px; margin: 10px; padding: 10px 0; text-align: center; outline: 1px solid black } ``` ```html <script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script> <div class="one">one</div> <div data-id="two" class="one">two one</div> <div class="A">A</div> <div class="B">B</div> ```
72,337,348
I would like to get all text separated by double quotes and commas using python Beautifulsoup. The sample has no class or ids. Could use the div with "Information:" for parent like this: ``` try: test_var = soup.find(text='Information:').find_next('ul').find_next('li') for li in test_var.find_all: test_var = print(li.text, end="," except: test_var = '' ``` Sample: ``` <body> <div>Information:</div> <ul> <li>Text 1</li> <li>Text 2</li> <li>Text 3</li> </ul> </body> ``` The end result should be like this: "Text 1", "Text 2", "Text 3" Thank you.
2022/05/22
[ "https://Stackoverflow.com/questions/72337348", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2615887/" ]
Just use the [:not](https://api.jquery.com/not-selector/) selector like this: ```js $('.one:not([data-id="two"])').on('click', function() { $('.A').show(); }); $("[data-id='two'].one").on('click', function() { $('.B').show(); }); ``` ```css .one {width: 50px;margin: 10px;padding: 10px 0;text-align: center;outline: 1px solid black} .A, .B {display: none;background: yellow;width: 50px;margin: 10px;padding: 10px 0;text-align: center;outline: 1px solid black} ``` ```html <script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script> <div class="one">one</div> <div data-id="two" class="one">two one</div> <div class="A">A</div> <div class="B">B</div> ```
A jQuery selector works like `querySelectorAll`, while `querySelector` selects only the first element it finds: ```js document.querySelector(".one").onclick = function() { $('.A').show(); } $("[data-id='two'].one").on('click', function() { $('.B').show(); }); ``` ```css .one { width: 50px; margin: 10px; padding: 10px 0; text-align: center; outline: 1px solid black } .A, .B { display: none; background: yellow; width: 50px; margin: 10px; padding: 10px 0; text-align: center; outline: 1px solid black } ``` ```html <script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script> <div class="one">one</div> <div data-id="two" class="one">two one</div> <div class="A">A</div> <div class="B">B</div> ```
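The two answers stored above are for a different (jQuery) question. For the BeautifulSoup question itself, the intended extraction can be sketched with only the standard library, since the sample is well-formed markup; with BeautifulSoup the same idea would be `soup.find(text='Information:').find_next('ul').find_all('li')` (note that `find_all` must be called, and the loop should walk the `<ul>`, not the first `<li>`). This is a sketch, not a tested answer from the thread:

```python
import xml.etree.ElementTree as ET

# The well-formed sample from the question.
sample = """
<body>
  <div>Information:</div>
  <ul>
    <li>Text 1</li>
    <li>Text 2</li>
    <li>Text 3</li>
  </ul>
</body>
"""

root = ET.fromstring(sample)
# Collect the text of every <li>, then join with quoted, comma-separated formatting.
items = [li.text for li in root.iter("li")]
result = ", ".join(f'"{item}"' for item in items)
print(result)  # "Text 1", "Text 2", "Text 3"
```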
67,018,079
I have probem with this code , why ? the code : ``` import cv2 import numpy as np from PIL import Image import os import numpy as np import cv2 import os import h5py import dlib from imutils import face_utils from keras.models import load_model import sys from keras.models import Sequential from keras.layers import Conv2D, MaxPooling2D,Dropout from keras.layers import Dense, Activation, Flatten from keras.utils import to_categorical from keras import backend as K from sklearn.model_selection import train_test_split from Model import model from keras import callbacks # Path for face image database path = 'dataset' recognizer = cv2.face.LBPHFaceRecognizer_create() detector = cv2.CascadeClassifier("haarcascade_frontalface_default.xml"); def downsample_image(img): img = Image.fromarray(img.astype('uint8'), 'L') img = img.resize((32,32), Image.ANTIALIAS) return np.array(img) # function to get the images and label data def getImagesAndLabels(path): path = 'dataset' imagePaths = [os.path.join(path,f) for f in os.listdir(path)] faceSamples=[] ids = [] for imagePath in imagePaths: #if there is an error saving any jpegs try: PIL_img = Image.open(imagePath).convert('L') # convert it to grayscale except: continue img_numpy = np.array(PIL_img,'uint8') id = int(os.path.split(imagePath)[-1].split(".")[1]) faceSamples.append(img_numpy) ids.append(id) return faceSamples,ids print ("\n [INFO] Training faces now.") faces,ids = getImagesAndLabels(path) K.clear_session() n_faces = len(set(ids)) model = model((32,32,1),n_faces) faces = np.asarray(faces) faces = np.array([downsample_image(ab) for ab in faces]) ids = np.asarray(ids) faces = faces[:,:,:,np.newaxis] print("Shape of Data: " + str(faces.shape)) print("Number of unique faces : " + str(n_faces)) ids = to_categorical(ids) faces = faces.astype('float32') faces /= 255. 
x_train, x_test, y_train, y_test = train_test_split(faces,ids, test_size = 0.2, random_state = 0) checkpoint = callbacks.ModelCheckpoint('trained_model.h5', monitor='val_acc', save_best_only=True, save_weights_only=True, verbose=1) model.fit(x_train, y_train, batch_size=32, epochs=10, validation_data=(x_test, y_test), shuffle=True,callbacks=[checkpoint]) # Print the numer of faces trained and end program print("enter code here`\n [INFO] " + str(n_faces) + " faces trained. Exiting Program") ``` --- ``` the output: ------------------ File "D:\my hard sam\ماجستير\سنة ثانية\البحث\python\Real-Time-Face-Recognition-Using-CNN-master\Real-Time-Face-Recognition-Using-CNN-master\02_face_training.py", line 16, in <module> from keras.utils import to_categorical ImportError: cannot import name 'to_categorical' from 'keras.utils' (C:\Users\omar\PycharmProjects\SnakGame\venv\lib\site-packages\keras\utils\__init__.py) ```
2021/04/09
[ "https://Stackoverflow.com/questions/67018079", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15558831/" ]
**Keras** is now fully integrated into **TensorFlow**, so importing only **Keras** causes an error. It should be imported as: ``` from tensorflow.keras.utils import to_categorical ``` **Avoid** importing as: ``` from keras.utils import to_categorical ``` It is safe to use `from tensorflow.keras.` instead of `from keras.` while importing all the necessary modules. ```py from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dropout from tensorflow.keras.layers import Dense, Activation, Flatten from tensorflow.keras.utils import to_categorical from tensorflow.keras import backend as K from sklearn.model_selection import train_test_split from tensorflow.keras import callbacks ```
First, you can try installing `keras.utils` with ``` !pip install keras.utils ``` or, more simply, import `to_categorical` as ``` from tensorflow.keras.utils import to_categorical ``` because Keras now comes under the TensorFlow package.
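Whichever import path works in a given Keras/TensorFlow version, it helps to know what `to_categorical` actually does: it turns integer class labels into one-hot rows. Below is a pure-Python stand-in sketching the semantics (not the Keras implementation; `to_categorical_sketch` is a made-up name):

```python
def to_categorical_sketch(labels, num_classes=None):
    """Rough stand-in for keras' to_categorical: integer labels -> one-hot rows."""
    if num_classes is None:
        num_classes = max(labels) + 1
    return [[1.0 if i == label else 0.0 for i in range(num_classes)]
            for label in labels]

print(to_categorical_sketch([0, 2, 1]))
# [[1.0, 0.0, 0.0], [0.0, 0.0, 1.0], [0.0, 1.0, 0.0]]
```

This is why the training script above calls `ids = to_categorical(ids)` before fitting: the model's softmax output is compared against one-hot targets.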
67,018,079
I have probem with this code , why ? the code : ``` import cv2 import numpy as np from PIL import Image import os import numpy as np import cv2 import os import h5py import dlib from imutils import face_utils from keras.models import load_model import sys from keras.models import Sequential from keras.layers import Conv2D, MaxPooling2D,Dropout from keras.layers import Dense, Activation, Flatten from keras.utils import to_categorical from keras import backend as K from sklearn.model_selection import train_test_split from Model import model from keras import callbacks # Path for face image database path = 'dataset' recognizer = cv2.face.LBPHFaceRecognizer_create() detector = cv2.CascadeClassifier("haarcascade_frontalface_default.xml"); def downsample_image(img): img = Image.fromarray(img.astype('uint8'), 'L') img = img.resize((32,32), Image.ANTIALIAS) return np.array(img) # function to get the images and label data def getImagesAndLabels(path): path = 'dataset' imagePaths = [os.path.join(path,f) for f in os.listdir(path)] faceSamples=[] ids = [] for imagePath in imagePaths: #if there is an error saving any jpegs try: PIL_img = Image.open(imagePath).convert('L') # convert it to grayscale except: continue img_numpy = np.array(PIL_img,'uint8') id = int(os.path.split(imagePath)[-1].split(".")[1]) faceSamples.append(img_numpy) ids.append(id) return faceSamples,ids print ("\n [INFO] Training faces now.") faces,ids = getImagesAndLabels(path) K.clear_session() n_faces = len(set(ids)) model = model((32,32,1),n_faces) faces = np.asarray(faces) faces = np.array([downsample_image(ab) for ab in faces]) ids = np.asarray(ids) faces = faces[:,:,:,np.newaxis] print("Shape of Data: " + str(faces.shape)) print("Number of unique faces : " + str(n_faces)) ids = to_categorical(ids) faces = faces.astype('float32') faces /= 255. 
x_train, x_test, y_train, y_test = train_test_split(faces,ids, test_size = 0.2, random_state = 0) checkpoint = callbacks.ModelCheckpoint('trained_model.h5', monitor='val_acc', save_best_only=True, save_weights_only=True, verbose=1) model.fit(x_train, y_train, batch_size=32, epochs=10, validation_data=(x_test, y_test), shuffle=True,callbacks=[checkpoint]) # Print the numer of faces trained and end program print("enter code here`\n [INFO] " + str(n_faces) + " faces trained. Exiting Program") ``` --- ``` the output: ------------------ File "D:\my hard sam\ماجستير\سنة ثانية\البحث\python\Real-Time-Face-Recognition-Using-CNN-master\Real-Time-Face-Recognition-Using-CNN-master\02_face_training.py", line 16, in <module> from keras.utils import to_categorical ImportError: cannot import name 'to_categorical' from 'keras.utils' (C:\Users\omar\PycharmProjects\SnakGame\venv\lib\site-packages\keras\utils\__init__.py) ```
2021/04/09
[ "https://Stackoverflow.com/questions/67018079", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15558831/" ]
**Keras** is now fully integrated into **TensorFlow**, so importing only **Keras** causes an error. It should be imported as: ``` from tensorflow.keras.utils import to_categorical ``` **Avoid** importing as: ``` from keras.utils import to_categorical ``` It is safe to use `from tensorflow.keras.` instead of `from keras.` while importing all the necessary modules. ```py from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dropout from tensorflow.keras.layers import Dense, Activation, Flatten from tensorflow.keras.utils import to_categorical from tensorflow.keras import backend as K from sklearn.model_selection import train_test_split from tensorflow.keras import callbacks ```
Alternatively, you can use: `from keras.utils.np_utils import to_categorical` Please note the **np\_utils** after **keras.utils**
67,018,079
I have probem with this code , why ? the code : ``` import cv2 import numpy as np from PIL import Image import os import numpy as np import cv2 import os import h5py import dlib from imutils import face_utils from keras.models import load_model import sys from keras.models import Sequential from keras.layers import Conv2D, MaxPooling2D,Dropout from keras.layers import Dense, Activation, Flatten from keras.utils import to_categorical from keras import backend as K from sklearn.model_selection import train_test_split from Model import model from keras import callbacks # Path for face image database path = 'dataset' recognizer = cv2.face.LBPHFaceRecognizer_create() detector = cv2.CascadeClassifier("haarcascade_frontalface_default.xml"); def downsample_image(img): img = Image.fromarray(img.astype('uint8'), 'L') img = img.resize((32,32), Image.ANTIALIAS) return np.array(img) # function to get the images and label data def getImagesAndLabels(path): path = 'dataset' imagePaths = [os.path.join(path,f) for f in os.listdir(path)] faceSamples=[] ids = [] for imagePath in imagePaths: #if there is an error saving any jpegs try: PIL_img = Image.open(imagePath).convert('L') # convert it to grayscale except: continue img_numpy = np.array(PIL_img,'uint8') id = int(os.path.split(imagePath)[-1].split(".")[1]) faceSamples.append(img_numpy) ids.append(id) return faceSamples,ids print ("\n [INFO] Training faces now.") faces,ids = getImagesAndLabels(path) K.clear_session() n_faces = len(set(ids)) model = model((32,32,1),n_faces) faces = np.asarray(faces) faces = np.array([downsample_image(ab) for ab in faces]) ids = np.asarray(ids) faces = faces[:,:,:,np.newaxis] print("Shape of Data: " + str(faces.shape)) print("Number of unique faces : " + str(n_faces)) ids = to_categorical(ids) faces = faces.astype('float32') faces /= 255. 
x_train, x_test, y_train, y_test = train_test_split(faces,ids, test_size = 0.2, random_state = 0) checkpoint = callbacks.ModelCheckpoint('trained_model.h5', monitor='val_acc', save_best_only=True, save_weights_only=True, verbose=1) model.fit(x_train, y_train, batch_size=32, epochs=10, validation_data=(x_test, y_test), shuffle=True,callbacks=[checkpoint]) # Print the numer of faces trained and end program print("enter code here`\n [INFO] " + str(n_faces) + " faces trained. Exiting Program") ``` --- ``` the output: ------------------ File "D:\my hard sam\ماجستير\سنة ثانية\البحث\python\Real-Time-Face-Recognition-Using-CNN-master\Real-Time-Face-Recognition-Using-CNN-master\02_face_training.py", line 16, in <module> from keras.utils import to_categorical ImportError: cannot import name 'to_categorical' from 'keras.utils' (C:\Users\omar\PycharmProjects\SnakGame\venv\lib\site-packages\keras\utils\__init__.py) ```
2021/04/09
[ "https://Stackoverflow.com/questions/67018079", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15558831/" ]
**Keras** is now fully integrated into **TensorFlow**, so importing only **Keras** causes an error. It should be imported as: ``` from tensorflow.keras.utils import to_categorical ``` **Avoid** importing as: ``` from keras.utils import to_categorical ``` It is safe to use `from tensorflow.keras.` instead of `from keras.` while importing all the necessary modules. ```py from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dropout from tensorflow.keras.layers import Dense, Activation, Flatten from tensorflow.keras.utils import to_categorical from tensorflow.keras import backend as K from sklearn.model_selection import train_test_split from tensorflow.keras import callbacks ```
``` import tensorflow y_train = tensorflow.keras.utils.to_categorical(y_train, num_classes) y_test = tensorflow.keras.utils.to_categorical(y_test, num_classes) ``` It solved my problem!
67,018,079
I have probem with this code , why ? the code : ``` import cv2 import numpy as np from PIL import Image import os import numpy as np import cv2 import os import h5py import dlib from imutils import face_utils from keras.models import load_model import sys from keras.models import Sequential from keras.layers import Conv2D, MaxPooling2D,Dropout from keras.layers import Dense, Activation, Flatten from keras.utils import to_categorical from keras import backend as K from sklearn.model_selection import train_test_split from Model import model from keras import callbacks # Path for face image database path = 'dataset' recognizer = cv2.face.LBPHFaceRecognizer_create() detector = cv2.CascadeClassifier("haarcascade_frontalface_default.xml"); def downsample_image(img): img = Image.fromarray(img.astype('uint8'), 'L') img = img.resize((32,32), Image.ANTIALIAS) return np.array(img) # function to get the images and label data def getImagesAndLabels(path): path = 'dataset' imagePaths = [os.path.join(path,f) for f in os.listdir(path)] faceSamples=[] ids = [] for imagePath in imagePaths: #if there is an error saving any jpegs try: PIL_img = Image.open(imagePath).convert('L') # convert it to grayscale except: continue img_numpy = np.array(PIL_img,'uint8') id = int(os.path.split(imagePath)[-1].split(".")[1]) faceSamples.append(img_numpy) ids.append(id) return faceSamples,ids print ("\n [INFO] Training faces now.") faces,ids = getImagesAndLabels(path) K.clear_session() n_faces = len(set(ids)) model = model((32,32,1),n_faces) faces = np.asarray(faces) faces = np.array([downsample_image(ab) for ab in faces]) ids = np.asarray(ids) faces = faces[:,:,:,np.newaxis] print("Shape of Data: " + str(faces.shape)) print("Number of unique faces : " + str(n_faces)) ids = to_categorical(ids) faces = faces.astype('float32') faces /= 255. 
x_train, x_test, y_train, y_test = train_test_split(faces,ids, test_size = 0.2, random_state = 0) checkpoint = callbacks.ModelCheckpoint('trained_model.h5', monitor='val_acc', save_best_only=True, save_weights_only=True, verbose=1) model.fit(x_train, y_train, batch_size=32, epochs=10, validation_data=(x_test, y_test), shuffle=True,callbacks=[checkpoint]) # Print the numer of faces trained and end program print("enter code here`\n [INFO] " + str(n_faces) + " faces trained. Exiting Program") ``` --- ``` the output: ------------------ File "D:\my hard sam\ماجستير\سنة ثانية\البحث\python\Real-Time-Face-Recognition-Using-CNN-master\Real-Time-Face-Recognition-Using-CNN-master\02_face_training.py", line 16, in <module> from keras.utils import to_categorical ImportError: cannot import name 'to_categorical' from 'keras.utils' (C:\Users\omar\PycharmProjects\SnakGame\venv\lib\site-packages\keras\utils\__init__.py) ```
2021/04/09
[ "https://Stackoverflow.com/questions/67018079", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15558831/" ]
Alternatively, you can use: `from keras.utils.np_utils import to_categorical` Please note the **np\_utils** after **keras.utils**
First, you can try installing `keras.utils` with ``` !pip install keras.utils ``` or, more simply, import `to_categorical` as ``` from tensorflow.keras.utils import to_categorical ``` because Keras now comes under the TensorFlow package.
67,018,079
I have probem with this code , why ? the code : ``` import cv2 import numpy as np from PIL import Image import os import numpy as np import cv2 import os import h5py import dlib from imutils import face_utils from keras.models import load_model import sys from keras.models import Sequential from keras.layers import Conv2D, MaxPooling2D,Dropout from keras.layers import Dense, Activation, Flatten from keras.utils import to_categorical from keras import backend as K from sklearn.model_selection import train_test_split from Model import model from keras import callbacks # Path for face image database path = 'dataset' recognizer = cv2.face.LBPHFaceRecognizer_create() detector = cv2.CascadeClassifier("haarcascade_frontalface_default.xml"); def downsample_image(img): img = Image.fromarray(img.astype('uint8'), 'L') img = img.resize((32,32), Image.ANTIALIAS) return np.array(img) # function to get the images and label data def getImagesAndLabels(path): path = 'dataset' imagePaths = [os.path.join(path,f) for f in os.listdir(path)] faceSamples=[] ids = [] for imagePath in imagePaths: #if there is an error saving any jpegs try: PIL_img = Image.open(imagePath).convert('L') # convert it to grayscale except: continue img_numpy = np.array(PIL_img,'uint8') id = int(os.path.split(imagePath)[-1].split(".")[1]) faceSamples.append(img_numpy) ids.append(id) return faceSamples,ids print ("\n [INFO] Training faces now.") faces,ids = getImagesAndLabels(path) K.clear_session() n_faces = len(set(ids)) model = model((32,32,1),n_faces) faces = np.asarray(faces) faces = np.array([downsample_image(ab) for ab in faces]) ids = np.asarray(ids) faces = faces[:,:,:,np.newaxis] print("Shape of Data: " + str(faces.shape)) print("Number of unique faces : " + str(n_faces)) ids = to_categorical(ids) faces = faces.astype('float32') faces /= 255. 
x_train, x_test, y_train, y_test = train_test_split(faces,ids, test_size = 0.2, random_state = 0) checkpoint = callbacks.ModelCheckpoint('trained_model.h5', monitor='val_acc', save_best_only=True, save_weights_only=True, verbose=1) model.fit(x_train, y_train, batch_size=32, epochs=10, validation_data=(x_test, y_test), shuffle=True,callbacks=[checkpoint]) # Print the numer of faces trained and end program print("enter code here`\n [INFO] " + str(n_faces) + " faces trained. Exiting Program") ``` --- ``` the output: ------------------ File "D:\my hard sam\ماجستير\سنة ثانية\البحث\python\Real-Time-Face-Recognition-Using-CNN-master\Real-Time-Face-Recognition-Using-CNN-master\02_face_training.py", line 16, in <module> from keras.utils import to_categorical ImportError: cannot import name 'to_categorical' from 'keras.utils' (C:\Users\omar\PycharmProjects\SnakGame\venv\lib\site-packages\keras\utils\__init__.py) ```
2021/04/09
[ "https://Stackoverflow.com/questions/67018079", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15558831/" ]
Alternatively, you can use: `from keras.utils.np_utils import to_categorical` Please note the **np\_utils** after **keras.utils**
``` import tensorflow y_train = tensorflow.keras.utils.to_categorical(y_train, num_classes) y_test = tensorflow.keras.utils.to_categorical(y_test, num_classes) ``` It solved my problem!
4,424,004
I'm new to Python programming and GUIs. I searched the internet about GUI programming and saw that there are a lot of ways to do this. I see that the easiest way to make a GUI in Python might be tkinter (which is included in Python, and is just a GUI library, not a GUI builder)? I also read a lot about GLADE+PyGTK (and its XML format); what is so special about that (Glade is a GUI builder)? Can anyone give a "personal opinion" about these choices? I have Python code, and I need to make a simple GUI (2 buttons - open/close/read/write - and some "print" work) and then make an .exe file (is py2exe the best choice?). Are there a lot of changes to the code needed to make a GUI? Many thanks
2010/12/12
[ "https://Stackoverflow.com/questions/4424004", "https://Stackoverflow.com", "https://Stackoverflow.com/users/530877/" ]
``` bool perfectNumber(number); ``` This does not call the `perfectNumber` function; it declares a local variable named `perfectNumber` of type `bool` and initializes it with the value of `number` converted to type `bool`. In order to call the `perfectNumber` function, you need to use something along the lines of: ``` bool result = perfectNumber(number); ``` or: ``` bool result(perfectNumber(number)); ``` On another note: if you are going to read input from a stream (e.g. `cin>>number`), you must check to be sure that the extraction of the value from the stream succeeded. As it is now, if you typed in `asdf`, the extraction would fail and `number` would be left uninitialized. The best way to check whether an extraction succeeds is simply to test the state of the stream: ``` if (cin >> number) { bool result = perfectNumber(number); } else { // input operation failed; handle the error as appropriate } ``` You can learn more about how the stream error states are set and reset in [Semantics of flags on `basic_ios`](https://stackoverflow.com/questions/4258887/semantics-of-flags-on-basic-ios). You should also consult [a good, introductory-level C++ book](https://stackoverflow.com/questions/388242/the-definitive-c-book-guide-and-list) for more stream-use best practices.
``` void primenum(long double x) { bool prime = true; int number2 = (int) floor(sqrt(x)); // Calculates the square-root of 'x' for (int i = 2; i <= x; i++) { // start at 2, since 1 is not prime for (int j = 2; j <= number2; j++) { if (i != j && i % j == 0) { prime = false; break; } } if (prime) { cout << " " << i << " "; c += 1; } prime = true; } } ```
4,424,004
I'm new to Python programming and GUIs. I searched the internet about GUI programming and saw that there are a lot of ways to do this. I see that the easiest way to make a GUI in Python might be tkinter (which is included in Python, and is just a GUI library, not a GUI builder)? I also read a lot about GLADE+PyGTK (and its XML format); what is so special about that (Glade is a GUI builder)? Can anyone give a "personal opinion" about these choices? I have Python code, and I need to make a simple GUI (2 buttons - open/close/read/write - and some "print" work) and then make an .exe file (is py2exe the best choice?). Are there a lot of changes to the code needed to make a GUI? Many thanks
2010/12/12
[ "https://Stackoverflow.com/questions/4424004", "https://Stackoverflow.com", "https://Stackoverflow.com/users/530877/" ]
``` bool perfectNumber(number); ``` This does not call the `perfectNumber` function; it declares a local variable named `perfectNumber` of type `bool` and initializes it with the value of `number` converted to type `bool`. In order to call the `perfectNumber` function, you need to use something along the lines of: ``` bool result = perfectNumber(number); ``` or: ``` bool result(perfectNumber(number)); ``` On another note: if you are going to read input from a stream (e.g. `cin>>number`), you must check to be sure that the extraction of the value from the stream succeeded. As it is now, if you typed in `asdf`, the extraction would fail and `number` would be left uninitialized. The best way to check whether an extraction succeeds is simply to test the state of the stream: ``` if (cin >> number) { bool result = perfectNumber(number); } else { // input operation failed; handle the error as appropriate } ``` You can learn more about how the stream error states are set and reset in [Semantics of flags on `basic_ios`](https://stackoverflow.com/questions/4258887/semantics-of-flags-on-basic-ios). You should also consult [a good, introductory-level C++ book](https://stackoverflow.com/questions/388242/the-definitive-c-book-guide-and-list) for more stream-use best practices.
``` #pragma hdrstop #include <tchar.h> #include <stdio.h> #include <conio.h> //--------------------------------------------------------------------------- bool is_prim(int nr) { for (int i = 2; i < nr-1; i++) { if (nr%i==0) return false; } return true; } bool is_ptr(int nr) { int sum=0; for (int i = 1; i < nr; i++) { if (nr%i==0) { sum=sum+i; } } if (sum==nr) { return true; } else return false; } #pragma argsused int _tmain(int argc, _TCHAR* argv[]) { int numar; printf ("Number=");scanf("%d",&numar); if (is_prim(numar)==true) { printf("The number is prime"); } else printf("The number is not prime"); if (is_ptr(numar)==true) { printf(" The number is perfect"); } else printf(" The number is not perfect"); getch(); return 0; } ```
4,424,004
I'm new to Python programming and GUIs. I searched the internet about GUI programming and saw that there are a lot of ways to do this. I see that the easiest way to make a GUI in Python might be tkinter (which is included in Python, and is just a GUI library, not a GUI builder)? I also read a lot about GLADE+PyGTK (and its XML format); what is so special about that (Glade is a GUI builder)? Can anyone give a "personal opinion" about these choices? I have Python code, and I need to make a simple GUI (2 buttons - open/close/read/write - and some "print" work) and then make an .exe file (is py2exe the best choice?). Are there a lot of changes to the code needed to make a GUI? Many thanks
2010/12/12
[ "https://Stackoverflow.com/questions/4424004", "https://Stackoverflow.com", "https://Stackoverflow.com/users/530877/" ]
``` bool perfectNumber(number); ``` This does not call the `perfectNumber` function; it declares a local variable named `perfectNumber` of type `bool` and initializes it with the value of `number` converted to type `bool`. In order to call the `perfectNumber` function, you need to use something along the lines of: ``` bool result = perfectNumber(number); ``` or: ``` bool result(perfectNumber(number)); ``` On another note: if you are going to read input from a stream (e.g. `cin>>number`), you must check to be sure that the extraction of the value from the stream succeeded. As it is now, if you typed in `asdf`, the extraction would fail and `number` would be left uninitialized. The best way to check whether an extraction succeeds is simply to test the state of the stream: ``` if (cin >> number) { bool result = perfectNumber(number); } else { // input operation failed; handle the error as appropriate } ``` You can learn more about how the stream error states are set and reset in [Semantics of flags on `basic_ios`](https://stackoverflow.com/questions/4258887/semantics-of-flags-on-basic-ios). You should also consult [a good, introductory-level C++ book](https://stackoverflow.com/questions/388242/the-definitive-c-book-guide-and-list) for more stream-use best practices.
``` bool isPerfect(int number) { int sum = 0; for (int i = 1; i < number; i++) { if (number % i == 0) { cout << " " << i; sum += i; } } if (sum == number) { cout << "\n \t\t THIS NUMBER >>> " << number << " IS PERFECT \n\n"; return true; } else { cout << "\nThis number >>> " << number << " IS NOT PERFECT \n\n"; return false; } } ```
4,424,004
I'm new to Python programming and GUIs. I searched the internet about GUI programming and saw that there are a lot of ways to do this. I see that the easiest way to make a GUI in Python might be tkinter (which is included in Python, and is just a GUI library, not a GUI builder)? I also read a lot about GLADE+PyGTK (and its XML format); what is so special about that (Glade is a GUI builder)? Can anyone give a "personal opinion" about these choices? I have Python code, and I need to make a simple GUI (2 buttons - open/close/read/write - and some "print" work) and then make an .exe file (is py2exe the best choice?). Are there a lot of changes to the code needed to make a GUI? Many thanks
2010/12/12
[ "https://Stackoverflow.com/questions/4424004", "https://Stackoverflow.com", "https://Stackoverflow.com/users/530877/" ]
``` void primenum(long double x) { bool prime = true; int number2 = (int) floor(sqrt(x)); // Calculates the square-root of 'x' for (int i = 2; i <= x; i++) { // start at 2, since 1 is not prime for (int j = 2; j <= number2; j++) { if (i != j && i % j == 0) { prime = false; break; } } if (prime) { cout << " " << i << " "; c += 1; } prime = true; } } ```
``` #pragma hdrstop #include <tchar.h> #include <stdio.h> #include <conio.h> //--------------------------------------------------------------------------- bool is_prim(int nr) { for (int i = 2; i < nr-1; i++) { if (nr%i==0) return false; } return true; } bool is_ptr(int nr) { int sum=0; for (int i = 1; i < nr; i++) { if (nr%i==0) { sum=sum+i; } } if (sum==nr) { return true; } else return false; } #pragma argsused int _tmain(int argc, _TCHAR* argv[]) { int numar; printf ("Number=");scanf("%d",&numar); if (is_prim(numar)==true) { printf("The number is prime"); } else printf("The number is not prime"); if (is_ptr(numar)==true) { printf(" The number is perfect"); } else printf(" The number is not perfect"); getch(); return 0; } ```
4,424,004
I'm new to Python programming and GUIs. I searched the internet about GUI programming and saw that there are a lot of ways to do this. I see that the easiest way to make a GUI in Python might be tkinter (which is included in Python, and is just a GUI library, not a GUI builder)? I also read a lot about GLADE+PyGTK (and its XML format); what is so special about that (Glade is a GUI builder)? Can anyone give a "personal opinion" about these choices? I have Python code, and I need to make a simple GUI (2 buttons - open/close/read/write - and some "print" work) and then make an .exe file (is py2exe the best choice?). Are there a lot of changes to the code needed to make a GUI? Many thanks
2010/12/12
[ "https://Stackoverflow.com/questions/4424004", "https://Stackoverflow.com", "https://Stackoverflow.com/users/530877/" ]
``` bool isPerfect(int number) { int sum = 0; for (int i = 1; i < number; i++) { if (number % i == 0) { cout << " " << i; sum += i; } } if (sum == number) { cout << "\n \t\t THIS NUMBER >>> " << number << " IS PERFECT \n\n"; return true; } else { cout << "\nThis number >>> " << number << " IS NOT PERFECT \n\n"; return false; } } ```
``` #pragma hdrstop #include <tchar.h> #include <stdio.h> #include <conio.h> //--------------------------------------------------------------------------- bool is_prim(int nr) { for (int i = 2; i < nr-1; i++) { if (nr%i==0) return false; } return true; } bool is_ptr(int nr) { int sum=0; for (int i = 1; i < nr; i++) { if (nr%i==0) { sum=sum+i; } } if (sum==nr) { return true; } else return false; } #pragma argsused int _tmain(int argc, _TCHAR* argv[]) { int numar; printf ("Number=");scanf("%d",&numar); if (is_prim(numar)==true) { printf("The number is prime"); } else printf("The number is not prime"); if (is_ptr(numar)==true) { printf(" The number is perfect"); } else printf(" The number is not perfect"); getch(); return 0; } ```
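The C++ answers stored above implement prime- and perfect-number checks (they belong to a different question than the Python GUI one shown). For reference, the perfect-number check they implement can be sketched in Python in a few lines; `is_perfect` is a name introduced here, not taken from any answer above:

```python
def is_perfect(n: int) -> bool:
    # A perfect number equals the sum of its proper divisors (e.g. 6 = 1 + 2 + 3).
    if n < 2:
        return False
    return sum(i for i in range(1, n) if n % i == 0) == n

# The only perfect numbers below 30:
print([n for n in range(1, 30) if is_perfect(n)])  # [6, 28]
```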
66,413,002
I'm attempting to translate the following curl request to something that will run in django. ``` curl -X POST https://api.lemlist.com/api/hooks --data '{"targetUrl":"https://example.com/lemlist-hook"}' --header "Content-Type: application/json" --user ":1234567980abcedf" ``` I've run this in git bash and it returns the expected response. What I have in my django project is the following: ``` apikey = '1234567980abcedf' hookurl = 'https://example.com/lemlist-hook' data = '{"targetUrl":hookurl}' headers = {'Content-Type': 'application/json'} response = requests.post(f'https://api.lemlist.com/api/hooks/', data=data, headers=headers, auth=('', apikey)) ``` Running this python code returns this as a json response ``` {} ``` Any thoughts on where there might be a problem in my code? Thanks!
2021/02/28
[ "https://Stackoverflow.com/questions/66413002", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7609684/" ]
One way you can do this at the *word* level is: ``` select t.* from t cross apply (select count(*) as cnt from string_split(t.text, ' ') s1 join string_split(@sentence, ' ') s2 on s1.value = s2.value ) ss order by ss.cnt desc; ``` Notes: * This only looks for exact word matches in the two phrases. * This requires that words are separated by spaces, both in `text` and in "the sentence". * Duplicate words might throw the count off. This can be managed (say, by using `count(distinct s1.value) as cnt`) if you need to.
There are a lot of ways to select the items. For example: ``` SELECT 'I want to buy a ' + A.BrandName + ' cellphone and the model should be ' + A.ModelName FROM ( SELECT SUBSTRING(TEXT, 1, LEN('samsung')) AS BrandName , SUBSTRING(TEXT, LEN(SUBSTRING(TEXT, 1, LEN('samsung')))+1, LEN(TEXT)) AS ModelName FROM TABLE_NAME WHERE TEXT LIKE N'%samsung%' AND TEXT LIKE N'%galaxy s9%' ) AS A ```
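The two answers stored above are for a different (SQL) question. For the curl-to-requests question itself, the likely culprit is that `data = '{"targetUrl":hookurl}'` is a plain string literal, so the variable name `hookurl` is sent verbatim instead of its value (and the payload is not even valid JSON). A stdlib-only sketch of the difference; with `requests`, passing a dict via the `json=` parameter does the serialization for you:

```python
import json

hookurl = "https://example.com/lemlist-hook"

# Broken: a string literal; 'hookurl' is never substituted with the URL.
broken = '{"targetUrl":hookurl}'

# Fixed: serialize a real dict, so the URL ends up in the JSON body.
fixed = json.dumps({"targetUrl": hookurl})

print(fixed)  # {"targetUrl": "https://example.com/lemlist-hook"}

# With requests, the equivalent would be:
#   requests.post("https://api.lemlist.com/api/hooks",
#                 json={"targetUrl": hookurl}, auth=("", apikey))
# (requests then sets the Content-Type: application/json header itself.)
```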
66,755,583
I've tried all the installing methods in geopandas' [documentation](https://geopandas.org/getting_started/install.html) and nothing works. `conda install geopandas` gives ``` UnsatisfiableError: The following specifications were found to be incompatible with each other: Output in format: Requested package -> Available versionsThe following specifications were found to be incompatible with your CUDA driver: - feature:/win-32::__cuda==10.1=0 Your installed CUDA driver is: 10.1 ``` `conda install --channel conda-forge geopandas` gives the same error. Created a new environment with conda: ``` Package python conflicts for: python=3 geopandas -> python[version='2.7.*|3.5.*|3.6.*|>=3.5|>=3.6|3.4.*|>=3.6,<3.7.0a0|>=3.7,<3.8.0a0|>=2.7,<2.8.0a0|>=3.5,<3.6.0a0'] geopandas -> pandas[version='>=0.24'] -> python[version='>=3.7|>=3.8,<3.9.0a0|>=3.9,<3.10.0a0']The following specifications were found to be incompatible with your CUDA driver: - feature:/win-32::__cuda==10.1=0 Your installed CUDA driver is: 10.1 ``` I tried installing from source, no luck: ``` A GDAL API version must be specified. Provide a path to gdal-config using a GDAL_CONFIG environment variable or use a GDAL_VERSION environment variable. ``` I also followed [this answer](https://stackoverflow.com/a/58943939/13083530), which gives similar errors for all packages installing: ``` Package `geopandas` found in cache Downloading package . . . https://download.lfd.uci.edu/pythonlibs/z4tqcw5k/geopandas-0.8.1-py3-none-any.whl geopandas-0.8.1-py3-none-any.whl Traceback (most recent call last): File "C:\Users\\AppData\Local\Programs\Python\Python38\lib\urllib\request.py", line 649, in http_error_default raise HTTPError(req.full_url, code, msg, hdrs, fp) urllib.error.HTTPError: HTTP Error 404: Not Found ``` I also followed [this tutorial](https://towardsdatascience.com/geopandas-installation-the-easy-way-for-windows-31a666b3610f) and download 5 dependencies' binary wheels and pip install them. 
I have this error for installing `Fiona`, `geopandas`, `pyproj` ``` A GDAL API version must be specified. Provide a path to gdal-config using a GDAL_CONFIG environment variable or use a GDAL_VERSION environment variable. ``` I'm in my venv with Python 3.8.7 in Windows 10. I have GDAL installed and set `GDAL_DATA` and `GDAL_DRIVER_PATH` as environment vars.
2021/03/23
[ "https://Stackoverflow.com/questions/66755583", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13083530/" ]
@duckboycool and @Ken Y-N are right; downgrading to Python 3.7 did the trick! Downgrade with conda using `conda install python=3.7`, then run `conda install geopandas`.
You need to create an environment first, then install GeoPandas inside the new environment: ```none 1- conda create -n geo_env 2- conda activate geo_env 3- conda config --env --add channels conda-forge 4- conda config --env --set channel_priority strict 5- conda install python=3 geopandas ``` See also the following video: <https://youtu.be/k-MWeAWEta8> <https://geopandas.org/getting_started/install.html>
66,755,583
I've tried all the installing methods in geopandas' [documentation](https://geopandas.org/getting_started/install.html) and nothing works. `conda install geopandas` gives ``` UnsatisfiableError: The following specifications were found to be incompatible with each other: Output in format: Requested package -> Available versionsThe following specifications were found to be incompatible with your CUDA driver: - feature:/win-32::__cuda==10.1=0 Your installed CUDA driver is: 10.1 ``` `conda install --channel conda-forge geopandas` gives the same error. Created a new environment with conda: ``` Package python conflicts for: python=3 geopandas -> python[version='2.7.*|3.5.*|3.6.*|>=3.5|>=3.6|3.4.*|>=3.6,<3.7.0a0|>=3.7,<3.8.0a0|>=2.7,<2.8.0a0|>=3.5,<3.6.0a0'] geopandas -> pandas[version='>=0.24'] -> python[version='>=3.7|>=3.8,<3.9.0a0|>=3.9,<3.10.0a0']The following specifications were found to be incompatible with your CUDA driver: - feature:/win-32::__cuda==10.1=0 Your installed CUDA driver is: 10.1 ``` I tried installing from source, no luck: ``` A GDAL API version must be specified. Provide a path to gdal-config using a GDAL_CONFIG environment variable or use a GDAL_VERSION environment variable. ``` I also followed [this answer](https://stackoverflow.com/a/58943939/13083530), which gives similar errors for all packages installing: ``` Package `geopandas` found in cache Downloading package . . . https://download.lfd.uci.edu/pythonlibs/z4tqcw5k/geopandas-0.8.1-py3-none-any.whl geopandas-0.8.1-py3-none-any.whl Traceback (most recent call last): File "C:\Users\\AppData\Local\Programs\Python\Python38\lib\urllib\request.py", line 649, in http_error_default raise HTTPError(req.full_url, code, msg, hdrs, fp) urllib.error.HTTPError: HTTP Error 404: Not Found ``` I also followed [this tutorial](https://towardsdatascience.com/geopandas-installation-the-easy-way-for-windows-31a666b3610f) and download 5 dependencies' binary wheels and pip install them. 
I have this error for installing `Fiona`, `geopandas`, `pyproj` ``` A GDAL API version must be specified. Provide a path to gdal-config using a GDAL_CONFIG environment variable or use a GDAL_VERSION environment variable. ``` I'm in my venv with Python 3.8.7 in Windows 10. I have GDAL installed and set `GDAL_DATA` and `GDAL_DRIVER_PATH` as environment vars.
2021/03/23
[ "https://Stackoverflow.com/questions/66755583", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13083530/" ]
@duckboycool and @Ken Y-N are right; downgrading to Python 3.7 did the trick! Downgrade with conda using `conda install python=3.7`, then run `conda install geopandas`.
I found that the following works. I first tried `conda install geopandas` in the base environment, and that failed several times. I then created a new environment in Anaconda Navigator, activated it, and repeated `conda install geopandas`; installing geopandas from the Navigator failed too. Finally, I created a new environment using the Anaconda Prompt and installed the package there: ``` conda create --name pandamaps geopandas ``` See the conda > Getting started > Managing environments guide: <https://conda.io/projects/conda/en/latest/user-guide/getting-started.html>
66,755,583
I've tried all the installing methods in geopandas' [documentation](https://geopandas.org/getting_started/install.html) and nothing works. `conda install geopandas` gives ``` UnsatisfiableError: The following specifications were found to be incompatible with each other: Output in format: Requested package -> Available versionsThe following specifications were found to be incompatible with your CUDA driver: - feature:/win-32::__cuda==10.1=0 Your installed CUDA driver is: 10.1 ``` `conda install --channel conda-forge geopandas` gives the same error. Created a new environment with conda: ``` Package python conflicts for: python=3 geopandas -> python[version='2.7.*|3.5.*|3.6.*|>=3.5|>=3.6|3.4.*|>=3.6,<3.7.0a0|>=3.7,<3.8.0a0|>=2.7,<2.8.0a0|>=3.5,<3.6.0a0'] geopandas -> pandas[version='>=0.24'] -> python[version='>=3.7|>=3.8,<3.9.0a0|>=3.9,<3.10.0a0']The following specifications were found to be incompatible with your CUDA driver: - feature:/win-32::__cuda==10.1=0 Your installed CUDA driver is: 10.1 ``` I tried installing from source, no luck: ``` A GDAL API version must be specified. Provide a path to gdal-config using a GDAL_CONFIG environment variable or use a GDAL_VERSION environment variable. ``` I also followed [this answer](https://stackoverflow.com/a/58943939/13083530), which gives similar errors for all packages installing: ``` Package `geopandas` found in cache Downloading package . . . https://download.lfd.uci.edu/pythonlibs/z4tqcw5k/geopandas-0.8.1-py3-none-any.whl geopandas-0.8.1-py3-none-any.whl Traceback (most recent call last): File "C:\Users\\AppData\Local\Programs\Python\Python38\lib\urllib\request.py", line 649, in http_error_default raise HTTPError(req.full_url, code, msg, hdrs, fp) urllib.error.HTTPError: HTTP Error 404: Not Found ``` I also followed [this tutorial](https://towardsdatascience.com/geopandas-installation-the-easy-way-for-windows-31a666b3610f) and download 5 dependencies' binary wheels and pip install them. 
I have this error for installing `Fiona`, `geopandas`, `pyproj` ``` A GDAL API version must be specified. Provide a path to gdal-config using a GDAL_CONFIG environment variable or use a GDAL_VERSION environment variable. ``` I'm in my venv with Python 3.8.7 in Windows 10. I have GDAL installed and set `GDAL_DATA` and `GDAL_DRIVER_PATH` as environment vars.
2021/03/23
[ "https://Stackoverflow.com/questions/66755583", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13083530/" ]
I found that the following works. I first tried `conda install geopandas` in the base environment, and that failed several times. I then created a new environment in Anaconda Navigator, activated it, and repeated `conda install geopandas`; installing geopandas from the Navigator failed too. Finally, I created a new environment using the Anaconda Prompt and installed the package there: ``` conda create --name pandamaps geopandas ``` See the conda > Getting started > Managing environments guide: <https://conda.io/projects/conda/en/latest/user-guide/getting-started.html>
You need to create an environment first, then install GeoPandas inside the new environment: ```none 1- conda create -n geo_env 2- conda activate geo_env 3- conda config --env --add channels conda-forge 4- conda config --env --set channel_priority strict 5- conda install python=3 geopandas ``` See also the following video: <https://youtu.be/k-MWeAWEta8> <https://geopandas.org/getting_started/install.html>
6,767,990
So, I use [SPM](http://www.fil.ion.ucl.ac.uk/spm/) to register fMRI brain images between the same patient; however, I am having trouble registering images between patients. Essentially, I want to register a brain atlas to a patient-specific scan, so that I can do some image patching. So register, then apply that warping and transformation to any number of images. SPM was unsuccessful in such a registration. It cannot warp the atlas to be in the same brain shape as the patient brain. Would software such as [freesurfer](http://surfer.nmr.mgh.harvard.edu/) be good for this?? Or is there something better out there in either matlab or python (but preferably python)?? Thanks! tylerthemiler
2011/07/20
[ "https://Stackoverflow.com/questions/6767990", "https://Stackoverflow.com", "https://Stackoverflow.com/users/402632/" ]
Freesurfer segments and annotates the brain in the patient's native space, resulting in patient-specific regions, like [so](http://dl.dropbox.com/u/2467665/freesurfer_segmentation.png). I'm not sure what you mean by patching, or to what other images you'd like to apply this transformation, but it seems like the software most compatible for working with individual patient data, rather than normalized data across patients.
I think [ITK](http://www.itk.org/) is made for this kind of purpose. A Python wrapper exists ([Paul Novotny](http://www.paulnovo.org/) distributes binaries for Ubuntu on his site), but this is mainly C++. If you work under Linux, it is quite simple to compile if you are familiar with cmake. As this toolkit is a very low-level framework, I advise you to try [elastix](http://elastix.isi.uu.nl/index.php), a command-line utility that lets you register images using multiscale B-spline dense registration. Another interesting tool, based on Maxwell's demons and improved with diffeomorphic capabilities, is [MedINRIA](http://www-sop.inria.fr/asclepios/software/MedINRIA/).
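As a toy illustration of the key idea behind the question (estimate a transform once, then apply it to any number of images or coordinates), here is a minimal affine application in pure Python. This is illustrative only; real tools such as ITK or elastix estimate and apply dense non-linear warps, not a single 3x4 matrix:

```python
def apply_affine(matrix, point):
    """Apply a 3x4 affine (rotation/scale part plus translation) to a 3D point."""
    x, y, z = point
    return tuple(row[0] * x + row[1] * y + row[2] * z + row[3] for row in matrix)

# Identity rotation with a translation of (10, 0, -5): once estimated
# between atlas and patient, the same matrix maps every voxel coordinate.
affine = [
    [1, 0, 0, 10],
    [0, 1, 0, 0],
    [0, 0, 1, -5],
]
print(apply_affine(affine, (1, 2, 3)))  # -> (11, 2, -2)
```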
6,767,990
So, I use [SPM](http://www.fil.ion.ucl.ac.uk/spm/) to register fMRI brain images between the same patient; however, I am having trouble registering images between patients. Essentially, I want to register a brain atlas to a patient-specific scan, so that I can do some image patching. So register, then apply that warping and transformation to any number of images. SPM was unsuccessful in such a registration. It cannot warp the atlas to be in the same brain shape as the patient brain. Would software such as [freesurfer](http://surfer.nmr.mgh.harvard.edu/) be good for this?? Or is there something better out there in either matlab or python (but preferably python)?? Thanks! tylerthemiler
2011/07/20
[ "https://Stackoverflow.com/questions/6767990", "https://Stackoverflow.com", "https://Stackoverflow.com/users/402632/" ]
Freesurfer segments and annotates the brain in the patient's native space, resulting in patient-specific regions, like [so](http://dl.dropbox.com/u/2467665/freesurfer_segmentation.png). I'm not sure what you mean by patching, or to what other images you'd like to apply this transformation, but it seems like the software most compatible for working with individual patient data, rather than normalized data across patients.
Along SPM's lines, you can use [IBSPM](http://www.thomaskoenig.ch/Lester/ibaspm.htm). It was developed to solve exactly that problem.
6,767,990
So, I use [SPM](http://www.fil.ion.ucl.ac.uk/spm/) to register fMRI brain images between the same patient; however, I am having trouble registering images between patients. Essentially, I want to register a brain atlas to a patient-specific scan, so that I can do some image patching. So register, then apply that warping and transformation to any number of images. SPM was unsuccessful in such a registration. It cannot warp the atlas to be in the same brain shape as the patient brain. Would software such as [freesurfer](http://surfer.nmr.mgh.harvard.edu/) be good for this?? Or is there something better out there in either matlab or python (but preferably python)?? Thanks! tylerthemiler
2011/07/20
[ "https://Stackoverflow.com/questions/6767990", "https://Stackoverflow.com", "https://Stackoverflow.com/users/402632/" ]
Freesurfer segments and annotates the brain in the patient's native space, resulting in patient-specific regions, like [so](http://dl.dropbox.com/u/2467665/freesurfer_segmentation.png). I'm not sure what you mean by patching, or to what other images you'd like to apply this transformation, but it seems like the software most compatible for working with individual patient data, rather than normalized data across patients.
You can use the ANTs software, or you can use Python within 3D Slicer for template registration. However, I have done many template registrations in SPM, and for fMRI data I recommend it over ITK or Slicer. I found these links very helpful :) Let me know if you need more help. <https://fmri-training-course.psych.lsa.umich.edu/wp-content/uploads/2017/08/Preprocessing-of-fMRI-Data-in-SPM-12-Lab-1.pdf> <https://nipype.readthedocs.io/en/latest/users/examples/fmri_spm.html>
6,767,990
So, I use [SPM](http://www.fil.ion.ucl.ac.uk/spm/) to register fMRI brain images between the same patient; however, I am having trouble registering images between patients. Essentially, I want to register a brain atlas to a patient-specific scan, so that I can do some image patching. So register, then apply that warping and transformation to any number of images. SPM was unsuccessful in such a registration. It cannot warp the atlas to be in the same brain shape as the patient brain. Would software such as [freesurfer](http://surfer.nmr.mgh.harvard.edu/) be good for this?? Or is there something better out there in either matlab or python (but preferably python)?? Thanks! tylerthemiler
2011/07/20
[ "https://Stackoverflow.com/questions/6767990", "https://Stackoverflow.com", "https://Stackoverflow.com/users/402632/" ]
There is a wide range of tools for image registration; e.g. look at <http://www.nitrc.org> under "Spatial transformation" -> "Registration". Nipype is indeed a nice Python module which wraps many of those (e.g. FSL, Freesurfer, etc.), so you could explore the different available tools within a somewhat unified interface. Besides the well-known ones (SPM, FSL, AFNI), you could also give a try to the somewhat less known but very powerful CMTK (<http://www.nitrc.org/projects/cmtk>), which comes with non-linear registration(s), population-based template construction, many other features, and the SRI24 atlas. A script such as asegment\_sri24 could be used for a quick start with registering/reslicing each subject using the labels available in the SRI24 atlas. To start using CMTK (or dozens of other neuroimaging packages) in a matter of minutes, I recommend you look at <http://neuro.debian.net> -- a platform that allows very easy deployment of (maintained) neuroscience software. FSL, AFNI, CMTK, the SRI24 atlas, etc. are available there upon your demand ;)
I think [ITK](http://www.itk.org/) is made for this kind of purpose. A Python wrapper exists ([Paul Novotny](http://www.paulnovo.org/) distributes binaries for Ubuntu on his site), but this is mainly C++. If you work under Linux, it is quite simple to compile if you are familiar with cmake. As this toolkit is a very low-level framework, I advise you to try [elastix](http://elastix.isi.uu.nl/index.php), a command-line utility that lets you register images using multiscale B-spline dense registration. Another interesting tool, based on Maxwell's demons and improved with diffeomorphic capabilities, is [MedINRIA](http://www-sop.inria.fr/asclepios/software/MedINRIA/).
6,767,990
So, I use [SPM](http://www.fil.ion.ucl.ac.uk/spm/) to register fMRI brain images between the same patient; however, I am having trouble registering images between patients. Essentially, I want to register a brain atlas to a patient-specific scan, so that I can do some image patching. So register, then apply that warping and transformation to any number of images. SPM was unsuccessful in such a registration. It cannot warp the atlas to be in the same brain shape as the patient brain. Would software such as [freesurfer](http://surfer.nmr.mgh.harvard.edu/) be good for this?? Or is there something better out there in either matlab or python (but preferably python)?? Thanks! tylerthemiler
2011/07/20
[ "https://Stackoverflow.com/questions/6767990", "https://Stackoverflow.com", "https://Stackoverflow.com/users/402632/" ]
I think [ITK](http://www.itk.org/) is made for this kind of purpose. A Python wrapper exists ([Paul Novotny](http://www.paulnovo.org/) distributes binaries for Ubuntu on his site), but this is mainly C++. If you work under Linux, it is quite simple to compile if you are familiar with cmake. As this toolkit is a very low-level framework, I advise you to try [elastix](http://elastix.isi.uu.nl/index.php), a command-line utility that lets you register images using multiscale B-spline dense registration. Another interesting tool, based on Maxwell's demons and improved with diffeomorphic capabilities, is [MedINRIA](http://www-sop.inria.fr/asclepios/software/MedINRIA/).
You can use the ANTs software, or you can use Python within 3D Slicer for template registration. However, I have done many template registrations in SPM, and for fMRI data I recommend it over ITK or Slicer. I found these links very helpful :) Let me know if you need more help. <https://fmri-training-course.psych.lsa.umich.edu/wp-content/uploads/2017/08/Preprocessing-of-fMRI-Data-in-SPM-12-Lab-1.pdf> <https://nipype.readthedocs.io/en/latest/users/examples/fmri_spm.html>
6,767,990
So, I use [SPM](http://www.fil.ion.ucl.ac.uk/spm/) to register fMRI brain images between the same patient; however, I am having trouble registering images between patients. Essentially, I want to register a brain atlas to a patient-specific scan, so that I can do some image patching. So register, then apply that warping and transformation to any number of images. SPM was unsuccessful in such a registration. It cannot warp the atlas to be in the same brain shape as the patient brain. Would software such as [freesurfer](http://surfer.nmr.mgh.harvard.edu/) be good for this?? Or is there something better out there in either matlab or python (but preferably python)?? Thanks! tylerthemiler
2011/07/20
[ "https://Stackoverflow.com/questions/6767990", "https://Stackoverflow.com", "https://Stackoverflow.com/users/402632/" ]
There is a wide range of tools for image registration; e.g. look at <http://www.nitrc.org> under "Spatial transformation" -> "Registration". Nipype is indeed a nice Python module which wraps many of those (e.g. FSL, Freesurfer, etc.), so you could explore the different available tools within a somewhat unified interface. Besides the well-known ones (SPM, FSL, AFNI), you could also give a try to the somewhat less known but very powerful CMTK (<http://www.nitrc.org/projects/cmtk>), which comes with non-linear registration(s), population-based template construction, many other features, and the SRI24 atlas. A script such as asegment\_sri24 could be used for a quick start with registering/reslicing each subject using the labels available in the SRI24 atlas. To start using CMTK (or dozens of other neuroimaging packages) in a matter of minutes, I recommend you look at <http://neuro.debian.net> -- a platform that allows very easy deployment of (maintained) neuroscience software. FSL, AFNI, CMTK, the SRI24 atlas, etc. are available there upon your demand ;)
Along SPM's lines, you can use [IBSPM](http://www.thomaskoenig.ch/Lester/ibaspm.htm). It was developed to solve exactly that problem.
6,767,990
So, I use [SPM](http://www.fil.ion.ucl.ac.uk/spm/) to register fMRI brain images between the same patient; however, I am having trouble registering images between patients. Essentially, I want to register a brain atlas to a patient-specific scan, so that I can do some image patching. So register, then apply that warping and transformation to any number of images. SPM was unsuccessful in such a registration. It cannot warp the atlas to be in the same brain shape as the patient brain. Would software such as [freesurfer](http://surfer.nmr.mgh.harvard.edu/) be good for this?? Or is there something better out there in either matlab or python (but preferably python)?? Thanks! tylerthemiler
2011/07/20
[ "https://Stackoverflow.com/questions/6767990", "https://Stackoverflow.com", "https://Stackoverflow.com/users/402632/" ]
There is a wide range of tools for image registration; e.g. look at <http://www.nitrc.org> under "Spatial transformation" -> "Registration". Nipype is indeed a nice Python module which wraps many of those (e.g. FSL, Freesurfer, etc.), so you could explore the different available tools within a somewhat unified interface. Besides the well-known ones (SPM, FSL, AFNI), you could also give a try to the somewhat less known but very powerful CMTK (<http://www.nitrc.org/projects/cmtk>), which comes with non-linear registration(s), population-based template construction, many other features, and the SRI24 atlas. A script such as asegment\_sri24 could be used for a quick start with registering/reslicing each subject using the labels available in the SRI24 atlas. To start using CMTK (or dozens of other neuroimaging packages) in a matter of minutes, I recommend you look at <http://neuro.debian.net> -- a platform that allows very easy deployment of (maintained) neuroscience software. FSL, AFNI, CMTK, the SRI24 atlas, etc. are available there upon your demand ;)
You can use the ANTs software, or you can use Python within 3D Slicer for template registration. However, I have done many template registrations in SPM, and for fMRI data I recommend it over ITK or Slicer. I found these links very helpful :) Let me know if you need more help. <https://fmri-training-course.psych.lsa.umich.edu/wp-content/uploads/2017/08/Preprocessing-of-fMRI-Data-in-SPM-12-Lab-1.pdf> <https://nipype.readthedocs.io/en/latest/users/examples/fmri_spm.html>
6,767,990
So, I use [SPM](http://www.fil.ion.ucl.ac.uk/spm/) to register fMRI brain images between the same patient; however, I am having trouble registering images between patients. Essentially, I want to register a brain atlas to a patient-specific scan, so that I can do some image patching. So register, then apply that warping and transformation to any number of images. SPM was unsuccessful in such a registration. It cannot warp the atlas to be in the same brain shape as the patient brain. Would software such as [freesurfer](http://surfer.nmr.mgh.harvard.edu/) be good for this?? Or is there something better out there in either matlab or python (but preferably python)?? Thanks! tylerthemiler
2011/07/20
[ "https://Stackoverflow.com/questions/6767990", "https://Stackoverflow.com", "https://Stackoverflow.com/users/402632/" ]
Along SPM's lines, you can use [IBSPM](http://www.thomaskoenig.ch/Lester/ibaspm.htm). It was developed to solve exactly that problem.
You can use the ANTs software, or you can use Python within 3D Slicer for template registration. However, I have done many template registrations in SPM, and for fMRI data I recommend it over ITK or Slicer. I found these links very helpful :) Let me know if you need more help. <https://fmri-training-course.psych.lsa.umich.edu/wp-content/uploads/2017/08/Preprocessing-of-fMRI-Data-in-SPM-12-Lab-1.pdf> <https://nipype.readthedocs.io/en/latest/users/examples/fmri_spm.html>
23,533,566
I want to use /etc/sudoers to change the owner of a file from bangtest(user) to root. Reason to change: when I uploaded an image from bangtest(user) to my server using Django application then image file permission are like ``` ls -l /home/bangtest/alpha/media/products/image_2093.jpg -rw-r--r-- 1 bangtest bangtest 28984 May 6 02:47 ``` but when I tried to access those file from server using //myhost/media/products/image\_2093.jpg, I am getting 404 error.When I tried to log the error its like ``` Caught race condition abuser. attacker: 0, victim: 502 open file owner: 502, open file: /home/bangtest/alpha/media/products/image_2093.jpg ``` After when I changed the owner of a file from bangtest to root,then I am able to access the image perfectly. So because of that reason I want to change owner of file dynamically using python script. I have tried by changing the sudoers file like mentioned below.But still I am getting error like ``` chown: changing ownership of `image.jpg': Operation not permitted ``` My sudoers code: ``` root ALL=(ALL) ALL bangtest ALL=(ALL) /bin/chown root:bangtest /home/bangtest/alpha/* ``` Any Clues why sudoers are not working? Note:Operating system Linux. Thanks
2014/05/08
[ "https://Stackoverflow.com/questions/23533566", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2479352/" ]
I strongly suggest you use a browser such as Firefox with Firebug installed. Load any page, hit Tools > Web Developer > Inspector (or its hot key equivalent), then click on your object, the HTML code inspector will reference the exact line of the css file that is governing the style being generated (either the style directly, or the computed style). Time and sanity saver.
After several attempts and some help from Zurb support, the CSS I needed was: ``` .top-bar-section .dropdown li:not(.has-form) a:not(.button) { color: white; background: #740707; } ``` Thanks for the help
23,533,566
I want to use /etc/sudoers to change the owner of a file from bangtest(user) to root. Reason to change: when I uploaded an image from bangtest(user) to my server using Django application then image file permission are like ``` ls -l /home/bangtest/alpha/media/products/image_2093.jpg -rw-r--r-- 1 bangtest bangtest 28984 May 6 02:47 ``` but when I tried to access those file from server using //myhost/media/products/image\_2093.jpg, I am getting 404 error.When I tried to log the error its like ``` Caught race condition abuser. attacker: 0, victim: 502 open file owner: 502, open file: /home/bangtest/alpha/media/products/image_2093.jpg ``` After when I changed the owner of a file from bangtest to root,then I am able to access the image perfectly. So because of that reason I want to change owner of file dynamically using python script. I have tried by changing the sudoers file like mentioned below.But still I am getting error like ``` chown: changing ownership of `image.jpg': Operation not permitted ``` My sudoers code: ``` root ALL=(ALL) ALL bangtest ALL=(ALL) /bin/chown root:bangtest /home/bangtest/alpha/* ``` Any Clues why sudoers are not working? Note:Operating system Linux. Thanks
2014/05/08
[ "https://Stackoverflow.com/questions/23533566", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2479352/" ]
I strongly suggest you use a browser such as Firefox with Firebug installed. Load any page, hit Tools > Web Developer > Inspector (or its hot key equivalent), then click on your object, the HTML code inspector will reference the exact line of the css file that is governing the style being generated (either the style directly, or the computed style). Time and sanity saver.
If you use the SCSS/SASS version of Foundation, you should change the default values for the top bar. The default settings are stored in `_settings.scss`. For example, to change it to cornflowerblue I used these settings: ``` $topbar-bg-color: cornflowerblue; $topbar-bg: $topbar-bg-color; $topbar-link-bg-hover: scale-color($topbar-bg, $lightness: -14%); $topbar-link-bg-active: $topbar-bg; $topbar-dropdown-bg: $topbar-bg; $topbar-dropdown-link-bg: $topbar-bg; $topbar-dropdown-link-bg-hover: scale-color($topbar-bg, $lightness: -14%); ```
23,533,566
I want to use /etc/sudoers to change the owner of a file from bangtest(user) to root. Reason to change: when I uploaded an image from bangtest(user) to my server using Django application then image file permission are like ``` ls -l /home/bangtest/alpha/media/products/image_2093.jpg -rw-r--r-- 1 bangtest bangtest 28984 May 6 02:47 ``` but when I tried to access those file from server using //myhost/media/products/image\_2093.jpg, I am getting 404 error.When I tried to log the error its like ``` Caught race condition abuser. attacker: 0, victim: 502 open file owner: 502, open file: /home/bangtest/alpha/media/products/image_2093.jpg ``` After when I changed the owner of a file from bangtest to root,then I am able to access the image perfectly. So because of that reason I want to change owner of file dynamically using python script. I have tried by changing the sudoers file like mentioned below.But still I am getting error like ``` chown: changing ownership of `image.jpg': Operation not permitted ``` My sudoers code: ``` root ALL=(ALL) ALL bangtest ALL=(ALL) /bin/chown root:bangtest /home/bangtest/alpha/* ``` Any Clues why sudoers are not working? Note:Operating system Linux. Thanks
2014/05/08
[ "https://Stackoverflow.com/questions/23533566", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2479352/" ]
After several attempts and some help from Zurb support, the CSS I needed was: ``` .top-bar-section .dropdown li:not(.has-form) a:not(.button) { color: white; background: #740707; } ``` Thanks for the help
If you use the SCSS/SASS version of Foundation, you should change the default values for the top bar. The default settings are stored in `_settings.scss`. For example, to change it to cornflowerblue I used these settings: ``` $topbar-bg-color: cornflowerblue; $topbar-bg: $topbar-bg-color; $topbar-link-bg-hover: scale-color($topbar-bg, $lightness: -14%); $topbar-link-bg-active: $topbar-bg; $topbar-dropdown-bg: $topbar-bg; $topbar-dropdown-link-bg: $topbar-bg; $topbar-dropdown-link-bg-hover: scale-color($topbar-bg, $lightness: -14%); ```
60,103,642
I already know how to open windows command prompt through python, but I was wondering how if there is a way to open a windows powershellx86 window and run commands through python 3.7 on windows 10?
2020/02/06
[ "https://Stackoverflow.com/questions/60103642", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
You can just call out to powershell.exe using `subprocess.run` ``` import subprocess subprocess.run('powershell.exe Get-Item *') ```
If you know how to run the Command Prompt (cmd.exe), then you should be able to use the same method to run PowerShell (powershell.exe). By default, powershell.exe is located in c:\windows\system32\windowspowershell\v1.0\. To run the shell with commands, use: ``` c:\windows\system32\windowspowershell\v1.0\PowerShell.exe -c {commands} ``` To launch a .ps1 script file, use: ``` c:\windows\system32\windowspowershell\v1.0\PowerShell.exe -f Path\Script.ps1 ``` Good luck.
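Building on the answers above, here is a hedged sketch of a small helper that assembles the PowerShell invocation from Python. The SysWOW64 path for the 32-bit (x86) PowerShell on 64-bit Windows is an assumption about a default install, so adjust it if your layout differs:

```python
import subprocess

def powershell_args(command, x86=False):
    """Build the argument list for invoking PowerShell with a command.

    x86=True targets the 32-bit PowerShell from a 64-bit Python via the
    SysWOW64 path (assumed default location on 64-bit Windows).
    """
    exe = (r"C:\Windows\SysWOW64\WindowsPowerShell\v1.0\powershell.exe"
           if x86 else "powershell.exe")
    return [exe, "-NoProfile", "-Command", command]

def run_powershell(command, x86=False):
    # capture_output=True collects stdout/stderr; text=True decodes to str.
    return subprocess.run(powershell_args(command, x86),
                          capture_output=True, text=True)

print(powershell_args("Get-Item *", x86=True))
```

On a non-Windows machine `run_powershell` will fail to find the executable; the argument-building part is portable.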
49,411,277
I'm using Python to automate some reporting, but I am stuck trying to connect to an SSAS cube. I am on Windows 7 using Anaconda 4.4, and I am unable to install any libraries beyond those included in Anaconda. I have used pyodbc+pandas to connect to SQL Server databases and extract data with SQL queries, and the goal now is to do something similar on an SSAS cube, using an MDX query to extract data, but I can't get a successful connection. This first connection string is very similar to the strings that I used to connect to the SQL Server databases, but it gives me an authentication error. I can access the cube no problem using SQL Server Management Studio so I know that my Windows credentials have access. ``` connection = pyodbc.connect('Trusted_Connection=yes',DRIVER='{SQL Server}',SERVER='Cube Server', database='Cube') query = "MDX query" report_df = pandas.read_sql(query, connection) Error: ('28000', "[28000] [Microsoft][ODBC SQL Server Driver][SQL Server]Login failed for user '*****'. (18456) (SQLDriverConnect)") ``` When I tried to replicate the attempts at [Question1](https://stackoverflow.com/questions/24712994/connect-to-sql-server-analysis-service-from-python) and [Question2](https://stackoverflow.com/questions/38985729/connect-to-an-olap-cube-using-python-on-linux) I got a different error: ``` Error: ('IM002', '[IM002] [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified (0) (SQLDriverConnect)') ``` Any help/guidance would be greatly appreciated. My experience with SSAS cubes is minimal, so it is possible that I am on the completely wrong path for this task and that even if the connection issue gets solved, there will be another issue loading the data into pandas, etc.
2018/03/21
[ "https://Stackoverflow.com/questions/49411277", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9529670/" ]
SSAS doesn't support [ODBC clients](https://learn.microsoft.com/en-us/sql/analysis-services/instances/data-providers-used-for-analysis-services-connections). It does provide HTTP access through IIS, which requires [a few configuration steps](https://learn.microsoft.com/en-us/sql/analysis-services/instances/configure-http-access-to-analysis-services-on-iis-8-0). Once configured, any client can issue XMLA queries over HTTP. The [xmla package](https://pypi.python.org/pypi/xmla/) can connect to various OLAP sources, including SSAS over HTTP.
Perhaps this solution will help you: <https://stackoverflow.com/a/65434789/14872543>. The idea is to use the `OpenRowset` construct on a linked MSSQL Server ``` SELECT olap.* from OpenRowset ('"+ olap_conn_string+"',' " + mdx_string +"') "+ 'as olap' ```
The Pyadomd package might help you with your problem: [Pyadomd](https://github.com/S-C-O-U-T/Pyadomd) It is not tested on Windows 7, but I would expect it to work fine :-)
65,605,972
Before downgrading my GCC, I want to know if there's a way to figure which programs/frameworks or dependencies in my machine will break and if there is a better way to do this for openpose installation? (e.g. changing something in CMake) Is there a hack to fix this without changing my system GCC version and potentially breaking other things? ``` [10889:10881 0:2009] 09:21:36 Wed Jan 06 [mona@goku:pts/0 +1] ~/research/code/openpose/build $ make -j`nproc` [ 12%] Performing configure step for 'openpose_lib' CMake Warning (dev) at cmake/Misc.cmake:32 (set): implicitly converting 'BOOLEAN' to 'STRING' type. Call Stack (most recent call first): CMakeLists.txt:25 (include) This warning is for project developers. Use -Wno-dev to suppress it. -- Found gflags (include: /usr/include, library: /usr/lib/x86_64-linux-gnu/libgflags.so) -- Found glog (include: /usr/include, library: /usr/lib/x86_64-linux-gnu/libglog.so) -- Found PROTOBUF Compiler: /usr/local/bin/protoc -- HDF5: Using hdf5 compiler wrapper to determine C configuration -- HDF5: Using hdf5 compiler wrapper to determine CXX configuration -- CUDA detected: 10.1 -- Added CUDA NVCC flags for: sm_75 -- Found Atlas: /usr/include/x86_64-linux-gnu -- Found Atlas (include: /usr/include/x86_64-linux-gnu library: /usr/lib/x86_64-linux-gnu/libatlas.so lapack: /usr/lib/x86_64-linux-gnu/liblapack.so -- Python interface is disabled or not all required dependencies found. Building without it... 
-- Found Git: /usr/bin/git (found version "2.25.1") -- -- ******************* Caffe Configuration Summary ******************* -- General: -- Version : 1.0.0 -- Git : 1.0-149-g1807aada -- System : Linux -- C++ compiler : /usr/bin/c++ -- Release CXX flags : -O3 -DNDEBUG -fPIC -Wall -std=c++11 -Wno-sign-compare -Wno-uninitialized -- Debug CXX flags : -g -fPIC -Wall -std=c++11 -Wno-sign-compare -Wno-uninitialized -- Build type : Release -- -- BUILD_SHARED_LIBS : ON -- BUILD_python : OFF -- BUILD_matlab : OFF -- BUILD_docs : OFF -- CPU_ONLY : OFF -- USE_OPENCV : OFF -- USE_LEVELDB : OFF -- USE_LMDB : OFF -- USE_NCCL : OFF -- ALLOW_LMDB_NOLOCK : OFF -- USE_HDF5 : ON -- -- Dependencies: -- BLAS : Yes (Atlas) -- Boost : Yes (ver. 1.71) -- glog : Yes -- gflags : Yes -- protobuf : Yes (ver. 3.6.1) -- CUDA : Yes (ver. 10.1) -- -- NVIDIA CUDA: -- Target GPU(s) : Auto -- GPU arch(s) : sm_75 -- cuDNN : Disabled -- -- Install: -- Install path : /home/mona/research/code/openpose/build/caffe -- -- Configuring done -- Generating done CMake Warning: Manually-specified variables were not used by the project: CUDA_ARCH_BIN -- Build files have been written to: /home/mona/research/code/openpose/build/caffe/src/openpose_lib-build [ 25%] Performing build step for 'openpose_lib' [ 1%] Running C++/Python protocol buffer compiler on /home/mona/research/code/openpose/3rdparty/caffe/src/caffe/proto/caffe.proto Scanning dependencies of target caffeproto [ 1%] Building CXX object src/caffe/CMakeFiles/caffeproto.dir/__/__/include/caffe/proto/caffe.pb.cc.o [ 1%] Linking CXX static library ../../lib/libcaffeproto.a [ 1%] Built target caffeproto [ 4%] Building NVCC (Device) object src/caffe/CMakeFiles/cuda_compile_1.dir/util/cuda_compile_1_generated_math_functions.cu.o [ 4%] Building NVCC (Device) object src/caffe/CMakeFiles/cuda_compile_1.dir/layers/cuda_compile_1_generated_bnll_layer.cu.o [ 4%] Building NVCC (Device) object 
src/caffe/CMakeFiles/cuda_compile_1.dir/layers/cuda_compile_1_generated_accuracy_layer.cu.o [ 4%] Building NVCC (Device) object src/caffe/CMakeFiles/cuda_compile_1.dir/layers/cuda_compile_1_generated_batch_reindex_layer.cu.o [ 4%] Building NVCC (Device) object src/caffe/CMakeFiles/cuda_compile_1.dir/layers/cuda_compile_1_generated_batch_norm_layer.cu.o [ 4%] Building NVCC (Device) object src/caffe/CMakeFiles/cuda_compile_1.dir/layers/cuda_compile_1_generated_bias_layer.cu.o [ 4%] Building NVCC (Device) object src/caffe/CMakeFiles/cuda_compile_1.dir/layers/cuda_compile_1_generated_base_data_layer.cu.o [ 4%] Building NVCC (Device) object src/caffe/CMakeFiles/cuda_compile_1.dir/layers/cuda_compile_1_generated_concat_layer.cu.o [ 5%] Building NVCC (Device) object src/caffe/CMakeFiles/cuda_compile_1.dir/layers/cuda_compile_1_generated_clip_layer.cu.o [ 6%] Building NVCC (Device) object src/caffe/CMakeFiles/cuda_compile_1.dir/layers/cuda_compile_1_generated_absval_layer.cu.o [ 6%] Building NVCC (Device) object src/caffe/CMakeFiles/cuda_compile_1.dir/layers/cuda_compile_1_generated_conv_layer.cu.o [ 6%] Building NVCC (Device) object src/caffe/CMakeFiles/cuda_compile_1.dir/layers/cuda_compile_1_generated_contrastive_loss_layer.cu.o In file included from /usr/include/cuda_runtime.h:83, from <command-line>: /usr/include/crt/host_config.h:138:2: error: #error -- unsupported GNU version! gcc versions later than 8 are not supported! 138 | #error -- unsupported GNU version! gcc versions later than 8 are not supported! | ^~~~~ In file included from /usr/include/cuda_runtime.h:83, from <command-line>: /usr/include/crt/host_config.h:138:2: error: #error -- unsupported GNU version! gcc versions later than 8 are not supported! 138 | #error -- unsupported GNU version! gcc versions later than 8 are not supported! | ^~~~~ In file included from /usr/include/cuda_runtime.h:83, from <command-line>: /usr/include/crt/host_config.h:138:2: error: #error -- unsupported GNU version! 
gcc versions later than 8 are not supported! 138 | #error -- unsupported GNU version! gcc versions later than 8 are not supported! | ^~~~~ In file included from /usr/include/cuda_runtime.h:83, from <command-line>: /usr/include/crt/host_config.h:138:2: error: #error -- unsupported GNU version! gcc versions later than 8 are not supported! 138 | #error -- unsupported GNU version! gcc versions later than 8 are not supported! | ^~~~~ In file included from /usr/include/cuda_runtime.h:83, from <command-line>: /usr/include/crt/host_config.h:138:2: error: #error -- unsupported GNU version! gcc versions later than 8 are not supported! 138 | #error -- unsupported GNU version! gcc versions later than 8 are not supported! | ^~~~~ In file included from /usr/include/cuda_runtime.h:83, from <command-line>: /usr/include/crt/host_config.h:138:2: error: #error -- unsupported GNU version! gcc versions later than 8 are not supported! 138 | #error -- unsupported GNU version! gcc versions later than 8 are not supported! | ^~~~~ In file included from /usr/include/cuda_runtime.h:83, from <command-line>: /usr/include/crt/host_config.h:138:2: error: #error -- unsupported GNU version! gcc versions later than 8 are not supported! 138 | #error -- unsupported GNU version! gcc versions later than 8 are not supported! | ^~~~~ In file included from /usr/include/cuda_runtime.h:83, from <command-line>: /usr/include/crt/host_config.h:138:2: error: #error -- unsupported GNU version! gcc versions later than 8 are not supported! 138 | #error -- unsupported GNU version! gcc versions later than 8 are not supported! | ^~~~~ In file included from /usr/include/cuda_runtime.h:83, from <command-line>: /usr/include/crt/host_config.h:138:2: error: #error -- unsupported GNU version! gcc versions later than 8 are not supported! 138 | #error -- unsupported GNU version! gcc versions later than 8 are not supported! 
| ^~~~~ In file included from /usr/include/cuda_runtime.h:83, from <command-line>: /usr/include/crt/host_config.h:138:2: error: #error -- unsupported GNU version! gcc versions later than 8 are not supported! 138 | #error -- unsupported GNU version! gcc versions later than 8 are not supported! | ^~~~~ In file included from /usr/include/cuda_runtime.h:83, from <command-line>: /usr/include/crt/host_config.h:138:2: error: #error -- unsupported GNU version! gcc versions later than 8 are not supported! 138 | #error -- unsupported GNU version! gcc versions later than 8 are not supported! | ^~~~~ In file included from /usr/include/cuda_runtime.h:83, from <command-line>: /usr/include/crt/host_config.h:138:2: error: #error -- unsupported GNU version! gcc versions later than 8 are not supported! 138 | #error -- unsupported GNU version! gcc versions later than 8 are not supported! | ^~~~~ In file included from /home/mona/research/code/openpose/3rdparty/caffe/src/caffe/util/math_functions.cu:1: /usr/include/math_functions.h:54:2: warning: #warning "math_functions.h is an internal header file and must not be used directly. This file will be removed in a future CUDA release. Please use cuda_runtime_api.h or cuda_runtime.h instead." [-Wcpp] 54 | #warning "math_functions.h is an internal header file and must not be used directly. This file will be removed in a future CUDA release. Please use cuda_runtime_api.h or cuda_runtime.h instead." | ^~~~~~~ CMake Error at cuda_compile_1_generated_clip_layer.cu.o.Release.cmake:220 (message): Error generating /home/mona/research/code/openpose/build/caffe/src/openpose_lib-build/src/caffe/CMakeFiles/cuda_compile_1.dir/layers/./cuda_compile_1_generated_clip_layer.cu.o make[5]: *** [src/caffe/CMakeFiles/caffe.dir/build.make:114: src/caffe/CMakeFiles/cuda_compile_1.dir/layers/cuda_compile_1_generated_clip_layer.cu.o] Error 1 make[5]: *** Waiting for unfinished jobs.... 
CMake Error at cuda_compile_1_generated_absval_layer.cu.o.Release.cmake:220 (message): Error generating /home/mona/research/code/openpose/build/caffe/src/openpose_lib-build/src/caffe/CMakeFiles/cuda_compile_1.dir/layers/./cuda_compile_1_generated_absval_layer.cu.o CMake Error at cuda_compile_1_generated_concat_layer.cu.o.Release.cmake:220 (message): Error generating /home/mona/research/code/openpose/build/caffe/src/openpose_lib-build/src/caffe/CMakeFiles/cuda_compile_1.dir/layers/./cuda_compile_1_generated_concat_layer.cu.o make[5]: *** [src/caffe/CMakeFiles/caffe.dir/build.make:65: src/caffe/CMakeFiles/cuda_compile_1.dir/layers/cuda_compile_1_generated_absval_layer.cu.o] Error 1 make[5]: *** [src/caffe/CMakeFiles/caffe.dir/build.make:121: src/caffe/CMakeFiles/cuda_compile_1.dir/layers/cuda_compile_1_generated_concat_layer.cu.o] Error 1 CMake Error at cuda_compile_1_generated_batch_reindex_layer.cu.o.Release.cmake:220 (message): Error generating /home/mona/research/code/openpose/build/caffe/src/openpose_lib-build/src/caffe/CMakeFiles/cuda_compile_1.dir/layers/./cuda_compile_1_generated_batch_reindex_layer.cu.o make[5]: *** [src/caffe/CMakeFiles/caffe.dir/build.make:93: src/caffe/CMakeFiles/cuda_compile_1.dir/layers/cuda_compile_1_generated_batch_reindex_layer.cu.o] Error 1 CMake Error at cuda_compile_1_generated_bias_layer.cu.o.Release.cmake:220 (message): Error generating /home/mona/research/code/openpose/build/caffe/src/openpose_lib-build/src/caffe/CMakeFiles/cuda_compile_1.dir/layers/./cuda_compile_1_generated_bias_layer.cu.o make[5]: *** [src/caffe/CMakeFiles/caffe.dir/build.make:100: src/caffe/CMakeFiles/cuda_compile_1.dir/layers/cuda_compile_1_generated_bias_layer.cu.o] Error 1 CMake Error at cuda_compile_1_generated_batch_norm_layer.cu.o.Release.cmake:220 (message): Error generating /home/mona/research/code/openpose/build/caffe/src/openpose_lib-build/src/caffe/CMakeFiles/cuda_compile_1.dir/layers/./cuda_compile_1_generated_batch_norm_layer.cu.o make[5]: *** 
[src/caffe/CMakeFiles/caffe.dir/build.make:86: src/caffe/CMakeFiles/cuda_compile_1.dir/layers/cuda_compile_1_generated_batch_norm_layer.cu.o] Error 1 CMake Error at cuda_compile_1_generated_contrastive_loss_layer.cu.o.Release.cmake:220 (message): Error generating /home/mona/research/code/openpose/build/caffe/src/openpose_lib-build/src/caffe/CMakeFiles/cuda_compile_1.dir/layers/./cuda_compile_1_generated_contrastive_loss_layer.cu.o make[5]: *** [src/caffe/CMakeFiles/caffe.dir/build.make:128: src/caffe/CMakeFiles/cuda_compile_1.dir/layers/cuda_compile_1_generated_contrastive_loss_layer.cu.o] Error 1 CMake Error at cuda_compile_1_generated_conv_layer.cu.o.Release.cmake:220 (message): Error generating /home/mona/research/code/openpose/build/caffe/src/openpose_lib-build/src/caffe/CMakeFiles/cuda_compile_1.dir/layers/./cuda_compile_1_generated_conv_layer.cu.o make[5]: *** [src/caffe/CMakeFiles/caffe.dir/build.make:135: src/caffe/CMakeFiles/cuda_compile_1.dir/layers/cuda_compile_1_generated_conv_layer.cu.o] Error 1 CMake Error at cuda_compile_1_generated_accuracy_layer.cu.o.Release.cmake:220 (message): Error generating /home/mona/research/code/openpose/build/caffe/src/openpose_lib-build/src/caffe/CMakeFiles/cuda_compile_1.dir/layers/./cuda_compile_1_generated_accuracy_layer.cu.o make[5]: *** [src/caffe/CMakeFiles/caffe.dir/build.make:72: src/caffe/CMakeFiles/cuda_compile_1.dir/layers/cuda_compile_1_generated_accuracy_layer.cu.o] Error 1 CMake Error at cuda_compile_1_generated_base_data_layer.cu.o.Release.cmake:220 (message): Error generating /home/mona/research/code/openpose/build/caffe/src/openpose_lib-build/src/caffe/CMakeFiles/cuda_compile_1.dir/layers/./cuda_compile_1_generated_base_data_layer.cu.o make[5]: *** [src/caffe/CMakeFiles/caffe.dir/build.make:79: src/caffe/CMakeFiles/cuda_compile_1.dir/layers/cuda_compile_1_generated_base_data_layer.cu.o] Error 1 CMake Error at cuda_compile_1_generated_bnll_layer.cu.o.Release.cmake:220 (message): Error generating 
/home/mona/research/code/openpose/build/caffe/src/openpose_lib-build/src/caffe/CMakeFiles/cuda_compile_1.dir/layers/./cuda_compile_1_generated_bnll_layer.cu.o make[5]: *** [src/caffe/CMakeFiles/caffe.dir/build.make:107: src/caffe/CMakeFiles/cuda_compile_1.dir/layers/cuda_compile_1_generated_bnll_layer.cu.o] Error 1 CMake Error at cuda_compile_1_generated_math_functions.cu.o.Release.cmake:220 (message): Error generating /home/mona/research/code/openpose/build/caffe/src/openpose_lib-build/src/caffe/CMakeFiles/cuda_compile_1.dir/util/./cuda_compile_1_generated_math_functions.cu.o make[5]: *** [src/caffe/CMakeFiles/caffe.dir/build.make:499: src/caffe/CMakeFiles/cuda_compile_1.dir/util/cuda_compile_1_generated_math_functions.cu.o] Error 1 make[4]: *** [CMakeFiles/Makefile2:371: src/caffe/CMakeFiles/caffe.dir/all] Error 2 make[3]: *** [Makefile:130: all] Error 2 make[2]: *** [CMakeFiles/openpose_lib.dir/build.make:112: caffe/src/openpose_lib-stamp/openpose_lib-build] Error 2 make[1]: *** [CMakeFiles/Makefile2:76: CMakeFiles/openpose_lib.dir/all] Error 2 make: *** [Makefile:84: all] Error 2 21834/31772MB(openpose) [10889:10881 0:2010] 09:21:55 Wed Jan 06 [mona@goku:pts/0 +1] ~/research/code/openpose/build $ ``` I have: ``` $ gcc --version gcc (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0 Copyright (C) 2019 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. ``` I am following the compilation instructions here on Ubuntu 20.04: <https://github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/doc/installation/README.md#prerequisites>
2021/01/07
[ "https://Stackoverflow.com/questions/65605972", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2414957/" ]
Solved by downgrading the GCC from 9.3.0 to 7: ``` $ sudo apt remove gcc $ sudo apt-get install gcc-7 g++-7 -y $ sudo ln -s /usr/bin/gcc-7 /usr/bin/gcc $ sudo ln -s /usr/bin/g++-7 /usr/bin/g++ $ sudo ln -s /usr/bin/gcc-7 /usr/bin/cc $ sudo ln -s /usr/bin/g++-7 /usr/bin/c++ $ gcc --version gcc (Ubuntu 7.5.0-6ubuntu2) 7.5.0 Copyright (C) 2017 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. ```
You should point CMake to a compatible GCC binary (version below 9) instead of downgrading the system GCC; there is no need to change it machine-wide. For example: ``` cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_C_COMPILER=/usr/bin/gcc-8 -D CMAKE_CXX_COMPILER=/usr/bin/g++-8 .. ```
53,369,766
Following the [Microsoft Azure documentation for Python developers](https://learn.microsoft.com/en-us/python/api/azure-storage-blob/azure.storage.blob.models.blob?view=azure-python). The `azure.storage.blob.models.Blob` class does have a private method called `__sizeof__()`. But it returns a constant value of 16, whether the blob is empty (0 byte) or 1 GB. Is there any method/attribute of a blob object with which I can dynamically check the size of the object? To be clearer, this is how my source code looks like. ``` for i in blobService.list_blobs(container_name=container, prefix=path): if i.name.endswith('.json') and r'CIJSONTM.json/part' in i.name: #do some stuffs ``` However, the data pool contains many empty blobs having legitimate names, and before I `#do some stuffs`, I want to have an additional check on the size to judge whether I am dealing with an empty blob. Also, bonus for what exactly does the `__sizeof__()` method give, if not the size of the blob object?
2018/11/19
[ "https://Stackoverflow.com/questions/53369766", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2604247/" ]
> > I want to have an additional check on the size to judge whether I am dealing with an empty blob. > > > We could use the [BlobProperties().content\_length](https://learn.microsoft.com/en-us/python/api/azure-storage-blob/azure.storage.blob.models.blobproperties?view=azure-python) to check whether it is an empty blob. ``` BlockBlobService.get_blob_properties(block_blob_service,container_name,blob_name).properties.content_length ``` The following demo code shows how to get the blob content\_length: ``` from azure.storage.blob import BlockBlobService block_blob_service = BlockBlobService(account_name='accountName', account_key='accountKey') container_name ='containerName' block_blob_service.create_container(container_name) generator = block_blob_service.list_blobs(container_name) for blob in generator: length = BlockBlobService.get_blob_properties(block_blob_service,container_name,blob.name).properties.content_length print("\t Blob name: " + blob.name) print(length) ```
``` from azure.storage.blob import BlobServiceClient blob_service_client = BlobServiceClient.from_connection_string(connect_str) blob_list = blob_service_client.get_container_client(my_container).list_blobs() for blob in blob_list: print("\t" + blob.name) print('\tsize=', blob.size) ```
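On the bonus question: `__sizeof__()` is not Azure-specific at all. It is the standard Python hook (backing `sys.getsizeof`) that reports the in-memory byte size of the local Python wrapper object itself, which is why it returns a small constant regardless of how much data the remote blob holds. A quick stdlib-only illustration (the `Blob` class here is a toy stand-in, not the SDK class):

```python
import sys

class Blob(object):
    """Toy stand-in for an SDK model object; it holds no remote data."""

b = Blob()
# __sizeof__ measures the local Python object, never the remote content:
print(b.__sizeof__())                      # a small constant number of bytes
print(sys.getsizeof(b) >= b.__sizeof__())  # True; getsizeof may add GC overhead
```

So a constant 16 simply reflects the memory footprint of the model instance; use the blob's properties (as above) for the actual content size.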
39,981,667
I installed Robot Framework RIDE with my user credentials and am trying to access it by logging in as another user on the same machine. When I copy-paste the ride.py file (available in C:/Python27/Scripts) from my user to the other user, I can open RIDE by double-clicking ride.py, but when I try to launch ride.py from the command line I cannot access RIDE; it shows the error "ride.py is not recognised as an internal or external command, operable program or batch file". I installed Python for all users and reinstalled everything through pip in C:/Users (it was previously installed in C:/Users/MyUser), but when I try to reinstall everything using pip in C:\Users it shows "Requirement already satisfied".
2016/10/11
[ "https://Stackoverflow.com/questions/39981667", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5295988/" ]
I'm using the gem [breadcrumbs on rails](https://github.com/weppos/breadcrumbs_on_rails) with Devise in my project. If you haven't made the User model with Devise, make that first: ``` rails g devise User rake db:migrate rails generate devise:views users ``` My registrations\_controller.rb looks like this: ``` # app/controllers/registrations_controller.rb class RegistrationsController < Devise::RegistrationsController add_breadcrumb "home", :root_path add_breadcrumb "contact", :contacts_path end ``` I changed routes: ``` devise_for :users, :controllers => { registrations: 'registrations' } ``` In the application.html.erb layout I added breadcrumbs (just above the <%= yield %>): ``` <%= render_breadcrumbs %> ``` I've just tested it, and it works as you can see from the screenshot. [![enter image description here](https://i.stack.imgur.com/UJI2m.jpg)](https://i.stack.imgur.com/UJI2m.jpg) **EDITED:** In case you want to add breadcrumbs to other pages of the Devise gem, for example the Forgot your password page, you can make a new controller: ``` # app/controllers/passwords_controller.rb class PasswordsController < Devise::PasswordsController add_breadcrumb "home", :root_path add_breadcrumb "contact", :contacts_path end ``` and update your routes: ``` devise_for :users, controllers: { registrations: 'registrations', passwords: 'passwords' } ``` Please let me know if it works for you.
You can generate the devise views with: `rails generate devise:views users` Make sure to replace `users` with whatever your user model name is if it isn't `User` (e.g. `Admin`, `Manager`, etc) You can then add to those views whatever you need to show breadcrumbs.
14,672,640
I am trying to use the python-twitter API in GAE. I need to import OAuth2 and httplib2. Here is how I did it. For OAuth2, I downloaded github.com/simplegeo/python-oauth2/tree/master/oauth2. For HTTPLib2, I downloaded code.google.com/p/httplib2/wiki/Install and extracted the folder python2/httplib2 to the project root folder. My views.py: ``` import twitter def index(request): api = twitter.Api(consumer_key='XNAUYmsmono4gs3LP4T6Pw',consumer_secret='xxxxx',access_token_key='xxxxx',access_token_secret='iHzMkC6RRDipon1kYQtE5QOAYa1bVfYMhH7GFmMFjg',cache=None) return render_to_response('fbtwitter/index.html') ``` I got the error [paste.shehas.net/show/jbXyx2MSJrpjt7LR2Ksc](http://paste.shehas.net/show/jbXyx2MSJrpjt7LR2Ksc) ``` AttributeError AttributeError: 'module' object has no attribute 'SignatureMethod_PLAINTEXT' Traceback (most recent call last) File "D:\PythonProj\fbtwitter\kay\lib\werkzeug\wsgi.py", line 471, in __call__ return app(environ, start_response) File "D:\PythonProj\fbtwitter\kay\app.py", line 478, in __call__ response = self.get_response(request) File "D:\PythonProj\fbtwitter\kay\app.py", line 405, in get_response return self.handle_uncaught_exception(request, exc_info) File "D:\PythonProj\fbtwitter\kay\app.py", line 371, in get_response response = view_func(request, **values) File "D:\PythonProj\fbtwitter\fbtwitter\views.py", line 39, in index access_token_secret='iHzMkC6RRDipon1kYQtE5QOAYa1bVfYMhH7GFmMFjg',cache=None) File "D:\PythonProj\fbtwitter\fbtwitter\twitter.py", line 2235, in __init__ self.SetCredentials(consumer_key, consumer_secret, access_token_key, access_token_secret) File "D:\PythonProj\fbtwitter\fbtwitter\twitter.py", line 2264, in SetCredentials self._signature_method_plaintext = oauth.SignatureMethod_PLAINTEXT() AttributeError: 'module' object has no attribute 'SignatureMethod_PLAINTEXT' ``` It seems I did not import OAuth2 correctly when I tracked the error in twitter.py: ``` self._signature_method_plaintext = oauth.SignatureMethod_PLAINTEXT() ``` I
even went to twitter.py and added `import oauth2 as oauth`, but it couldn't solve the problem. Can anybody help?
2013/02/03
[ "https://Stackoverflow.com/questions/14672640", "https://Stackoverflow.com", "https://Stackoverflow.com/users/496837/" ]
``` ‘%A%’; ``` vs. ``` '%A%'; ``` The first has fancy `‘` `’` characters. The usual cause for that is Outlook's AutoCorrect.
The problem with the 1st is the single quote. `SQL` doesn't accept that quote. I don't find that one on my keyboard. Maybe you copied the query from somewhere.
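A quick way to spot and repair the offending characters in Python: the helper below is illustrative (not part of the original answers), and the replacement map is an assumption covering the common Word/Outlook "smart quote" substitutions.

```python
# Hypothetical helper: map the "smart" quotes Word/Outlook inserts back to
# plain ASCII so the LIKE pattern parses.
SMART_QUOTES = {
    u"\u2018": "'",  # left single quote
    u"\u2019": "'",  # right single quote
    u"\u201c": '"',  # left double quote
    u"\u201d": '"',  # right double quote
}

def fix_smart_quotes(sql):
    for fancy, plain in SMART_QUOTES.items():
        sql = sql.replace(fancy, plain)
    return sql

print(fix_smart_quotes(u"SELECT * FROM t WHERE col LIKE \u2018%A%\u2019"))
# SELECT * FROM t WHERE col LIKE '%A%'
```

Running pasted queries through a filter like this avoids hunting the characters by eye.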
You have used `‘%A%’`. SQL doesn't accept this character; it should be `'%A%'`. I could not find this character on my keyboard.
53,241,645
In Python 3.6, I can use the `__set_name__` hook to get the class attribute name of a descriptor. How can I achieve this in python 2.x? This is the code which works fine in Python 3.6: ``` class IntField: def __get__(self, instance, owner): if instance is None: return self return instance.__dict__[self.name] def __set__(self, instance, value): if not isinstance(value, int): raise ValueError('expecting integer') instance.__dict__[self.name] = value def __set_name__(self, owner, name): self.name = name class Example: a = IntField() ```
2018/11/10
[ "https://Stackoverflow.com/questions/53241645", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5766927/" ]
You may be looking for metaclasses; with them you can process the class attributes at class creation time. ``` class FooDescriptor(object): def __get__(self, obj, objtype): print('calling getter') class FooMeta(type): def __init__(cls, name, bases, attrs): for k, v in attrs.iteritems(): if issubclass(type(v), FooDescriptor): print('FooMeta.__init__, attribute name is "{}"'.format(k)) class Foo(object): __metaclass__ = FooMeta foo = FooDescriptor() f = Foo() f.foo ``` Output: ``` FooMeta.__init__, attribute name is "foo" calling getter ``` If you need to change the class before it is created, you need to override `__new__` instead of `__init__` in your metaclass. See this answer for more information on this topic: [Is there any reason to choose \_\_new\_\_ over \_\_init\_\_ when defining a metaclass?](https://stackoverflow.com/questions/1840421/is-there-any-reason-to-choose-new-over-init-when-defining-a-metaclass)
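Tying this back to the question's `IntField`, here is a minimal `__new__`-based sketch. Calling the metaclass directly sidesteps the differing class syntax (`__metaclass__` in Python 2 vs `metaclass=` in Python 3), so the same snippet runs on both:

```python
class IntField(object):
    def __get__(self, instance, owner):
        if instance is None:
            return self
        return instance.__dict__[self.name]

    def __set__(self, instance, value):
        if not isinstance(value, int):
            raise ValueError('expecting integer')
        instance.__dict__[self.name] = value

class NameSettingMeta(type):
    def __new__(mcls, clsname, bases, attrs):
        # Stamp each descriptor with its attribute name before the class exists.
        for attr_name, value in attrs.items():
            if isinstance(value, IntField):
                value.name = attr_name
        return super(NameSettingMeta, mcls).__new__(mcls, clsname, bases, attrs)

# Calling the metaclass directly works under both Python 2 and Python 3.
Example = NameSettingMeta('Example', (object,), {'a': IntField()})

e = Example()
e.a = 3
print(e.a)             # 3
print(Example.a.name)  # a
```

This mirrors what `__set_name__` does for free in 3.6+.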
There are various solutions with different degrees of hackishness. I always liked to use a class decorator for this. ``` class IntField(object): def __get__(self, instance, owner): if instance is None: return self return instance.__dict__[self.name] def __set__(self, instance, value): if not isinstance(value, int): raise ValueError('expecting integer') instance.__dict__[self.name] = value def with_intfields(*names): def with_concrete_intfields(cls): for name in names: field = IntField() field.name = name setattr(cls, name, field) return cls return with_concrete_intfields ``` You can use it like this: ``` @with_intfields('a', 'b') class Example(object): pass e = Example() ``` Demo: ``` $ python2.7 -i clsdec.py >>> [x for x in vars(Example) if not x.startswith('_')] ['a', 'b'] >>> Example.a.name 'a' >>> e.a = 3 >>> e.b = 'test' [...] ValueError: expecting integer ``` Make sure to explicitly subclass from `object` in Python 2.7; that tripped me up when I drafted the first version of this answer.
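For completeness, the low-tech Python 2 fallback is simply passing the attribute name in explicitly; a sketch (the obvious drawback is repeating the name):

```python
class IntField(object):
    def __init__(self, name):
        # Python 2 has no __set_name__ hook, so the name is supplied by hand.
        self.name = name

    def __get__(self, instance, owner):
        if instance is None:
            return self
        return instance.__dict__[self.name]

    def __set__(self, instance, value):
        if not isinstance(value, int):
            raise ValueError('expecting integer')
        instance.__dict__[self.name] = value

class Example(object):
    a = IntField('a')

e = Example()
e.a = 5
print(e.a)  # 5
```

No metaclass or decorator needed, at the cost of keeping the two names in sync.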
41,595,720
I am about to upgrade from Django 1.9 to 1.10 and would like to test if I have some deprecated functionality. However using ``` python -Wall manage.py test ``` will show tons and tons of warnings for Django 2.0. Is there a way to suppress warnings only for 2.0 or show only warnings for 1.10?
2017/01/11
[ "https://Stackoverflow.com/questions/41595720", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5047630/" ]
**Solution 1 - Using groups** ``` Private Sub Workbook_Open() With Sheet1 Dim i As Long, varLast As Long .Cells.ClearOutline varLast = .Cells(.Rows.Count, "A").End(xlUp).Row .Columns("A:A").Insert Shift:=xlToRight 'helper column For i = 1 To varLast .Range("A" & i) = .Range("B" & i).IndentLevel Next Dim rngRows As Range, rngFirst As Range, rngLast As Range, rngCell As Range, rowOffset As Long Set rngFirst = Range("A1") Set rngLast = rngFirst.End(xlDown) Set rngRows = Range(rngFirst, rngLast) For Each rngCell In rngRows rowOffset = 1 Do While rngCell.Offset(rowOffset) > rngCell And rngCell.Offset(rowOffset).Row <= rngLast.Row rowOffset = rowOffset + 1 Loop If rowOffset > 1 Then Range(rngCell.Offset(1), rngCell.Offset(rowOffset - 1)).EntireRow.Group End If Next .Columns("A:A").EntireColumn.Delete End With End Sub ``` [![enter image description here](https://i.stack.imgur.com/9qHIz.jpg)](https://i.stack.imgur.com/9qHIz.jpg) **Solution 2 - In case you don't want to modify the workbook data - workaround** Step 1 - Create a `UserForm` and add `TreeView` Control [![enter image description here](https://i.stack.imgur.com/IqPij.png)](https://i.stack.imgur.com/IqPij.png) Step 2 - Add the following code in the `UserForm` code ``` Private Sub UserForm_Initialize() With Me.TreeView1 .Style = tvwTreelinesPlusMinusText .LineStyle = tvwRootLines End With Call func_GroupData End Sub Private Sub func_GroupData() varRows = CLng(Sheet1.Cells(Sheet1.Rows.Count, "A").End(xlUp).Row) With Me.TreeView1.Nodes .Clear For i = 1 To varRows nodeTxt = Sheet1.Range("A" & i) nodeOrd = Sheet1.Range("A" & i).IndentLevel nodeTxt = Trim(nodeTxt) nodeAmt = Trim(CStr(Format(Sheet1.Range("B" & i), "###,###,###,##0.00"))) Select Case nodeOrd Case 0 'Level 0 - Root node nodeTxt = nodeTxt & Space(80 - Len(nodeTxt & nodeAmt)) & nodeAmt .Add Key:="Node" & i, Text:=Trim(nodeTxt) nodePar1 = "Node" & i Case 1 'Level 1 node nodeTxt = nodeTxt & Space(80 - Len(nodeTxt & nodeAmt)) & nodeAmt .Add Relative:=nodePar1, 
Relationship:=tvwChild, Key:="Node" & i, Text:=Trim(nodeTxt) nodePar2 = "Node" & i Case 2 'Level 2 node nodeTxt = nodeTxt & Space(80 - Len(nodeTxt & nodeAmt)) & nodeAmt .Add Relative:=nodePar2, Relationship:=tvwChild, Key:="Node" & i, Text:=Trim(nodeTxt) nodePar3 = "Node" & i End Select Next End With End Sub ``` Step 3 - Add the following code in `ThisWorkbook` to show the treeview ``` Private Sub Workbook_Open() UserForm1.Show vbModeless End Sub ``` The result [![enter image description here](https://i.stack.imgur.com/4ucdX.png)](https://i.stack.imgur.com/4ucdX.png)
One possibility would be to add a button to each cell and to hide its children rows on *collapse* and display its children rows on *expand*. Each `Excel.Button` executes one common method `TreeNodeClick` where the `Click` method is called on corresponding instance of `TreeNode`. The child rows are hidden or displayed based on the actual caption of the button. At the beginning the source data range needs to be selected when the method `Main` is executed. Problem is that the collection of Tree-Nodes needs to be filled each time the sheet is opened. So the method `Main` needs to be executed when the sheet is opened othervise it won't work. --- *Standard Module Code:* ``` Option Explicit Public treeNodes As VBA.Collection Sub Main() Dim b As TreeBuilder Set b = New TreeBuilder Set treeNodes = New VBA.Collection ActiveSheet.Buttons.Delete b.Build Selection, treeNodes End Sub Public Sub TreeNodeClick() Dim caller As String caller = Application.caller Dim treeNode As treeNode Set treeNode = treeNodes(caller) If Not treeNode Is Nothing Then treeNode.Click End If End Sub ``` --- *Class Module TreeNode:* ``` Option Explicit Private m_button As Excel.Button Private m_children As Collection Private m_parent As treeNode Private m_range As Range Private Const Collapsed As String = "+" Private Const Expanded As String = "-" Private m_indentLevel As Integer Public Sub Create(ByVal rng As Range, ByVal parent As treeNode) On Error GoTo ErrCreate Set m_range = rng m_range.EntireRow.RowHeight = 25 m_indentLevel = m_range.IndentLevel Set m_parent = parent If Not m_parent Is Nothing Then _ m_parent.AddChild Me Set m_button = rng.parent.Buttons.Add(rng.Left + 3 + 19 * m_indentLevel, rng.Top + 3, 19, 19) With m_button .Caption = Expanded .Name = m_range.Address .OnAction = "TreeNodeClick" .Placement = xlMoveAndSize .PrintObject = False End With With m_range .VerticalAlignment = xlCenter .Value = Strings.Trim(.Value) .Value = Strings.String((m_indentLevel + 11) + m_indentLevel * 5, " ") & 
.Value End With Exit Sub ErrCreate: MsgBox Err.Description, vbCritical, "TreeNode::Create" End Sub Public Sub Collapse(ByVal hide As Boolean) If hide Then m_range.EntireRow.Hidden = True End If m_button.Caption = Collapsed Dim ch As treeNode For Each ch In m_children ch.Collapse True Next End Sub Public Sub Expand(ByVal unhide As Boolean) If unhide Then m_range.EntireRow.Hidden = False End If m_button.Caption = Expanded Dim ch As treeNode For Each ch In m_children ch.Expand True Next End Sub Public Sub AddChild(ByVal child As treeNode) m_children.Add child End Sub Private Sub Class_Initialize() Set m_children = New VBA.Collection End Sub Public Sub Click() If m_button.Caption = Collapsed Then Expand False Else Collapse False End If End Sub Public Property Get IndentLevel() As Integer IndentLevel = m_indentLevel End Property Public Property Get Cell() As Range Set Cell = m_range End Property ``` --- *Class Module TreeBuilder:* ``` Option Explicit Public Sub Build(ByVal source As Range, ByVal treeNodes As VBA.Collection) Dim currCell As Range Dim newNode As treeNode Dim parentNode As treeNode For Each currCell In source.Columns(1).Cells Set parentNode = FindParent(currCell, source, treeNodes) Set newNode = New treeNode newNode.Create currCell, parentNode treeNodes.Add newNode, currCell.Address Next currCell End Sub Private Function FindParent(ByVal currCell As Range, ByVal source As Range, ByVal treeNodes As VBA.Collection) As treeNode If currCell.IndentLevel = 0 Then Exit Function End If Dim c As Range Dim r As Integer Set c = currCell For r = currCell.Row - 1 To source.Rows(1).Row Step -1 Set c = c.offset(-1, 0) If c.IndentLevel = currCell.IndentLevel - 1 Then Set FindParent = treeNodes(c.Address) Exit Function End If Next r End Function ``` --- *Result:* [![enter image description here](https://i.stack.imgur.com/S0pJd.jpg)](https://i.stack.imgur.com/S0pJd.jpg)
39,469,409
I've just created Django project and ran the server. It works fine but showed me warnings like ``` You have 14 unapplied migration(s)... ``` Then I ran ``` python manage.py migrate ``` in the terminal. It worked but showed me this ``` ?: (1_7.W001) MIDDLEWARE_CLASSES is not set. HINT: Django 1.7 changed the global defaults for the MIDDLEWARE_CLASSES. django.contrib.sessions.middleware.SessionMiddleware, django.contrib.auth.middleware.AuthenticationMiddleware, and django.contrib.messages.middleware.MessageMiddleware were removed from the defaults. If your project needs these middleware then you should configure this setting. ``` And now I have this warning after starting my server. ``` You have 3 unapplied migration(s). Your project may not work properly until you apply the migrations for app(s): admin, auth. ``` So how do I migrate correctly to get rid of this warning? I am using PyCharm and tried to create the project via PyCharm and terminal and have the same issue. ``` ~$ python3.5 --version Python 3.5.2 >>> django.VERSION (1, 10, 1, 'final', 1) ```
2016/09/13
[ "https://Stackoverflow.com/questions/39469409", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4727702/" ]
So my problem was that I used the wrong Python version for the migration. ``` python3.5 manage.py migrate ``` solves the problem.
You are probably using the wrong Django version. You need `django1.10`.
44,916,289
When I try to install a package for python, the setup.py has the following lines: ``` import os, sys, platform from distutils.core import setup, Extension import subprocess from numpy import get_include from Cython.Distutils import build_ext from Cython.Build import cythonize from Cython.Compiler.Options import get_directive_defaults ``` and I tried to run `python setup.py install` in terminal but I received the following error: ```none Traceback (most recent call last): File "setup.py", line 9, in <module> from Cython.Compiler.Options import get_directive_defaults ImportError: cannot import name 'get_directive_defaults' ``` I would really appreciate if you could let me know how to fix this.
2017/07/05
[ "https://Stackoverflow.com/questions/44916289", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8256442/" ]
Your `package.json` is missing `should` as a dependency. Install it via; `npm install --save-dev should` Also I would recommend you look into [chai](http://chaijs.com/api/bdd/) which in my opinion provides a slightly different API.
**should is an expressive, readable, framework-agnostic assertion library. The main goals of this library are to be expressive and to be helpful. It keeps your test code clean and your error messages helpful. By default (when you `require('should')`) it extends `Object.prototype` with a single non-enumerable getter that allows you to express how that object should behave. It also returns itself when required. It is also possible to use should.js without the getter (it will not even try to extend `Object.prototype`): just `require('should/as-function')`. Or, if you already use a version that auto-adds the getter, you can call the `.noConflict` function. The results of the `(something).should` getter and `should(something)` are the same in most situations.** It's better to install the node dependency `should` with npm, as below: ``` npm install --save should ``` [should-reference](https://www.npmjs.com/package/should)
23,421,031
What I put in python: ``` phoneNumber = input("Enter your Phone Number: ") print("Your number is", str(phoneNumber)) ``` What I get if I put 021999888: ``` Enter your Phone Number: 021999888 ``` > > Traceback (most recent call last): File "None", line 1, in > invalid token: , line 1, pos 9 > > > What I get if I put 21: > > Enter your Phone Number: 21 > > > Your Number is 21 > > > What I get if I put 02: > > Enter your Phone Number: 02 > > > Your Number is 2 > > > What I get if I put 021: > > Enter your Phone Number: 021 > > > Your Number is 17 > > > What I get if I put 09: ``` Enter your Phone Number: 09 Traceback (most recent call last): File "None", line 1, in <module> invalid token: <string>, line 1, pos 2 ``` Any ideas what's wrong?
2014/05/02
[ "https://Stackoverflow.com/questions/23421031", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3595018/" ]
If you have a `0` before a numeric literal, then it is in octal format. In this case any digit greater than 7 will result in an error. I think you should consider storing the phone number as a string, so use `raw_input()` instead. This will also keep the leading 0's.
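The octal reading can be reproduced explicitly in Python 3, where `int(s, 8)` mimics what the Python 2 parser did with a leading-zero literal:

```python
# int(s, 8) parses a string as octal, like Python 2 did for 0-prefixed literals
assert int("021", 8) == 17   # why "021" came back as 17
assert int("02", 8) == 2
assert int("0562", 8) == 370

# 8 and 9 are not octal digits, hence the "invalid token" error for "09"
try:
    int("09", 8)
except ValueError:
    pass
else:
    raise AssertionError("expected ValueError")
```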
@perreal is right. You should use `raw_input` instead: ``` >>> phoneNumber = raw_input("Enter your Phone Number: ") >>> print("Your number is", phoneNumber) Enter your Phone Number: 091234123 Your number is 091234123 ```
23,421,031
What I put in python: ``` phoneNumber = input("Enter your Phone Number: ") print("Your number is", str(phoneNumber)) ``` What I get if I put 021999888: ``` Enter your Phone Number: 021999888 ``` > > Traceback (most recent call last): File "None", line 1, in > invalid token: , line 1, pos 9 > > > What I get if I put 21: > > Enter your Phone Number: 21 > > > Your Number is 21 > > > What I get if I put 02: > > Enter your Phone Number: 02 > > > Your Number is 2 > > > What I get if I put 021: > > Enter your Phone Number: 021 > > > Your Number is 17 > > > What I get if I put 09: ``` Enter your Phone Number: 09 Traceback (most recent call last): File "None", line 1, in <module> invalid token: <string>, line 1, pos 2 ``` Any ideas what's wrong?
2014/05/02
[ "https://Stackoverflow.com/questions/23421031", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3595018/" ]
If you have a `0` before a numeric literal, then it is in octal format. In this case any digit greater than 7 will result in an error. I think you should consider storing the phone number as a string, so use `raw_input()` instead. This will also keep the leading 0's.
A `0` before a number is in octal format: ``` >>> 02 2 >>> 021 17 >>> 0562 370 >>> 02412 1290 >>> oct(1) '01' >>> oct(1290) '02412' ``` Using `raw_input()` instead makes sure that the input doesn't have to be something you can call in a shell: ``` >>> number = raw_input('Enter your phone number: ') Enter your phone number: 04081546723 >>> number '04081546723' ``` If you call `021999888` in a shell, here is what happens: ``` >>> 021999888 File "<stdin>", line 1 021999888 ^ SyntaxError: invalid token ``` Look [here](http://en.wikipedia.org/wiki/Octal) for more information on octal numbers.
23,421,031
What I put in python: ``` phoneNumber = input("Enter your Phone Number: ") print("Your number is", str(phoneNumber)) ``` What I get if I put 021999888: ``` Enter your Phone Number: 021999888 ``` > > Traceback (most recent call last): File "None", line 1, in > invalid token: , line 1, pos 9 > > > What I get if I put 21: > > Enter your Phone Number: 21 > > > Your Number is 21 > > > What I get if I put 02: > > Enter your Phone Number: 02 > > > Your Number is 2 > > > What I get if I put 021: > > Enter your Phone Number: 021 > > > Your Number is 17 > > > What I get if I put 09: ``` Enter your Phone Number: 09 Traceback (most recent call last): File "None", line 1, in <module> invalid token: <string>, line 1, pos 2 ``` Any ideas what's wrong?
2014/05/02
[ "https://Stackoverflow.com/questions/23421031", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3595018/" ]
A `0` before a number is in octal format: ``` >>> 02 2 >>> 021 17 >>> 0562 370 >>> 02412 1290 >>> oct(1) '01' >>> oct(1290) '02412' ``` Using `raw_input()` instead makes sure that the input doesn't have to be something you can call in a shell: ``` >>> number = raw_input('Enter your phone number: ') Enter your phone number: 04081546723 >>> number '04081546723' ``` If you call `021999888` in a shell, here is what happens: ``` >>> 021999888 File "<stdin>", line 1 021999888 ^ SyntaxError: invalid token ``` Look [here](http://en.wikipedia.org/wiki/Octal) for more information on octal numbers.
@perreal is right. You should use `raw_input` instead: ``` >>> phoneNumber = raw_input("Enter your Phone Number: ") >>> print("Your number is", phoneNumber) Enter your Phone Number: 091234123 Your number is 091234123 ```
67,360,917
i would like to make a groupby on my data to put together dates that are close. (less than 2 minutes) Here an example of what i get ``` > datas = [['A', 51, 'id1', '2020-05-27 05:50:43.346'], ['A', 51, 'id2', > '2020-05-27 05:51:08.347'], ['B', 45, 'id3', '2020-05-24 > 17:23:55.142'],['B', 45, 'id4', '2020-05-24 17:23:30.141'], ['C', 34, > 'id5', '2020-05-23 17:31:10.341']] > > df = pd.DataFrame(datas, columns = ['col1', 'col2', 'cold_id', > 'dates']) ``` The 2 first rows have close dates, same for the 3th and 4th rows, 5th row is alone. I would like to get something like this : ``` > datas = [['A', 51, 'id1 id2', 'date_1'], ['B', 45, 'id3 id4', > 'date_2'], ['C', 34, 'id5', 'date_3']] > > df = pd.DataFrame(datas, columns = ['col1', 'col2', 'col_id', > 'dates']) ``` Making it in a pythonic way is not that hard, but i have to make it on big dataframe, a pandas way using groupby method would be much efficient. After apply a datetime method on the dates column i tried : ``` > df.groupby([df['dates'].dt.date]).agg(','.join) ``` but the .dt.date method gives a date every day and not every 2 minutes. Do you have a solution ? Thank you
2021/05/02
[ "https://Stackoverflow.com/questions/67360917", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15817735/" ]
A compiler is allowed to choose whether `char` is signed or unsigned. The standard says that implementations have to pick one, but doesn't mandate which. GCC supports `-fsigned-char` and `-funsigned-char` to force this behavior.
The shown output is consistent with `char` being an unsigned data type on the platform in question. The C++ standard allows `char` to be equivalent to either `unsigned char` or `signed char`. If you wish a specific behavior you can explicitly use a cast to `signed char` in your code.
67,360,917
i would like to make a groupby on my data to put together dates that are close. (less than 2 minutes) Here an example of what i get ``` > datas = [['A', 51, 'id1', '2020-05-27 05:50:43.346'], ['A', 51, 'id2', > '2020-05-27 05:51:08.347'], ['B', 45, 'id3', '2020-05-24 > 17:23:55.142'],['B', 45, 'id4', '2020-05-24 17:23:30.141'], ['C', 34, > 'id5', '2020-05-23 17:31:10.341']] > > df = pd.DataFrame(datas, columns = ['col1', 'col2', 'cold_id', > 'dates']) ``` The 2 first rows have close dates, same for the 3th and 4th rows, 5th row is alone. I would like to get something like this : ``` > datas = [['A', 51, 'id1 id2', 'date_1'], ['B', 45, 'id3 id4', > 'date_2'], ['C', 34, 'id5', 'date_3']] > > df = pd.DataFrame(datas, columns = ['col1', 'col2', 'col_id', > 'dates']) ``` Making it in a pythonic way is not that hard, but i have to make it on big dataframe, a pandas way using groupby method would be much efficient. After apply a datetime method on the dates column i tried : ``` > df.groupby([df['dates'].dt.date]).agg(','.join) ``` but the .dt.date method gives a date every day and not every 2 minutes. Do you have a solution ? Thank you
2021/05/02
[ "https://Stackoverflow.com/questions/67360917", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15817735/" ]
A compiler is allowed to choose whether `char` is signed or unsigned. The standard says that implementations have to pick one, but doesn't mandate which. GCC supports `-fsigned-char` and `-funsigned-char` to force this behavior.
From the ISO C++ International Standard (Tip of trunk): | §6.8.2 Fundamental Types | [[basic.fundamental]](https://timsong-cpp.github.io/cppwp/basic.types#basic.fundamental) | | --- | --- | > > [7](https://timsong-cpp.github.io/cppwp/basic.types#basic.fundamental-7) - *Type char is a distinct type that has an implementation-defined choice of “signed char” or “unsigned char” as its underlying type.* > > > [...] > > > Not much more to add about that. Compiler vendors are free to choose whichever one they like or feel that is more reasonable given the target architecture, for ARM architectures [it seems that unsigned char is preferred](https://godbolt.org/z/n4M8bfzz9). It appears that the reason behind this is the fact that older versions of ARM, prior to ARMv4, had no native support for loading halfwords and signed bytes, so `char` became `unsigned` by default, and `unsigned` it remains, for legacy reasons. Main source: <http://www.davespace.co.uk/arm/efficient-c-for-arm/memaccess.html>
67,360,917
i would like to make a groupby on my data to put together dates that are close. (less than 2 minutes) Here an example of what i get ``` > datas = [['A', 51, 'id1', '2020-05-27 05:50:43.346'], ['A', 51, 'id2', > '2020-05-27 05:51:08.347'], ['B', 45, 'id3', '2020-05-24 > 17:23:55.142'],['B', 45, 'id4', '2020-05-24 17:23:30.141'], ['C', 34, > 'id5', '2020-05-23 17:31:10.341']] > > df = pd.DataFrame(datas, columns = ['col1', 'col2', 'cold_id', > 'dates']) ``` The 2 first rows have close dates, same for the 3th and 4th rows, 5th row is alone. I would like to get something like this : ``` > datas = [['A', 51, 'id1 id2', 'date_1'], ['B', 45, 'id3 id4', > 'date_2'], ['C', 34, 'id5', 'date_3']] > > df = pd.DataFrame(datas, columns = ['col1', 'col2', 'col_id', > 'dates']) ``` Making it in a pythonic way is not that hard, but i have to make it on big dataframe, a pandas way using groupby method would be much efficient. After apply a datetime method on the dates column i tried : ``` > df.groupby([df['dates'].dt.date]).agg(','.join) ``` but the .dt.date method gives a date every day and not every 2 minutes. Do you have a solution ? Thank you
2021/05/02
[ "https://Stackoverflow.com/questions/67360917", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15817735/" ]
From the ISO C++ International Standard (Tip of trunk): | §6.8.2 Fundamental Types | [[basic.fundamental]](https://timsong-cpp.github.io/cppwp/basic.types#basic.fundamental) | | --- | --- | > > [7](https://timsong-cpp.github.io/cppwp/basic.types#basic.fundamental-7) - *Type char is a distinct type that has an implementation-defined choice of “signed char” or “unsigned char” as its underlying type.* > > > [...] > > > Not much more to add about that. Compiler vendors are free to choose whichever one they like or feel that is more reasonable given the target architecture, for ARM architectures [it seems that unsigned char is preferred](https://godbolt.org/z/n4M8bfzz9). It appears that the reason behind this is the fact that older versions of ARM, prior to ARMv4, had no native support for loading halfwords and signed bytes, so `char` became `unsigned` by default, and `unsigned` it remains, for legacy reasons. Main source: <http://www.davespace.co.uk/arm/efficient-c-for-arm/memaccess.html>
The shown output is consistent with `char` being an unsigned data type on the platform in question. The C++ standard allows `char` to be equivalent to either `unsigned char` or `signed char`. If you wish a specific behavior you can explicitly use a cast to `signed char` in your code.
69,046,120
It shows that tables are successfully created when I do `heroku run -a "app-name" python manage.py migrate` ``` Running python manage.py migrate on ⬢ app_name... up, run.0000 (Free) System check identified some issues: ... Operations to perform: Apply all migrations: admin, auth, blog, contenttypes, home, sessions, taggit, wagtailadmin, wagtailcore, wagtaildocs, wagtailembeds, wagtailforms, wagtailimages, wagtailredirects, wagtailsearch, wagtailusers Running migrations: Applying contenttypes.0001_initial... OK Applying auth.0001_initial... OK Applying admin.0001_initial... OK Applying admin.0002_logentry_remove_auto_add... OK ... ``` But when I create a superuser, it tells me that there is no table Any suggestions? I’m sticking in it for 3 days now so I will be grateful for any help. P.S. I use heroku postgresql hobby-dev. P.P.S. ``` File "/app/.heroku/python/lib/python3.9/site-packages/django/db/backends/utils.py", line 84, in _execute return self.cursor.execute(sql, params) File "/app/.heroku/python/lib/python3.9/site-packages/django/db/utils.py", line 90, in __exit__ raise dj_exc_value.with_traceback(traceback) from exc_value File "/app/.heroku/python/lib/python3.9/site-packages/django/db/backends/utils.py", line 84, in _execute return self.cursor.execute(sql, params) File "/app/.heroku/python/lib/python3.9/site-packages/django/db/backends/sqlite3/base.py", line 423, in execute return Database.Cursor.execute(self, query, params) django.db.utils.OperationalError: no such table: auth_user ``` Base settings.py <https://pastebin.com/DLh3KrK7> My production configuration (`settings.py`) ```py from .base import * import dj_database_url import environ DEBUG = False try: from .local import * except ImportError: pass environ.Env.read_env() env = environ.Env() DATABASES = { 'default': env.db() } ```
2021/09/03
[ "https://Stackoverflow.com/questions/69046120", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11235791/" ]
Re-check your database configuration. The error trace shows that it's using sqlite as the database backend, instead of Postgres as expected: ``` File "/app/.heroku/python/lib/python3.9/site-packages/django/db/backends/sqlite3/base.py", line 423, in execute ``` This is then failing because the sqlite database is stored on the filesystem, and filesystems on Heroku are not persistent across commands - so the database you created in the `migrate` step no longer exists when you run `createsuperuser`.
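To see what the backend selection is keyed on, the `DATABASE_URL` that Heroku injects can be pulled apart with the standard library. This is only an illustration (the URL below is made up) of the scheme that `env.db()`/`dj_database_url` translate into the Postgres engine:

```python
from urllib.parse import urlparse

# Hypothetical value of the DATABASE_URL config var on Heroku.
url = "postgres://user:secret@ec2-1-2-3-4.compute-1.amazonaws.com:5432/d1abc"
parsed = urlparse(url)

assert parsed.scheme == "postgres"     # -> the Postgres backend, not sqlite3
assert parsed.port == 5432
assert parsed.path.lstrip("/") == "d1abc"
```

If the scheme never reaches your settings (e.g. `env.db()` falls back to a local sqlite default), you end up with exactly the sqlite traceback shown above.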
Please run these commands: ``` python manage.py syncdb python manage.py migrate python manage.py createsuperuser ``` Please make sure that your installed apps include ``` 'django.contrib.auth' ``` and tell me if you still get the same error; then please add your settings.py
41,875,358
I'm following this guide <https://developers.google.com/sheets/api/quickstart/python> Upon running the sample code they provided (The only thing I changed was the location of the api secret since we already had one set up and the APPLICATION\_NAME) I get this error ``` AttributeError: 'module' object has no attribute 'DEFAULT_MAX_REDIRECTS' ``` Log before the error ``` File "generate_report.py", line 2, in <module> import httplib2 File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/httplib2/__init__.py", line 42, in <module> import calendar File "/Users/HarshaGoli/Git/PantherBot/scripts/calendar.py", line 1, in <module> from oauth2client.service_account import ServiceAccountCredentials File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/oauth2client/service_account.py", line 25, in <module> from oauth2client import client File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/oauth2client/client.py", line 39, in <module> from oauth2client import transport File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/oauth2client/transport.py", line 255, in <module> redirections=httplib2.DEFAULT_MAX_REDIRECTS, ```
2017/01/26
[ "https://Stackoverflow.com/questions/41875358", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5838056/" ]
I got the same error and investigated the problem. In my case, it was caused by a file named "calendar.py" in the same directory. It's recommended to avoid general names that clash with modules in the standard Python library.
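A quick way to check whether a new file name collides with something importable is to ask the import machinery before creating the file; a small sketch (the safe name below is hypothetical):

```python
import importlib.util

def clashes_with_importable(name):
    """True if `name` already resolves to an importable module."""
    return importlib.util.find_spec(name) is not None

# "calendar" is a stdlib module, so naming a script calendar.py is risky:
assert clashes_with_importable("calendar")
# a throwaway name is safe:
assert not clashes_with_importable("my_report_helpers_xyz")
```

The shadowing happens because the script's own directory is searched before the standard library on `sys.path`.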
It may be a versioning problem. It could be the `python3` version of `httplib2` that causes trouble; try to follow the answer from this [post](https://stackoverflow.com/questions/48941042/google-cloud-function-attributeerror-module-object-has-no-attribute-defaul/49970238#49970238)
33,309,904
On my local environment, with Python 2.7.10, my Django project seems to run perfectly well using .manage.py runserver. But when I tried to deploy the project to my Debian Wheezy server using the same version of python 2.7.10, it encountered 500 internal server error. Upon checking my apache log, I found the error to be alternating between these two: ``` [Fri Oct 23 23:31:41 2015] [error] [client 176.10.99.201] mod_wsgi (pid=1973): Target WSGI script '/var/www/proj/proj/proj_wsgi.py' cannot be loaded as Python module. [Fri Oct 23 23:31:41 2015] [error] [client 176.10.99.201] mod_wsgi (pid=1973): Exception occurred processing WSGI script '/var/www/proj/proj/proj_wsgi.py'. [Fri Oct 23 23:31:41 2015] [error] [client 176.10.99.201] Traceback (most recent call last): [Fri Oct 23 23:31:41 2015] [error] [client 176.10.99.201] File "/var/www/proj/proj/proj_wsgi.py", line 21, in <module> [Fri Oct 23 23:31:41 2015] [error] [client 176.10.99.201] application = get_wsgi_application() [Fri Oct 23 23:31:41 2015] [error] [client 176.10.99.201] File "/var/www/ven/lib/python2.7/site-packages/django/core/wsgi.py", line 14, in get_wsgi_application [Fri Oct 23 23:31:41 2015] [error] [client 176.10.99.201] django.setup() [Fri Oct 23 23:31:41 2015] [error] [client 176.10.99.201] File "/var/www/ven/lib/python2.7/site-packages/django/__init__.py", line 18, in setup [Fri Oct 23 23:31:41 2015] [error] [client 176.10.99.201] apps.populate(settings.INSTALLED_APPS) [Fri Oct 23 23:31:41 2015] [error] [client 176.10.99.201] File "/var/www/ven/lib/python2.7/site-packages/django/apps/registry.py", line 78, in populate [Fri Oct 23 23:31:41 2015] [error] [client 176.10.99.201] raise RuntimeError("populate() isn't reentrant") [Fri Oct 23 23:31:41 2015] [error] [client 176.10.99.201] RuntimeError: populate() isn't reentrant ``` AND this one: ``` [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] mod_wsgi (pid=1973): Target WSGI script '/var/www/proj/proj/proj_wsgi.py' cannot be loaded as Python 
module. [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] mod_wsgi (pid=1973): Exception occurred processing WSGI script '/var/www/proj/proj/proj_wsgi.py'. [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] Traceback (most recent call last): [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] File "/var/www/proj/proj/proj_wsgi.py", line 21, in <module> [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] application = get_wsgi_application() [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] File "/var/www/ven/lib/python2.7/site-packages/django/core/wsgi.py", line 14, in get_wsgi_application [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] django.setup() [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] File "/var/www/ven/lib/python2.7/site-packages/django/__init__.py", line 18, in setup [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] apps.populate(settings.INSTALLED_APPS) [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] File "/var/www/ven/lib/python2.7/site-packages/django/apps/registry.py", line 108, in populate [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] app_config.import_models(all_models) [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] File "/var/www/ven/lib/python2.7/site-packages/django/apps/config.py", line 198, in import_models [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] self.models_module = import_module(models_module_name) [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] __import__(name) [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] File "/var/www/ven/lib/python2.7/site-packages/django/contrib/auth/models.py", line 41, in <module> [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] class Permission(models.Model): [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] File 
"/var/www/ven/lib/python2.7/site-packages/django/db/models/base.py", line 139, in __new__ [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] new_class.add_to_class('_meta', Options(meta, **kwargs)) [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] File "/var/www/ven/lib/python2.7/site-packages/django/db/models/base.py", line 324, in add_to_class [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] value.contribute_to_class(cls, name) [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] File "/var/www/ven/lib/python2.7/site-packages/django/db/models/options.py", line 250, in contribute_to_class [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] self.db_table = truncate_name(self.db_table, connection.ops.max_name_length()) [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] File "/var/www/ven/lib/python2.7/site-packages/django/db/__init__.py", line 36, in __getattr__ [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] return getattr(connections[DEFAULT_DB_ALIAS], item) [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] File "/var/www/ven/lib/python2.7/site-packages/django/db/utils.py", line 240, in __getitem__ [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] backend = load_backend(db['ENGINE']) [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] File "/var/www/ven/lib/python2.7/site-packages/django/db/utils.py", line 111, in load_backend [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] return import_module('%s.base' % backend_name) [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] __import__(name) [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] File "/var/www/ven/lib/python2.7/site-packages/django/db/backends/postgresql_psycopg2/base.py", line 24, in <module> [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] raise 
ImproperlyConfigured("Error loading psycopg2 module: %s" % e) [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] ImproperlyConfigured: Error loading psycopg2 module: /var/www/ven/lib/python2.7/site-packages/psycopg2/_psycopg.so: undefined symbol: PyUnicodeUCS2_AsUTF8String ``` I have tried many solutions, such as all these links below via Google but still to no avail. [Django stops working with RuntimeError: populate() isn't reentrant](https://stackoverflow.com/questions/27093746/django-stops-working-with-runtimeerror-populate-isnt-reentrant) [Django populate() isn't reentrant](https://stackoverflow.com/questions/30954398/django-populate-isnt-reentrant) I tried moving to python 2.7.3 and the django project managed to work but I need some encoding features in pickle contained in the 2.7.10 version so I need to use that. I have even tried reinstalling a brand new Django 1.8.5 project from scratch on python 2.7.10 but it did not work, giving out the same errors. My proj\_wgsi.py is: ``` import os import sys import site from django.core.wsgi import get_wsgi_application # Add the site-packages of the chosen virtualenv to work with site.addsitedir('/var/www/ven/lib/python2.7/site-packages') # Add the app's directory to the PYTHONPATH sys.path.append('/var/www/proj') sys.path.append('/var/www/proj/proj') # Activate your virtual env activate_env=os.path.expanduser('/var/www/ven/bin/activate_this.py') execfile(activate_env, dict(__file__=activate_env)) os.environ.setdefault("DJANGO_SETTINGS_MODULE", "proj.settings") application = get_wsgi_application() ``` My virtual host conf in apache in /etc/apache2/sites-enabled/000-default is ``` <VirtualHost *:80> ServerName 128.133.218.444 ServerAdmin webmaster@localhost ServerAlias 128.133.218.444 WSGIDaemonProcess 128.133.218.444 python-path="/var/www/proj:/var/www/ven/lib/python2.7/site-packages" WSGIProcessGroup 128.199.218.180 WSGIScriptAlias / /var/www/proj/proj/proj_wsgi.py process-group=128.199.218.180 
WSGIPassAuthorization On DocumentRoot /var/www/proj #<Directory /> # Options FollowSymLinks # AllowOverride None #</Directory> #<Directory /var/www/> # Options Indexes FollowSymLinks MultiViews # AllowOverride None # Order allow,deny # allow from all #</Directory> <Directory /var/www/proj> Order allow,deny Allow from all </Directory> <Directory /var/www/proj/proj/static> Order deny,allow Allow from all </Directory> <Directory /var/www/proj/proj/media> Order deny,allow Allow from all </Directory> <Directory /var/www/proj/proj> <Files wsgi.py> Order allow,deny allow from all </Files> </Directory> #ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ #<Directory "/usr/lib/cgi-bin"> # AllowOverride None # Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch # Order allow,deny # Allow from all #</Directory> ErrorLog ${APACHE_LOG_DIR}/error.log #ErrorLog /var/log/apache2/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/access.log combined </VirtualHost> ``` I have been trying to solve this issue for a couple of days so any help will be highly appreciated.Thank you!
2015/10/23
[ "https://Stackoverflow.com/questions/33309904", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2970242/" ]
Writing the solution as a separate answer, for the readability of others. ``` for i in [i for i, x in enumerate(hanksArray) if x == hanksYear]: print(hanksArray[i-1]) print(hanksArray[i]) print(hanksArray[i+1]) ```
A quick solution for you would be ``` for i in [i for i, x in enumerate(hanksArray) if x == hanksYear]: print("\n".join(hanksArray[i-1:i+2])) ``` There are numerous other problems with your code anyway.
33,309,904
On my local environment, with Python 2.7.10, my Django project seems to run perfectly well using .manage.py runserver. But when I tried to deploy the project to my Debian Wheezy server using the same version of python 2.7.10, it encountered 500 internal server error. Upon checking my apache log, I found the error to be alternating between these two: ``` [Fri Oct 23 23:31:41 2015] [error] [client 176.10.99.201] mod_wsgi (pid=1973): Target WSGI script '/var/www/proj/proj/proj_wsgi.py' cannot be loaded as Python module. [Fri Oct 23 23:31:41 2015] [error] [client 176.10.99.201] mod_wsgi (pid=1973): Exception occurred processing WSGI script '/var/www/proj/proj/proj_wsgi.py'. [Fri Oct 23 23:31:41 2015] [error] [client 176.10.99.201] Traceback (most recent call last): [Fri Oct 23 23:31:41 2015] [error] [client 176.10.99.201] File "/var/www/proj/proj/proj_wsgi.py", line 21, in <module> [Fri Oct 23 23:31:41 2015] [error] [client 176.10.99.201] application = get_wsgi_application() [Fri Oct 23 23:31:41 2015] [error] [client 176.10.99.201] File "/var/www/ven/lib/python2.7/site-packages/django/core/wsgi.py", line 14, in get_wsgi_application [Fri Oct 23 23:31:41 2015] [error] [client 176.10.99.201] django.setup() [Fri Oct 23 23:31:41 2015] [error] [client 176.10.99.201] File "/var/www/ven/lib/python2.7/site-packages/django/__init__.py", line 18, in setup [Fri Oct 23 23:31:41 2015] [error] [client 176.10.99.201] apps.populate(settings.INSTALLED_APPS) [Fri Oct 23 23:31:41 2015] [error] [client 176.10.99.201] File "/var/www/ven/lib/python2.7/site-packages/django/apps/registry.py", line 78, in populate [Fri Oct 23 23:31:41 2015] [error] [client 176.10.99.201] raise RuntimeError("populate() isn't reentrant") [Fri Oct 23 23:31:41 2015] [error] [client 176.10.99.201] RuntimeError: populate() isn't reentrant ``` AND this one: ``` [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] mod_wsgi (pid=1973): Target WSGI script '/var/www/proj/proj/proj_wsgi.py' cannot be loaded as Python 
module. [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] mod_wsgi (pid=1973): Exception occurred processing WSGI script '/var/www/proj/proj/proj_wsgi.py'. [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] Traceback (most recent call last): [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] File "/var/www/proj/proj/proj_wsgi.py", line 21, in <module> [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] application = get_wsgi_application() [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] File "/var/www/ven/lib/python2.7/site-packages/django/core/wsgi.py", line 14, in get_wsgi_application [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] django.setup() [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] File "/var/www/ven/lib/python2.7/site-packages/django/__init__.py", line 18, in setup [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] apps.populate(settings.INSTALLED_APPS) [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] File "/var/www/ven/lib/python2.7/site-packages/django/apps/registry.py", line 108, in populate [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] app_config.import_models(all_models) [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] File "/var/www/ven/lib/python2.7/site-packages/django/apps/config.py", line 198, in import_models [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] self.models_module = import_module(models_module_name) [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] __import__(name) [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] File "/var/www/ven/lib/python2.7/site-packages/django/contrib/auth/models.py", line 41, in <module> [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] class Permission(models.Model): [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] File 
"/var/www/ven/lib/python2.7/site-packages/django/db/models/base.py", line 139, in __new__ [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] new_class.add_to_class('_meta', Options(meta, **kwargs)) [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] File "/var/www/ven/lib/python2.7/site-packages/django/db/models/base.py", line 324, in add_to_class [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] value.contribute_to_class(cls, name) [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] File "/var/www/ven/lib/python2.7/site-packages/django/db/models/options.py", line 250, in contribute_to_class [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] self.db_table = truncate_name(self.db_table, connection.ops.max_name_length()) [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] File "/var/www/ven/lib/python2.7/site-packages/django/db/__init__.py", line 36, in __getattr__ [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] return getattr(connections[DEFAULT_DB_ALIAS], item) [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] File "/var/www/ven/lib/python2.7/site-packages/django/db/utils.py", line 240, in __getitem__ [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] backend = load_backend(db['ENGINE']) [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] File "/var/www/ven/lib/python2.7/site-packages/django/db/utils.py", line 111, in load_backend [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] return import_module('%s.base' % backend_name) [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] __import__(name) [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] File "/var/www/ven/lib/python2.7/site-packages/django/db/backends/postgresql_psycopg2/base.py", line 24, in <module> [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] raise 
ImproperlyConfigured("Error loading psycopg2 module: %s" % e) [Fri Oct 23 23:30:52 2015] [error] [client 176.10.99.201] ImproperlyConfigured: Error loading psycopg2 module: /var/www/ven/lib/python2.7/site-packages/psycopg2/_psycopg.so: undefined symbol: PyUnicodeUCS2_AsUTF8String ``` I have tried many solutions, such as all these links below via Google but still to no avail. [Django stops working with RuntimeError: populate() isn't reentrant](https://stackoverflow.com/questions/27093746/django-stops-working-with-runtimeerror-populate-isnt-reentrant) [Django populate() isn't reentrant](https://stackoverflow.com/questions/30954398/django-populate-isnt-reentrant) I tried moving to python 2.7.3 and the django project managed to work but I need some encoding features in pickle contained in the 2.7.10 version so I need to use that. I have even tried reinstalling a brand new Django 1.8.5 project from scratch on python 2.7.10 but it did not work, giving out the same errors. My proj\_wgsi.py is: ``` import os import sys import site from django.core.wsgi import get_wsgi_application # Add the site-packages of the chosen virtualenv to work with site.addsitedir('/var/www/ven/lib/python2.7/site-packages') # Add the app's directory to the PYTHONPATH sys.path.append('/var/www/proj') sys.path.append('/var/www/proj/proj') # Activate your virtual env activate_env=os.path.expanduser('/var/www/ven/bin/activate_this.py') execfile(activate_env, dict(__file__=activate_env)) os.environ.setdefault("DJANGO_SETTINGS_MODULE", "proj.settings") application = get_wsgi_application() ``` My virtual host conf in apache in /etc/apache2/sites-enabled/000-default is ``` <VirtualHost *:80> ServerName 128.133.218.444 ServerAdmin webmaster@localhost ServerAlias 128.133.218.444 WSGIDaemonProcess 128.133.218.444 python-path="/var/www/proj:/var/www/ven/lib/python2.7/site-packages" WSGIProcessGroup 128.199.218.180 WSGIScriptAlias / /var/www/proj/proj/proj_wsgi.py process-group=128.199.218.180 
WSGIPassAuthorization On DocumentRoot /var/www/proj #<Directory /> # Options FollowSymLinks # AllowOverride None #</Directory> #<Directory /var/www/> # Options Indexes FollowSymLinks MultiViews # AllowOverride None # Order allow,deny # allow from all #</Directory> <Directory /var/www/proj> Order allow,deny Allow from all </Directory> <Directory /var/www/proj/proj/static> Order deny,allow Allow from all </Directory> <Directory /var/www/proj/proj/media> Order deny,allow Allow from all </Directory> <Directory /var/www/proj/proj> <Files wsgi.py> Order allow,deny allow from all </Files> </Directory> #ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ #<Directory "/usr/lib/cgi-bin"> # AllowOverride None # Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch # Order allow,deny # Allow from all #</Directory> ErrorLog ${APACHE_LOG_DIR}/error.log #ErrorLog /var/log/apache2/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/access.log combined </VirtualHost> ``` I have been trying to solve this issue for a couple of days so any help will be highly appreciated.Thank you!
2015/10/23
[ "https://Stackoverflow.com/questions/33309904", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2970242/" ]
Writing the solution as a separate answer, for the readability of others. ``` for i in [i for i, x in enumerate(hanksArray) if x == hanksYear]: print(hanksArray[i-1]) print(hanksArray[i]) print(hanksArray[i+1]) ```
This looks a lot cleaner. ``` for line, val in enumerate(hanksArray): if val == hanksYear: print(hanksArray[line-1]) print(val) print(hanksArray[line+1]) ```
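One caveat that applies to the index arithmetic in both answers: if the match lands at index 0, `i - 1` is `-1`, which Python interprets as the *last* element rather than raising an error. A quick sketch with a made-up list (the real `hanksArray` isn't shown in the question):

```python
# Made-up stand-in data; the real hanksArray isn't shown in the question.
hanksArray = ["1994", "Forrest Gump", "1995", "Apollo 13", "1998"]
i = 0                      # suppose the match is the very first element
print(hanksArray[i - 1])   # prints "1998": index -1 wraps to the end
```

So if a match can occur at the very start of the list, the `i - 1` access silently prints the wrong line instead of failing.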
39,091,551
I am planning on making a game with pygame using gpio buttons. Here is the code: ``` from gpiozero import Button import pygame from time import sleep from sys import exit up = Button(2) left = Button(3) right = Button(4) down = Button(14) fps = pygame.time.Clock() pygame.init() surface = pygame.display.set_mode((1300, 700)) x = 50 y = 50 while 1: for event in pygame.event.get(): if event.type == pygame.QUIT: break if up.is_pressed: y -= 5 if down.is_pressed: y += 5 if left.is_pressed: x -= 5 if right.is_pressed: x += 5 surface.fill((0, 0, 0)) pygame.draw.circle(surface, (255, 255, 255), (x, y), 20, 0) pygame.display.update() fps.tick(30) ``` However, when I press on the X button on the top of the window, it doesn't close. Is there a possible solution for this? **EDIT:** Everyone is giving the same answer, that I am not adding a for loop to check events and quit. I did put that, here in my code: ``` while 1: for event in pygame.event.get(): if event.type == pygame.QUIT: break ``` I have also tried `sys.exit()`. **EDIT 2**: @Shahrukhkhan asked me to put a print statement inside the `for event in pygame.event.get():` loop, which made the loop like this: ``` while 1: for event in pygame.event.get(): if event.type == pygame.QUIT: print "X pressed" break root@raspberrypi:~/Desktop# python game.py X pressed X pressed ```
2016/08/23
[ "https://Stackoverflow.com/questions/39091551", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2945954/" ]
There are two possible ways to close the pygame window. 1. After the end of the while loop, simply write ``` import sys while 1: ....... pygame.quit() sys.exit() ``` 2. Instead of putting a break statement, replace the break in the for loop immediately after the while with ``` import sys while 1: for event in pygame.event.get(): if event.type == pygame.QUIT: pygame.quit() sys.exit() ...... ```
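The root cause, for what it's worth, is that `break` only exits the innermost loop. A plain-Python sketch of the pitfall, with a made-up nested event list standing in for `pygame.event.get()` (no pygame required):

```python
# `break` only exits the innermost loop, so the outer game loop keeps
# running. The nested lists below are a made-up stand-in for the events
# returned by pygame.event.get() on each frame.
frame_events = [["move", "QUIT"], ["move"], ["move"]]
frames_run = 0
for events in frame_events:   # stands in for `while 1:`
    for event in events:      # stands in for `for event in pygame.event.get():`
        if event == "QUIT":
            break             # exits only this inner for loop
    frames_run += 1           # the rest of the game loop still runs
print(frames_run)             # 3 -- every frame ran despite the QUIT event
```

That is why the question's `print "X pressed"` fired but the window stayed open: the outer `while 1:` was never terminated.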
You need to handle the event and, within it, quit pygame: ``` import sys import pygame from pygame.locals import QUIT for event in pygame.event.get(): if event.type == QUIT: pygame.quit() sys.exit() ```
14,086,830
I'm punching way above my weight here, but please bear with this Python amateur. I'm a PHP developer by trade and I've hardly touched this language before. What I'm trying to do is call a method in a class...sounds simple enough? I'm utterly baffled about what 'self' refers to, and what is the correct procedure to call such a method inside a class and outside a class. Could someone *explain* to me, how to call the `move` method with the variable `RIGHT`. I've tried researching this on several 'learn python' sites and searches on StackOverflow, but to no avail. Any help will be appreciated. The following class works in Scott's Python script which is accessed by a terminal GUI (urwid). The function I'm working with is a Scott Weston's missile launcher Python script, which I'm trying to hook into a PHP web-server. ``` class MissileDevice: INITA = (85, 83, 66, 67, 0, 0, 4, 0) INITB = (85, 83, 66, 67, 0, 64, 2, 0) CMDFILL = ( 8, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) STOP = ( 0, 0, 0, 0, 0, 0) LEFT = ( 0, 1, 0, 0, 0, 0) RIGHT = ( 0, 0, 1, 0, 0, 0) UP = ( 0, 0, 0, 1, 0, 0) DOWN = ( 0, 0, 0, 0, 1, 0) LEFTUP = ( 0, 1, 0, 1, 0, 0) RIGHTUP = ( 0, 0, 1, 1, 0, 0) LEFTDOWN = ( 0, 1, 0, 0, 1, 0) RIGHTDOWN = ( 0, 0, 1, 0, 1, 0) FIRE = ( 0, 0, 0, 0, 0, 1) def __init__(self, battery): try: self.dev=UsbDevice(0x1130, 0x0202, battery) self.dev.open() self.dev.handle.reset() except NoMissilesError, e: raise NoMissilesError() def move(self, direction): self.dev.handle.controlMsg(0x21, 0x09, self.INITA, 0x02, 0x01) self.dev.handle.controlMsg(0x21, 0x09, self.INITB, 0x02, 0x01) self.dev.handle.controlMsg(0x21, 0x09, direction+self.CMDFILL, 0x02, 0x01) ```
2012/12/29
[ "https://Stackoverflow.com/questions/14086830", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1122776/" ]
The first argument of all methods is usually called `self`. It refers to the instance for which the method is being called. Let's say you have: ``` class A(object): def foo(self): print 'Foo' def bar(self, an_argument): print 'Bar', an_argument ``` Then, doing: ``` a = A() a.foo() #prints 'Foo' a.bar('Arg!') #prints 'Bar Arg!' ``` --- There's nothing special about this being called `self`, you could do the following: ``` class B(object): def foo(self): print 'Foo' def bar(this_object): this_object.foo() ``` Then, doing: ``` b = B() b.bar() # prints 'Foo' ``` --- In your specific case: ``` dangerous_device = MissileDevice(some_battery) dangerous_device.move(dangerous_device.RIGHT) ``` (As suggested in comments `MissileDevice.RIGHT` could be more appropriate here!) You **could** declare all your constants at module level though, so you could do: ``` dangerous_device.move(RIGHT) ``` This, however, is going to depend on how you want your code to be organized!
> > Could someone explain to me, how to call the move method with the variable RIGHT > > > ``` >>> myMissile = MissileDevice(myBattery) # looks like you need a battery, don't know what that is, you figure it out. >>> myMissile.move(MissileDevice.RIGHT) ``` If you have programmed in any other language with classes, besides Python, this sort of thing ``` class Foo: bar = "baz" ``` is probably unfamiliar. In Python, the class is a factory for objects, but it is itself an object; and variables defined in its scope are attached to the *class*, not the instances returned by the class. To refer to `bar`, above, you can just call it `Foo.bar`; you can also access class attributes through instances of the class, like `Foo().bar`. --- > > I'm utterly baffled about what 'self' refers to, > > > ``` >>> class Foo: ... def quux(self): ... print self ... print self.bar ... bar = 'baz' ... >>> Foo.quux <unbound method Foo.quux> >>> Foo.bar 'baz' >>> f = Foo() >>> f.bar 'baz' >>> f <__main__.Foo instance at 0x0286A058> >>> f.quux <bound method Foo.quux of <__main__.Foo instance at 0x0286A058>> >>> f.quux() <__main__.Foo instance at 0x0286A058> baz >>> ``` When you access an attribute on a Python object, the interpreter will notice, when the looked-up attribute is on the class and is a function, that it should return a "bound" method instead of the function itself. All this does is arrange for the instance to be passed as the first argument.
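To make the "bound method" behaviour concrete, here's a minimal sketch (the `Counter` class is made up for illustration, not part of the missile launcher script): calling through the instance and calling the plain class function with the instance passed explicitly do exactly the same thing.

```python
# A made-up class to illustrate binding: an instance call is just the
# class function with the instance supplied as the first argument.
class Counter:
    def __init__(self):
        self.n = 0

    def bump(self, amount):
        self.n += amount
        return self.n

c = Counter()
c.bump(2)            # bound call: `c` is passed as `self` automatically
Counter.bump(c, 3)   # same call, with `self` passed explicitly
print(c.n)           # 5 -- both calls updated the same instance
```

This is all `self` is: the instance the method was looked up on, handed to the function as its first parameter.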
14,086,830
I'm punching way above my weight here, but please bear with this Python amateur. I'm a PHP developer by trade and I've hardly touched this language before. What I'm trying to do is call a method in a class...sounds simple enough? I'm utterly baffled about what 'self' refers to, and what is the correct procedure to call such a method inside a class and outside a class. Could someone *explain* to me, how to call the `move` method with the variable `RIGHT`. I've tried researching this on several 'learn python' sites and searches on StackOverflow, but to no avail. Any help will be appreciated. The following class works in Scott's Python script which is accessed by a terminal GUI (urwid). The function I'm working with is a Scott Weston's missile launcher Python script, which I'm trying to hook into a PHP web-server. ``` class MissileDevice: INITA = (85, 83, 66, 67, 0, 0, 4, 0) INITB = (85, 83, 66, 67, 0, 64, 2, 0) CMDFILL = ( 8, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) STOP = ( 0, 0, 0, 0, 0, 0) LEFT = ( 0, 1, 0, 0, 0, 0) RIGHT = ( 0, 0, 1, 0, 0, 0) UP = ( 0, 0, 0, 1, 0, 0) DOWN = ( 0, 0, 0, 0, 1, 0) LEFTUP = ( 0, 1, 0, 1, 0, 0) RIGHTUP = ( 0, 0, 1, 1, 0, 0) LEFTDOWN = ( 0, 1, 0, 0, 1, 0) RIGHTDOWN = ( 0, 0, 1, 0, 1, 0) FIRE = ( 0, 0, 0, 0, 0, 1) def __init__(self, battery): try: self.dev=UsbDevice(0x1130, 0x0202, battery) self.dev.open() self.dev.handle.reset() except NoMissilesError, e: raise NoMissilesError() def move(self, direction): self.dev.handle.controlMsg(0x21, 0x09, self.INITA, 0x02, 0x01) self.dev.handle.controlMsg(0x21, 0x09, self.INITB, 0x02, 0x01) self.dev.handle.controlMsg(0x21, 0x09, direction+self.CMDFILL, 0x02, 0x01) ```
2012/12/29
[ "https://Stackoverflow.com/questions/14086830", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1122776/" ]
The first argument of all methods is usually called `self`. It refers to the instance for which the method is being called. Let's say you have: ``` class A(object): def foo(self): print 'Foo' def bar(self, an_argument): print 'Bar', an_argument ``` Then, doing: ``` a = A() a.foo() #prints 'Foo' a.bar('Arg!') #prints 'Bar Arg!' ``` --- There's nothing special about this being called `self`, you could do the following: ``` class B(object): def foo(self): print 'Foo' def bar(this_object): this_object.foo() ``` Then, doing: ``` b = B() b.bar() # prints 'Foo' ``` --- In your specific case: ``` dangerous_device = MissileDevice(some_battery) dangerous_device.move(dangerous_device.RIGHT) ``` (As suggested in comments `MissileDevice.RIGHT` could be more appropriate here!) You **could** declare all your constants at module level though, so you could do: ``` dangerous_device.move(RIGHT) ``` This, however, is going to depend on how you want your code to be organized!
Let's say you have a shiny Foo class. Well you have 3 options: 1) You want to use the method (or attribute) of a class inside the definition of that class: ``` class Foo(object): attribute1 = 1 # class attribute (those don't use 'self' in declaration) def __init__(self): self.attribute2 = 2 # instance attribute (those are accessible via first # parameter of the method, usually called 'self' # which will contain nothing but the instance itself) def set_attribute3(self, value): self.attribute3 = value def sum_1and2(self): return self.attribute1 + self.attribute2 ``` 2) You want to use the method (or attribute) of a class outside the definition of that class ``` def get_legendary_attribute1(): return Foo.attribute1 def get_legendary_attribute2(): return Foo.attribute2 def get_legendary_attribute1_from(cls): return cls.attribute1 get_legendary_attribute1() # >>> 1 get_legendary_attribute2() # >>> AttributeError: type object 'Foo' has no attribute 'attribute2' get_legendary_attribute1_from(Foo) # >>> 1 ``` 3) You want to use the method (or attribute) of an instantiated class: ``` f = Foo() f.attribute1 # >>> 1 f.attribute2 # >>> 2 f.attribute3 # >>> AttributeError: 'Foo' object has no attribute 'attribute3' f.set_attribute3(3) f.attribute3 # >>> 3 ```
14,086,830
I'm punching way above my weight here, but please bear with this Python amateur. I'm a PHP developer by trade and I've hardly touched this language before. What I'm trying to do is call a method in a class...sounds simple enough? I'm utterly baffled about what 'self' refers to, and what is the correct procedure to call such a method inside a class and outside a class. Could someone *explain* to me, how to call the `move` method with the variable `RIGHT`. I've tried researching this on several 'learn python' sites and searches on StackOverflow, but to no avail. Any help will be appreciated. The following class works in Scott's Python script which is accessed by a terminal GUI (urwid). The function I'm working with is a Scott Weston's missile launcher Python script, which I'm trying to hook into a PHP web-server. ``` class MissileDevice: INITA = (85, 83, 66, 67, 0, 0, 4, 0) INITB = (85, 83, 66, 67, 0, 64, 2, 0) CMDFILL = ( 8, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) STOP = ( 0, 0, 0, 0, 0, 0) LEFT = ( 0, 1, 0, 0, 0, 0) RIGHT = ( 0, 0, 1, 0, 0, 0) UP = ( 0, 0, 0, 1, 0, 0) DOWN = ( 0, 0, 0, 0, 1, 0) LEFTUP = ( 0, 1, 0, 1, 0, 0) RIGHTUP = ( 0, 0, 1, 1, 0, 0) LEFTDOWN = ( 0, 1, 0, 0, 1, 0) RIGHTDOWN = ( 0, 0, 1, 0, 1, 0) FIRE = ( 0, 0, 0, 0, 0, 1) def __init__(self, battery): try: self.dev=UsbDevice(0x1130, 0x0202, battery) self.dev.open() self.dev.handle.reset() except NoMissilesError, e: raise NoMissilesError() def move(self, direction): self.dev.handle.controlMsg(0x21, 0x09, self.INITA, 0x02, 0x01) self.dev.handle.controlMsg(0x21, 0x09, self.INITB, 0x02, 0x01) self.dev.handle.controlMsg(0x21, 0x09, direction+self.CMDFILL, 0x02, 0x01) ```
2012/12/29
[ "https://Stackoverflow.com/questions/14086830", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1122776/" ]
Let's say you have a shiny Foo class. Well you have 3 options: 1) You want to use the method (or attribute) of a class inside the definition of that class: ``` class Foo(object): attribute1 = 1 # class attribute (those don't use 'self' in declaration) def __init__(self): self.attribute2 = 2 # instance attribute (those are accessible via first # parameter of the method, usually called 'self' # which will contain nothing but the instance itself) def set_attribute3(self, value): self.attribute3 = value def sum_1and2(self): return self.attribute1 + self.attribute2 ``` 2) You want to use the method (or attribute) of a class outside the definition of that class ``` def get_legendary_attribute1(): return Foo.attribute1 def get_legendary_attribute2(): return Foo.attribute2 def get_legendary_attribute1_from(cls): return cls.attribute1 get_legendary_attribute1() # >>> 1 get_legendary_attribute2() # >>> AttributeError: type object 'Foo' has no attribute 'attribute2' get_legendary_attribute1_from(Foo) # >>> 1 ``` 3) You want to use the method (or attribute) of an instantiated class: ``` f = Foo() f.attribute1 # >>> 1 f.attribute2 # >>> 2 f.attribute3 # >>> AttributeError: 'Foo' object has no attribute 'attribute3' f.set_attribute3(3) f.attribute3 # >>> 3 ```
> > Could someone explain to me, how to call the move method with the variable RIGHT > > > ``` >>> myMissile = MissileDevice(myBattery) # looks like you need a battery, don't know what that is, you figure it out. >>> myMissile.move(MissileDevice.RIGHT) ``` If you have programmed in any other language with classes, besides Python, this sort of thing ``` class Foo: bar = "baz" ``` is probably unfamiliar. In Python, the class is a factory for objects, but it is itself an object; and variables defined in its scope are attached to the *class*, not the instances returned by the class. To refer to `bar`, above, you can just call it `Foo.bar`; you can also access class attributes through instances of the class, like `Foo().bar`. --- > > I'm utterly baffled about what 'self' refers to, > > > ``` >>> class Foo: ... def quux(self): ... print self ... print self.bar ... bar = 'baz' ... >>> Foo.quux <unbound method Foo.quux> >>> Foo.bar 'baz' >>> f = Foo() >>> f.bar 'baz' >>> f <__main__.Foo instance at 0x0286A058> >>> f.quux <bound method Foo.quux of <__main__.Foo instance at 0x0286A058>> >>> f.quux() <__main__.Foo instance at 0x0286A058> baz >>> ``` When you access an attribute on a Python object, the interpreter will notice, when the looked-up attribute is on the class and is a function, that it should return a "bound" method instead of the function itself. All this does is arrange for the instance to be passed as the first argument.
74,663,591
I'm trying to remake Tic-Tac-Toe on python. But, it wont work. I tried ` ``` game_board = ['_'] * 9 print(game_board[0]) + " | " + (game_board[1]) + ' | ' + (game_board[2]) print(game_board[3]) + ' | ' + (game_board[4]) + ' | ' + (game_board[5]) print(game_board[6]) + ' | ' + (game_board[7]) + ' | ' + (game_board[8]) ``` ` but it returns ` ``` Traceback (most recent call last): File "C:\Users\username\PycharmProjects\pythonProject\tutorial.py", line 2, in <module> print(game_board[0]) + " | " + (game_board[1]) + ' | ' + (game_board[2]) ~~~~~~~~~~~~~~~~~~~~~^~~~~~~ TypeError: unsupported operand type(s) for +: 'NoneType' and 'str' ``` `
2022/12/03
[ "https://Stackoverflow.com/questions/74663591", "https://Stackoverflow.com", "https://Stackoverflow.com/users/20671383/" ]
``` function put() { var num0 = document.getElementById("text") var num1 = Number(num0.value) var num4 = document.getElementById("text2") var num2 = Number(num4.value) var res = num1 + num2 document.getElementById("myp").innerHTML = res } ```
You can use the `+` operator, like that: ``` var num1 = +num0.value; ... var num2 = +num4.value; ``` and this will turn your string number into a *floating* point number ```html <input type="text" id="text" placeholder="Number 1" /> <input type="text" id="text2" placeholder="Number 2" /> <button type="submit" id="submit" onclick="put()">Click Me</button> <p id="myp"></p> <script> function put() { var num0 = document.getElementById("text"); var num1 = +num0.value; var num4 = document.getElementById("text2"); var num2 = +num4.value; var sub = document.getElementById("submit"); var res = num1 + num2; document.getElementById("myp").innerHTML = res; } </script> ```
74,663,591
I'm trying to remake Tic-Tac-Toe on python. But, it wont work. I tried ` ``` game_board = ['_'] * 9 print(game_board[0]) + " | " + (game_board[1]) + ' | ' + (game_board[2]) print(game_board[3]) + ' | ' + (game_board[4]) + ' | ' + (game_board[5]) print(game_board[6]) + ' | ' + (game_board[7]) + ' | ' + (game_board[8]) ``` ` but it returns ` ``` Traceback (most recent call last): File "C:\Users\username\PycharmProjects\pythonProject\tutorial.py", line 2, in <module> print(game_board[0]) + " | " + (game_board[1]) + ' | ' + (game_board[2]) ~~~~~~~~~~~~~~~~~~~~~^~~~~~~ TypeError: unsupported operand type(s) for +: 'NoneType' and 'str' ``` `
2022/12/03
[ "https://Stackoverflow.com/questions/74663591", "https://Stackoverflow.com", "https://Stackoverflow.com/users/20671383/" ]
``` function put() { var num0 = document.getElementById("text").value var num1 = Number.parseInt(num0) var num4 = document.getElementById("text2").value var num2 = Number.parseInt(num4) var res = num1 + num2 document.getElementById("myp").innerHTML = res } ```
You can use the `+` operator, like that: ``` var num1 = +num0.value; ... var num2 = +num4.value; ``` and this will turn your string number into a *floating* point number ```html <input type="text" id="text" placeholder="Number 1" /> <input type="text" id="text2" placeholder="Number 2" /> <button type="submit" id="submit" onclick="put()">Click Me</button> <p id="myp"></p> <script> function put() { var num0 = document.getElementById("text"); var num1 = +num0.value; var num4 = document.getElementById("text2"); var num2 = +num4.value; var sub = document.getElementById("submit"); var res = num1 + num2; document.getElementById("myp").innerHTML = res; } </script> ```
43,708,668
I have a simplified python code looking like the following: ``` a = 100 x = 0 for i in range(0, a): x = x + i / float(a) ``` Is there a way to access the maximum amount of iterations inside a `for` loop? Basically the code would change to: ``` x = 0 for i in range(0, 100): x = x + i / float(thisloopsmaxcount) ``` where `thisloopsmaxcount` is some fancy python method. Another option would be to implement a whole class for this behaviour.
2017/04/30
[ "https://Stackoverflow.com/questions/43708668", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6786718/" ]
Yeah, you can: ``` a = 100 x = 0 r = range(0,a) for i in r: x = x + i / r.stop ``` but if the range isn't counting 1, 2, 3... then `stop` won't be the number of steps; e.g. `range(10,12)` doesn't have 12 steps, it has 2. And `range(0,100,10)` counts in tens, so it doesn't have 100 steps. So you need to take `(.stop - .start) / .step` into account as appropriate. And this only works for `range`; in general, a `for` loop could be reading from a network, or reacting to user input, where the only way to know when the loop stops, and how many iterations there were, is when it actually gets to the end.
There's nothing built-in, but you can easily compute it yourself: ``` x = 0 myrange = range(0, 100) thisloopsmaxcount = sum(1 for _ in myrange) for i in myrange: x = x + i / float(thisloopsmaxcount) ```
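The `(.stop - .start) / .step` bookkeeping mentioned above can be sketched as follows; `steps` is my own helper, not a built-in, and for positive steps `len()` on a Python 3 `range` already does the same arithmetic:

```python
def steps(r):
    """Number of iterations a range with a positive step will perform."""
    return max(0, (r.stop - r.start + r.step - 1) // r.step)

print(steps(range(10, 12)))      # 2 iterations, even though stop is 12
print(steps(range(0, 100, 10)))  # 10 iterations, not 100
print(len(range(0, 100, 10)))    # 10 -- len() on a Python 3 range agrees
```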
43,708,668
I have a simplified python code looking like the following: ``` a = 100 x = 0 for i in range(0, a): x = x + i / float(a) ``` Is there a way to access the maximum amount of iterations inside a `for` loop? Basically the code would change to: ``` x = 0 for i in range(0, 100): x = x + i / float(thisloopsmaxcount) ``` where `thisloopsmaxcount` is some fancy python method. Another option would be to implement a whole class for this behaviour.
2017/04/30
[ "https://Stackoverflow.com/questions/43708668", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6786718/" ]
Yeah, you can: ``` a = 100 x = 0 r = range(0,a) for i in r: x = x + i / r.stop ``` but if the range isn't counting 1, 2, 3... then `stop` won't be the number of steps; e.g. `range(10,12)` doesn't have 12 steps, it has 2. And `range(0,100,10)` counts in tens, so it doesn't have 100 steps. So you need to take `(.stop - .start) / .step` into account as appropriate. And this only works for `range`; in general, a `for` loop could be reading from a network, or reacting to user input, where the only way to know when the loop stops, and how many iterations there were, is when it actually gets to the end.
In your example, you already know the number of iterations, so why not use that. But in general, if you want the number of elements in a (Python 3) `range()`, you can take its [`len()`](https://docs.python.org/3/library/stdtypes.html#typesseq): ``` x = 0 rang = range(12,999,123) for i in rang: x = x + i / float(len(rang)) ``` You still need the temporary variable since it's not the loop itself that knows the length, but the `range` object. `len()` also works on Python 2's [`xrange`](https://docs.python.org/2/library/stdtypes.html#typesseq-xrange).
43,708,668
I have a simplified python code looking like the following: ``` a = 100 x = 0 for i in range(0, a): x = x + i / float(a) ``` Is there a way to access the maximum amount of iterations inside a `for` loop? Basically the code would change to: ``` x = 0 for i in range(0, 100): x = x + i / float(thisloopsmaxcount) ``` where `thisloopsmaxcount` is some fancy python method. Another option would be to implement a whole class for this behaviour.
2017/04/30
[ "https://Stackoverflow.com/questions/43708668", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6786718/" ]
There's nothing built-in, but you can easily compute it yourself: ``` x = 0 myrange = range(0, 100) thisloopsmaxcount = sum(1 for _ in myrange) for i in myrange: x = x + i / float(thisloopsmaxcount) ```
In your example, you already know the number of iterations, so why not use that. But in general, if you want the number of elements in a (Python 3) `range()`, you can take its [`len()`](https://docs.python.org/3/library/stdtypes.html#typesseq): ``` x = 0 rang = range(12,999,123) for i in rang: x = x + i / float(len(rang)) ``` You still need the temporary variable since it's not the loop itself that knows the length, but the `range` object. `len()` also works on Python 2's [`xrange`](https://docs.python.org/2/library/stdtypes.html#typesseq-xrange).
42,212,502
I have a list of strings, for example: ``` py python co comp computer ``` I simply want to get a string, which contains the biggest possible amount of prefixes. The result should be 'computer' because its prefixes are 'co' and 'comp' (2 prefixes). I have this code (wordlist is a dictionary): ``` for i in wordlist: word = str(i) for j in wordlist: if word.startswith(j): wordlist[i] += 1 result = max(wordlist, key=wordlist.get) ``` Is there any better, faster way to do that?
2017/02/13
[ "https://Stackoverflow.com/questions/42212502", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7396899/" ]
The data structure you are looking for is called a [trie](https://en.wikipedia.org/wiki/Trie). The Wikipedia article about this kind of search tree is certainly worth reading. The key property of the trie that comes in handy here is this: > > All the descendants of a node have a common prefix of the string associated with that node, and the root is associated with the empty string. > > > The code could look as follows: ``` words = """py python co comp computer""".split() def make_trie(ws): """Build trie from word list `ws`.""" r = {} # trie root for w in ws: d = r for c in w: d = d.setdefault(c, {}) # get c, set to {} if missing d['$'] = '$' # end marker return r def num_pref(t, ws): """Use trie `t` to find word with max num of prefixes in `ws`.""" b, m = -1, '' # max prefixes, corresp. word for w in ws: d, p = t, 1 for c in w: if '$' in d: p += 1 d = d[c] # navigate down one level if p > b: b, m = p, w return b, m t = make_trie(words) print(num_pref(t, words)) ``` `make_trie` builds the trie, `num_pref` uses it to determine the word with maximum number of prefixes. It prints `(3, 'computer')`. Obviously, the two methods could be combined. I kept them separate to make the process of building a trie more clear.
For a large amount of words, you could build a [trie](https://en.wikipedia.org/wiki/Trie). You could then iterate over all the leaves and count the amount of nodes (terminal nodes) with a value between the root and the leaf. With n words, this should require `O(n)` steps compared to your `O(n**2)` solution. This [package](https://github.com/google/pygtrie) looks good, and here's a related [thread](https://stackoverflow.com/questions/11015320/how-to-create-a-trie-in-python).
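For comparison, a minimal sketch that avoids a trie entirely: keep the words in a set and count, for each word, how many of its proper prefixes are also words. This is linear in the number of words but quadratic in word length, so it is only a baseline; `most_prefixed` is a hypothetical helper, not from either answer:

```python
def most_prefixed(words):
    """Return the word with the most proper prefixes that are also words."""
    ws = set(words)

    def count(w):
        # check every proper prefix w[:1] .. w[:-1] against the word set
        return sum(w[:i] in ws for i in range(1, len(w)))

    return max(words, key=count)

print(most_prefixed(["py", "python", "co", "comp", "computer"]))  # computer
```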
42,212,502
I have a list of strings, for example: ``` py python co comp computer ``` I simply want to get a string, which contains the biggest possible amount of prefixes. The result should be 'computer' because its prefixes are 'co' and 'comp' (2 prefixes). I have this code (wordlist is a dictionary): ``` for i in wordlist: word = str(i) for j in wordlist: if word.startswith(j): wordlist[i] += 1 result = max(wordlist, key=wordlist.get) ``` Is there any better, faster way to do that?
2017/02/13
[ "https://Stackoverflow.com/questions/42212502", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7396899/" ]
The data structure you are looking for is called a [trie](https://en.wikipedia.org/wiki/Trie). The Wikipedia article about this kind of search tree is certainly worth reading. The key property of the trie that comes in handy here is this: > > All the descendants of a node have a common prefix of the string associated with that node, and the root is associated with the empty string. > > > The code could look as follows: ``` words = """py python co comp computer""".split() def make_trie(ws): """Build trie from word list `ws`.""" r = {} # trie root for w in ws: d = r for c in w: d = d.setdefault(c, {}) # get c, set to {} if missing d['$'] = '$' # end marker return r def num_pref(t, ws): """Use trie `t` to find word with max num of prefixes in `ws`.""" b, m = -1, '' # max prefixes, corresp. word for w in ws: d, p = t, 1 for c in w: if '$' in d: p += 1 d = d[c] # navigate down one level if p > b: b, m = p, w return b, m t = make_trie(words) print(num_pref(t, words)) ``` `make_trie` builds the trie, `num_pref` uses it to determine the word with maximum number of prefixes. It prints `(3, 'computer')`. Obviously, the two methods could be combined. I kept them separate to make the process of building a trie more clear.
The "correct" way is with some sort of trie data structure or similar. However, if your words are already sorted you can get quite a speedup in practical terms with some rather simple code that uses a prefix stack instead of a brute force search. This works since in sorted order, all prefixes precede their prefixed word (making it easy to get a result via a simple linear scan). Think of it as a reasonable compromise between simple code and efficient code: ``` prefixes = [] # Stack of all applicable prefixes up to this point (normally very small) max_prefixes = [None] for w in sorted(wordlist): while prefixes and not w.startswith(prefixes[-1]): prefixes.pop() prefixes.append(w) if len(prefixes) >= len(max_prefixes): max_prefixes = list(prefixes) result = max_prefixes[-1] ``` Running on all dictionary words on my Linux box (479828 of them), the above code takes only 0.68 seconds (the original code doesn't complete in a reasonable amount of time). On the first 10000 words, my code takes **0.02s instead of 19.5s** taken by the original code. If you want *really* efficient code (say, you're dealing with gigabytes of data), you're better off using the proper data structures coded up in a cache-friendly manner in C. But that could take weeks to write properly!
52,884,584
I have this array: ``` countOverlaps = [numA, numB, numC, numD, numE, numF, numG, numH, numI, numJ, numK, numL] ``` and then I condense this array by getting rid of all 0 values: ``` countOverlaps = [x for x in countOverlaps if x != 0] ``` When I do this, I get an output like this: [2, 1, 3, 2, 3, 1, 1] Which is what it should, so that makes sense. Now I want to add values to the array so that each number adds itself to the array the number of times it appears. Like this: Original: [2, 1, 3, 2, 3, 1, 1] What I want: [2,2,1,3,3,3,2,2,3,3,3,1,1] Is something like this possible in python? Thanks
2018/10/19
[ "https://Stackoverflow.com/questions/52884584", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7010858/" ]
**Updated** Please check below: ``` >>> a = [2, 1, 3, 2, 3, 1, 1] >>> [b for b in a for _ in range(b)] [2, 2, 1, 3, 3, 3, 2, 2, 3, 3, 3, 1, 1] ```
This can be done using list comprehension. So far you had: ``` countOverlaps = [10,25,11,0,10,6,9,0,12,6,0,6,6,11,18] countOverlaps = [x for x in countOverlaps if x != 0] ``` This gives us all non-zero numbers. Then we can do what you want with the following code: ``` mylist = [number for number in list(set(countOverlaps)) for i in range(0, countOverlaps.count(number)) ] ``` This turns 'mylist' into the following output, which is what you're after: ``` [6, 6, 6, 6, 9, 10, 10, 11, 11, 12, 18, 25] ```
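For completeness, the same expansion can also be written with `itertools` (my variant, not from either answer), which preserves the original element order:

```python
from itertools import chain, repeat

a = [2, 1, 3, 2, 3, 1, 1]
# repeat(x, x) yields x exactly x times; chain flattens the pieces in order
expanded = list(chain.from_iterable(repeat(x, x) for x in a))
print(expanded)  # [2, 2, 1, 3, 3, 3, 2, 2, 3, 3, 3, 1, 1]
```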
42,066,449
So I have a function in Python which generates a dict like so: ``` player_data = { "player": "death-eater-01", "guild": "monster", "points": 50 } ``` I get this data by calling a function. Once I get this data I want to write it into a file, so I call: ``` g = open('team.json', 'a') with g as outfile: json.dump(player_data, outfile) ``` This works fine. However, my problem is that since a team consists of multiple players, I call the function again to get new player data: ``` player_data = { "player": "moon-master", "guild": "mage", "points": 250 } ``` Now when I write this data into the same file, the JSON breaks... as in, it shows up like so (missing comma between two nodes): ``` { "player": "death-eater-01", "guild": "monster", "points": 50 } { "player": "moon-master", "guild": "mage", "points": 250 } ``` What I want is to store both of these as proper JSON in the file. For various reasons I cannot prepare the full JSON object upfront and then save it in a single shot. I have to do it incrementally due to network breakage, performance and other issues. Can anyone guide me on how to do this? I am using Python.
2017/02/06
[ "https://Stackoverflow.com/questions/42066449", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1591731/" ]
You shouldn't append data to an existing file. Rather, you should build up a list in Python first which contains all the dicts you want to write, and only then dump it to JSON and write it to the file. If you really can't do that, one option would be to load the existing file, convert it back to Python, then append your new dict, dump to JSON and write it back replacing the whole file.
To produce valid JSON you will need to load the previous contents of the file, append the new data to that and then write it back to the file. Like so: ``` import json import os def append_player_data(player_data, file_name="team.json"): if os.path.exists(file_name): with open(file_name, 'r') as f: all_data = json.load(f) else: all_data = [] all_data.append(player_data) with open(file_name, 'w') as f: json.dump(all_data, f) ```
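Given the incremental-write constraint in the question, another common option (not mentioned in either answer) is the JSON Lines format: append one JSON object per line, so a partial write never corrupts earlier records. A sketch, with `team.jsonl` as a hypothetical filename:

```python
import json

def append_record(record, path="team.jsonl"):
    # each call appends a single line; earlier lines are never rewritten
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

def read_records(path="team.jsonl"):
    # parse the file back into a list of dicts, one per line
    with open(path) as f:
        return [json.loads(line) for line in f if line.strip()]
```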
27,529,610
I'm new to python and currently playing with it. I have a script which does some API Calls to an appliance. I would like to extend the functionality and call different functions based on the arguments given when calling the script. Currently I have the following: ``` parser = argparse.ArgumentParser() parser.add_argument("--showtop20", help="list top 20 by app", action="store_true") parser.add_argument("--listapps", help="list all available apps", action="store_true") args = parser.parse_args() ``` I also have a ``` def showtop20(): ..... ``` and ``` def listapps(): .... ``` How can I call the function (and only this) based on the argument given? I don't want to run ``` if args.showtop20: #code here if args.listapps: #code here ``` as I want to move the different functions to a module later on keeping the main executable file clean and tidy.
2014/12/17
[ "https://Stackoverflow.com/questions/27529610", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4370943/" ]
Since it seems like you want to run one, and only one, function depending on the arguments given, I would suggest you use a mandatory positional argument `./prog command`, instead of optional arguments (`./prog --command1` or `./prog --command2`). so, something like this should do it: ``` FUNCTION_MAP = {'top20' : my_top20_func, 'listapps' : my_listapps_func } parser.add_argument('command', choices=FUNCTION_MAP.keys()) args = parser.parse_args() func = FUNCTION_MAP[args.command] func() ```
``` # based on parser input to invoke either regression/classification plus other params import argparse import pandas as pd parser = argparse.ArgumentParser() parser.add_argument("--path", type=str) parser.add_argument("--target", type=str) parser.add_argument("--type", type=str) parser.add_argument("--deviceType", type=str) args = parser.parse_args() df = pd.read_csv(args.path) df = df.loc[:, ~df.columns.str.contains('^Unnamed')] if args.type == "classification": classify = AutoML(df, args.target, args.type, args.deviceType) classify.class_dist() classify.classification() elif args.type == "regression": reg = AutoML(df, args.target, args.type, args.deviceType) reg.regression() else: raise ValueError("Invalid argument passed") # Values passed as : python app.py --path C:\Users\Abhishek\Downloads\adult.csv --target income --type classification --deviceType GPU ```
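The `FUNCTION_MAP` approach above can be exercised end to end like this; the function names and bodies are placeholders:

```python
import argparse

def my_top20_func():
    return "top20"   # placeholder body

def my_listapps_func():
    return "apps"    # placeholder body

FUNCTION_MAP = {'top20': my_top20_func,
                'listapps': my_listapps_func}

parser = argparse.ArgumentParser()
parser.add_argument('command', choices=FUNCTION_MAP.keys())

args = parser.parse_args(['listapps'])   # stands in for ./prog listapps
print(FUNCTION_MAP[args.command]())      # apps
```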
27,529,610
I'm new to python and currently playing with it. I have a script which does some API Calls to an appliance. I would like to extend the functionality and call different functions based on the arguments given when calling the script. Currently I have the following: ``` parser = argparse.ArgumentParser() parser.add_argument("--showtop20", help="list top 20 by app", action="store_true") parser.add_argument("--listapps", help="list all available apps", action="store_true") args = parser.parse_args() ``` I also have a ``` def showtop20(): ..... ``` and ``` def listapps(): .... ``` How can I call the function (and only this) based on the argument given? I don't want to run ``` if args.showtop20: #code here if args.listapps: #code here ``` as I want to move the different functions to a module later on keeping the main executable file clean and tidy.
2014/12/17
[ "https://Stackoverflow.com/questions/27529610", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4370943/" ]
There are lots of ways of skinning this cat. Here's one using `action='store_const'` (inspired by the documented subparser example): ``` p=argparse.ArgumentParser() p.add_argument('--cmd1', action='store_const', const=lambda:'cmd1', dest='cmd') p.add_argument('--cmd2', action='store_const', const=lambda:'cmd2', dest='cmd') args = p.parse_args(['--cmd1']) # Out[21]: Namespace(cmd=<function <lambda> at 0x9abf994>) p.parse_args(['--cmd2']).cmd() # Out[19]: 'cmd2' p.parse_args(['--cmd1']).cmd() # Out[20]: 'cmd1' ``` With a shared `dest`, each action puts its function (`const`) in the same Namespace attribute. The function is invoked by `args.cmd()`. And as in the documented subparsers example, those functions could be written so as to use other values from Namespace. ``` args = parse_args() args.cmd(args) ``` For sake of comparison, here's the equivalent subparsers case: ``` p = argparse.ArgumentParser() sp = p.add_subparsers(dest='cmdstr') sp1 = sp.add_parser('cmd1') sp1.set_defaults(cmd=lambda:'cmd1') sp2 = sp.add_parser('cmd2') sp2.set_defaults(cmd=lambda:'cmd2') p.parse_args(['cmd1']).cmd() # Out[25]: 'cmd1' ``` As illustrated in the documentation, subparsers lets you define different parameter arguments for each of the commands. And of course all of these `add` argument or parser statements could be created in a loop over some list or dictionary that pairs a key with a function. Another important consideration - what kind of usage and help do you want? The different approaches generate very different help messages.
You can use `eval` to look up and call the function whose name was passed as an argument: ``` import argparse def list_showtop20(): print("Calling from showtop20") def list_apps(): print("Calling from listapps") my_funcs = [x for x in dir() if x.startswith('list_')] parser = argparse.ArgumentParser() parser.add_argument("-f", "--function", required=True, choices=my_funcs, help="function to call", metavar="") args = parser.parse_args() eval(args.function)() ``` Note that `eval` on user-supplied input is generally unsafe.
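Since `eval` on command-line input is risky, a safer variant (mine, not from the answer) builds an explicit dispatch dict from the same `list_`-prefixed functions:

```python
import argparse

def list_showtop20():
    return "top20"   # placeholder body

def list_apps():
    return "apps"    # placeholder body

# explicit whitelist of callables instead of eval() on user input
DISPATCH = {fn.__name__: fn for fn in (list_showtop20, list_apps)}

parser = argparse.ArgumentParser()
parser.add_argument("-f", "--function", required=True, choices=DISPATCH)
args = parser.parse_args(["-f", "list_apps"])  # stands in for real argv
print(DISPATCH[args.function]())  # apps
```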
27,529,610
I'm new to python and currently playing with it. I have a script which does some API Calls to an appliance. I would like to extend the functionality and call different functions based on the arguments given when calling the script. Currently I have the following: ``` parser = argparse.ArgumentParser() parser.add_argument("--showtop20", help="list top 20 by app", action="store_true") parser.add_argument("--listapps", help="list all available apps", action="store_true") args = parser.parse_args() ``` I also have a ``` def showtop20(): ..... ``` and ``` def listapps(): .... ``` How can I call the function (and only this) based on the argument given? I don't want to run ``` if args.showtop20: #code here if args.listapps: #code here ``` as I want to move the different functions to a module later on keeping the main executable file clean and tidy.
2014/12/17
[ "https://Stackoverflow.com/questions/27529610", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4370943/" ]
Since it seems like you want to run one, and only one, function depending on the arguments given, I would suggest you use a mandatory positional argument `./prog command`, instead of optional arguments (`./prog --command1` or `./prog --command2`). so, something like this should do it: ``` FUNCTION_MAP = {'top20' : my_top20_func, 'listapps' : my_listapps_func } parser.add_argument('command', choices=FUNCTION_MAP.keys()) args = parser.parse_args() func = FUNCTION_MAP[args.command] func() ```
At least from what you have described, `--showtop20` and `--listapps` sound more like sub-commands than options. Assuming this is the case, we can use subparsers to achieve your desired result. Here is a proof of concept: ``` import argparse import sys def showtop20(): print('running showtop20') def listapps(): print('running listapps') parser = argparse.ArgumentParser() subparsers = parser.add_subparsers() # Create a showtop20 subcommand parser_showtop20 = subparsers.add_parser('showtop20', help='list top 20 by app') parser_showtop20.set_defaults(func=showtop20) # Create a listapps subcommand parser_listapps = subparsers.add_parser('listapps', help='list all available apps') parser_listapps.set_defaults(func=listapps) # Print usage message if no args are supplied. # NOTE: Python 2 will error 'too few arguments' if no subcommand is supplied. # No such error occurs in Python 3, which makes it feasible to check # whether a subcommand was provided (displaying a help message if not). # argparse internals vary significantly over the major versions, so it's # much easier to just override the args passed to it. if len(sys.argv) <= 1: sys.argv.append('--help') options = parser.parse_args() # Run the appropriate function (in this case showtop20 or listapps) options.func() # If you add command-line options, consider passing them to the function, # e.g. `options.func(options)` ```
27,529,610
I'm new to python and currently playing with it. I have a script which does some API Calls to an appliance. I would like to extend the functionality and call different functions based on the arguments given when calling the script. Currently I have the following: ``` parser = argparse.ArgumentParser() parser.add_argument("--showtop20", help="list top 20 by app", action="store_true") parser.add_argument("--listapps", help="list all available apps", action="store_true") args = parser.parse_args() ``` I also have a ``` def showtop20(): ..... ``` and ``` def listapps(): .... ``` How can I call the function (and only this) based on the argument given? I don't want to run ``` if args.showtop20: #code here if args.listapps: #code here ``` as I want to move the different functions to a module later on keeping the main executable file clean and tidy.
2014/12/17
[ "https://Stackoverflow.com/questions/27529610", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4370943/" ]
At least from what you have described, `--showtop20` and `--listapps` sound more like sub-commands than options. Assuming this is the case, we can use subparsers to achieve your desired result. Here is a proof of concept: ``` import argparse import sys def showtop20(): print('running showtop20') def listapps(): print('running listapps') parser = argparse.ArgumentParser() subparsers = parser.add_subparsers() # Create a showtop20 subcommand parser_showtop20 = subparsers.add_parser('showtop20', help='list top 20 by app') parser_showtop20.set_defaults(func=showtop20) # Create a listapps subcommand parser_listapps = subparsers.add_parser('listapps', help='list all available apps') parser_listapps.set_defaults(func=listapps) # Print usage message if no args are supplied. # NOTE: Python 2 will error 'too few arguments' if no subcommand is supplied. # No such error occurs in Python 3, which makes it feasible to check # whether a subcommand was provided (displaying a help message if not). # argparse internals vary significantly over the major versions, so it's # much easier to just override the args passed to it. if len(sys.argv) <= 1: sys.argv.append('--help') options = parser.parse_args() # Run the appropriate function (in this case showtop20 or listapps) options.func() # If you add command-line options, consider passing them to the function, # e.g. `options.func(options)` ```
Instead of using your code as `your_script --showtop20`, make it into a sub-command `your_script showtop20` and use the [`click` library](https://click.palletsprojects.com) instead of `argparse`. You define functions that are the name of your subcommand and use decorators to specify the arguments: ``` import click @click.group() @click.option('--debug/--no-debug', default=False) def cli(debug): print(f'Debug mode is {"on" if debug else "off"}') @cli.command() # @cli, not @click! def showtop20(): # ... @cli.command() def listapps(): # ... ``` See <https://click.palletsprojects.com/en/master/commands/>
27,529,610
I'm new to python and currently playing with it. I have a script which does some API Calls to an appliance. I would like to extend the functionality and call different functions based on the arguments given when calling the script. Currently I have the following: ``` parser = argparse.ArgumentParser() parser.add_argument("--showtop20", help="list top 20 by app", action="store_true") parser.add_argument("--listapps", help="list all available apps", action="store_true") args = parser.parse_args() ``` I also have a ``` def showtop20(): ..... ``` and ``` def listapps(): .... ``` How can I call the function (and only this) based on the argument given? I don't want to run ``` if args.showtop20: #code here if args.listapps: #code here ``` as I want to move the different functions to a module later on keeping the main executable file clean and tidy.
2014/12/17
[ "https://Stackoverflow.com/questions/27529610", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4370943/" ]
If your functions are "simple enough", take advantage of the `type` parameter <https://docs.python.org/2.7/library/argparse.html#type> > > type= can take any callable that takes a single string argument and > returns the converted value: > > > In your example (even if you don't need a converted value): ``` parser.add_argument("--listapps", help="list all available apps", type=showtop20, action="store") ``` This simple script: ``` import argparse def showtop20(dummy): print "{0}\n".format(dummy) * 5 parser = argparse.ArgumentParser() parser.add_argument("--listapps", help="list all available apps", type=showtop20, action="store") args = parser.parse_args() ``` Will give: ``` # ./test.py --listapps test test test test test test test ```
Instead of using your code as `your_script --showtop20`, make it into a sub-command `your_script showtop20` and use the [`click` library](https://click.palletsprojects.com) instead of `argparse`. You define functions that are the name of your subcommand and use decorators to specify the arguments: ``` import click @click.group() @click.option('--debug/--no-debug', default=False) def cli(debug): print(f'Debug mode is {"on" if debug else "off"}') @cli.command() # @cli, not @click! def showtop20(): # ... @cli.command() def listapps(): # ... ``` See <https://click.palletsprojects.com/en/master/commands/>
27,529,610
I'm new to python and currently playing with it. I have a script which does some API Calls to an appliance. I would like to extend the functionality and call different functions based on the arguments given when calling the script. Currently I have the following: ``` parser = argparse.ArgumentParser() parser.add_argument("--showtop20", help="list top 20 by app", action="store_true") parser.add_argument("--listapps", help="list all available apps", action="store_true") args = parser.parse_args() ``` I also have a ``` def showtop20(): ..... ``` and ``` def listapps(): .... ``` How can I call the function (and only this) based on the argument given? I don't want to run ``` if args.showtop20: #code here if args.listapps: #code here ``` as I want to move the different functions to a module later on keeping the main executable file clean and tidy.
2014/12/17
[ "https://Stackoverflow.com/questions/27529610", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4370943/" ]
There are lots of ways of skinning this cat. Here's one using `action='store_const'` (inspired by the documented subparser example): ``` p=argparse.ArgumentParser() p.add_argument('--cmd1', action='store_const', const=lambda:'cmd1', dest='cmd') p.add_argument('--cmd2', action='store_const', const=lambda:'cmd2', dest='cmd') args = p.parse_args(['--cmd1']) # Out[21]: Namespace(cmd=<function <lambda> at 0x9abf994>) p.parse_args(['--cmd2']).cmd() # Out[19]: 'cmd2' p.parse_args(['--cmd1']).cmd() # Out[20]: 'cmd1' ``` With a shared `dest`, each action puts its function (`const`) in the same Namespace attribute. The function is invoked by `args.cmd()`. And as in the documented subparsers example, those functions could be written so as to use other values from Namespace. ``` args = parse_args() args.cmd(args) ``` For sake of comparison, here's the equivalent subparsers case: ``` p = argparse.ArgumentParser() sp = p.add_subparsers(dest='cmdstr') sp1 = sp.add_parser('cmd1') sp1.set_defaults(cmd=lambda:'cmd1') sp2 = sp.add_parser('cmd2') sp2.set_defaults(cmd=lambda:'cmd2') p.parse_args(['cmd1']).cmd() # Out[25]: 'cmd1' ``` As illustrated in the documentation, subparsers lets you define different parameter arguments for each of the commands. And of course all of these `add` argument or parser statements could be created in a loop over some list or dictionary that pairs a key with a function. Another important consideration - what kind of usage and help do you want? The different approaches generate very different help messages.
``` # based on parser input to invoke either regression/classification plus other params import argparse import pandas as pd parser = argparse.ArgumentParser() parser.add_argument("--path", type=str) parser.add_argument("--target", type=str) parser.add_argument("--type", type=str) parser.add_argument("--deviceType", type=str) args = parser.parse_args() df = pd.read_csv(args.path) df = df.loc[:, ~df.columns.str.contains('^Unnamed')] if args.type == "classification": classify = AutoML(df, args.target, args.type, args.deviceType) classify.class_dist() classify.classification() elif args.type == "regression": reg = AutoML(df, args.target, args.type, args.deviceType) reg.regression() else: raise ValueError("Invalid argument passed") # Values passed as : python app.py --path C:\Users\Abhishek\Downloads\adult.csv --target income --type classification --deviceType GPU ```
27,529,610
I'm new to python and currently playing with it. I have a script which does some API Calls to an appliance. I would like to extend the functionality and call different functions based on the arguments given when calling the script. Currently I have the following: ``` parser = argparse.ArgumentParser() parser.add_argument("--showtop20", help="list top 20 by app", action="store_true") parser.add_argument("--listapps", help="list all available apps", action="store_true") args = parser.parse_args() ``` I also have a ``` def showtop20(): ..... ``` and ``` def listapps(): .... ``` How can I call the function (and only this) based on the argument given? I don't want to run ``` if args.showtop20: #code here if args.listapps: #code here ``` as I want to move the different functions to a module later on keeping the main executable file clean and tidy.
2014/12/17
[ "https://Stackoverflow.com/questions/27529610", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4370943/" ]
If your functions are "simple enough", take advantage of the `type` parameter <https://docs.python.org/2.7/library/argparse.html#type> > > type= can take any callable that takes a single string argument and > returns the converted value: > > > In your example (even if you don't need a converted value): ``` parser.add_argument("--listapps", help="list all available apps", type=showtop20, action="store") ``` This simple script: ``` import argparse def showtop20(dummy): print "{0}\n".format(dummy) * 5 parser = argparse.ArgumentParser() parser.add_argument("--listapps", help="list all available apps", type=showtop20, action="store") args = parser.parse_args() ``` Will give: ``` # ./test.py --listapps test test test test test test test ```
``` # based on parser input to invoke either regression/classification plus other params import argparse import pandas as pd parser = argparse.ArgumentParser() parser.add_argument("--path", type=str) parser.add_argument("--target", type=str) parser.add_argument("--type", type=str) parser.add_argument("--deviceType", type=str) args = parser.parse_args() df = pd.read_csv(args.path) df = df.loc[:, ~df.columns.str.contains('^Unnamed')] if args.type == "classification": classify = AutoML(df, args.target, args.type, args.deviceType) classify.class_dist() classify.classification() elif args.type == "regression": reg = AutoML(df, args.target, args.type, args.deviceType) reg.regression() else: raise ValueError("Invalid argument passed") # Values passed as : python app.py --path C:\Users\Abhishek\Downloads\adult.csv --target income --type classification --deviceType GPU ```
27,529,610
I'm new to Python and currently playing with it. I have a script which makes some API calls to an appliance. I would like to extend the functionality and call different functions based on the arguments given when calling the script. Currently I have the following: ``` parser = argparse.ArgumentParser() parser.add_argument("--showtop20", help="list top 20 by app", action="store_true") parser.add_argument("--listapps", help="list all available apps", action="store_true") args = parser.parse_args() ``` I also have a ``` def showtop20(): ..... ``` and ``` def listapps(): .... ``` How can I call the function (and only that function) based on the argument given? I don't want to run ``` if args.showtop20: #code here if args.listapps: #code here ``` as I want to move the different functions to a module later on, keeping the main executable file clean and tidy.
2014/12/17
[ "https://Stackoverflow.com/questions/27529610", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4370943/" ]
At least from what you have described, `--showtop20` and `--listapps` sound more like sub-commands than options. Assuming this is the case, we can use subparsers to achieve your desired result. Here is a proof of concept: ``` import argparse import sys def showtop20(): print('running showtop20') def listapps(): print('running listapps') parser = argparse.ArgumentParser() subparsers = parser.add_subparsers() # Create a showtop20 subcommand parser_showtop20 = subparsers.add_parser('showtop20', help='list top 20 by app') parser_showtop20.set_defaults(func=showtop20) # Create a listapps subcommand parser_listapps = subparsers.add_parser('listapps', help='list all available apps') parser_listapps.set_defaults(func=listapps) # Print usage message if no args are supplied. # NOTE: Python 2 will error 'too few arguments' if no subcommand is supplied. # No such error occurs in Python 3, which makes it feasible to check # whether a subcommand was provided (displaying a help message if not). # argparse internals vary significantly over the major versions, so it's # much easier to just override the args passed to it. if len(sys.argv) <= 1: sys.argv.append('--help') options = parser.parse_args() # Run the appropriate function (in this case showtop20 or listapps) options.func() # If you add command-line options, consider passing them to the function, # e.g. `options.func(options)` ```
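If a subcommand later needs options of its own, the same `set_defaults(func=...)` pattern extends cleanly. A sketch — the `--limit` option is invented for illustration, and an explicit argv list stands in for a real command line:

```python
import argparse

def showtop20(options):
    # the handler receives the parsed namespace, so it can read its own options
    return 'running showtop20 with limit %d' % options.limit

parser = argparse.ArgumentParser()
subparsers = parser.add_subparsers()

parser_showtop20 = subparsers.add_parser('showtop20', help='list top 20 by app')
parser_showtop20.add_argument('--limit', type=int, default=20)  # hypothetical option
parser_showtop20.set_defaults(func=showtop20)

options = parser.parse_args(['showtop20', '--limit', '5'])
print(options.func(options))
```

Each subparser only knows its own options, so `showtop20 --limit 5` parses while `listapps --limit 5` would be rejected, which keeps per-command validation out of your functions.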
``` # based on parser input, invoke either regression or classification (plus other params) # NOTE: assumes pandas is installed and an AutoML class is defined or imported elsewhere import argparse import pandas as pd parser = argparse.ArgumentParser() parser.add_argument("--path", type=str) parser.add_argument("--target", type=str) parser.add_argument("--type", type=str) parser.add_argument("--deviceType", type=str) args = parser.parse_args() df = pd.read_csv(args.path) df = df.loc[:, ~df.columns.str.contains('^Unnamed')] if args.type == "classification": classify = AutoML(df, args.target, args.type, args.deviceType) classify.class_dist() classify.classification() elif args.type == "regression": reg = AutoML(df, args.target, args.type, args.deviceType) reg.regression() else: raise ValueError("Invalid argument passed") # Invoked as: python app.py --path C:\Users\Abhishek\Downloads\adult.csv --target income --type classification --deviceType GPU ```
27,529,610
I'm new to Python and currently playing with it. I have a script which makes some API calls to an appliance. I would like to extend the functionality and call different functions based on the arguments given when calling the script. Currently I have the following: ``` parser = argparse.ArgumentParser() parser.add_argument("--showtop20", help="list top 20 by app", action="store_true") parser.add_argument("--listapps", help="list all available apps", action="store_true") args = parser.parse_args() ``` I also have a ``` def showtop20(): ..... ``` and ``` def listapps(): .... ``` How can I call the function (and only that function) based on the argument given? I don't want to run ``` if args.showtop20: #code here if args.listapps: #code here ``` as I want to move the different functions to a module later on, keeping the main executable file clean and tidy.
2014/12/17
[ "https://Stackoverflow.com/questions/27529610", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4370943/" ]
Since it seems like you want to run one, and only one, function depending on the arguments given, I would suggest you use a mandatory positional argument `./prog command`, instead of optional arguments (`./prog --command1` or `./prog --command2`). so, something like this should do it: ``` FUNCTION_MAP = {'top20' : my_top20_func, 'listapps' : my_listapps_func } parser.add_argument('command', choices=FUNCTION_MAP.keys()) args = parser.parse_args() func = FUNCTION_MAP[args.command] func() ```
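Filled out into a runnable sketch — the two functions are stand-ins for your real ones, and the explicit argv list stands in for a real command line:

```python
import argparse

# stand-ins for the real functions
def my_top20_func():
    return 'top20 called'

def my_listapps_func():
    return 'listapps called'

FUNCTION_MAP = {'top20': my_top20_func,
                'listapps': my_listapps_func}

parser = argparse.ArgumentParser()
parser.add_argument('command', choices=FUNCTION_MAP.keys())

args = parser.parse_args(['listapps'])
result = FUNCTION_MAP[args.command]()
print(result)  # -> listapps called
```

Because `choices` is built from the dict keys, adding a new command is a one-line change to `FUNCTION_MAP`, and argparse rejects unknown commands for you.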
You can use `eval` to look up and call the function named by the argument value, building the `choices` list from the `list_`-prefixed callables defined in the module: ``` import argparse def list_showtop20(): print("Calling from showtop20") def list_apps(): print("Calling from listapps") my_funcs = [x for x in dir() if x.startswith('list_')] parser = argparse.ArgumentParser() parser.add_argument("-f", "--function", required=True, choices=my_funcs, help="function to call", metavar="") args = parser.parse_args() eval(args.function)() ```
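A variant of the same idea that avoids `eval` by keeping a dict of the function objects themselves (the function bodies are placeholders, as above; the argv list stands in for a real command line):

```python
import argparse

def list_showtop20():
    return "Calling from showtop20"

def list_apps():
    return "Calling from listapps"

# collect the actual function objects instead of evaluating their names later
my_funcs = {name: obj for name, obj in list(globals().items())
            if callable(obj) and name.startswith('list_')}

parser = argparse.ArgumentParser()
parser.add_argument("-f", "--function", required=True,
                    choices=my_funcs, help="function to call")

args = parser.parse_args(["-f", "list_apps"])
out = my_funcs[args.function]()  # plain dict lookup, no eval
print(out)
```

The behavior is the same, but a dict lookup cannot execute arbitrary expressions the way `eval` can, and `choices` still restricts input to the collected names.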
27,529,610
I'm new to Python and currently playing with it. I have a script which makes some API calls to an appliance. I would like to extend the functionality and call different functions based on the arguments given when calling the script. Currently I have the following: ``` parser = argparse.ArgumentParser() parser.add_argument("--showtop20", help="list top 20 by app", action="store_true") parser.add_argument("--listapps", help="list all available apps", action="store_true") args = parser.parse_args() ``` I also have a ``` def showtop20(): ..... ``` and ``` def listapps(): .... ``` How can I call the function (and only that function) based on the argument given? I don't want to run ``` if args.showtop20: #code here if args.listapps: #code here ``` as I want to move the different functions to a module later on, keeping the main executable file clean and tidy.
2014/12/17
[ "https://Stackoverflow.com/questions/27529610", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4370943/" ]
At least from what you have described, `--showtop20` and `--listapps` sound more like sub-commands than options. Assuming this is the case, we can use subparsers to achieve your desired result. Here is a proof of concept: ``` import argparse import sys def showtop20(): print('running showtop20') def listapps(): print('running listapps') parser = argparse.ArgumentParser() subparsers = parser.add_subparsers() # Create a showtop20 subcommand parser_showtop20 = subparsers.add_parser('showtop20', help='list top 20 by app') parser_showtop20.set_defaults(func=showtop20) # Create a listapps subcommand parser_listapps = subparsers.add_parser('listapps', help='list all available apps') parser_listapps.set_defaults(func=listapps) # Print usage message if no args are supplied. # NOTE: Python 2 will error 'too few arguments' if no subcommand is supplied. # No such error occurs in Python 3, which makes it feasible to check # whether a subcommand was provided (displaying a help message if not). # argparse internals vary significantly over the major versions, so it's # much easier to just override the args passed to it. if len(sys.argv) <= 1: sys.argv.append('--help') options = parser.parse_args() # Run the appropriate function (in this case showtop20 or listapps) options.func() # If you add command-line options, consider passing them to the function, # e.g. `options.func(options)` ```
You can use `eval` to look up and call the function named by the argument value, building the `choices` list from the `list_`-prefixed callables defined in the module: ``` import argparse def list_showtop20(): print("Calling from showtop20") def list_apps(): print("Calling from listapps") my_funcs = [x for x in dir() if x.startswith('list_')] parser = argparse.ArgumentParser() parser.add_argument("-f", "--function", required=True, choices=my_funcs, help="function to call", metavar="") args = parser.parse_args() eval(args.function)() ```
48,643,925
I am looking through some code and found the following lines: ``` def get_char_count(tokens): return sum(len(t) for t in tokens) def get_long_words_ratio(tokens, nro_tokens): ratio = sum(1 for t in tokens if len(t) > 6) / nro_tokens return ratio ``` As you can see, in the first case the complete expression is returned, whereas in the second case the expression is first evaluated and stored into a variable, which is then returned. My question is, which way is the better, more pythonic way? I am not entirely sure how Python handles returns from functions. Does it return by reference, or does it return the value directly? Does it resolve the expression and returns that? In summary, is it better to store an expression's value into a variable and return the variable, or is it also perfectly fine (efficiency, and PEP-wise) to return the expression as a whole?
2018/02/06
[ "https://Stackoverflow.com/questions/48643925", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1150683/" ]
> > Does it return by reference[?] > > > Effectively yes. When you return an object, the id (i.e. memory address) of the object inside the function is the same as the id of the object outside the function. It doesn't make a copy or anything. > > [...] or does it return the value directly? > > > If you're saying "is it like the 'pass-by-value' argument passing system of many programming languages, where a copy is made and changes to the new value don't affect the original one? Except for returning values instead of passing them?", then no, it's not like that. Python does not make a copy of anything unless you explicitly tell it to. > > Does it resolve the expression and returns that? > > > Yes. Expressions are almost always resolved immediately. Times when they aren't include * when you have defined a function (but haven't executed it), the expressions in that function will not have been resolved, even though Python had to "pass over" those lines to create the function object * when you create a lambda object, (but haven't executed it), ... etc etc. > > In summary, is it better to store an expression's value into a variable and return the variable, or is it also perfectly fine (efficiency, and PEP-wise) to return the expression as a whole? > > > From the perspective of any code outside of your functions, both of the approaches are completely identical. You can't distinguish between "a returned expression" and "a returned variable", because they have the same outcome. Your second function is slightly slower than the first because it executes an extra store and load of the local variable (a STORE_FAST/LOAD_FAST pair in the bytecode). So you may as well use the first approach and save yourself a line of code and a tiny sliver of run-time.
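The size of that difference can be checked with `timeit` — a sketch; the absolute numbers are machine-dependent, and the gap is typically on the order of tens of nanoseconds per call:

```python
import timeit

def f():
    return 2 + 2

def g():
    x = 2 + 2
    return x

# Both return the same object; g only adds the bookkeeping for the local x.
t_f = timeit.timeit(f, number=1_000_000)
t_g = timeit.timeit(g, number=1_000_000)
print(t_f, t_g)
```

In practice the measured times are dominated by function-call overhead, which is another way of saying the choice should be made on readability grounds, not speed.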
Here's an example breakdown of the byte code for two functions that use these different approaches: ``` def f(): return 2 + 2 def g(): x = 2 + 2 return x import dis print("Byte code for f:") dis.dis(f) print("Byte code for g:") dis.dis(g) ``` Result: ``` Byte code for f: 2 0 LOAD_CONST 2 (4) 2 RETURN_VALUE Byte code for g: 5 0 LOAD_CONST 2 (4) 2 STORE_FAST 0 (x) 6 4 LOAD_FAST 0 (x) 6 RETURN_VALUE ``` Notice that they both end with `RETURN_VALUE`. There's no individual `RETURN_EXPRESSION` and `RETURN_VARIABLE` codes.
While I prefer the first approach (since it avoids the extra local variable), both expressions are equivalent in behavior. The PEP8 Style Guide doesn't really say anything about this, other than being consistent with your return statements. > > Be consistent in return statements. Either all return statements in a function should return an expression, or none of them should. If any return statement returns an expression, any return statements where no value is returned should explicitly state this as return None, and an explicit return statement should be present at the end of the function (if reachable). > > >
57,948,945
I have a very large square matrix of order around 570,000 x 570,000 and I want to power it by 2. The data is in json format casting to associative array in array (dict inside dict in python) form Let's say I want to represent this matrix: ``` [ [0, 0, 0], [1, 0, 5], [2, 0, 0] ] ``` In json it's stored like: ``` {"3": {"1": 2}, "2": {"1": 1, "3": 5}} ``` Which for example `"3": {"1": 2}` means the number in 3rd row and 1st column is 2. I want the output to be the same as json, but powered by 2 (matrix multiplication) The programming language isn't important. I want to calculate it the fastest way (less than 2 days, if possible) So I tried to use Numpy in python (`numpy.linalg.matrix_power`), but it seems that it doesn't work with my nested unsorted dict format. I wrote a simple python code to do that but I estimated that it would take 18 days to accomplish: ``` jsonFileName = "file.json" def matrix_power(arr): result = {} for x1,subarray in arr.items(): print("doing item:",x1) for y1,value1 in subarray.items(): for x2,subarray2 in arr.items(): if(y1 != x2): continue for y2,value2 in subarray2.items(): partSum = value1 * value2 result[x1][y2] = result.setdefault(x1,{}).setdefault(y2,0) + partSum return result import json with open(jsonFileName, 'r') as reader: jsonFile = reader.read() print("reading is succesful") jsonArr = json.loads(jsonFile) print("matrix is in array form") matrix = matrix_power(jsonArr) print("Well Done! matrix is powered by 2 now") output = json.dumps(matrix) print("result is in json format") writer = open("output.json", 'w+') writer.write(output) writer.close() print("Task is done! you can close this window now") ``` Here, X1,Y1 is the row and col of the first matrix which then is multiplied by the corresponding element of the second matrix (X2,Y2).
2019/09/15
[ "https://Stackoverflow.com/questions/57948945", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10530951/" ]
Numpy is not the problem; you need to feed it a format that numpy can understand, but since your matrix is really big it probably won't fit in memory, so it's a good idea to use a sparse matrix (`scipy.sparse.csr_matrix`): ``` import scipy.sparse # `data` is the dict parsed from the JSON file; subtract 1 because its keys are 1-based m = scipy.sparse.csr_matrix(( [v for row in data.values() for v in row.values()], ( [int(row_n) - 1 for row_n, row in data.items() for v in row], [int(column) - 1 for row in data.values() for column in row] ) )) ``` Then it's just a matter of doing: ``` m**2 ```
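Putting it together on the 3x3 example from the question — a sketch assuming SciPy is installed. Note the `- 1`: the JSON keys are 1-based while matrix indices are 0-based, so without the shift an empty row and column 0 would be wasted:

```python
import json
from scipy.sparse import csr_matrix

data = json.loads('{"3": {"1": 2}, "2": {"1": 1, "3": 5}}')

n = 3  # matrix order, known in advance here
m = csr_matrix((
    [v for row in data.values() for v in row.values()],
    (
        [int(r) - 1 for r, row in data.items() for _ in row],   # row indices
        [int(c) - 1 for row in data.values() for c in row],     # column indices
    )
), shape=(n, n))

squared = (m ** 2).toarray()  # ** is matrix power for sparse matrices
print(squared)
```

For the matrix in the question this yields a result whose only nonzero entry is 10 at row 2, column 1 (1-based), and the product itself stays sparse until `toarray()` is called.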
> > now I have to somehow translate csr\_matrix back to json serializable > > > Here's one way to do that, using the attributes **data**, **indices**, **indptr** - `m` is the *csr\_matrix*: ``` d = {} end = m.indptr[0] for row in range(m.shape[0]): start = end end = m.indptr[row+1] if end > start: # if row not empty d.update({str(1+row): dict(zip([str(1+i) for i in m.indices[start:end]], m.data[start:end]))}) output = json.dumps(d, default=int) ```